\section{Introduction}
Diluted magnetic semiconductors (DMS) have been attracting considerable attention as a promising candidate material
for spintronics devices. The basic concept is to utilize spin polarized carriers created by strong exchange interaction
between localized $d$-electrons of diluted magnetic ions and itinerant $sp$-band electrons.\cite{Dietl00,Ohno98}
Since the discovery of room temperature ferromagnetism in Co-doped anatase and rutile TiO$_2$ thin films,\cite{Matsumoto01a,Matsumoto01b}
there have been a number of reports on high-$T_C$ ferromagnetism observed in various kinds of
DMS, particularly in oxides.\cite{Fukumura05,Janisch05}
Co-doped TiO$_2$ is the material that has been studied most intensively.
However, the origin of the ferromagnetism has not yet been elucidated. One of the experimental
issues is whether the ferromagnetism indeed originates from the Co spins randomly substituted on the Ti sites,
and not from segregated Co clusters. Indeed, a number of reports ascribe the ferromagnetism to
Co segregation.\cite{Kim03,Shinde04,Higgins04}
Strong evidence in support of intrinsic ferromagnetism has been given by the observations of
anomalous Hall effect (AHE) and magnetic circular dichroism (MCD).\cite{Toyosaki04,Toyosaki05a,Fukumura03,Yamada04,Ueno07}
If the ferromagnetism is intrinsic, the charge carriers should be spin polarized through the exchange interaction
with the Co spins; AHE and MCD are considered to probe the ferromagnetic response of the carriers introduced by the oxygen deficiency.
In addition, rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ has already been functionalized as a spin tunneling junction
working up to 180 K.\cite{Toyosaki05b}
Other studies that focus on this issue are mainly based on spectroscopic measurements
such as x-ray absorption fine structure (XAFS) for samples in which no Co segregation was recognized by imaging probes
such as scanning electron microscopy (SEM), atomic force microscopy (AFM), and transmission electron microscopy
(TEM).\cite{Chambers03,Shimizu04,Griffin05,Murakami04}
In these spectroscopic measurements, the spectral line shapes differ from those of Co metal.
These studies conclude that the local structure of the Co site is close to those of Co oxides such as CoTiO$_3$, in which the Co ion is surrounded by
six oxygen atoms, forming a CoO$_6$ cluster, and that the valence state of Co is divalent.
Cui \textit{et al.} performed electron beam diffraction and composition analysis for anatase TiO$_2$ in a selected area
without Co segregation and confirmed both the diffraction peaks of TiO$_2$ and the existence of Co.\cite{Cui04}
These results seem to support that the Co ions randomly substitute on the Ti sites.
Although these spectroscopic methods provide valuable information on the local structure around Co, they give little information
on the crystallographic position of Co and the orientation of the CoO$_6$ cluster: the former could be interstitial, and the latter
could be deformed by oxygen vacancies.
In this context, we still do not have unambiguous evidence that Co indeed substitutes for Ti in a crystallographic sense.
The fact that Co is not soluble in TiO$_2$ in a thermodynamically stable manner also casts doubt on the assumption
that Co substitutes for Ti.\cite{Li03}
From a theoretical point of view, knowledge of the local structure of a Co ion is of fundamental importance for constructing a valid model and proceeding with calculations of the electronic states of the $3d$ and $sp$-band electrons.
In the present paper, we report on the results of x-ray diffraction utilizing anomalous scattering from Co.
This is a fundamentally different approach from spectroscopy.
We observe a Bragg peak as a result of the interference among the x rays scattered from many Co ions in the sample and discuss the
average crystallographic site of Co in the TiO$_2$ lattice.
If Co substitutes exactly on the Ti sites, the Co ions share the periodicity of the TiO$_2$ lattice and contribute to the Bragg peaks.
Our results on anatase and rutile Co-doped TiO$_2$ films, however, lead us to a conclusion
that the Co ions are not exactly located on the Ti sites, implying a significant lattice deformation.
On the other hand, it is shown that Co indeed substitutes on the Zn sites in paramagnetic Co-doped ZnO thin films.
\section{Experiment}
The basic principle of the method is simple and direct.
If the Co ions of concentration $x$ randomly substitute on the Ti sites of TiO$_2$, the unit-cell structure factor can be expressed as
\begin{eqnarray}
F &=& \sum_{i} \{(1-x)f_{\text{Ti}} + xf_{\text{Co}}\}\exp i\bm{\kappa}\cdot\bm{R}_i^{\text{(Ti)}} \nonumber \\
&& + \sum_{i} f_{\text{O}} \exp i\bm{\kappa}\cdot\bm{R}_i^{\text{(O)}}\;, \label{eq:1}
\end{eqnarray}
where $f_{\text{Ti}}$, $f_{\text{Co}}$, and $f_{\text{O}}$ are the atomic scattering factors of Ti, Co, and O, respectively.
$\bm{R}_i^{\text{(Ti)}}$ and $\bm{R}_i^{\text{(O)}}$ represent the $i$-th atomic site of Ti and O in the unit cell, respectively.
$\bm{\kappa}$ is the scattering vector. Here, the atomic scattering factors are energy dependent and are
generally expressed as
\begin{equation}
f(E) = f^{0} + f'(E) + if''(E)\;,
\end{equation}
where $f^0$ is the Thomson scattering factor and $f'$ and $f''$ are the real and imaginary parts, respectively,
of the anomalous scattering factor, which exhibit a significant anomaly near an absorption edge of the element.
Therefore, when we measure the energy dependence of the intensity of a Bragg reflection from the Co-doped TiO$_2$ film,
the intensity should exhibit an anomaly at the absorption edge of Co if the structure factor involves $f_{\text{Co}}$.
This measurement can be performed using a synchrotron radiation source, where the incident energy of the x rays can be varied.
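As a rough numerical illustration of this principle (not part of the original analysis), the following Python sketch evaluates $|F|^2$ for the anatase 0 0 4 reflection, with $f^0$ crudely approximated by the atomic numbers, $f''$ neglected, and a simple step model for $f'$ of Co at the $K$-edge; all numerical values here are illustrative assumptions.

```python
import numpy as np

def anatase_004_intensity(E_keV, x, fp_co):
    """|F|^2 of the anatase 0 0 4 reflection of Ti_{1-x}Co_x O_2.

    Illustrative only: f^0 is approximated by the atomic numbers
    (Ti = 22, Co = 27, O = 8), f'' is neglected, and fp_co(E) models
    the real anomalous correction f'(E) of Co near its K-edge.
    """
    f_ti, f_o = 22.0, 8.0
    f_co = 27.0 + fp_co(E_keV)                     # f = f^0 + f'(E)
    F = 4 * ((1 - x) * f_ti + x * f_co) + 8 * f_o * np.cos(1.66 * np.pi)
    return abs(F) ** 2

# Crude model of the Co K-edge: f' dips by ~8 electrons at 7.709 keV
fp = lambda E: -8.0 if abs(E - 7.709) < 0.01 else 0.0

I_off = anatase_004_intensity(7.60, 0.05, fp)      # well below the edge
I_on = anatase_004_intensity(7.709, 0.05, fp)      # at the edge
anomaly = (I_off - I_on) / I_off                   # relative intensity dip
```

With $x=0.05$ the sketch yields an intensity dip of a few percent at the edge, while for $x=0$ the energy dependence vanishes; this edge anomaly, present only if $f_{\text{Co}}$ enters the structure factor, is exactly the signature the experiment looks for.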
The crystal structures, the reflection indices examined in the present experiment, and their structure factors
are listed in Table~\ref{table:1}.
In rutile, the Ti atoms occupy the crystallographic site of $2a$: (0, 0, 0) and ($\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$).
In anatase, they are at the $4a$ site: (0, 0, 0), (0, $\frac{1}{2}$, $\frac{1}{4}$), ($\frac{1}{2}$, $\frac{1}{2}$, $\frac{1}{2}$), and
($\frac{1}{2}$, 0, $\frac{3}{4}$).
The Zn atoms of ZnO occupy the $2b$ site of the wurtzite structure:
($\frac{1}{3}$, $\frac{2}{3}$, 0) and ($\frac{2}{3}$, $\frac{1}{3}$, $\frac{1}{2}$).
\begin{table*}
\caption{Reflection indices and structure factors examined in this experiment. }
\label{table:1}
\begin{ruledtabular}
\begin{tabular}{lllll}
sample & structure & space group & index & structure factor \\
\hline
Ti$_{1-x}$Co$_{x}$O$_2$ & anatase & $I4_1/amd$ (\#141) & 0 0 4 & $F=4(1-x)f_{\text{Ti}}+4xf_{\text{Co}}+8f_{\text{O}}\cos 1.66\pi$ \\
Ti$_{1-x}$Co$_{x}$O$_2$ & rutile & $P4_2/mnm$ (\#136) & 2 0 2 & $F=2(1-x)f_{\text{Ti}}+2xf_{\text{Co}}+4f_{\text{O}}\cos 1.22\pi$ \\
Zn$_{1-x}$Co$_{x}$O & wurtzite & $P6_3mc$ (\#186) & 0 0 2 & $F=2(1-x)f_{\text{Zn}}+2xf_{\text{Co}}+2f_{\text{O}}(\cos 1.53\pi + i\sin 1.53\pi)$
\end{tabular}
\end{ruledtabular}
\end{table*}
We used the same rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ epitaxial thin film samples as those studied in
Refs.~\onlinecite{Toyosaki04} and \onlinecite{Toyosaki05a}.
The films were deposited on TiO$_2$ buffered r-sapphire substrates by laser molecular beam epitaxy (MBE).
The oxygen deficiency $\delta$ was controlled by varying the oxygen partial pressure from 10$^{-4}$ to 10$^{-8}$ Torr.
A systematic relationship among Co concentration $x$, oxygen partial pressure, carrier density, conductivity,
ferromagnetic moment, AHE, and MCD is well established for these samples.
Results of x-ray photoemission spectroscopy (XPS) and XMCD at the Co $L_{2,3}$-edges for these samples are also
reported,\cite{Quilty06,Mamiya06} both of which conclude that the spectrum is that of a high-spin Co$^{2+}$ ion in the
crystal field of an oxygen octahedron and that the room-temperature ferromagnetism is intrinsic.
Anatase Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ epitaxial thin film was deposited on LaAlO$_3$-(001) substrate by
laser MBE in an oxygen partial pressure of $1\times10^{-6}$ Torr and at a growth temperature of 700 $^{\circ}$C.
The appearance of room-temperature ferromagnetism was checked by a SQUID magnetometer and no Co segregation was
recognized by AFM and SEM. Zn$_{1-x}$Co$_{x}$O epitaxial thin films are the same as those of
Ref.~\onlinecite{Jin01}. These exhibit a large MCD without exhibiting ferromagnetism, although several studies have
reported high-$T_C$ ferromagnetism in the same compound.\cite{Fukumura05,Janisch05}
X-ray diffraction experiments were performed using four-circle diffractometers installed at beamlines 1A, 4C, and 16A2 of the
Photon Factory in KEK, Japan. The incident beam was monochromatized by Si-111 double crystals and focused by
a bent cylindrical mirror.
The energy was calibrated using the absorption edge of a Co metal foil.
For each thin-film sample, we first measured the fluorescence spectrum near
the Co $K$-edge to check if Co was actually included in the area where the beam was irradiated. Next, we measured
the energy dependence of the intensity of the Bragg reflection. The typical beam size was $\sim 1\times1$ mm$^2$.
All the measurements were carried out at room temperature.
\section{Experimental Results}
The results for Zn$_{1-x}$Co$_{x}$O with $x=0.02$, 0.04, and 0.12 are shown in Fig.~\ref{fig1}.
The base line of each spectrum is normalized to unity. The step in the fluorescence at 7.72 keV, the $K$-edge of Co, is
roughly proportional to the Co concentration, indicating that Co is actually included in the irradiated area with concentrations
proportional to the nominal value.
Energy dependence of the intensity of the 002 Bragg reflection also exhibits a clear anomaly at the $K$-edge.
This directly indicates that the Co ions indeed substitute on the Zn sites randomly. Comparison with the calculated curve represented
by the lines is also satisfactory.\cite{SCM-AXS}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7.5cm]{Fig1.eps}
\end{center}
\caption{(a) Fluorescence spectra of Zn$_{1-x}$Co$_{x}$O as a function of Co concentration $x$.
(b) X-ray energy dependences of the intensity of the 002 Bragg reflection. Data are shifted for $x$=0.04 and 0.12.
Lines represent the calculated curves.
}
\label{fig1}
\end{figure}
The Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ samples exhibit results that contrast with those of Zn$_{1-x}$Co$_{x}$O, as described in the following.
Figure~\ref{fig2} shows the results for anatase Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ with a nominal concentration of $x$=0.05.
The base lines are normalized to unity. The fluorescence spectrum indicates that the Co ions are indeed included in the sample.
However, the intensity of the 004 Bragg reflection does not exhibit any anomaly at the absorption edge of Co.
If the Co ions with $x=0.05$ substitute for Ti, an anomaly as demonstrated by the solid line is expected,
which is as large as about 5\% of the anomaly actually observed at the $K$-edge of Ti as shown in the inset.
These results mean that the doped Co ions are not located exactly on the Ti sites, although the Co ions indeed exist in the sample.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7.5cm]{Fig2.eps}
\end{center}
\caption{(a) Fluorescence spectrum of anatase Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ for $x=0.05$.
(b) X-ray energy dependence of the intensity of the 004 Bragg reflection. Solid line represents a calculated curve assuming
random substitution of Ti with 5\% Co ions. Dashed line represents a simulation considering local deformations as described in the text.
Inset shows the result around the $K$-edge of Ti.
}
\label{fig2}
\end{figure}
Figure~\ref{fig3} shows the results for rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ with nominal concentrations of $x=0.05$ and 0.1,
grown under an oxygen partial pressure of 10$^{-7}$ Torr.
The fluorescence spectra show that the Co ions indeed exist in the samples with actual concentrations proportional to the nominal value.
However, as in anatase, the intensities of the 202 Bragg reflection do not exhibit any anomaly at the absorption edge,
even in the high concentration sample of $x=0.1$.
These results again mean that the doped Co ions are not exactly on the Ti sites, although the Co ions indeed exist in the film.
Since we expect an anomaly as large as the one demonstrated by the solid line in Fig.~\ref{fig3}, the actual amount of substitution,
if any, is estimated to be much less than 1\% for both $x=0.05$ and 0.1.
The measurements on other reflections, e.g., 103 for anatase and 101 for rutile, and also on a few other samples,
did not exhibit any anomaly.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7.5cm]{Fig3.eps}
\end{center}
\caption{(a) Fluorescence spectra of rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ for $x=0.05$ and $x=0.1$.
(b),(c) X-ray energy dependence of the intensity of the 202 Bragg reflection for $x=0.05$ and $x=0.1$.
Solid line represents a calculated curve assuming random substitution of Ti with 5\% Co ions. Dashed line represents a
simulation considering local deformations as described in the text.
}
\label{fig3}
\end{figure}
\section{Discussion}
The present experimental results unambiguously show that the atomic scattering factor of Co is not included in the
structure factor of either anatase or rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$.
In other words, the Co ions, on average, occupy neither the Ti site nor any other specific crystallographic site in the TiO$_2$ unit cell;
therefore, coherent interference among the x rays scattered from the randomly distributed Co ions is prevented.
In contrast, from spectroscopic measurements such as XAFS, XPS and XMCD, it is concluded that the local environment of Co is
close to that of oxygen octahedron.\cite{Chambers03,Shimizu04,Griffin05,Murakami04,Cui04,Quilty06,Mamiya06}
In addition, the strong correlation among Co concentration $x$, oxygen deficiency $\delta$, conductivity, ferromagnetism, AHE, and MCD investigated in
Refs.~\onlinecite{Toyosaki04,Toyosaki05a,Fukumura03,Yamada04,Ueno07} supports the view that the carriers are associated with ferromagnetism originating
from the randomly distributed Co ions.
Taking all these experimental results into consideration, we speculate that the doped Co ions exist in a locally deformed structure,
although they are randomly distributed in the sample without forming Co metal clusters.
When a Co$^{2+}$ ion is substituted for a Ti$^{4+}$ ion in TiO$_2$, an oxygen vacancy is necessarily created
to maintain charge neutrality. As a result, the number of oxygen ligands becomes less than six,\cite{Chambers03}
which would lead to a deformation of the local structure around Co and a deviation of Co from the exact Ti site, i.e., the
$2a$ site in rutile and the $4a$ site in anatase.
There is also a possibility that Co occupies interstitial sites among the oxygen octahedra.
In rutile TiO$_2$, in particular, interstitial occupation at positions such as ($\frac{1}{2}$, 0, 0), (0, $\frac{1}{2}$, 0), ($\frac{1}{2}$, 0, $\frac{1}{2}$),
and (0, $\frac{1}{2}$, $\frac{1}{2}$) leads to a structure similar to the Magneli phase, as illustrated in Fig.~\ref{fig4},
from which we may speculate that the interstitial site could also be stable for Co.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.5cm]{Fig4.eps}
\end{center}
\caption{(Color online) Crystal structure of (a) anatase-TiO$_2$, (b) rutile-TiO$_2$, and
(c) Ti$_4$O$_7$ Magneli phase.\cite{VICS-II}
}
\label{fig4}
\end{figure}
In order to examine whether such a deformation could suppress the $K$-edge anomaly of Co, we performed simulations assuming
random shifts of the Co ions from the Ti site or the interstitial site. The results are demonstrated by the dashed lines in Figs.~\ref{fig2} and \ref{fig3}.
The process of the simulation is as follows.
First, 5\% of Co is doped into a super cell of $10\times 10\times 10$ unit cells, randomly
substituting for the Ti sites in anatase and the interstitial sites in rutile, respectively.\cite{interstitial}
At this stage, the anomaly as demonstrated by the solid lines in Fig.~\ref{fig2} and \ref{fig3} appears because the Co ions occupy a
specific crystallographic site and Eq.~(\ref{eq:1}) is valid. This is the first kind of randomness.
Next, we consider the second kind of randomness; i.e., one oxygen vacancy is randomly created in the local octahedron at the Co site,
and the Co atom is shifted to the oxygen vacancy by 1~\AA\ in anatase and 0.6~\AA\ in rutile, respectively.
Then, we calculate $|F|^2$ for the super cell, which is shown by the dashed lines in the figures.
Although it is not our intention to claim that this is really the case, the second kind of randomness explains why the anomaly at the edge
becomes very weak; the absence of the anomaly in the 103 reflection of anatase and the 101 reflection of rutile can also be explained.
In this simulation, the weakened anomaly is associated with the sixfold increase in the number of possible Co sites resulting from the
second kind of randomness.
There should also be a tilting of the octahedron due to the vacancy, which would weaken the anomaly even more because
it further increases the possible positions and weakens the correlation among the Co sites; the relatively large shift values assumed above
can be reduced.
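The suppression mechanism can be sketched in a few lines of code: with only the first kind of randomness, the Co contribution to the 0 0 4 reflection stays fully coherent, while 1-\AA\ shifts toward a randomly chosen vacancy direction strongly reduce it. The toy geometry below (one-dimensional $z$ coordinates, $c\approx 9.51$~\AA\ for anatase, 200 Co ions, six equally likely shift directions) is a simplifying assumption for illustration, not the actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy supercell: Co ions placed on exact Ti-site z coordinates (multiples
# of c/4, as in anatase); c in angstrom is an assumed lattice constant.
c = 9.51
n_co = 200
z_ti = rng.integers(0, 40, n_co) * (c / 4.0)

kz = 2 * np.pi * 4 / c                     # z component of kappa for 0 0 4

def coherence(z):
    """Normalized coherent Co contribution |sum_j exp(i kz z_j)|^2 / N^2."""
    amp = np.exp(1j * kz * z).sum()
    return abs(amp) ** 2 / len(z) ** 2

on_site = coherence(z_ti)                  # first kind of randomness only

# Second kind of randomness: each Co shifts by 1 angstrom toward one of
# six vacancy directions; only the two +/-z choices change the 0 0 4 phase.
dz = rng.choice([0.0, 0.0, 0.0, 0.0, 1.0, -1.0], n_co)
off_site = coherence(z_ti + dz)
```

In this sketch the on-site configuration is fully coherent (coherence 1), while the shifted configuration retains only a small fraction of the coherent amplitude, mimicking the strongly weakened edge anomaly of the dashed curves.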
The spectroscopic measurements, which probe the local structural environment on average, may hide information on the kind of
deformation considered here.
In the case of Zn$_{1-x}$Co$_{x}$O, on the other hand, no oxygen vacancy is necessary upon doping because both Zn and Co are divalent;
little deformation therefore arises in the local structure, and the doped Co sits exactly on the Zn site, which is the prerequisite for Eq.~(\ref{eq:1}).
To search for a possible deformed structure or another phase that could contain Co, such as the Magneli phase,
we performed additional x-ray diffraction experiments using an imaging-plate Debye-Scherrer camera
installed at beamline 1B. However, no other phase was detected as a diffraction peak
with intensity higher than $\sim 0.2$\% of the strongest peak of the rutile or anatase structure of the TiO$_2$ film.
Therefore, the deformed structure, if it exists, is suspected to be short ranged and randomly oriented,
giving rise only to incoherent scattering.
Although the position and the local structure of Co is still uncertain, our experimental results support a theoretical investigation
that the oxygen vacancy near Co induces structural deformation and enhances the spin density
associated with the ferromagnetism.\cite{Weng04}
A recent theoretical model for high-$T_C$ ferromagnetism in oxide DMS is also based on the oxygen vacancy,
which causes impurity band exchange.\cite{Coey05}
Elucidation of the microscopic structure around Co and its relation with the mechanism of the ferromagnetism is strongly required.
Structural analysis by x-ray fluorescence holography, which determines the three dimensional atomic arrangement around the fluorescing atom,
could solve this problem.\cite{Hosokawa07}
\section{Conclusion}
By utilizing x-ray anomalous dispersion, we have directly examined whether the doped Co ions substitute for Ti
in anatase and rutile Ti$_{1-x}$Co$_{x}$O$_{2-\delta}$ in well-characterized thin-film samples exhibiting intrinsic high-$T_C$ ferromagnetism.
Although the intensity of the Bragg reflections should exhibit an anomaly at the $K$-edge of Co if the Co ions were randomly substituted
exactly on the Ti site, no anomaly was detected in the experiment, indicating that the Co ions are not located exactly on the Ti site.
However, the fluorescence spectra show that the Co ions do exist in the sample in some form;
XPS and XMCD spectra on identical samples support that the Co ions are randomly distributed and surrounded by oxygen.
These contrasting results suggest that the local structure around Co is strongly deformed, leading to a significant shift of Co from
the high symmetry positions of Ti sites or interstitials, probably because of the oxygen vacancy.
We have proposed a scenario for how the anomaly disappears as a result of local deformations.
On the other hand, in our paramagnetic Zn$_{1-x}$Co$_{x}$O thin film, the substitution of Co for the Zn sites has been verified.
These results may imply a significant role of lattice deformation for the high-$T_C$ ferromagnetism in oxide DMS's.
\begin{acknowledgments}
We wish to acknowledge the technical support of Y. Wakabayashi and H. Sawa during the experiments at the Photon Factory.
This work was supported by a Grant-in-Aid for Scientific Research on Priority Areas, ``Invention of anomalous quantum materials'',
from the Ministry of Education, Science, Sports and Culture of Japan.
T.F. is supported by NEDO, Industrial Research Grant Program (05A24020d).
\end{acknowledgments}
\section{Introduction}
The mechanisms that lead to the separation of boundary currents in the ocean are poorly understood. Numerical models can provide only a small contribution to a better understanding of boundary separation, since the representation of the coast line in today's ocean models differs in essential properties from the coast line of real-world oceans,
mainly due to coarse resolution, high viscosity, and the imprint of the grid structure used.
This paper investigates boundary separation in finite element models. Finite element discretization methods may well become a common choice for building future ocean models (see, for example, the model development in \cite{Danilov2004,Piggott2008,Comblen2009,Dueben2012}).
Many mechanisms might influence the position of the separation point of western boundary currents in the ocean, such as the Gulf Stream: a change of direction in the wind field, a potential vorticity crisis, an adverse pressure gradient, boundary conditions, a collision with another western boundary current, interactions with the deep western boundary current, the coast line geometry, the bottom topography, or eddy-topography interactions (see for example \cite{Stommel1948,Cessi1987,Chassignet1991,Ezer1992,Haidvogel1992,Oezgoekmen1997,Tansley2000,Munday2005,Chassignet2008} and the references therein).
While the Gulf Stream tends to overshoot the separation point of the real world in standard numerical ocean models, state-of-the-art high-resolution model simulations, with a grid resolution of $1/10^{\circ}$ or higher, obtain an improved representation of Gulf Stream separation \cite{Bryan2007, Chassignet2001}.
However, high-resolution does not guarantee a proper representation of the Gulf Stream, and the separation point remains sensitive to changes in the model setup, such as changes in viscosity parameterization \cite{Bryan2007}.
The choice of boundary conditions is also known to have a significant influence on the separation behavior \cite{Dengg1993,Haidvogel1992}.
The two discretization methods that are widely used in today's ocean models are the finite difference and the finite volume method. The finite difference method offers only a poor representation of the coast line. To introduce a coast line into a finite difference model, grid points on land are typically removed from a fixed grid. The structured longitude/latitude grids that are typically used allow only angles of 0 or 90 degrees between neighboring grid edges along the boundary.
This leads to a staircase pattern along the coast line. Furthermore, due to the staggering of the velocity components, the effective boundary conditions can depend on the angle between the coast line and the coordinate axes of the numerical grid (see \cite{Adcroft1997} for the analysis on an Arakawa C-grid and B-grid). Finite volume and finite element methods offer higher geometric flexibility. In finite volume methods, the velocity field is represented as a one-dimensional vector perpendicular to grid edges. In finite element methods, the velocity field is typically defined as a two-dimensional vector quantity all along the coast line. Both methods allow the use of boundary-conforming grid generators, in which the vertices at the boundary of a grid are aligned to the coast line.
Despite the improved coast line representation, a detailed analysis of the properties of boundary currents and boundary separation has not yet been carried out for finite element ocean models with realistic coast lines.
We study the numerical representation of western boundary currents in a finite element model and compare the results to finite difference simulations from the literature. The finite element model uses a discontinuous linear representation for velocity and a continuous second-order representation for height. It was developed particularly for use in atmosphere and ocean models and fulfills the Ladyzhenskaya-Babuska-Brezzi condition -- a necessary condition for convergence in finite element modeling -- while being able to represent the geostrophic balance \cite{Cotter2009, Cotter2009LBB}. We simulate the separation of steady western boundary currents from idealized coast lines, as well as from coast lines as used in ocean models. We vary the resolution, the eddy viscosity, the grid structure, the coast line, the alignment between the velocity components and the coast line, and the choice between no-slip and free-slip boundary conditions.
We evaluate the influence of these properties on boundary currents, and boundary separation. The test cases studied in this publication do not fundamentally differ from test setups used in publications such as \cite{Dengg1993}, \cite{Haidvogel1992}, or \cite{Oezgoekmen1997} for simulations with finite difference models with vorticity as prognostic quantity. The main differences are that we use a finite element model, and velocity and height as prognostic quantities.
In section two, we give a very short description of the model setup, including the shallow-water equations, the discretization in space and time, and grid refinement. In section three, we introduce the test cases and present the numerical results. In section four, we discuss the results.
\section{Model setup}
This section provides a brief introduction to the model used, including the shallow-water equations, the discretization in space and time, and the grids. A detailed description of the model setup can be found in \cite{Dueben2012}.
\subsection{The viscous shallow-water equations}
Our finite element model simulates the viscous shallow-water equations in non-conservative form
\begin{equation}
\label{bo_shallowu}
\partial_t \mathbf{u} + \mathbf{u} \cdot \nabla \mathbf{u} + f \mathbf{k} \times \mathbf{u} + g \nabla h - \frac{1}{H} \nabla \cdot \left( H \nu \nabla \mathbf{u} \right) = \frac{\boldsymbol{\tau}^s}{H } - \gamma_f \mathbf{u},
\end{equation}
\begin{equation}
\label{bo_shallowh}
\partial_t h + \nabla \cdot \left( H \mathbf{u} \right) = 0, \notag
\end{equation}
where $\mathbf{u}$ is the two dimensional velocity vector, $f$ is the Coriolis parameter, $\mathbf{k}$ is the vertical unit vector, $g$ is the gravitational acceleration, $\nu$ is the eddy viscosity, $\boldsymbol{\tau}^s$ is the surface wind forcing, $\gamma_f$ is the bottom friction coefficient, $h$ is the surface elevation and $H$ is the height of the fluid column given by $H=h-h_b$, where $h_b$ is the bathymetry. The prognostic variables are the surface elevation and the velocity.
The model can run with either free-slip ($\mathbf{u} \cdot \mathbf{n} = 0$ and $\partial_\mathbf{n} \mathbf{u} = 0$ on the boundary $\partial \Omega$) or no-slip boundary conditions ($\mathbf{u} = 0$ on $\partial \Omega$).
We apply a weak representation of the no-slip condition of zero tangential velocity. To this end, we add the penalty term $-\sigma \mathbf{u}$ to the right-hand side of equation \ref{bo_shallowu} for all velocities along the boundary, which pushes the tangential velocity along the boundary towards zero; $\sigma$ is a constant that needs to be adjusted experimentally.
All other boundary conditions are realized as strong boundary conditions by adjusting the corresponding numerical fluxes through the boundary.
\subsection{Discretization in space and time}
Following the typical finite element approach, we expand the physical fields into sets of basis functions $N_i$ and $M_i$
\begin{equation}
\mathbf{u} = \sum_{i=1}^{N_u} \mathbf{u}_i N_i \qquad \mbox{ and } \qquad h = \sum_{i=1}^{N_h} h_i M_i. \notag
\end{equation}
We use a $P_1^{DG}P_2$ finite element approach to discretize the equations. This means that we employ discontinuous linear Lagrange polynomials for the representation of the velocity field ($N_i$), and globally continuous quadratic Lagrange polynomials for the representation of the height field ($M_i$).
Each triangular cell has three degrees of freedom for each component of velocity located at the vertices of the cells, and six degrees of freedom for the height field located at the vertices and edges. While the degrees of freedom of the height field are shared with the surrounding cells, the degrees of freedom of the velocity field belong to a specific cell, which leads to a discontinuous representation.
Time integration is performed by an explicit three-level Adams-Bashforth method. The equation
\begin{equation}
\partial_t \boldsymbol{\psi} = R(\boldsymbol{\psi}), \notag
\end{equation}
where $R$ denotes the right-hand-side of the system, and $\boldsymbol{\psi}$ is the vector of prognostic variables, is discretized in time by
\begin{equation}
\boldsymbol{\psi}^{n+1} = \boldsymbol{\psi}^{n} + \Delta t \left( \frac{23}{12} R(\boldsymbol{\psi}^n) - \frac{4}{3} R(\boldsymbol{\psi}^{n-1}) + \frac{5}{12} R(\boldsymbol{\psi}^{n-2}) \right), \notag
\end{equation}
where $\boldsymbol{\psi}^i$ is the vector of state variables at the $i$-th time step.
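As a concrete illustration, a minimal Python version of this time-stepping scheme can be written as follows; the Euler start-up for the first two steps is our own assumption, since the paper does not specify how the scheme is bootstrapped.

```python
def ab3_integrate(rhs, psi0, dt, n_steps):
    """Integrate d(psi)/dt = R(psi) with the three-level Adams-Bashforth
    scheme (coefficients 23/12, -4/3, 5/12); the first two steps are
    bootstrapped with forward Euler (an assumed start-up procedure)."""
    psi = [float(psi0)]
    for _ in range(min(2, n_steps)):            # Euler start-up steps
        psi.append(psi[-1] + dt * rhs(psi[-1]))
    for _ in range(2, n_steps):                 # AB3 steps
        psi.append(psi[-1] + dt * (23 / 12 * rhs(psi[-1])
                                   - 4 / 3 * rhs(psi[-2])
                                   + 5 / 12 * rhs(psi[-3])))
    return psi[-1]

# Test problem: d(psi)/dt = -psi, psi(0) = 1, exact solution exp(-t)
y = ab3_integrate(lambda p: -p, 1.0, dt=1e-3, n_steps=1000)
```

Being explicit and three-level, the scheme is cheap per step but requires storing two previous right-hand-side evaluations and obeys a CFL-type stability restriction on $\Delta t$.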
\subsection{Grids and grid refinement}
We use two types of standard grids on which refinement is performed. The first type of grids are structured triangular grids that provide a uniform coverage of the longitude/latitude space. The grids are derived from rectangular grids by bisecting each rectangle into two triangles.
The second type of grids are icosahedral geodesic grids that provide a quasi-uniform coverage of the sphere \cite{Baumgardner1985}. We use static h-refinement to refine the area of interest around the coast line. In h-refinement, new grid points are introduced into the grid to increase the model resolution in regions of specific interest. The influence of grid refinement on the model solution is investigated in \cite{Dueben2013}.
\section{Numerical tests and results}
\label{bo_testcases}
In this section, we present the numerical results for two test cases. We start with an idealized case with straight boundaries, simulating a steady wind-driven ocean gyre with a western boundary current that separates at the corner of an obstacle.
In the second test, we study a more realistic setup to investigate coast lines as used in ocean models, simulating a steady wind-driven circulation in an Atlantic-shaped basin.
\subsection{Ocean gyre with idealized coast lines}
\label{bo_Dengg_TC}
We study an ocean gyre setup in the northern hemisphere. The gyre is driven by a wind forcing in the clockwise direction. Due to the change of the Coriolis parameter in the meridional direction, the gyre is intensified towards the western boundary, and a western boundary current develops \cite{Stommel1948,Pedlosky1996}.
The current separates from the coast at the edge of a rectangular obstacle. The setup is chosen to be as close as possible to that used in \cite{Dengg1993}, in which Dengg investigated boundary separation in a model simulating the barotropic vorticity equation.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4 \textwidth, angle=270]{./grid_dengg.ps}
\caption{Vertices of the grid with two refinement levels used for the idealized coast line test. The red line marks the coast line.}
\label{bo_fig:dengg_grid}
\end{figure}
We perform model runs on a triangular grid that is structured in longitude/latitude space. We use refinement to increase the resolution in the vicinity of the boundary (see Figure \ref{bo_fig:dengg_grid}). A grid edge has a length of about $1.6^\circ$ in the coarsest and $0.4^\circ$ in the finest part of the grid; this corresponds to about 172 and 43 km at the southern boundary of the domain.
The meridional wind forcing is zero; the zonal wind forcing is set to
\begin{equation}
\tau^s_{\lambda} = \tau_0 \cdot 10^{-3} \cdot \cos \left( \frac{\pi \left(\theta - 15^\circ \right)}{40^\circ} \right) , \notag
\end{equation}
where $\theta$ is the latitude. The bottom friction coefficient $\gamma_f$ is set to $10^{-6} \; s^{-1}$, and $\tau_0$ is $0.28 \; m^2 s^{-2} $. The height field is initialized with a constant water depth of $1000 \; m$; the initial velocity is set to zero.
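As an illustrative sketch (not part of the model code; the function name and the assumption that $\theta$ is given in degrees are ours), the forcing profile above can be evaluated as:

```python
import math

def zonal_wind_stress(theta_deg, tau0=0.28):
    """Zonal wind stress (m^2 s^-2) of the idealized gyre test.

    theta_deg: latitude in degrees. The profile and tau0 follow the
    text: tau0 * 1e-3 * cos(pi * (theta - 15) / 40).
    """
    return tau0 * 1e-3 * math.cos(math.pi * (theta_deg - 15.0) / 40.0)
```

The stress peaks at $\theta = 15^\circ$ with a value of $2.8\cdot 10^{-4}\; m^2 s^{-2}$ and vanishes at $\theta = 35^\circ$.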
\begin{figure}[ht!]
\centering
\includegraphics[width=0.6 \textwidth, angle=90]{./height_resolution}
\caption{Equilibrium height field of the idealized coast line test. While the runs $\mathit{a}$, $\mathit{b}$, $\mathit{c}$, and $\mathit{d}$ were simulated with $\nu = 3000.0 \; m^2 s^{-1}$, runs $\mathit{e}$ and $\mathit{f}$ were simulated with $\nu = 10000.0 \; m^2 s^{-1}$.
The simulations are performed on the grid plotted in Figure \ref{bo_fig:dengg_grid} either without refinement ($\mathit{e}$), with one refinement level ($\mathit{a}$ and $\mathit{c}$), or with two refinement levels ($\mathit{b}$, $\mathit{d}$, and $\mathit{f}$).}
\label{bo_fig:height_resolution}
\end{figure}
Figure \ref{bo_fig:height_resolution} shows the equilibrated height field. For all tests, the Munk layer at the western boundary is represented smoothly.
The simulations in \cite{Dengg1993} show no boundary separation when free-slip boundary conditions are used. Nevertheless, in the present simulations the boundary flows separate for both free-slip and no-slip boundary conditions. The equilibrated fields have a different shape for the two boundary conditions (compare $\mathit{b}$ and $\mathit{d}$). While resolution does not play an important role for boundary separation (compare $\mathit{a}$ with $\mathit{b}$, $\mathit{c}$ with $\mathit{d}$, and $\mathit{e}$ with $\mathit{f}$), changes in viscosity have a strong impact (compare $\mathit{d}$ and $\mathit{f}$). The simulated flows have Reynolds numbers between 10 and 100 along the boundary. The results are qualitatively the same when simulations are performed with stronger wind forcing, and therefore with higher Reynolds numbers. We do not show these results, since the resulting equilibrium fields are unsteady and much more difficult to compare.
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0 \textwidth, angle=0]{./height_changegrid}
\caption{Equilibrium height field of the idealized coast line model runs with no-slip (top row) and free-slip (bottom row) boundary conditions on different grids. Results should be compared with the model runs $\mathit{a}$ and $\mathit{c}$ in Figure \ref{bo_fig:height_resolution}.}
\label{bo_fig:height_changegrid}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.4 \textwidth, angle=270]{./grid_dengg_icon}
\caption{Vertices of the refined grid built from an icosahedral grid, used in model run $\mathsf{a}$ in Figure \ref{bo_fig:height_changegrid}.}
\label{bo_fig:grid_icosahedron}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.2 \textwidth, angle=0]{./triangle1} $\qquad$ \includegraphics[width=0.2 \textwidth, angle=0]{./triangle2} $\qquad$ \includegraphics[width=0.37 \textwidth, angle=0]{./triangle3}
\caption{Structure of the triangular grids used for the standard runs (left), simulations $\mathsf{c}$ and $\mathsf{g}$ (middle), and simulations $\mathsf{d}$ and $\mathsf{h}$ in Figure \ref{bo_fig:height_changegrid} (right), with indicated coast line at the western boundary.}
\label{bo_fig:changegrid_grids}
\end{figure}
The simulations in Figure \ref{bo_fig:height_changegrid} are performed with the same setup as the previous idealized coast line tests $\mathit{a}$ and $\mathit{c}$ in Figure \ref{bo_fig:height_resolution}, with $\nu = 3000 \; m^2 s^{-1}$ and $\tau_0 = 0.28 \; m^2 s^{-2}$; however, different grids were used. Simulations $\mathsf{a}$ and $\mathsf{e}$ were performed on a one-level refined icosahedral grid (see Figure \ref{bo_fig:grid_icosahedron}), on which the coast line is represented by an unstructured pattern.
The simulations $\mathsf{b}$ and $\mathsf{f}$ were performed on a grid with a shifted longitude of the model domain compared to the standard grid; a change of the longitude changes the alignment of the two velocity components $u$ and $v$ with the coast line. In model runs $\mathsf{c}$, $\mathsf{d}$, $\mathsf{g}$ and $\mathsf{h}$, the arrangement of the triangles in the structured grid was changed (Figure \ref{bo_fig:changegrid_grids}). The change of the grid structure leads to a zig-zag pattern of the coast line in model runs $\mathsf{d}$ and $\mathsf{h}$.
While all simulations with no-slip boundary conditions result in a fairly similar gyre structure, this is different for the free-slip simulations. The simulations with free-slip boundary conditions and unstructured or zig-zag meridional coast line ($\mathsf{e}$ and $\mathsf{h}$) appear to be similar to the no-slip simulations.
A likely reason for this similarity is a shift of the boundary flow into the interior of the domain that is caused by the boundary conditions in the no-slip case and by the abrupt changes of the direction of the coast line in simulations $\mathsf{e}$ and $\mathsf{h}$.
A similar behavior can be found in ocean models based on an Arakawa C-grid finite difference scheme, which effectively change the boundary conditions from free-slip to no-slip when zig-zag coast lines are considered (see \cite{Adcroft1997}), but the mechanism is different: in the finite element model, the representation of the boundary conditions stays the same when the alignment of the velocity components with the boundary changes. This is confirmed by simulations $\mathsf{b}$ and $\mathsf{f}$, which show almost identical flow patterns compared to simulations $\mathit{a}$ and $\mathit{c}$ in Figure \ref{bo_fig:height_resolution}.
While the separation behavior in the free-slip simulation $\mathsf{g}$ in Figure \ref{bo_fig:height_changegrid} differs from the reference simulation $\mathit{a}$ in Figure \ref{bo_fig:height_resolution}, the separation behavior of the no-slip simulation $\mathsf{c}$ is very similar to the reference simulation. The slight change of the coast line is able to change the solution of the free-slip simulation, while this is not true for the no-slip simulation.
\subsection{Irregular coast lines - An Atlantic shaped ocean domain}
\label{bo_Atlantic_TC}
In this section, we investigate a more realistic ocean modeling application. We simulate an ocean basin that is shaped like the Atlantic ocean and offers a realistic representation of the coast line. The domain is cut at the equator and at 58$^{\circ}$ North. We simulate on real-world topography, but the water depth is capped at 1000 m. An artificial wind forcing that is balanced by bottom friction induces a steady circulation. The numerical grid used is plotted in Figure \ref{bo_fig:atlantic_grid}. The grid is refined at the western boundary and has a typical edge length of 120 km in the coarse and 60 km in the fine part of the grid. In the refined area along the coast line, there are always two neighboring grid edges that are aligned with each other.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.36 \textwidth, angle=270]{./grid_topo.ps}
\caption{Vertices of the refined grid used for the Atlantic test. The red line marks the coast line.}
\label{bo_fig:atlantic_grid}
\end{figure}
Simulations are initialized with zero surface elevation and zero velocity. The zonal wind forcing is given by
\begin{align}
\tau^s_{\lambda} = \begin{cases}
-\tau_0 \cdot 10^{-3} \cdot \cos \left( 4 \cdot \theta \right) \qquad \qquad&\text{if} \qquad \theta < 45^{\circ} \\
0 \qquad &\text{if} \qquad \theta \ge 45^{\circ}, \notag
\end{cases}
\end{align}
and the meridional wind forcing is zero. The bottom friction coefficient $\gamma_f$ is set to $10^{-6} \; s^{-1}$, and $\tau_0$ is $3.0 \; m^2 s^{-2}$.
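As before, a small sketch of the piecewise forcing profile (our own transcription; $\theta$ is assumed to be in degrees, so the forcing goes to zero for $\theta \ge 45^{\circ}$):

```python
import math

def zonal_wind_stress_atlantic(theta_deg, tau0=3.0):
    """Zonal wind stress (m^2 s^-2) for the Atlantic-shaped basin:
    -tau0 * 1e-3 * cos(4 * theta) for theta < 45 degrees, else 0."""
    if theta_deg >= 45.0:
        return 0.0
    return -tau0 * 1e-3 * math.cos(math.radians(4.0 * theta_deg))
```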
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9 \textwidth, angle=90]{./real_coast_height}
\caption{Equilibrium height field in the Atlantic model runs, after 140 days. While runs $\mathit{a}$ and $\mathit{b}$ were simulated with $\nu = 6655.0 \; m^2 s^{-1}$, runs $\mathit{c}$ and $\mathit{d}$ were simulated with $\nu = 53240.0 \; m^2 s^{-1}$. }
\label{bo_fig:atlantik}
\end{figure}
Figure \ref{bo_fig:atlantik} shows the equilibrium height field of the performed model runs.
The model runs $\mathit{a}$, $\mathit{b}$ and $\mathit{c}$ are performed on the grid plotted in Figure \ref{bo_fig:atlantic_grid}. Model run $\mathit{d}$ uses the same grid without the refinement at the western boundary. As for the simulation with an unstructured representation of the idealized coast line ($\mathsf{a}$ in Figure \ref{bo_fig:height_changegrid}), the unstructured realistic coast line seems to render the choice of boundary condition a minor factor -- the height field along the western boundary differs much more between different values of the eddy viscosity (compare $\mathit{a}$ with $\mathit{c}$) than between no-slip and free-slip boundary conditions (compare $\mathit{a}$ and $\mathit{b}$). A change in resolution leads to minor changes (compare $\mathit{c}$ and $\mathit{d}$), although resolution limits the smallest possible value of the eddy viscosity (see also \cite{Dueben2013}).
\section{Discussion of the results}
Although finite element methods provide an improved coast line representation compared to finite difference methods, our investigations show that the representation of the coast line and the boundary conditions is still not satisfactory. Changes of the grid structure can lead to changes of the separation behavior (see Figure \ref{bo_fig:height_changegrid}).
Our tests on the influence of resolution and eddy viscosity show that steady western boundary currents are not strongly affected by changes in resolution, as long as the Munk layer is resolved properly. However, a higher resolution allows the use of a smaller eddy viscosity, which can change the model results significantly. To this end, grid refinement can be used to increase the local resolution in the Munk layer.
The model results change strongly between free-slip and no-slip boundary conditions when idealized straight coast lines are simulated (subsection \ref{bo_Dengg_TC}). This is discussed extensively in the literature (see \cite{Dengg1993} as one example). On the other hand, the model results change only slightly with the boundary conditions for coast lines as used in ocean models (subsection \ref{bo_Atlantic_TC}).
In simulations with free-slip boundaries and zig-zag or unstructured coast lines, the flow is shifted towards the interior of the domain, due to the rapid changes of the direction of the coast line. The results look similar to no-slip model runs, in which the flow is shifted into the interior of the domain via the boundary conditions (subsection \ref{bo_Dengg_TC}).
In contrast to finite difference models with the vorticity as prognostic quantity \cite{Dengg1993}, we obtain separation for free-slip boundary conditions, using a finite element model with velocity and height as prognostic quantities. We do obtain premature separation for no-slip, but not for free-slip boundary conditions (subsection \ref{bo_Dengg_TC}). This result is consistent with results of finite difference models.
Although finite element methods offer an improved coast line representation compared to finite difference methods, the representation of boundary flows remains dependent on the pattern of the coast line, which is -- for today's ocean models -- strongly resolution dependent, and not satisfactory. Small changes of the grid structure can lead to changes in the separation behavior.
\subsection*{Acknowledgments}
We thank David Marshall for a useful revision of a previous version of this paper.
\bibliographystyle{alpha}
\section{Introduction}
The essential problem in statistics is to bound the probability of a surprising observation, under a \emph{null hypothesis} that observations are being drawn from some unbiased probability distribution. This calculation can fail to be straightforward for a number of reasons. On the one hand, defining the way in which the outcome is surprising requires care; for example, intricate techniques have been developed to allow sophisticated analysis of cases where multiple hypotheses are being tested. On the other hand, the correct choice of the unbiased distribution implied by the null hypothesis is often not immediately clear; classical tools like the $t$-test are often applied by making simplifying assumptions about the distribution in such cases. If the distribution is well-defined but not amenable to mathematical analysis, a $p$-value can still be calculated using bootstrapping, provided that test samples can be drawn from the distribution.
A third way for $p$-value calculations to be nontrivial occurs when the observation is surprising in a simple way, the null hypothesis distribution is known, but where there is no simple algorithm to draw samples from this distribution. In these cases, the best candidate method to sample from the null hypothesis is often through a \emph{Markov chain}, which essentially takes a long random walk on the possible values of the distribution. Under suitable conditions, theorems are available which guarantee that the chain converges to its \emph{stationary distribution}, allowing a random sample to be drawn from a distribution quantifiably close to the target distribution. This principle has given rise to diverse applications of Markov chains, including to simulations of chemical reactions, to Markov chain Monte Carlo statistical methods, to protein folding, and to statistical physics models.
A persistent problem in applications of Markov chains is the often unknown \emph{rate} at which the chain converges to the stationary distribution\cite{gelman1992inference,Gelman92asingle}. It is rare to have rigorous results on the mixing time of a real-world Markov chain, which means that in practice, sampling is performed by running a Markov chain for a ``long time'', and hoping that sufficient mixing has occurred. In some applications, such as in simulations of the Potts model from statistical physics, practitioners have developed modified Markov chains in the hopes of achieving faster convergence \cite{PhysRevLett.58.86}, but such algorithms have still been demonstrated to have exponential mixing times in many settings \cite{borgs2012tight,cooper1999mixing,gore1999swendsen}.
In this paper, we are concerned with the problem of assessing statistical significance in a Markov chain without requiring results on the mixing time of the chain, or, indeed, any special structure at all in the chain beyond reversibility. Formally, we consider a reversible Markov chain $\mathcal{M}$ on a state space $\Sigma$, which has an associated label function $\omega:\Sigma\to \mathbb{R}$. (The definition of Markov chain is recalled at the end of this section.) The labels constitute auxiliary information, and are not assumed to have any relationship to the transition probabilities of $\mathcal{M}$. We would like to demonstrate that a presented state $\sigma_0$ is unusual for states drawn from a stationary distribution $\pi$.
If we have good bounds on the mixing time of $\mathcal{M}$, then we can simply sample from the distribution of $\omega(\pi)$, and use bootstrapping to obtain a rigorous $p$-value for the significance of the smallness of the label of $\sigma_0$. Such bounds are rarely available, however.
We propose the following simple and rigorous test to detect that $\sigma_0$ is unusual relative to states chosen randomly according to $\pi$, which does not require bounds on the mixing rate of $\mathcal{M}$:
\begin{center}
\framebox{\parbox{.96\linewidth}{\textbf{The $\sqrt{\varepsilon}$ test:} Observe a trajectory $\sigma_0,\sigma_1,\sigma_2\dots,\sigma_k$ from the state $\sigma_0$, for any fixed $k$. The event that $\omega(\sigma_0)$ is an $\varepsilon$-outlier among $\omega(\sigma_0),\dots, \omega(\sigma_k)$ is significant at $p=\sqrt {2\varepsilon}$, under the null-hypothesis that $\sigma_0\sim \pi$.}}
\end{center}
Here, we say that a real number $\alpha_0$ is an \emph{$\varepsilon$-outlier} among $\alpha_0,\alpha_1,\dots,\alpha_k$ if there are at most $\varepsilon(k+1)$ indices $i$ for which $\alpha_i\leq \alpha_0$. In particular, note that for the $\sqrt \varepsilon$ test, the only relevant feature of the label function is the ranking it imposes on the elements of $\Sigma$. In the Supplement, we consider the statistical power of the test and show that the relationship $p\approx \sqrt{\varepsilon}$ is best possible. We leave as an open question whether the constant $\sqrt 2$ can be improved.
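Since the test depends only on the ranking of the observed labels, it can be sketched in a few lines of Python (the function name is ours):

```python
import math

def sqrt_eps_p_value(labels):
    """Given the labels omega(sigma_0), ..., omega(sigma_k) observed
    along a trajectory started at sigma_0, return the p-value
    sqrt(2 * eps), where eps is the smallest value for which
    labels[0] is an eps-outlier: the fraction of indices i
    (including i = 0) with labels[i] <= labels[0]."""
    num_leq = sum(1 for a in labels if a <= labels[0])
    eps = num_leq / len(labels)
    return math.sqrt(2.0 * eps)
```

For instance, if the initial label is the smallest among $k+1=10$ observed labels, then $\varepsilon = 0.1$ and the test reports significance at $p = \sqrt{0.2} \approx 0.45$; longer trajectories are needed for small $p$-values.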
Roughly speaking, this kind of test is possible because a reversible Markov chain cannot have many \emph{local outliers} (Figure \ref{f.simplechain}). Rigorously, the validity of the test is a consequence of the following theorem.
\begin{theorem}\label{t.gtest}
Let $\mathcal{M}=X_0,X_1,\dots$ be a reversible Markov chain with a stationary distribution $\pi$, and suppose the states of $\mathcal{M}$ have real-valued labels. If $X_0\sim \pi$, then for any fixed $k$, the probability that the label of $X_0$ is an $\varepsilon$-outlier from among the list of labels observed in the trajectory $X_0,X_1,X_2,\dots,X_k$ is at most $\sqrt{2\varepsilon}$.
\end{theorem}
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{outlierSchematic-crop.pdf}
\caption{This schematic illustrates a region of a potentially much larger Markov chain with a very simple structure; from each state seen here, a jump is made with equal probabilities to each of the 4 neighboring states. Colors from green to pink represent labels from small to large. It is impossible to know from this local region alone whether the highlighted green state has unusually small label in this chain overall. But to an unusual degree, this state is a \emph{local outlier}. The $\sqrt \varepsilon$ test is based on the fact that \emph{no} reversible Markov chain can have too many local outliers.\label{f.simplechain}}
\end{figure}
We emphasize that Theorem \ref{t.gtest} makes no assumptions on the structure of the Markov chain beyond reversibility. In particular, it applies even if the chain is not irreducible (in other words, even if the state space is not connected), even though in this case the chain will never mix.
In Section \ref{s.political} we apply the test to Markov chains generating random political districtings, for which no results on rapid mixing exist. In particular, we show that for various simple choices of constraints on what constitutes a ``valid'' Congressional districting (e.g., that the districts are contiguous, and satisfy certain geometric constraints), the current Congressional districting of Pennsylvania is significantly biased, under the null hypothesis of a districting chosen at random from the set of valid districtings. (We obtain $p$-values between $\approx 2.5\cdot 10^{-4}$ and $\approx 8.1\cdot 10^{-7}$ for the constraints we considered.)
One hypothetical application of the $\sqrt \varepsilon$ test is the possibility of rigorously demonstrating that a chain is not mixed. In particular, suppose that Research Group 1 has run a reversible Markov chain for $n_1$ steps, and believes that this was sufficient to mix the chain. Research Group 2 runs the chain for a further $n_2$ steps, producing a trajectory of total length $n_1+n_2$, and notices that a property of interest changes in these $n_2$ further steps. Heuristically, this suggests that $n_1$ steps was not sufficient to mix the chain, and the $\sqrt \varepsilon$ test quantifies this reasoning rigorously. For this application, however, we must allow $X_0$ to be distributed not exactly as the stationary distribution $\pi$, but as some distribution $\pi'$ whose total variation distance to $\pi$ is small, as this is the scenario for a ``mixed'' Markov chain. In the Supplement, we give a version of Theorem \ref{t.gtest} which applies in this scenario.
One area of research related to the present manuscript concerns methods for \emph{perfect sampling} from Markov chains. Beginning with the Coupling From The Past (CFTP) algorithm of Propp and Wilson\cite{CFTP,CFTPguide} and several extensions\cite{Fill,Huber}, these techniques are designed to allow sampling of states \emph{exactly} from the stationary distribution $\pi$, without having rigorous bounds on the mixing time of the chain. Compared with the $\sqrt \varepsilon$ test, perfect sampling techniques have the disadvantage that they require the Markov chain to possess certain structure for the method to be implementable, and that the time it takes to generate each perfect sample is unbounded. Moreover, although perfect sampling methods do not require rigorous bounds on mixing times to work, they will not run efficiently on a slowly mixing chain. The point is that for a chain which has the right structure, and which actually mixes quickly (in spite of the absence of a rigorous bound on the mixing time), algorithms like CFTP can be used to rigorously generate perfect samples. On the other hand, the $\sqrt \varepsilon$ test applies to \emph{any} reversible Markov chain, regardless of structure, and has running time $k$ chosen by the user. Importantly, it is quite possible that the test can detect bias in a sample even when $k$ is much smaller than the mixing time of the chain, as seems to be the case in the districting example discussed in Section \ref{s.political}. Of course, unlike perfect sampling methods, the $\sqrt \varepsilon$ test can only be used to demonstrate that a given sample is not chosen from $\pi$; it does not give a way of generating samples from $\pi$.
\section{Definitions}
We remind the reader that a Markov chain is a discrete time random process; at each step, the chain jumps to a new state, which only depends on the previous state. Formally, a Markov chain $\mathcal{M}$ on a state space $\Sigma$ is a sequence $\mathcal{M}=X_0,X_1,X_2,\dots$ of random variables taking values in $\Sigma$ (which correspond to states which may be occupied at each step) such that for any $\sigma,\sigma_0,\dots,\sigma_{n-1}\in \Sigma$,
\begin{multline*}
\Pr(X_n=\sigma|X_0=\sigma_0,X_1=\sigma_1,\dots,X_{n-1}=\sigma_{n-1})\\=\Pr(X_1=\sigma|X_{0}=\sigma_{n-1}).
\end{multline*}
Note that a Markov chain is completely described by the distribution of $X_0$ and the transition probabilities $\Pr(X_1=\sigma_1|X_0=\sigma_0)$ for all pairs $\sigma_0,\sigma_1\in \Sigma$. Terminology is often abused, so that the \emph{Markov chain} refers only to the ensemble of transition probabilities, regardless of the choice of distribution for $X_0$.
With this abuse of terminology, a \emph{stationary distribution} for the Markov chain is a distribution $\pi$ such that $X_0\sim \pi$ implies that $X_1\sim \pi$, and therefore that $X_i\sim \pi$ for all $i$. When the distribution of $X_0$ is a stationary distribution, the Markov chain $X_0,X_1,\dots$ is said to be \emph{stationary}. A stationary chain is said to be \emph{reversible} if for all $i,k$, the sequence of random variables $(X_i,X_{i+1},\dots,X_{i+k})$ is identical in distribution to the sequence $(X_{i+k},X_{i+k-1},\dots,X_{i})$. Finally, a chain is \emph{reducible} if there is a pair of states $\sigma_0,\sigma_1$ such that $\sigma_1$ is inaccessible from $\sigma_0$ via legal transitions, and \emph{irreducible} otherwise.
A simple example of a Markov chain is a random walk on a directed graph, beginning from an initial vertex $X_0$ chosen from some distribution. Here $\Sigma$ is the vertex-set of the directed graph. If we are allowed to label the directed edges with positive reals and the probability of traveling along an arc is proportional to the label of the arc (among those leaving the present vertex), then any Markov chain has such a representation, as the transition probability $\Pr(X_1=\sigma_1|X_0=\sigma_0)$ can be taken as the label of the arc from $\sigma_0$ to $\sigma_1$. Finally, if the graph is undirected, the corresponding Markov chain is reversible.
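As a toy illustration of the last point (our own example, not from the text): for a random walk on an undirected weighted graph, the distribution $\pi(x)$ proportional to the total edge weight at $x$ is stationary, and the detailed balance condition $\pi(x)P(x,y)=\pi(y)P(y,x)$, which characterizes reversibility of a stationary chain, can be checked directly:

```python
# Symmetric edge weights of a small undirected graph on nodes 0, 1, 2.
weights = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 3.0}

def w(x, y):
    """Weight of the undirected edge {x, y} (0 if absent)."""
    return weights.get((x, y)) or weights.get((y, x)) or 0.0

nodes = [0, 1, 2]
deg = {x: sum(w(x, y) for y in nodes) for x in nodes}   # total weight at x
total = sum(deg.values())
pi = {x: deg[x] / total for x in nodes}                 # stationary distribution
P = {(x, y): w(x, y) / deg[x] for x in nodes for y in nodes}  # transitions

# Detailed balance: pi(x) P(x, y) == pi(y) P(y, x) for all pairs.
balanced = all(abs(pi[x] * P[(x, y)] - pi[y] * P[(y, x)]) < 1e-12
               for x in nodes for y in nodes)
```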
\section{Detecting bias in political districting}
\label{s.political}
A central feature of American democracy is the selection of Congressional districts in which local elections are held to directly elect national representatives. Since a separate election is held in each district, the proportions of party affiliations of the slate of representatives elected in a state does not always match the proportions of statewide votes cast for each party. In practice, large deviations from this seemingly desirable target do occur.
Various tests have been proposed to detect \emph{gerrymandering} of districtings, in which a districting is drawn in such a way as to bias the resulting slate of representatives towards one party; this can be accomplished by concentrating voters of the unfavored party in a few districts. One class of methods to detect gerrymandering concerns heuristic `smell tests' which judge whether a districting seems generally reasonable in its statistical properties (see, e.g., \cite{Wangthree,nagle}). For example, such tests may frown upon districtings in which the difference between the mean and median vote on a district-by-district basis is unusually large \cite{McBest}.
The simplest statistical smell test, of course, is whether the party affiliation of the elected slate of representatives is close in proportion to the party affiliations of votes for representatives. Many states have failed this simple test spectacularly, such as in Pennsylvania, where in 2012, 48.77\% of votes were cast for Republican representatives and 50.20\% for Democrat representatives, in an election which resulted in a slate of 13 Republican representatives and 5 Democrat representatives.
Heuristic statistical tests such as these all suffer from lack of rigor, however, due to the fact that the statistical properties of `typical' districtings are not rigorously characterized. For example, it has been shown \cite{unintentional} that Democrats may be at a natural disadvantage when drawing electoral maps even when no bias is at play, because Democrat voters are often highly geographically concentrated in urban areas. Particularly problematic is that the degree of geographic clustering of partisans is highly variable from state to state: what looks like a gerrymandered districting in one state may be a natural consequence of geography in another.
Some work has been done in which the properties of a ``valid'' districting are defined (which may be required to have roughly equal populations among districts, have districts with reasonable boundaries, etc.) so that the characteristics of a given districting can be compared with what would be ``typical'' for a valid districting of the state in question, by using computers to generate random districtings \cite{carolina,minority}; see also \cite{McBest} for discussion. However, much of this work has relied on heuristic sampling procedures which do not have the property of selecting districtings with equal probability (and, more generally, whose distributions are not well-characterized), undermining rigorous statistical claims about the properties of typical districts.
In an attempt to establish a rigorous framework for this kind of approach, several groups \cite{fifield,cmu,duke} have used Markov chains to sample random valid districtings for the purpose of such comparisons.
Like many other applications of real-world Markov chains, however, these methods suffer from the completely unknown mixing time of the chains in question. Indeed, no work has even established that the Markov chains are irreducible (in the case of districtings, this means that any valid districting can be reached from any other by a legal sequence of steps), even if valid districtings were only required to consist of contiguous districts of roughly equal populations. And, indeed, for very restrictive notions of what constitutes a valid districting, irreducibility certainly fails.
\smallskip
\begin{figure*}
\hspace{\stretch{1}}
\includegraphics[width=.45\linewidth]{start.png}
\hspace{\stretch{1}}
\includegraphics[width=.45\linewidth]{gerry_P125.png}
\hspace{\stretch{1}}
\caption{\label{f.globaldistrictings}\textbf{Left:} The current districting of Pennsylvania. \textbf{Right:} A districting produced by the Markov chain after $2^{40}$ steps. (Detailed parameters for this run are given in the supplement.)}
\end{figure*}
As a straightforward application of the $\sqrt \varepsilon$ test, we can achieve rigorous $p$-values in Markov models of political districtings in spite of the lack of bounds on mixing times of the chains. In particular, for all choices of the constraints on valid districtings we tested, the $\sqrt \varepsilon$ test showed that the current Congressional districting of Pennsylvania is an outlier at significance thresholds ranging from $p\approx 2.5\cdot 10^{-4}$ to $p\approx 8.1\cdot 10^{-7}$. Detailed results of these runs are in the Supplement.
A key advantage of the Markov chain approach to gerrymandering is that it rests on a rigorous framework; namely, comparing the actual districting of a state with typical (i.e., random) districtings from a well-defined set of valid districtings. The rigor of the approach thus depends on the availability of a precise definition of what constitutes a valid districting; in principle and in practice, this is a thorny legal question. While some work on Markov chains for redistricting (in particular, \cite{duke}) has aimed to account for complex constraints on valid districtings, our main goal in the present manuscript is to illustrate the application of the $\sqrt \varepsilon$ test. In particular, we have erred on the side of using relatively simple sets of constraints on valid districtings in our Markov chains, while checking that our significance results are not highly sensitive to the parameters that we use. On the other hand, our test immediately gives a way of putting work such as that in \cite{duke} on a rigorous statistical footing.
The full description of the Markov chain we use in the present work is given in the supplement, but its basic structure is as follows: Pennsylvania is divided into roughly 9000 Census blocks. (These blocks can be seen upon close inspection of Figure \ref{f.globaldistrictings}.) We define a division of these blocks into 18 districts to be a valid districting of Pennsylvania if districts differ in population by less than $2\%$, are contiguous, are simply connected (districts do not contain holes) and are ``compact'' in ways we discuss in the supplement; roughly, this final condition prohibits districts with extremely contorted structure. The state space of the Markov chain is the set of valid districtings of the state, and one step of the Markov chain consists of randomly swapping a block on the boundary of a district to a neighboring district, if the result is still a valid districting. As we discuss in the supplement, the chain is adjusted slightly to ensure that the uniform distribution on valid districtings is indeed a stationary distribution for the chain. Observe that this Markov chain has a potentially huge state space; if the only constraint on valid districtings was that the districts have roughly equal population, there would be $10^{10000}$ or so valid districtings. Although contiguity and especially compactness are severe restrictions which will decrease this number substantially, it seems difficult to compute effective upper bounds on the number of resulting valid districtings, and certainly, it is still enormous. Impressively, these considerations are all immaterial to our very general method.
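A highly simplified sketch of one proposed step of such a chain (all names are ours, and `is_valid` is a placeholder for the population/contiguity/compactness constraints described above; the additional adjustment needed for uniform stationarity is omitted):

```python
import random

def chain_step(districting, blocks, neighbors, is_valid, rng=random):
    """One proposed step: pick a random block; if it borders another
    district, try to move it there, keeping the old state if the
    result violates the validity constraints.

    districting: dict block -> district id
    neighbors:   dict block -> list of adjacent blocks
    is_valid:    predicate encoding the constraints (placeholder)
    """
    b = rng.choice(blocks)
    other = [districting[n] for n in neighbors[b]
             if districting[n] != districting[b]]
    if not other:                 # interior block: stay put (self-loop)
        return districting
    proposal = dict(districting)
    proposal[b] = rng.choice(other)
    return proposal if is_valid(proposal) else districting
```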
Applying the $\sqrt \varepsilon$ test involves the choice of a label function $\omega(\sigma)$, which assigns a real number to each districting. We have conducted runs using two label functions: $\omega_{\textrm{var}}$ is the (negative) variance of the proportion of Democrats in each district of the districting (as measured by 2012 presidential votes), and $\omega_{\textrm{MM}}$ is the difference between the median and mean of the proportions of Democrats in each district. $\omega_{\textrm{MM}}$ is motivated by the fact that this metric has a long history of use in gerrymandering, and is directly tied to the goals of gerrymandering, while the use of the variance is motivated by the fact that it can change quickly with small changes in districtings. These two choices are discussed further in the Supplement, but an important point is that our use of these label functions \textbf{is not} based on an assumption that small values of $\omega_{\textrm{var}}$ or of $\omega_{\textrm{MM}}$ directly imply gerrymandering. Instead, as Theorem \ref{t.gtest} is valid for any fixed label function, these labels are tools used to demonstrate significance, which are chosen because they are simple and natural functions on vectors which can be quickly computed, seem likely to be different for typical versus gerrymandered districtings, and have the potential to change relatively quickly with small changes in districtings. For the various notions of valid districtings we considered, the $\sqrt{\varepsilon}$ test demonstrated significance at $p$-values in the range $10^{-4}$ to $10^{-5}$ for the $\omega_{\textrm{MM}}$ label function, and in the range $10^{-4}$ to $10^{-7}$ for the $\omega_{\textrm{var}}$ label function.
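For concreteness, the two label functions can be computed as follows (a minimal sketch; `dem_shares` is a hypothetical list of district-level Democratic vote shares):

```python
from statistics import mean, median, pvariance

def omega_var(dem_shares):
    # negative (population) variance of the district Democratic vote shares
    return -pvariance(dem_shares)

def omega_mm(dem_shares):
    # median minus mean of the district Democratic vote shares
    return median(dem_shares) - mean(dem_shares)
```

Packing one district with Democrats while holding the statewide share fixed drives both labels down, which is why small values are plausible, though by themselves not conclusive, indicators of gerrymandering.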
As noted earlier, the $\sqrt \varepsilon$ test can easily be used with more complicated Markov chains which capture more intricate definitions of the set of valid districtings. For example, the current districting of Pennsylvania splits fewer rural counties than the districting on the right in Figure \ref{f.globaldistrictings}, and the number of county splits is one of many metrics for valid districtings considered by the Markov chains developed in \cite{duke}. Indeed, our test will be of particular value in cases where complex notions of what constitutes a valid districting slow the chain, making the heuristic mixing assumption particularly questionable. Regarding mixing time: even our chain with relatively weak constraints on the districtings (and very fast running time in implementation) appears to mix too slowly to sample $\pi$, even heuristically; in Figure \ref{f.globaldistrictings}, we see that several districts seem still to have not left their general position from the initial districting, even after $2^{40}$ steps.
On the same note, it should also be kept in mind that while our result gives a method to rigorously disprove that a given districting is unbiased---e.g., to show that the districting is unusual among districtings $X_0$ distributed according to the stationary distribution $\pi$---it does so \emph{without} giving a method to sample from the stationary distribution. In particular, our method cannot answer the question of how many seats Republicans and Democrats should have in a typical districting of Pennsylvania, because we still do not mix the chain. Instead, Theorem \ref{t.gtest} has given us a way to disprove $X_0\sim \pi$ without sampling $\pi$.
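Operationally, the test reduces to a rank computation. The sketch below (our own paraphrase; the precise outlier convention is the one in the statement of Theorem \ref{t.gtest}) takes the label of the presented districting together with the labels observed along a trajectory started from it, and returns the $\sqrt{2\varepsilon}$ bound on the $p$-value:

```python
import math

def sqrt_eps_bound(labels):
    # labels[0] is the label of the presented state X0; labels[1:] are the
    # labels of X1, ..., Xk along a trajectory of the reversible chain.
    k_plus_1 = len(labels)
    rank = sum(1 for v in labels if v <= labels[0])  # X0 is an eps-outlier
    eps = rank / k_plus_1                            # for eps = rank/(k+1)
    return math.sqrt(2 * eps)
```

If $X_0$ were really drawn from $\pi$, a rank this extreme would occur with probability at most the returned value, regardless of how slowly the chain mixes.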
\section{Proof of Theorem \ref{t.gtest}}
We let $\pi$ denote any stationary distribution for $\mathcal{M}$, and suppose that the initial state $X_0$ is distributed as $X_0\sim \pi$, so that in fact $X_i\sim \pi$ for all $i$. We say $\sigma_j$ is \emph{$\ell$-small} among $\sigma_0,\dots,\sigma_k$ if there are at most $\ell$ indices $i\neq j$ among $0,\dots,k$ such that the label of $\sigma_i$ is at most the label of $\sigma_j$. In particular, $\sigma_j$ is $0$-small among $\sigma_0,\sigma_1,\dots,\sigma_k$ when its label is the unique minimum label, and we encourage readers to focus on this $\ell=0$ case in their first reading of the proof.
For $0\leq j\leq k$, we define
\begin{align*}
\rho_{j,\ell}^k&:=\Pr\left(X_j\mbox{ is $\ell$-small among }X_0,\dots,X_k\right)\\
\rho_{j,\ell}^k(\sigma)&:=\Pr\left(X_j\mbox{ is $\ell$-small among }X_0,\dots,X_k\mid X_j=\sigma\right)
\end{align*}
Observe that since $X_s\sim \pi$ for all $s$, we also have that
\begin{multline}\label{l.lshift}
\rho_{j,\ell}^k(\sigma)=\\\Pr\left(X_{s+j}\mbox{ is $\ell$-small among }X_s,\dots,X_{s+k}\mid X_{s+j}=\sigma\right)
\end{multline}
We begin by noting two easy facts.
\begin{observation}\label{o.rev}
$\rho_{j,\ell}^k(\sigma)=\rho_{k-j,\ell}^k(\sigma)$.
\end{observation}
\begin{proof}
Since $\mathcal{M}=X_0,X_1,\dots$ is stationary and reversible, the probability that $(X_0,\dots,X_k)=(\sigma_0,\dots,\sigma_k)$ is equal to the probability that $(X_0,\dots,X_k)=(\sigma_k,\dots,\sigma_0)$ for any fixed sequence $(\sigma_0,\dots,\sigma_k)$. Thus, any sequence $(\sigma_0,\dots,\sigma_k)$ for which $\sigma_j=\sigma$ and $\sigma_j$ is $\ell$-small corresponds to an equiprobable sequence $(\sigma_k,\dots,\sigma_0)$ for which $\sigma_{k-j}=\sigma$ and $\sigma_{k-j}$ is $\ell$-small.
\end{proof}
\begin{observation}\label{o.split}
$\rho_{j,2\ell}^k(\sigma)\geq \rho_{j,\ell}^j(\sigma)\cdot \rho_{0,\ell}^{k-j}(\sigma).$
\end{observation}
\begin{proof}
Consider the events that $X_j$ is $\ell$-small among $X_0,\dots,X_j$ and among $X_j,\dots,X_k$, respectively. Conditioned on the value $X_j=\sigma$, these events are independent; $\rho_{j,\ell}^j(\sigma)$ is the probability of the first event, and applying equation \eqref{l.lshift} with $s=j$ shows that $\rho_{0,\ell}^{k-j}(\sigma)$ is the probability of the second.
Finally, when both of these events happen, we have that $X_j$ is $2\ell$-small among $X_0,\dots,X_k$.
\end{proof}
We can now deduce that
\begin{multline}\label{l.rs}
\rho_{j,2\ell}^k(\sigma)\geq\rho_{j,\ell}^j(\sigma)\cdot \rho_{0,\ell}^{k-j}(\sigma)\\
=\rho_{0,\ell}^j(\sigma)\cdot \rho_{0,\ell}^{k-j}(\sigma)\geq \left(\rho_{0,\ell}^{k}(\sigma)\right)^2.
\end{multline}
Indeed, the first inequality follows from Observation \ref{o.split}, the equality follows from Observation \ref{o.rev}, and the final inequality follows from the fact that $\rho_{j,\ell}^k(\sigma)$ is monotone nonincreasing in $k$ for fixed $j,\ell,\sigma$.
Observe now that
$
\rho_{j,\ell}^k=\expect \rho_{j,\ell}^k(X_j),
$
where the expectation is taken over the random choice of $X_j\sim \pi$.
Thus taking expectations in \eqref{l.rs} we find that
\begin{multline}\label{l.exp}
\rho_{j,2\ell}^k=\expect\rho_{j,2\ell}^k(X_j)\geq \expect\left(\left(\rho_{0,\ell}^{k}(X_j)\right)^2\right)\\\geq \left(\expect\rho_{0,\ell}^{k}(X_j)\right)^2=(\rho_{0,\ell}^{k})^2,
\end{multline}
where the second of the two inequalities is the Cauchy--Schwarz inequality.
For the final step in the proof, we sum the left- and right-hand sides of \eqref{l.exp} over $j=0,\dots,k$ to obtain
\[
\sum_{j=0}^k \rho_{j,2\ell}^k\geq(k+1) (\rho_{0,\ell}^{k})^2
\]
If we let $\xi_j$ $(0\leq j\leq k)$ be the indicator variable which is 1 whenever $X_j$ is $2\ell$-small among $X_0,\dots,X_k$, then $\sum_{j=0}^k{\xi_j}$ is the number of $2\ell$-small terms, which is always at most $2\ell+1$, so that linearity of expectation gives that
\begin{equation}\label{l.sums}
2\ell+1\geq (k+1) (\rho_{0,\ell}^k)^2,
\end{equation}
giving that
\begin{equation}\label{l.final}
\rho_{0,\ell}^k\leq\sqrt{\tfrac{2\ell+1}{k+1}}.
\end{equation}
This proves Theorem \ref{t.gtest}, since if $X_i$ is an $\varepsilon$-outlier among $X_0,\dots,X_{k}$, then $X_i$ is necessarily $\ell$-small among $X_0,\dots,X_{k}$ for $\ell=\flr{\varepsilon(k+1)-1}\leq \varepsilon(k+1)-1$, and then we have $2\ell+1\leq 2\varepsilon(k+1)-1\leq 2\varepsilon(k+1)$.\qed
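The bound \eqref{l.final} can be checked empirically on any small reversible chain started from its stationary distribution. Below is a self-contained Monte Carlo check (our own illustration) on a lazy simple random walk on the cycle $\mathbb{Z}_n$, whose stationary distribution is uniform, with the label of a state taken to be its value:

```python
import random

def trajectory(n, k, rng):
    # lazy simple random walk on Z_n, started from its uniform stationary law
    x = rng.randrange(n)
    traj = [x]
    for _ in range(k):
        r = rng.random()
        if r < 0.25:
            x = (x + 1) % n
        elif r < 0.5:
            x = (x - 1) % n
        # with probability 1/2 the walk stays put (laziness)
        traj.append(x)
    return traj

def estimate_rho(n, k, ell, trials, rng):
    # empirical probability that X0 is ell-small among X0, ..., Xk
    hits = 0
    for _ in range(trials):
        traj = trajectory(n, k, rng)
        small = sum(1 for v in traj[1:] if v <= traj[0])
        hits += small <= ell
    return hits / trials
```

With these parameters the empirical probability sits well below $\sqrt{(2\ell+1)/(k+1)}$; the bound is not tight, but it holds uniformly over all reversible chains and all label functions.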
\bigskip
\subsection*{Acknowledgment}
We are grateful for helpful conversations with John Nagle, Danny Sleator, and Dan Zuckerman.
\bibliographystyle{plain}
\section{Introduction}
The obstacle avoidance problem is a long-standing problem that has attracted the attention of the robotics and control communities for decades. In a typical robot navigation scenario, the robot is required to reach a given goal (destination) while avoiding collisions with a set of obstacle regions in the workspace. Since the pioneering work by Khatib \cite{khatib1986real} and the seminal work by Koditschek and Rimon \cite{koditschek1990robot}, artificial potential fields or navigation functions have been widely used in the literature, see, {\it e.g.,} \cite{khatib1986real,koditschek1990robot,dimarogonas2006feedback,filippidis2013navigation}, to deal with the obstacle avoidance problem. The idea is to generate an artificial potential field that renders the goal attractive and the obstacles repulsive. Then, by considering trajectories that navigate along the negative gradient of the potential field, one can ensure that the system will reach the desired target from all initial conditions except for a set of measure zero.
This is a well-known topological obstruction to global asymptotic stabilization by continuous time-invariant feedback, which arises whenever the free state space is not diffeomorphic to a Euclidean space, see, e.g., \cite[Thm.~2.2]{wilson1967structure}. The same topological obstruction also arises in the navigation transform \cite{loizou2017navigation} and in (control-)barrier-function approaches \cite{prajna2007framework,wieland2007constructive,romdlony2016stabilization,ames2017control}.
To deal with such a limitation, the authors in \cite{sanfelice2006robust} have proposed a hybrid state feedback controller to achieve robust global asymptotic regulation, in $\mathbb{R}^2$, to a target while avoiding an obstacle. This approach has been exploited in \cite{poveda2018hybrid} to steer a planar vehicle to the source of an unknown but measurable signal while avoiding an obstacle. In \cite{braun2018unsafe}, a hybrid control law has been proposed to globally asymptotically stabilize a class of linear systems while avoiding a single unsafe point in $\mathbb{R}^n$.
In this work, we propose a hybrid control algorithm for the global asymptotic stabilization of a single-integrator system that is guaranteed to avoid a non-point spherical obstacle.
Our approach considers trajectories in an $n-$dimensional Euclidean space and we resort to tools from higher-dimensional geometry \cite{meyer2000matrix} to provide a construction of the flow and jump sets where the different modes of operation of the hybrid controller are activated.
Our proposed hybrid algorithm employs a hysteresis-based switching between the avoidance controller and the stabilizing controller in order to guarantee forward invariance of the obstacle-free region (related to safety) and global asymptotic stability of the reference position. The parameters of the hybrid controller can be tuned so that the hybrid control law matches the stabilizing controller in arbitrarily large subsets of the obstacle-free region.
Preliminaries are in Section~\ref{section:preliminaries}, the problem is formulated in Section~\ref{section:problem}, and our solution is in Sections~\ref{section:controller}-\ref{section:main}, with a numerical example in Section~\ref{section:example}. All the proofs of the intermediate lemmas are in the appendix.
\section{Preliminaries}
\label{section:preliminaries}
Throughout the paper, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^n$ is the $n$-dimensional Euclidean space and $\mathbb{S}^n$ is the $n$-dimensional unit sphere embedded in $\mathbb{R}^{n+1}$. The Euclidean norm of $x\in\mathbb{R}^n$ is defined as $\|x\|:=\sqrt{x^\top x}$ and the geodesic distance between two points $x$ and $y$ on the sphere $\mathbb{S}^n$ is defined by $\mathbf{d}_{\mathbb{S}^n}(x,y):=\arccos(x^\top y)$ for all $x,y\in\mathbb{S}^n$. The closure, interior and boundary of a set $\mathcal{A}\subset\mathbb{R}^n$ are denoted as $\overline{\mathcal{A}}, \mathcal{A}^\circ$ and $\partial\mathcal{A}$, respectively. The relative complement of a set $\mathcal{B}\subset\mathbb{R}^n$ with respect to a set $\mathcal{A}$ is denoted by $\mathcal{A}\setminus\mathcal{B}$ and contains the elements of $\mathcal{A}$ which are not in $\mathcal{B}$.
Given a nonzero vector $z\in\mathbb{R}^n\setminus\{0\}$, we define the maps:
\begin{equation}
\label{eq:proj-refl-maps}
\pi^\parallel(z):=\tfrac{zz^\top}{\|z\|^2},\, \pi^\perp(z):=\!I_n\!-\tfrac{zz^\top}{\|z\|^2},\, \rho^\perp(z):=\!I_n\!-2\tfrac{zz^\top}{\|z\|^2}
\end{equation}
where $I_n$ is the $n\times n$ identity matrix. The map $\pi^\parallel(\cdot)$ is the parallel projection map, $\pi^\perp(\cdot)$ is the orthogonal projection map \cite{meyer2000matrix}, and $\rho^\perp(\cdot)$ is the reflector map (also called Householder transformation). Consequently, for any $x\in\mathbb{R}^n$, the vector $\pi^\parallel(z)x$ corresponds to the projection of $x$ onto the line generated by $z$, $\pi^\perp(z)x$ corresponds to the projection of $x$ onto the hyperplane orthogonal to $z$ and $\rho^\perp(z)x$ corresponds to the reflection of $x$ about the hyperplane orthogonal to $z$. For each $z\in\ensuremath{\mathbb{R}}^n \setminus\{ 0\}$, some useful properties of these maps follow:
\begin{align}
\label{eq:propLine1}
\pi^\parallel(z)z&=z,&\pi^\perp(z)\pi^\perp(z)&=\pi^\perp(z),\\
\label{eq:propLine2}
\pi^\perp(z)z&=0,&\pi^\parallel(z)\pi^\parallel(z)&=\pi^\parallel(z), \\
\label{eq:propLine3}
\rho^\perp(z)z&=-z,&\rho^\perp(z)\rho^\perp(z)&=I_n,\\
\label{eq:propLine4}
\pi^\perp(z)\pi^\parallel(z)&=0,&\pi^\perp(z)+\pi^\parallel(z)&=I_n, \\
\label{eq:propLine5}
\pi^\parallel(z)\rho^\perp(z)&=-\pi^\parallel(z),& 2\pi^\perp(z)-\rho^\perp(z)&=I_n,\\
\label{eq:propLine6}
\pi^\perp(z)\rho^\perp(z)&=\pi^\perp(z),& 2\pi^\parallel(z)+\rho^\perp(z)&=I_n.
\end{align}
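The identities \eqref{eq:propLine1}--\eqref{eq:propLine6} are elementary; a plain-Python numerical sanity check (our own, using nested lists in place of a matrix library) is:

```python
def outer_scaled(z, a):
    # the matrix a * z z^T
    return [[a * zi * zj for zj in z] for zi in z]

def add(M, N):
    return [[m + n for m, n in zip(r1, r2)] for r1, r2 in zip(M, N)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def pi_par(z):
    # parallel projection z z^T / |z|^2
    return outer_scaled(z, 1.0 / sum(v * v for v in z))

def pi_perp(z):
    # orthogonal projection I - z z^T / |z|^2
    return add(eye(len(z)), outer_scaled(z, -1.0 / sum(v * v for v in z)))

def rho_perp(z):
    # reflector (Householder transformation) I - 2 z z^T / |z|^2
    return add(eye(len(z)), outer_scaled(z, -2.0 / sum(v * v for v in z)))
```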
We define for $z\in\ensuremath{\mathbb{R}}^n\setminus\{ 0\}$
and $\theta\in\ensuremath{\mathbb{R}}$ the parametric map
\begin{align}
\label{eq:def:piTheta}
\pi^\theta(z):=\cos^2(\theta)\pi^\perp(z)-\sin^2(\theta)\pi^\parallel(z).
\end{align}
\begin{figure}
\centering
\includegraphics[scale=0.42]{helmet}
\caption{The helmet region (dark grey) defined in \eqref{eq:helmet}.}
\label{fig:helmet}
\end{figure}%
In~\eqref{eq:def:ball}--\eqref{eq:helmet}, we define for $v\in\ensuremath{\mathbb{R}}^n\setminus\{ 0\}$ some geometric subsets of $\mathbb{R}^n$, which are described after~\eqref{eq:helmet}:
\begin{align}
\label{eq:def:ball}
\mathcal{B}_\epsilon(c)&:=\{x\in\mathbb{R}^n: \|x-c\|\leq\epsilon\}, \\
\label{eq:def:line}
\mathcal{L}(c,v)&:=\{x\in\mathbb{R}^n: x=c+\lambda v, \lambda\in\mathbb{R}\}, \\
\label{eq:def:plane}
\mathcal{P}^{\bigtriangleup}(c,v)&:=\{x\in\mathbb{R}^n: v^\top(x-c)\bigtriangleup 0\},\\
\label{eq:def:cone}
\mathcal{C}^{\bigtriangleup}(c,v,\theta)&:=\{x\in\mathbb{R}^n:\!(x\!-c)^\top\!\pi^\theta\!(v)(x\!-c)\!\bigtriangleup\!0\}\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!=\{x\in\mathbb{R}^n:\cos^2(\theta)\|v\|^2\|x-c\|^2\!\bigtriangleup\! (v^\top(x-c))^2\} \nonumber\\
\label{eq:def:half_cone}
\mathcal{C}_{\bigtriangledown}^{\bigtriangleup}(c,v,\theta)&:=\mathcal{C}^{\bigtriangleup}(c,v,\theta)\cap\mathcal{P}^{\bigtriangledown}(c,v),\\
\label{eq:helmet}
\mathcal{H}(c,\epsilon,\epsilon^\prime,\mu)&:=\overline{\mathcal{B}_{\epsilon^\prime}(c)\setminus\mathcal{B}_{\epsilon}(c)\setminus\mathcal{B}_{\|\mu c\|}(\mu c)},
\end{align}
where the symbols $\bigtriangleup$ and $\bigtriangledown$ can be selected as $\bigtriangleup\in\{=,<,>,\leq,\geq\}$ and $\bigtriangledown\in\{<,>,\leq,\geq\}$.
The set $\mathcal{B}_\epsilon(c)$ in~\eqref{eq:def:ball} is the \emph{ball} centered at $c\in\mathbb{R}^n$ with radius $\epsilon$.
The set $\mathcal{L}(c,v)$ in~\eqref{eq:def:line} is the $1-$dimensional \emph{line} passing by the point $c\in\mathbb{R}^n$ and with direction parallel to $v$.
The set $\mathcal{P}^=(c,v)$ in~\eqref{eq:def:plane} is the $(n-1)-$dimensional \emph{hyperplane} that passes through a point $c\in\mathbb{R}^n$ and has normal vector $v$.
The hyperplane $\mathcal{P}^=(c,v)$ divides the Euclidean space $\mathbb{R}^n$ into two closed sets $\mathcal{P}^{\geq}(c,v)$ and $\mathcal{P}^\leq(c,v)$.
The set $\mathcal{C}^=(c,v,\theta)$ in~\eqref{eq:def:cone} is the right circular \emph{cone} with vertex at $c\in\mathbb{R}^n$, axis parallel to $v$ and aperture $2\theta$.
The set $\mathcal{C}^{\bigtriangleup}(c,v,\theta)$ in~\eqref{eq:def:cone} with $\leq$ as $\bigtriangleup$ (or $\geq$ as $\bigtriangleup$, respectively) is the region inside (or outside, respectively) the cone $\mathcal{C}^=(c,v,\theta)$.
The plane $\mathcal{P}^=(c,v)$ divides the conic region $\mathcal{C}^{\bigtriangleup}(c,v,\theta)$ into two regions $\mathcal{C}^{\bigtriangleup}_{\leq}(c,v,\theta)$ and $\mathcal{C}^{\bigtriangleup}_{\geq}(c,v,\theta)$ in~\eqref{eq:def:half_cone}.
The set $\mathcal{H}(c,\epsilon,\epsilon^\prime,\mu)$ in~\eqref{eq:helmet} is called a {\it helmet} and is obtained by removing from the spherical shell (annulus) $\mathcal{B}_{\epsilon^\prime}(c)\setminus\mathcal{B}_{\epsilon}(c)$ the portion contained in the ball $\mathcal{B}_{\|\mu c\|}(\mu c)$, see Fig.~\ref{fig:helmet}. The following geometric fact will be used.
\begin{lemma}\label{lemma:cones}
Let $c\in\mathbb{R}^n$ and let $v_1,v_2\in\mathbb{S}^{n-1}$ be arbitrary unit vectors such that $\mathbf{d}_{\mathbb{S}^{n-1}}(v_1,v_2)=\theta$ for some $\theta\in(0,\pi]$. Let $\psi_1,\psi_2\in[0,\pi]$ be such that $\psi_1+\psi_2<\theta<\pi-(\psi_1+\psi_2)$. Then
\begin{align*}
\mathcal{C}^{\leq}(c,v_1,\psi_1)\cap\mathcal{C}^{\leq}(c,v_2,\psi_2)=\{c\}.
\end{align*}
\end{lemma}
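The algebraic membership test in \eqref{eq:def:cone} and the conclusion of Lemma \ref{lemma:cones} can be checked numerically. The sketch below (our own illustration, in $\mathbb{R}^3$ with $c=0$) samples points inside one cone and verifies that none of them, other than the common vertex, lies inside the second cone when the aperture condition holds:

```python
import math
import random

def in_cone(x, c, v, psi):
    # x in C^<=(c, v, psi)  <=>  cos^2(psi) |v|^2 |x-c|^2 <= (v^T (x-c))^2
    w = [a - b for a, b in zip(x, c)]
    vv = sum(a * a for a in v)
    ww = sum(a * a for a in w)
    vw = sum(a * b for a, b in zip(v, w))
    return math.cos(psi) ** 2 * vv * ww <= vw ** 2 + 1e-12

def sample_around_e1(psi, rng):
    # random point, on either nappe, within angle psi of the axis e1
    phi = rng.uniform(0.0, psi)
    beta = rng.uniform(0.0, 2.0 * math.pi)
    r = rng.uniform(0.1, 5.0) * rng.choice([-1.0, 1.0])
    return [r * math.cos(phi),
            r * math.sin(phi) * math.cos(beta),
            r * math.sin(phi) * math.sin(beta)]
```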
Finally, we consider in this paper hybrid dynamical systems \cite{goebel2012hybrid}, described through constrained differential and difference inclusions for state $X \in \ensuremath{\mathbb{R}}^n$:
\begin{equation}
\label{Hybrid:general}
\begin{cases}
\dot X\in\mathbf{F}(X), &X\in\mathcal{F},\\
X^+\in \mathbf{J}(X), & X\in\mathcal{J}.
\end{cases}
\end{equation}
The data of the hybrid system \eqref{Hybrid:general} (i.e., the \textit{flow set} $\mathcal{F}\subset\mathbb{R}^n$, the \textit{flow map} $\mathbf{F}:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$, the \textit{jump set} $\mathcal{J}\subset\mathbb{R}^n$, the \textit{jump map} $\mathbf{J}:\mathbb{R}^n\rightrightarrows\mathbb{R}^n$) is denoted as $\mathscr{H}=(\mathcal{F},\mathbf{F},\mathcal{J},\mathbf{J})$.
\section{Problem Formulation}\label{section:problem}
We consider a vehicle moving in the $n$-dimensional Euclidean space according to the following single-integrator dynamics:
\begin{align}
\dot x=u
\end{align}
where $x\in\mathbb{R}^n$ is the state of the vehicle and $u\in\mathbb{R}^n$ is the control input. We assume that the workspace contains an obstacle, modeled as a spherical region $\mathcal{B}_\epsilon(c)$ centered at $c\in\mathbb{R}^n$ and with radius $\epsilon>0$. The vehicle needs to avoid the obstacle while stabilizing its position to a given reference. We consider $n\geq 2$ and, without loss of generality, take the reference
position at $x=0$ (the origin)\footnote{ \label{footnote:n=1} For $n=1$ (i.e., when the state space is a line), global asymptotic stabilization with obstacle avoidance is impossible via any feedback.}.
\begin{assumption}\label{assumption:obstacle}
$\|c\|>\epsilon>0$.
\end{assumption}
Assumption \ref{assumption:obstacle} requires that the reference position $x=0$ is not inside the obstacle region, otherwise the following control objective would not be feasible. Our objective is indeed to design a control strategy for the input $u$ such that:
\begin{itemize}
\item[i)] the obstacle-free region $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon}(c)$ is forward invariant;
\item[ii)] the origin $x=0$ is globally asymptotically stable;
\item[iii)] for each $\epsilon^\prime>\epsilon$, there exist controller parameters such that the control law matches, in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, the law $u=-k_0x$ ($k_0>0$) used in the absence of the obstacle.
\end{itemize}
Objective i) guarantees that all trajectories of the closed-loop system are safely avoiding the obstacle by remaining outside the obstacle region. Objectives i) and ii), together, cannot be achieved using a continuous feedback due to the topological obstruction discussed in the introduction. Objective iii) is the so-called {\it semiglobal preservation} property \cite{braun2018unsafe}. This property is desirable when the original controller parameters are optimally tuned and the controller modifications imposed by the presence of the obstacle should be as minimal as possible. Such a property is also accounted for in the quadratic programming formulation of~\cite[III.A.]{wang2017safety}.
The obstacle avoidance problem described above is solved via a hybrid feedback strategy in Sections~\ref{section:controller}-\ref{section:main}.
\section{Proposed Hybrid Control Algorithm for Obstacle Avoidance}\label{section:controller}
In this section, we propose a hybrid controller that switches suitably between an {\it avoidance} controller and a {\it stabilizing} controller. Let $m\in\{-1,0,1\}$ be a discrete variable dictating the control mode where $m=0$ corresponds to the activation of the stabilizing controller and $|m|=1$ corresponds to the activation of the avoidance controller, which has two configurations $m\in\{-1,1\}$. The proposed control input, depending on both the state $x\in\mathbb{R}^n$ and the control mode $m\in\{-1,0,1\}$, is given by the feedback law
\begin{equation*}\label{eq:u}
\begin{aligned}
u& =\kappa(x,m):=\begin{cases}
-k_0 x, & m=0\\
- k_m \pi^\perp(x-c)(x-p_m),&m \in\{-1,\, 1\}
\end{cases}
\end{aligned}
\end{equation*}
where $k_m>0$ (with $m\in\{-1,0,1\}$) and $p_m\in\mathbb{R}^n$ (with $m\in\{-1,1\}$) are design parameters. During the stabilization mode ($m=0$), the control input above steers $x$ towards $x=0$. During the avoidance mode ($|m|=1$), the control input above minimizes the distance to the \emph{auxiliary} attractive point $p_m$ {\it while} maintaining a constant distance to the center of the ball $\mathcal{B}_{\epsilon}(c)$, thereby avoiding collision with the obstacle. This is done by projecting the feedback $-k_m(x-p_m)$ on the hyperplane orthogonal to $(x-c)$. This control strategy resembles the well-known path-planning Bug algorithms (see, {\it e.g.,} \cite{lumelsky1990incorporating}) where the motion planner switches between a motion-to-goal objective and a boundary-following objective.
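A minimal sketch of the feedback law (our own illustration; vectors are plain Python lists) makes the geometry explicit: during avoidance, the input is the component of $-k_m(x-p_m)$ orthogonal to $x-c$, so it never changes the distance to the obstacle center.

```python
def pi_perp_apply(z, y):
    # pi^perp(z) y = y - (z^T y / |z|^2) z
    zz = sum(a * a for a in z)
    zy = sum(a * b for a, b in zip(z, y))
    return [yi - (zy / zz) * zi for yi, zi in zip(y, z)]

def kappa(x, m, c, p_m, k0=1.0, k_m=1.0):
    # hybrid feedback: m = 0 stabilizes the origin, |m| = 1 circles the obstacle
    if m == 0:
        return [-k0 * xi for xi in x]
    diff_c = [a - b for a, b in zip(x, c)]    # x - c
    diff_p = [a - b for a, b in zip(x, p_m)]  # x - p_m
    return [-k_m * v for v in pi_perp_apply(diff_c, diff_p)]
```

Since $(x-c)^\top\pi^\perp(x-c)=0$, the avoidance input is orthogonal to $x-c$, and hence $\tfrac{d}{dt}\|x-c\|^2=0$ along the avoidance flow.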
Throughout the rest of this section, we refer the reader to Fig.~\ref{fig:flowAndJumpSets}.
For $\theta>0$ (further bounded in~\eqref{ineq:parameters}), the points $p_1, p_{-1}$ are selected to lie on the cone\footnote{Following the remark in Footnote~\ref{footnote:n=1}, note that the set $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$ is nonempty for all $n\geq 2$.} $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$:
\begin{align}\label{eq:p-1}
p_1\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\} \text{ and } p_{-1}:=-\rho^\perp(c)p_1.
\end{align}
Note that, by~\eqref{eq:p-1}, $p_{-1}$ opposes $p_1$ diametrically with respect to the axis of the cone $\mathcal{C}^=_\leq(c,c,\theta)$ and also belongs to $\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$ as shown in the following lemma.
\begin{lemma}\label{lemma:p-1}
$p_{-1}\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}.$
\end{lemma}
The logic variable $m$ is selected according to a hybrid mechanism that exploits a suitable construction of the flow and jump sets. This hybrid selection is obtained through the hybrid dynamical system
\begin{subequations}
\label{eq:hs_1obs}
\begin{align}
&\left\{\begin{aligned}
\dot x&=\kappa(x,m)\\
\dot m&=0
\end{aligned}\right.&&(x,m)\in\bigcup_{m \in \{-1,0,1\}} \!\!\!\! \mathcal{F}_m \times \{m\}\label{eq:hs_1obs:flowMap}\\
&\left\{\begin{aligned}
x^+ &=x\\
m^+ &\in\mathbf{M}(x,m)
\end{aligned}\right.&&(x,m)\in\bigcup_{m \in \{-1,0,1\}} \!\!\!\! \mathcal{J}_m \times \{m\}. \label{eq:hs_1obs:JumpMap}
\end{align}
The flow and jump sets for each mode $m\in\{-1,0,1\}$ are defined as (see~\eqref{eq:helmet} for the definition of the helmet $\mathcal{H}$):
\begin{align}
\label{eq:F0}
& \mathcal{F}_0:=\overline{\mathbb{R}^n\setminus(\mathcal{J}_0\cup\mathcal{B}_{\epsilon}(c))}, & & \\
\label{eq:J0}
&\mathcal{J}_0:=\mathcal{H}(c,\epsilon,\epsilon_s,1/2), & &\\
\label{eq:Fm}
&\mathcal{F}_m:=\mathcal{H}(c,\epsilon,\epsilon_h,\mu)\cap\mathcal{C}_\leq^\geq(c,p_m-c,\psi), & & |m|=1,\\
\label{eq:Jm}
&\mathcal{J}_m:=\overline{\mathbb{R}^n\setminus(\mathcal{F}_m\cup\mathcal{B}_{\epsilon}(c))},& & |m|=1,
\end{align}
see their depiction in Fig.~\ref{fig:flowAndJumpSets}, and the (set-valued) jump map is defined as
\begin{align}
\mathbf{M}(x,0)&\!:=\left\{m^\prime\!\in\!\{-1,1\}\colon x\in\mathcal{C}^{\geq}(c,p_{m^\prime}\!-c,\bar\psi)\right\} \label{eq:hs_1obs:M(x,0)}\\
\mathbf{M}(x,m)&\!:=\{0\}, \quad \text{ for } m\in \{-1,1\},
\end{align}
\end{subequations}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{newFlowJumpSets-v3.pdf}
\caption{2D illustration of flow and jump sets considered in Sections~\ref{section:controller}-\ref{section:main}.}
\label{fig:flowAndJumpSets}
\end{figure}%
where $\epsilon_s$, $\epsilon_h$, $\theta$, $\psi$, $\bar \psi$ are design parameters selected later as in Assumption~\ref{assumption:parameters}. Before we state our main result, a discussion motivating the above construction of flow and jump sets is in order.
During the stabilization mode $m=0$, the closed-loop system should not flow when $x$ is close enough to the surface of the obstacle region $\mathcal{B}_{\epsilon}(c)$ and the vector field $-k_0x$ points inside $\mathcal{B}_{\epsilon}(c)$. Indeed, by computing the time derivative of $\|x-c\|^2$, we can obtain the set where the stabilizing vector field $-k_0x$ causes a decrease in the distance $\|x-c\|^2$ to the center of the obstacle region $\mathcal{B}_\epsilon(c)$. This set is characterized by the inequality
\begin{align}
\label{eq:dist_to_c_decreases}
-k_0 x^\top(x-c)\leq 0 \Longleftrightarrow
\left\|x-{c}/{2}\right\|^2\geq\left\|{c}/{2}\right\|^2.
\end{align}
The closed set in~\eqref{eq:dist_to_c_decreases} corresponds to the region outside the ball $\mathcal{B}_{\|c/2\|}(c/2)$. Therefore, to keep the vehicle safe during the stabilization mode, we define around the obstacle a helmet region $\mathcal{H}(c,\epsilon,\epsilon_s,1/2)$, which is used as the jump set $\mathcal{J}_0$ in \eqref{eq:J0}. In other words, if during the stabilization mode the vehicle hits this {\it safety helmet}, then the controller jumps to avoidance mode. The amount $\epsilon_s-\epsilon$ represents the thickness of the safety helmet that defines the jump set $\mathcal{J}_0$.
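For completeness, along the stabilizing flow $\dot x=-k_0x$ one has $\frac{d}{dt}\|x-c\|^2=2(x-c)^\top\dot x=-2k_0\,x^\top(x-c)$, and the equivalence in \eqref{eq:dist_to_c_decreases} follows by completing the square (recall $k_0>0$):

```latex
\begin{align*}
-k_0\,x^\top(x-c)\leq 0
&\iff \|x\|^2-x^\top c\geq 0\\
&\iff \|x\|^2-x^\top c+\left\|\tfrac{c}{2}\right\|^2\geq\left\|\tfrac{c}{2}\right\|^2
\iff \left\|x-\tfrac{c}{2}\right\|^2\geq\left\|\tfrac{c}{2}\right\|^2.
\end{align*}
```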
During the avoidance mode $|m|=1$, we want our controller to slide on the helmet $\mathcal{H}(c,\epsilon,\epsilon_h,\mu)$ while maintaining a constant distance to the center $c$. Note that, with $\epsilon_h>\epsilon_s$ and $\mu<1/2$, the helmet $\mathcal{H}(c,\epsilon,\epsilon_h,\mu)$ (see also Fig.~\ref{fig:helmet}) is an {\it inflated} version of the helmet $\mathcal{H}(c,\epsilon,\epsilon_s,1/2)$ and creates a hysteresis region useful to prevent infinitely many consecutive jumps (Zeno behavior). Let us then characterize in the following lemma the equilibria of the avoidance vector field $\kappa(x,m)= - k_m \pi^\perp(x-c)(x-p_m)$ ($|m|=1$).
\begin{lemma}\label{lemma:equilibria}
For each $x\in\ensuremath{\mathbb{R}}^n \setminus \{c\}$ and $m \in\{-1, 1\}$, $\pi^\perp(x-c)(x-p_m)=0$ if and only if
$x\in\mathcal{L}(c,p_m-c)$.
\end{lemma}
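Lemma \ref{lemma:equilibria} can be verified numerically on examples (our own sketch): points on the line $\mathcal{L}(c,p_m-c)$ annihilate the avoidance vector field, while points off the line do not.

```python
def pi_perp_apply(z, y):
    # pi^perp(z) y = y - (z^T y / |z|^2) z
    zz = sum(a * a for a in z)
    zy = sum(a * b for a, b in zip(z, y))
    return [yi - (zy / zz) * zi for yi, zi in zip(y, z)]

def avoidance_field(x, c, p):
    # pi^perp(x - c)(x - p), the (unscaled) avoidance vector field
    diff_c = [a - b for a, b in zip(x, c)]
    diff_p = [a - b for a, b in zip(x, p)]
    return pi_perp_apply(diff_c, diff_p)
```

On the line, $x-p$ is parallel to $x-c$, so the orthogonal projection wipes it out; off the line, a nonzero tangential component survives.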
Since we want the trajectories to leave the set $\mathcal{F}_m$ during the avoidance mode, it is necessary to select the point $p_m$ and the flow set $\mathcal{F}_m$ such that $\mathcal{L}(c,p_m-c)\cap\mathcal{F}_m = \emptyset$ for each $m\in\{-1,1\}$, otherwise trajectories can stay in the avoidance mode indefinitely. This motivates the intersection with the conic region in~\eqref{eq:Fm} and Lemma~\ref{lemma:empty1}, in view of which we pose the following assumption.
\begin{assumption}\label{assumption:parameters}
The parameters in~\eqref{eq:hs_1obs} are selected as:
\begin{align}
& \epsilon_h \in \big(\epsilon, \sqrt{\epsilon\|c\|}\big) &&\epsilon_s\in(\epsilon, \epsilon_h)
&&\mu \in (\mu_{\min},1/2) \label{ineq:eps_h,eps_s}\\
&\theta\in(0,\theta_{\max})
&& \psi\in(0,\psi_{\max}) \label{ineq:parameters}
&& \bar\psi \in(\psi,\psi_{\max})
\end{align}
where $\mu_{\min}$, $\theta_{\max}$ and $\psi_{\max}$ are defined as
\begin{align}
&\mu_{\min}:=\frac{1}{2} \frac{\epsilon_h^2+ \| c \|^2 - 2 \epsilon \| c \|}{\| c \|^2 - \epsilon \| c \|} \in (0,{1}/{2}),\\
&\theta_{\max}:=\arccos\left(\frac{\epsilon_h^2+\| c \|^2(1-2 \mu)}{2\epsilon \| c\|(1-\mu)}\right) \in (0,{\pi}/{2}) \label{eq:theta_max},\\
& \psi_{\max} := \min(\theta,\pi/2-\theta) \in (0,\pi/4).
\label{eq:psi_max}
\end{align}
\end{assumption}
The intervals in~\eqref{ineq:eps_h,eps_s}--\eqref{eq:psi_max} are well defined, and this can be checked in the following order. The intervals of $\epsilon_h$ and $\epsilon_s$ are well defined by Assumption~\ref{assumption:obstacle}. Then, those of $\mu_{\min}$, $\mu$, $\theta_{\max}$ (with $\theta_{\max}>0$ following directly from $\mu > \mu_{\min}$), $\theta$, $\psi_{\max}$ and, finally, those of $\psi$ and $\bar \psi$ (corresponding to $0< \psi < \bar \psi < \psi_{\max}$) are also well defined.
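As a sanity check on Assumption \ref{assumption:parameters}, the following sketch (our own; the midpoint choices are arbitrary and any selection in the stated open intervals works) picks one admissible tuple of parameters for given $\|c\|$ and $\epsilon$:

```python
import math

def pick_parameters(c_norm, eps):
    assert c_norm > eps > 0.0                      # Assumption 1
    eps_h = 0.5 * (eps + math.sqrt(eps * c_norm))  # in (eps, sqrt(eps*|c|))
    eps_s = 0.5 * (eps + eps_h)                    # in (eps, eps_h)
    mu_min = 0.5 * (eps_h**2 + c_norm**2 - 2.0 * eps * c_norm) \
             / (c_norm**2 - eps * c_norm)
    mu = 0.5 * (mu_min + 0.5)                      # in (mu_min, 1/2)
    theta_max = math.acos((eps_h**2 + c_norm**2 * (1.0 - 2.0 * mu))
                          / (2.0 * eps * c_norm * (1.0 - mu)))
    theta = 0.5 * theta_max
    psi_max = min(theta, 0.5 * math.pi - theta)
    psi, psi_bar = psi_max / 3.0, 2.0 * psi_max / 3.0
    return eps_h, eps_s, mu, theta, psi, psi_bar
```

Note that $\mu>\mu_{\min}$ makes the argument of $\arccos$ strictly less than one, so $\theta_{\max}$ is always well defined here.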
\begin{lemma}\label{lemma:empty1}
Under Assumption~\ref{assumption:parameters}, $\mathcal{F}_m\cap\mathcal{L}(c,p_m-c)=\emptyset$, for $m\in\{-1,1\}$.
\end{lemma}
\section{Main Result}\label{section:main}
In this section, we state and prove our main result, which corresponds to the objectives discussed in Section \ref{section:problem}. Let us first write the flow/jump sets and maps more compactly:
\begin{align}
\label{eq:hs_1obs:flowJumpSets}
\mathcal{F}:=\!\!\!\! \bigcup_{m \in \{-1,0,1\}} \!\!\!\! & \mathcal{F}_m \times \{m\},\, \mathcal{J}:=\!\!\!\! \bigcup_{m \in \{-1,0,1\}} \!\!\!\!\mathcal{J}_m \times \{m\}\\
(x,m) & \mapsto \mathbf{F}(x,m) := (\kappa(x,m),0),\\
(x,m) & \mapsto \mathbf{J}(x,m) := (x,\mathbf{M}(x,m)).
\end{align}
The mild regularity conditions satisfied by the hybrid system~\eqref{eq:hs_1obs}, stated in the next lemma, guarantee the applicability of several results used in the proof of our main theorem.
\begin{lemma}
\label{lemma:hbc}
The hybrid system with data $(\mathcal{F},\mathbf{F},\mathcal{J},\mathbf{J})$ satisfies the hybrid basic conditions in~\cite[Ass.~6.5]{goebel2012hybrid}.
\end{lemma}
Let us define the obstacle-free set $\mathcal{K}$ and the attractor $\mathcal{A}$ as:
\begin{equation}
\label{eq:KandA}
\mathcal{K}:=\overline{\mathbb{R}^n\setminus\mathcal{B}_\epsilon(c)}\times\{-1,0,1\},\quad \ensuremath{\mathcal{A}}:=\{0\}\times\{0\}.
\end{equation}
Our main result is given in the following theorem.
\begin{theorem}\label{theorem:invariance}
Consider the hybrid system \eqref{eq:hs_1obs} under Assumptions~\ref{assumption:obstacle}-\ref{assumption:parameters}. Then,
\begin{itemize}
\item[i)] all maximal solutions do not have finite escape times, are complete in the ordinary time direction, and the obstacle-free set $\mathcal{K}$ in~\eqref{eq:KandA} is forward invariant;
\item[ii)] the set $\ensuremath{\mathcal{A}}$ in~\eqref{eq:KandA} is
globally asymptotically stable;
\item[iii)] for each $\epsilon^\prime>\epsilon$, it is possible to tune the hybrid controller parameters so that the resulting hybrid feedback law matches, in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, the law $u=-k_0x$.
\end{itemize}
\end{theorem}
Theorem \ref{theorem:invariance} shows that the three objectives discussed in Section \ref{section:problem} are fulfilled.
\subsection{Proof of Theorem \ref{theorem:invariance}}
\begin{table}
\[
\begin{array}{ll}
\toprule
\text{Set to which $x$ belongs}& \mathbf{T}_{\mathcal{F}_0}(x)\\
\midrule
\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}^\circ_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\\
\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)& \mathcal{P}^\geq(0,x-c)\\
(\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}^\circ_{\epsilon_s}(c))\setminus\mathcal{B}_\epsilon(c) & \mathcal{P}^\leq(0,x-c/2)\\
\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|c/2\|}(c/2) & \mathcal{P}^\geq(0,x-c)\cap\mathcal{P}^\leq(0,x-c/2)\\
\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\partial\mathcal{B}_{\epsilon_s}(c) & \mathcal{P}^\geq(0,x-c)\cup\mathcal{P}^\leq(0,x-c/2)\\
\bottomrule
\end{array}
\]
\[
\begin{array}{ll}
\toprule
\text{Set to which $x$ belongs}& \mathbf{T}_{\mathcal{F}_{\bar m}}(x) \\
\midrule
\partial\mathcal{B}_\epsilon(c)\!\setminus\!\mathcal{B}_{\|\mu c\|}(\mu c)\!\setminus\!\mathcal{C}^\leq(c,p_{\bar m}\!-\!c,\psi) & \mathcal{P}^\geq(0,x-c)\\
\partial\mathcal{B}_{\epsilon_h}\!(c)\!\setminus\!\mathcal{B}_{\|\mu c\|}\!(\mu c)\!\setminus\!\mathcal{C}^\leq(c,p_{\bar m}\!-\!c,\psi)
& \mathcal{P}^\leq(0,x-c)\\
\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}^\circ_{\epsilon_h}(c)\setminus\mathcal{B}_{\epsilon}(c) & \mathcal{P}^\geq(0,x-\mu c)\\
\mathcal{C}^=_{\le}(c,p_{\bar m}-c,\psi)\cap\mathcal{B}^\circ_{\epsilon_h}(c)\setminus\mathcal{B}_{\epsilon}(c) & \mathcal{P}^\geq(0,n_{\bar m}(x))\\
\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|\mu c\|}(\mu c) & \mathcal{P}^\geq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,x-\mu c)\\
\partial\mathcal{B}_{\epsilon_h}(c)\cap\partial\mathcal{B}_{\|\mu c\|}(\mu c) & \mathcal{P}^\leq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,x-\mu c)\\
\partial\mathcal{B}_{\epsilon}(c)\cap\mathcal{C}^=(c,p_{\bar m}-c,\psi) & \mathcal{P}^\geq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,n_{\bar m}(x))\\
\partial\mathcal{B}_{\epsilon_h}(c)\cap\mathcal{C}^=(c,p_{\bar m}-c,\psi) & \mathcal{P}^\leq(0,x\!-\!c)\!\cap\!\mathcal{P}^\geq(0,n_{\bar m}(x))\\
\bottomrule
\end{array}
\]
\caption{Points $(x,m)$ and their tangent cones ($\bar m$ is either $-1$ or $1$ and $n_{\bar m}(x):=\pi^{\psi}(p_{\bar m}-c)(x-c)$).}
\label{eq:tangent_cone}
\end{table}
To prove item i), we resort to~\cite[Thm.~4.3]{chai2018forward}. We first establish for $\mathscr{H}$ in~\eqref{eq:hs_1obs} the relationships invoked in~\cite[Thm.~4.3]{chai2018forward}, and we refer the reader to Fig.~\ref{fig:flowAndJumpSets} for a two-dimensional visualization. In particular, the boundary of the flow set $\mathcal{F}$ is given by $\partial\mathcal{F}=\{(x,m):x\in\partial\mathcal{F}_m\}$, where the sets $\partial\mathcal{F}_0$ and $\partial\mathcal{F}_m, m\in\{-1,1\}$, are
\begin{align*}
\nonumber
\partial\mathcal{F}_0&=\big(\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}_{\|c/2\|}(c/2)\big)
\cup\big(\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)\big)\\
&\quad\cup\big((\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}_{\epsilon_s}(c))\setminus\mathcal{B}_\epsilon(c)\big),\\
\nonumber
\partial\mathcal{F}_m&=\big((\partial\mathcal{B}_\epsilon(c)\cup\partial\mathcal{B}_{\epsilon_h}(c))\setminus\mathcal{B}_{\|\mu c\|}(\mu c)\setminus\mathcal{C}^\leq_\le(c,p_m-c,\psi)\big)\\
&\cup\big((\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cup\mathcal{C}^=_\leq(c,p_m-c,\psi))\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)\big).
\end{align*}
The tangent cone\footnote{For the definition of tangent cone, see~\cite[Def.~5.12 and Fig.~5.4]{goebel2012hybrid}.}, evaluated at the boundary of $\mathcal{F}$, is given in Table~\ref{eq:tangent_cone}. Consider $m=0$ and let $z:=\kappa(x,0)=-k_0x$.
If $x\in\partial\mathcal{B}_\epsilon(c)\cap\mathcal{B}^\circ_{\|c/2\|}(c/2)$ then one has $(x-c)^\top z=-k_0x^\top(x-c)>0$ (since $x \in \mathcal{B}^\circ_{\| c/2 \|}(c/2)$, see~\eqref{eq:dist_to_c_decreases}), i.e., $z\in\mathcal{P}^>(0,x-c)$. If $x\in(\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\mathcal{B}^\circ_{\epsilon_s}(c))\setminus\mathcal{B}_{\epsilon}(c)$ then one has $(x-c/2)^\top z=-k_0x^\top(x-c/2)=-k_0x^\top c/2=-k_0\|x\|^2/2 \le 0$ since $x^\top(x-c)=0$ from $\|x-c/2\|=\|c/2\|$. Then, $z\in\mathcal{P}^\le (0,x-c/2)$.
If $x\in\partial\mathcal{B}_{\epsilon}(c)\cap\partial\mathcal{B}_{\|c/2\|}(c/2)$ or $x\in\partial\mathcal{B}_{\|c/2\|}(c/2)\cap\partial\mathcal{B}_{\epsilon_s}(c)$ then $z^\top(x-c)=0$ and $z^\top(x-c/2)=-k_0\|x\|^2/2\leq 0$, showing that $z\in\mathcal{P}^\geq(0,x-c)\cap\mathcal{P}^\leq(0,x-c/2)$, which belongs to the tangent cone in both cases.
Finally, if $x\in\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)$, then one has $(x-c)^\top z=-k_0x^\top(x-c)< 0$ (since $x\notin \mathcal{B}_{\| c / 2\|}(c/2)$), i.e., $z\in\mathcal{P}^<(0,x-c)$. Let $\mathcal{L}_0:=\partial\mathcal{B}_{\epsilon_s}(c)\setminus\mathcal{B}_{\|c/2\|}(c/2)$. Therefore, by all the previous arguments,
\begin{equation}
\label{eq:tgConeBndF0}
\begin{aligned}
x\in\mathcal{L}_0 & \implies \kappa(x,0)\cap\mathbf{T}_{\mathcal{F}_0}(x)=\emptyset\\
x\in\partial\mathcal{F}_0\setminus\mathcal{L}_0 & \implies \kappa(x,0)\subset\mathbf{T}_{\mathcal{F}_0}(x).
\end{aligned}
\end{equation}
Consider then $m\in\{-1,1\}$ and let now $z:=\kappa(x,m)=-k_m\pi^\perp(x-c)(x-p_m)$.
If $x\in\partial\mathcal{B}_{\epsilon}(c)$ or $x\in\partial\mathcal{B}_{\epsilon_h}(c)$ then one has $(x-c)^\top z=-k_m(x-c)^\top\pi^\perp(x-c)(x-p_m)=0$, which implies that both $z\in\mathcal{P}^\geq(0,x-c)$ and $z\in\mathcal{P}^\leq(0,x-c)$.
Define $n_m(x):=\pi^{\psi}(p_m-c)(x-c)$, which is a normal vector to the cone $\mathcal{C}^=(c,p_m-c,\psi)$ at $x$.
If $x\in\mathcal{C}^=_\leq(c,p_m-c,\psi)$, then\footnote{Each (in)equality is obtained thanks to the relationship reported over it. \label{note:overset}}
\begin{align*}
&n_m(x)^\top z=-k_m n_m(x)^\top\pi^\perp(x-c)(x-p_m)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(x-c)^\top\pi^{\psi}(p_m-c) \pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:def:piTheta},\eqref{eq:propLine4}}{=}\!k_m(x-c)^\top\!(\pi^\perp\!(p_m-c)\! -\!\sin^2(\psi)I_n )\pi^\perp\!(x-c)(p_m-c)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(x-c)^\top\pi^\perp(p_m-c)\pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:propLine4}}{=} k_m(x-c)^\top\pi^\perp(p_m-c)\big(I_n - \pi^\parallel(x-c)\big)(p_m-c)\\
&\overset{\eqref{eq:propLine2}}{=}-k_m(x-c)^\top\pi^\perp(p_m-c)\pi^\parallel(x-c)(p_m-c)\\
&\overset{\eqref{eq:proj-refl-maps}}{=}-k_m\frac{(x-c)^\top\pi^\perp(p_m-c)(x-c)}{\|x-c\|^2}(x-c)^\top(p_m-c)\geq 0
\end{align*}
where the last bound follows from $\pi^\perp(p_m-c)$ positive semidefinite and $(x-c)^\top(p_m-c)\leq 0$ (since $x\in\mathcal{C}^=_\leq(c,p_m-c,\psi)\subset\mathcal{P}^{\leq}(c,p_m-c)$). Hence, $z\in\mathcal{P}^\geq(0,n_m(x))$. Finally, let $x\in\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)$. With $\theta_{\max}$ in~\eqref{eq:theta_max}, we have
\begin{subequations}
\label{eq:proofRel}
\begin{align}
& 0 \le c^\top (c- x) \le \cos(\theta_{\max}) \| c \| \| x- c\| \label{eq:proofRel1} \\
& |(x-c)^\top (p_m -c)| \le \| x-c \| \| p_m - c\| \label{eq:proofRel2}\\
& c^\top (p_m - c) = - \cos(\theta) \| c \| \| p_m - c \| \label{eq:proofRel3}
\end{align}
\end{subequations}
where the bounds in~\eqref{eq:proofRel1} follow from \eqref{eq:whenInH(epsilon_h,mu)} in the proof of the previous Lemma~\ref{lemma:empty1}, $\mu<1/2$, and $x\in\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c) \subset \mathcal{H} (c,\epsilon,\epsilon_h, \mu)$; \eqref{eq:proofRel3} follows from $p_m \in \mathcal{C}^=_\leq(c,c,\theta)$ (by~\eqref{eq:p-1} and Lemma~\ref{lemma:p-1}). So
\begin{equation*}
\begin{aligned}
&(x-\mu c)^\top z=-k_m(x-\mu c)^\top\pi^\perp(x-c)(x-p_m)\\
&\overset{\eqref{eq:propLine2}}{=}k_m(c-\mu c)^\top\pi^\perp(x-c)(p_m-c)\\
&\overset{\eqref{eq:proj-refl-maps}}{=}k_m(1-\mu)(c^\top(p_m-c)+\\
&\qquad c^\top(c-x)(x-c)^\top(p_m-c)/\|x-c\|^2)\\
&\overset{\eqref{eq:proofRel}}{\leq} k_m(1-\mu)(-\cos(\theta)+\cos(\theta_{\max}))\|c\|\|p_m-c\|<0
\end{aligned}
\end{equation*}
since $k_m>0$, $1-\mu >0$ (from~\eqref{ineq:eps_h,eps_s}) and $\theta < \theta_{\max}$ (from~\eqref{ineq:parameters}). $(x-\mu c)^\top z < 0$ implies then $z\in\mathcal{P}^<(0,x-\mu c)$. Let $\mathcal{L}_m:=\partial\mathcal{B}_{\|\mu c\|}(\mu c)\cap\mathcal{B}_{\epsilon_h}(c)\setminus\mathcal{B}^\circ_{\epsilon}(c)$. Therefore, by all the previous arguments,
\begin{equation}
\label{eq:tgConeBndFm}
\begin{aligned}
x\in\mathcal{L}_m & \implies \kappa(x,m)\cap\mathbf{T}_{\mathcal{F}_m}(x)=\emptyset\\
x\in\partial\mathcal{F}_m\setminus\mathcal{L}_m & \implies \kappa(x,m)\subset\mathbf{T}_{\mathcal{F}_m}(x).
\end{aligned}
\end{equation}
We can now apply \cite[Thm.~4.3]{chai2018forward}. With $\mathcal{K}$ in~\eqref{eq:KandA}, let $\hat{\mathcal{F}}:=\partial(\mathcal{K}\cap\mathcal{F})\setminus\mathcal{L}$ with $\mathcal{L}:=\{(x,m)\in\partial\mathcal{F}: \mathbf{F}(x,m)\cap\mathbf{T}_{\mathcal{F}}(x,m)=\emptyset\}$. By~\eqref{eq:tgConeBndF0} and \eqref{eq:tgConeBndFm} and $\mathcal{K}\cap\mathcal{F}=\mathcal{F}$, we have $\hat{\mathcal{F}}=\cup_{m=-1,0,1}(\partial\mathcal{F}_m\setminus\mathcal{L}_m)\times\{m\}$
and $\mathcal{L}=\cup_{m=-1,0,1}\mathcal{L}_m\times\{m\}$. It follows from~\eqref{eq:tgConeBndF0} and \eqref{eq:tgConeBndFm} that for every $(x,m)\in\hat{\mathcal{F}}$, $\mathbf{F}(x,m)\subset\mathbf{T}_{\mathcal{F}}(x,m)$. Also, $\mathbf{J}(\mathcal{K}\cap\mathcal{J})\subset\mathcal{K}$, $\mathcal{F}$ is closed, the map $\mathbf{F}$ satisfies the hybrid basic conditions as proven in Lemma~\ref{lemma:hbc}, and it is moreover locally Lipschitz since it is continuously differentiable. We conclude then that the set $\mathcal{K}$ is forward pre-invariant \cite[Def.~3.3]{chai2018forward}. In addition, since $\mathcal{L}_0\subset\mathcal{J}_0$ and $\mathcal{L}_m\subset\mathcal{J}_m$ with $m\in\{-1,1\}$, one has $\mathcal{L}\subset\mathcal{J}$. Besides, finite escape times can only occur through flow, and since the sets $\mathcal{F}_{-1}$ and $\mathcal{F}_1$ are bounded by their definitions in~\eqref{eq:Fm}, finite escape times cannot occur for $x \in \mathcal{F}_{-1} \cup \mathcal{F}_1$. Nor can they occur for $x \in \mathcal{F}_{0}$, because they would make $x^\top x$ grow unbounded, and this would contradict that $\tfrac{d}{dt}(x^\top x) \le 0$ by the definition of $\kappa(x,0)$ and by~\eqref{eq:hs_1obs:flowMap}. Therefore, all maximal solutions do not have finite escape times. By~\cite[Thm.~4.3]{chai2018forward} again, the set $\mathcal{K}$ is actually forward invariant \cite[Def.~3.3]{chai2018forward}, and solutions are complete. Finally, we anticipate here a straightforward corollary of completeness and Lemma~\ref{lemma:finiteJumps} below: since the number of jumps is finite by Lemma~\ref{lemma:finiteJumps}, all maximal solutions to~\eqref{eq:hs_1obs} are actually complete in the ordinary time direction.
Now, we will prove item ii) in two steps. First, we prove in the following Lemma~\ref{lemma:GAS_jumpless} that the set $\ensuremath{\mathcal{A}}$ is globally asymptotically stable for the system without jumps. To this end, the \emph{jumpless system} has data
$
\mathscr{H}^0 =(\mathbf{F}, \mathcal F, \emptyset, \emptyset )
$
with flow map $\mathbf{F}$ and flow set $\mathcal F$ defined in~\eqref{eq:hs_1obs}. We emphasize that $\mathscr{H}^0$ is obtained in accordance with~\cite[Eqs.~(38)-(39)]{goebel2009hybrid} by identifying \emph{all} jumps with events.
\begin{lemma}
\label{lemma:GAS_jumpless}
$\ensuremath{\mathcal{A}}$ in~\eqref{eq:KandA} is globally asymptotically stable for the jumpless hybrid system $\mathscr{H}^0$.
\end{lemma}
Second, we prove in the following Lemma~\ref{lemma:finiteJumps} that the number of jumps is finite for the given hybrid dynamics in~\eqref{eq:hs_1obs}.
\begin{lemma}
\label{lemma:finiteJumps}
For $\mathscr{H}$ in~\eqref{eq:hs_1obs}, each solution starting in $\mathcal{K}$ experiences no more than $3$ jumps.
\end{lemma}
Based on Lemmas~\ref{lemma:GAS_jumpless}-\ref{lemma:finiteJumps}, global asymptotic stability of $\ensuremath{\mathcal{A}}$ follows straightforwardly from~\cite[Thm.~31]{goebel2009hybrid}: the hybrid system in~\eqref{eq:hs_1obs} satisfies the Basic Assumptions \cite[p.~43]{goebel2009hybrid} (as proven in Lemma~\ref{lemma:hbc}), and the set $\ensuremath{\mathcal{A}}$ is compact and has empty intersection with the jump set.
Lastly, to prove item iii), let $\epsilon^\prime>\epsilon$. Select the parameter $\epsilon_h\in(\epsilon,\min(\epsilon^\prime,\sqrt{\epsilon\|c\|}))$ while all other hybrid controller parameters are selected as in Assumption~\ref{assumption:parameters}. This choice implies that the flow sets $\mathcal{F}_m, m\in\{-1,1\},$ of the avoidance mode are entirely contained in $\mathcal{B}_{\epsilon^\prime}(c)$. Therefore, as long as the state $x$ remains in $\mathbb{R}^n\setminus\mathcal{B}_{\epsilon^\prime}(c)$, solutions can only flow in the stabilizing mode $m=0$, which corresponds to the feedback law $u=-k_0x$.
\section{Numerical example}
\label{section:example}
We illustrate our results through a three-dimensional example. The hybrid system in~\eqref{eq:hs_1obs} is fully specified by the following parameters. The obstacle has center $c=(1,1,1)$ and radius $\epsilon=0.700$. The controller gains are $k_m= 1$ for $m \in \{-1,0,1\}$. The parameters used in the construction of the flow and jump sets are $\epsilon_h = 0.901$, $\epsilon_s = 0.800$, $\mu=0.444$, $\theta= 0.276$, which satisfy Assumption~\ref{assumption:parameters}. To select a point $p_1\in\mathcal{C}^=_\leq(c,c,\theta)\setminus\{c\}$, we proceed as follows. Select $v\in\mathbb{S}^n$ such that $v^\top c=0$ and consider $\mathbf{R}(v,\theta)\in\mathbb{SO}(3)$, i.e., the rotation matrix with axis $v$ and angle $\theta$. One can then verify that $p_1=(I_3-\mathbf{R}(v,\theta))c$ lies on the cone $\mathcal{C}^=_\leq(c,c,\theta)$. By letting $v=(0,1,-1)$, we determine $p_1=(0.424,-0.155,-0.155)$ and $p_{-1}=(-0.348,0.231,0.231)$ as in~\eqref{eq:p-1}. We also select $\psi= 0.249$ and $\bar\psi = 0.266$, which satisfy Assumption~\ref{assumption:parameters}.
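As a cross-check of this construction, the computation can be scripted with the standard Rodrigues formula (the sense of rotation encoded in $\mathbf{R}(v,\theta)$ is not fixed here, so both senses are tried; the sketch is ours):

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about the unit vector `axis` by `angle` (Rodrigues formula)."""
    v = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])       # cross-product matrix: K @ x = v x x
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

c = np.array([1.0, 1.0, 1.0])
theta = 0.276
v = np.array([0.0, 1.0, -1.0])               # satisfies v^T c = 0, as required

pts = []
for sign in (+1.0, -1.0):                    # the two rotation senses
    p = (np.eye(3) - rodrigues(v, sign * theta)) @ c
    pts.append(p)
    print(np.round(p, 3))
```

Both resulting points satisfy $c^\top(p-c)=-\cos(\theta)\|c\|\|p-c\|$ with $\|p-c\|=\|c\|$, i.e., they lie on $\mathcal{C}^=_\leq(c,c,\theta)$, and they match the reported $p_{-1}$ and $p_1$ to within about $10^{-3}$.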
Fig.~\ref{figure:construction} shows that the objectives posed in Section~\ref{section:problem} and proven in Theorem~\ref{theorem:invariance} are fulfilled. The top part of the figure illustrates the relevant sets. The middle part shows that the origin is globally asymptotically stable, and the control law matches the stabilizing one sufficiently away from the obstacle. The bottom part shows that the solutions are safe since they all stay away from the obstacle set $\mathcal{B}_\epsilon(c)$.
\begin{figure}
\centering
\includegraphics[width=0.30\columnwidth]{sim_sets3.png}~~\includegraphics[width=0.25\columnwidth]{sim_sets2.png}~~\includegraphics[width=0.30\columnwidth]{sim_sets1.png}\\
~\\
\includegraphics[width=.97\columnwidth]{sim_sol.png}\\
\includegraphics[width=.97\columnwidth]{obstDist.png}
\caption{Top left: sets $\mathcal{F}_{-1}$ (green) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top center: sets $\mathcal{J}_0$ (red), $\mathcal{J}_{-1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (green), and $\mathcal{J}_{1} \cap \mathcal{H}(c,\epsilon,\epsilon_h, \mu)$ (blue) surrounding $\mathcal{B}_\epsilon(c)$ (grey). Top right: sets $\mathcal{F}_1$ (blue) and $\mathcal{J}_0$ (red) surrounding $\mathcal{B}_\epsilon(c)$ (grey).
Middle: phase portrait of solutions with different initial conditions and $\mathcal{B}_\epsilon(c)$ (grey).
Bottom: distance to the obstacle for the solutions and radii $\epsilon_s$, $\epsilon$ of $\mathcal{H}(c,\epsilon,\epsilon_s, 1/2)$, $\mathcal{B}_\epsilon(c)$.}
\label{figure:construction}
\end{figure}
Variational studies have contributed greatly to our understanding of
correlated systems. In part this is due to their relative simplicity,
applicability to larger sizes irrespective of the number of dimensions, and
the easily accessible physical insight they provide. In the case of the
Hubbard model~\cite{Gutzwiller63,Gutzwiller65,Hubbard63,Kanamori63} frequently
used variational wavefunctions include the Gutzwiller
~\cite{Gutzwiller63,Gutzwiller65} (GWF) and Baeriswyl
wavefunctions~\cite{Baeriswyl86,Baeriswyl00,Dzierzawa97} (BWF), and their
combinations. The former is based on suppressing charge fluctuations in the
noninteracting solution, the latter on projecting with the hopping operator
onto a wavefunction in the large interaction limit.
The GWF has been studied by a variety of methods. It can be solved
exactly in one~\cite{Metzner87,Metzner89} and
infinite~\cite{Metzner88,Metzner89,Metzner90} dimensions, and it can be
simulated in two and three dimensions by variational Monte
Carlo~\cite{Yokoyama87}. The one-dimensional exact solution produces a state
with a finite discontinuity of the momentum density at the Fermi surface.
Millis and Coppersmith~\cite{Millis91} have investigated the response of the
GWF and have concluded that it is metallic with a conductivity proportional to
the kinetic energy. Insulating behavior in projected wavefunctions similar to
the one due to Gutzwiller can be produced by generalized projection
operators~\cite{Capello05,Capello06}, for example non-centro-symmetric or
singular projectors.
Calculating the Drude or the superfluid weight in a variational context is a
difficult issue. These two quantities can be cast in terms of identical
expressions (see Eq. (\ref{eqn:Dc})), the second derivative of the ground
state energy with respect to a Peierls
phase~\cite{Kohn64,Shastry90,Scalapino92,Scalapino93}. As pointed out by
Scalapino, White, and Zhang, the two quantities differ in the interpretation
of the derivative~\cite{Scalapino92,Scalapino93}. For the Drude weight the Peierls phase shifts the ground state energy adiabatically, the system always remaining in the same state; for the superfluid weight, level crossings are also considered.
In this work general expressions for the Drude and superfluid weights are
derived in a variational setting. Deriving an easily applicable expression for the Drude weight is difficult, since the expression derived herein depends on the exact eigenvalues of the perturbed Hamiltonian, which are often not available in practical settings.
linear response expression for the current can be cast in terms of a geometric
phase. The tool for calculating this geometric phase (the total position
shift operator) is also presented. The formalism is then used to interpret
the current response of projected wavefunctions. It is demonstrated that the commonly used Gutzwiller and Baeriswyl projected wavefunctions, as well as wavefunctions based on combinations of the two projections, produce a current response identical to that of the wavefunction on which the projections are applied (the Fermi sea or the wavefunction in the strongly interacting limit).
\section{Drude and superfluid weights in variational theory}
An expression for the frequency ($\omega$) and wave vector (${\bf
q}$)-dependent conductivity was derived by Kohn~\cite{Kohn64}. The DC
conductivity (Drude weight, $D_c$) corresponds to the strength of the
$\delta$-function peak of the conductivity in the zero frequency limit. The
correct expression for $D_c$ is obtained by first taking the limit ${\bf q}\rightarrow 0$ and then the limit $\omega\rightarrow 0$. $D_c$ is
often expressed~\cite{Kohn64,Shastry90} in terms of the second derivative of
the ground state energy with respect to a phase associated with the perturbing
field as
\begin{equation}
\label{eqn:Dc}
D_c = \frac{\pi}{L} \left[ \frac{\partial^2 E_0(\Phi)}{\partial
\Phi^2} \right]_{\Phi=0}.
\end{equation}
Here $E_0(\Phi)$ denotes the perturbed ground state energy and $\Phi$ the Peierls phase.
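For a concrete noninteracting illustration of Eq. (\ref{eqn:Dc}) (a sketch with illustrative parameters of our own choosing): for $N$ spinless fermions on an $L$-site tight-binding ring, $E_0(\Phi)=-2t\sum_{k\,\mathrm{occ}}\cos(k+\Phi)$ when the occupied momenta are followed adiabatically, so $[\partial^2 E_0/\partial\Phi^2]_{\Phi=0}=-E_0(0)$ and $D_c=-(\pi/L)E_0(0)$, which a finite-difference second derivative reproduces:

```python
import numpy as np

# N spinless fermions on an L-site tight-binding ring (t = 1); the occupied
# momenta are fixed at Phi = 0 and followed adiabatically, as required for D_c.
L, N, t = 20, 9, 1.0                          # illustrative values (odd N: unique ground state)
ks = 2.0 * np.pi * np.arange(L) / L
occ = np.argsort(-2.0 * t * np.cos(ks))[:N]   # the N lowest levels at Phi = 0

def E0(phi):
    """Adiabatically continued ground-state energy E_0(Phi)."""
    return np.sum(-2.0 * t * np.cos(ks[occ] + phi))

h = 1e-3                                      # finite-difference step for d^2E/dPhi^2
Dc_fd = (np.pi / L) * (E0(h) - 2.0 * E0(0.0) + E0(-h)) / h**2
Dc_exact = -(np.pi / L) * E0(0.0)             # since d^2E_0/dPhi^2 at 0 equals -E_0(0) here
print(Dc_fd, Dc_exact)
```

For this dispersion $D_c$ is proportional to minus the kinetic energy, consistent with the conductivity-proportional-to-kinetic-energy behavior quoted in the Introduction~\cite{Millis91}.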
Scalapino, White, and Zhang (SWZ)~\cite{Scalapino92,Scalapino93} have
investigated the distinction between the Drude and superfluid weights. In
particular they studied the importance of the order of different limits
($\omega\rightarrow 0$, ${\bf q}\rightarrow 0$) for the conductivity. In a
variational context implementation of the frequency limit is not
straightforward, since, strictly speaking there is no frequency to speak of.
However, SWZ have also pointed out that the derivative with respect to the
phase $\Phi$ in Eq. (\ref{eqn:Dc}) is ambiguous. They showed that if the
derivative is defined via adiabatically shifting the state which is the ground
state at zero field, then the Drude weight results. In the presence of level
crossings the adiabatically shifted state may be an excited state for a finite
value of the perturbation. The superfluid weight is obtained if the
derivative corresponds to the ``envelope function'', i.e. the ground state of
the perturbed system is taken to define the derivative. The distinction
between these two derivatives can be implemented by embedding the periodic
system under study in a larger periodic system, and defining the perturbation
in terms of the periodic boundary conditions of this larger system. In cases
in which level crossings are close to $\Phi=0$ conductors, superconductors,
and insulators can be distinguished~\cite{Scalapino92,Scalapino93,Hetenyi12}.
In general, the position of level crossings depends on
dimensionality~\cite{Scalapino92,Scalapino93}.
The finite temperature extension of $D_c$ has been given by Zotos, Castella,
and Prelov\v{s}ek~\cite{Zotos96} (ZCP). This generalization can be summarized
as
\begin{equation}
D_c(T) = \frac{\pi}{L}\sum_n \frac{\exp(-\beta E_n)}{Q} \left[ \frac{\partial^2 E_n(\Phi)}{\partial
\Phi^2} \right]_{\Phi=0}.
\label{eqn:ZCP_Dc}
\end{equation}
Note that in this expression the Boltzmann weight factors remain unchanged as the
perturbation $\Phi$ is turned on. Eq. (\ref{eqn:ZCP_Dc}) has been
applied~\cite{Kirchner99} to calculate the DC conductivity in strongly
correlated systems. Taking the zero temperature limit reproduces Kohn's
expression for $D_c$. To define a finite temperature analog of $D_s$ one lets
the Boltzmann weight factors depend on the perturbing field $\Phi$ as
\begin{equation}
D_s(T) = \frac{\pi}{L} \left[\frac{ \partial^2}{\partial \Phi^2} \sum_n
\frac{\exp(-\beta E_n(\Phi))}{Q} E_n(\Phi) \right]_{\Phi=0}.
\label{eqn:ZCP_Ds}
\end{equation}
Indeed the ground state superfluid weight is reproduced in the zero
temperature limit. Eqs. (\ref{eqn:ZCP_Dc}) and (\ref{eqn:ZCP_Ds}) follow from
the assumption that the distinction between the Drude and superfluid weights
is due to the different types of derivatives as discussed by SWZ.
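As a minimal check of Eq. (\ref{eqn:ZCP_Dc}) (a sketch for a single tight-binding particle on a ring, with parameter values chosen only for illustration): each level obeys $E_n(\Phi)=-2t\cos(2\pi n/L+\Phi)$, hence $[\partial^2_\Phi E_n]_{\Phi=0}=-E_n(0)$ and $D_c(T)=-(\pi/L)\langle E\rangle_T$, the Boltzmann weights being held fixed at their $\Phi=0$ values:

```python
import numpy as np

# Finite-temperature Drude weight, Eq. (ZCP_Dc), for one particle on an
# L-site tight-binding ring: levels E_n(Phi) = -2 t cos(2 pi n / L + Phi).
L, t, beta = 12, 1.0, 2.0                  # illustrative values
ks = 2.0 * np.pi * np.arange(L) / L

def levels(phi):
    return -2.0 * t * np.cos(ks + phi)

w = np.exp(-beta * levels(0.0))
w /= w.sum()                               # Boltzmann weights, held fixed at Phi = 0

h = 1e-3
d2E = (levels(h) - 2.0 * levels(0.0) + levels(-h)) / h**2
Dc_T = (np.pi / L) * np.sum(w * d2E)       # Eq. (ZCP_Dc) by finite differences
Dc_analytic = -(np.pi / L) * np.sum(w * levels(0.0))
print(Dc_T, Dc_analytic)
```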
Similarly, in deriving expressions for $D_c$ and $D_s$ in a variational
setting our starting assumption will also be that the distinction between the
two quantities is due to the effects of level crossings. Suppose
$|\tilde{\Psi}(\gamma)\rangle$ is a variational wavefunction, where $\gamma$
denotes a set of variational parameters, which we wish to use to optimize some
Hamiltonian $\hat{H}$ with eigenbasis
\begin{equation}
\hat{H}|\Psi_n \rangle = E_n |\Psi_n \rangle.
\end{equation}
The estimate for the ground state energy may be written in terms of a density
matrix as
\begin{equation}
\langle \tilde{\Psi}(\gamma)|\hat{H}|\tilde{\Psi}(\gamma)\rangle = \sum_n
\langle \tilde{\Psi}(\gamma)|\Psi_n\rangle E_n \langle \Psi_n
|\tilde{\Psi}(\gamma)\rangle = \sum_n P_n E_n,
\end{equation}
where the probabilities are given by
\begin{equation}
P_n = |\langle \tilde{\Psi}(\gamma) | \Psi_n \rangle|^2.
\end{equation}
Comparing with Eq. (\ref{eqn:ZCP_Dc}) it is obvious that a consistent
formalism requires that the variational Drude weight be defined as
\begin{equation}
D_c = \frac{\pi}{L}\sum_n P_n \left[\frac{\partial^2 E_n(\Phi)}{\partial \Phi^2}\right]_{\Phi=0},
\end{equation}
with $P_n$ independent of the perturbation $\Phi$. It follows that the
variational parameter $\gamma$ is independent of the perturbation $\Phi$. The
variational analog of $D_s$ (based on Eq. (\ref{eqn:ZCP_Ds})) corresponds to
\begin{equation}
D_s = \frac{\pi}{L}\left[\frac{\partial^2 }{\partial \Phi^2} \sum_n P_n(\Phi) E_n(\Phi)\right]_{\Phi=0},
\label{eqn:Ds}
\end{equation}
where the probabilities $P_n(\Phi)$ depend on $\Phi$ and the variational
parameters $\gamma$ in this case {\it also depend} on $\Phi$.
A central difficulty in calculating $D_c$ in a variational theory is the fact
that it depends on the exact eigenvalues of the perturbed Hamiltonian (see
Eq. (\ref{eqn:Dc})), however variational theories are usually applied in cases
where the exact solution is not easily accessible. While the Drude weight
remains a difficult problem in general, it is shown below that the current can
be cast in terms of a geometric phase, and evaluated even in a variational
context.
\section{Current in terms of a geometric phase}
\label{sec:J}
In this section we consider the adiabatic current response of a system in
general, not only in a variational context. After showing that the persistent
current can be expressed as a geometric phase~\cite{Berry84,Shapere89}, we
explicitly construct the mathematical tools to calculate it, and use the
results in the next section to interpret the GWF. Since the current can be
cast in terms of observables, it follows that the calculation of the Drude
weight is also accessible, being the first derivative of the current as a
function of the Peierls phase.
Consider a system periodic in $L$, and experiencing a perturbation in the form
of a Peierls phase $\Phi$. Its Hamiltonian can be written as
\begin{equation}
\hat{H}(\Phi) = \sum_{i=1}^N\frac{(\hat{p}_i+\Phi)^2}{2m} + V(x_1,...,x_N).
\end{equation}
The following identity also holds
\begin{equation}
\label{eqn:pH}
\partial_\Phi \hat{H}(\Phi) = \sum_{i=1}^N \frac{(\hat{p}_i+\Phi)}{m}.
\end{equation}
The ground state energy can be written as
\begin{equation}
E(\Phi) = \langle \Psi(\Phi)|\hat{H}(\Phi)| \Psi(\Phi)\rangle.
\end{equation}
The average current for such a system can be expressed as~\cite{Kohn64}
\begin{equation}
J(\Phi) = \partial_\Phi E(\Phi) =
\langle \Psi(\Phi)|\partial_\Phi \hat{H}(\Phi)|\Psi(\Phi)\rangle.
\end{equation}
Substituting for the partial derivative of the Hamiltonian we obtain
\begin{equation}
J(\Phi) = \frac{N \Phi}{m}+ \sum_{i=1}^N
\langle \Psi(\Phi)| \frac{\hat{p}_i}{m}|\Psi(\Phi)\rangle.
\end{equation}
In the position representation the current can be written
\begin{equation}
J(\Phi) = \frac{N \Phi}{m} - \frac{i}{m} \sum_{i=1}^N
\langle \Psi(\Phi)| \frac{\partial }{\partial x_i}|\Psi(\Phi)\rangle.
\end{equation}
Next we rewrite the wavefunction in terms of a shift of the total position and
define a wavefunction
\begin{equation}
\langle x_1,...x_N| \Psi(\Phi;X) \rangle = \Psi(x_1+X,...,x_N+X;\Phi).
\end{equation}
The action of the total momentum can then be cast in terms of the derivative
with respect to $X$ as
\begin{equation}
\sum_{i=1}^N \frac{\partial}{\partial x_i}
\Psi(x_1+X,...,x_N+X;\Phi)
= \partial_X \Psi(x_1+X,...,x_N+X;\Phi).
\end{equation}
The effect of $X$ on the particle positions is similar to the effect of the
Peierls phase on the momenta. Like the Peierls phase it is an external
parameter, so one can perform adiabatic cycles as a function of it. Averaging
in $X$ over the unit cell $\frac{1}{L} \int_0^L \mbox{d} X...$ leads to
\begin{equation}
J(\Phi) = \frac{N\Phi}{m}-\frac{i}{mL} \int_0^L \mbox{d}X
\langle \Psi(\Phi;X)| \partial_X| \Psi(\Phi;X)\rangle.
\label{eqn:Jphi}
\end{equation}
The second term in Eq. (\ref{eqn:Jphi}) is a geometric phase~\cite{Berry84}.
Since it results from the periodicity of the parameter $X$ it is similar to
the geometric phase derived by Zak~\cite{Zak89}. It is also similar to the
geometric phase expression appearing in the modern theory of
polarization~\cite{Resta94}, with the variable $X$ playing the role of the
crystal momentum in this case. Thus the current due to a perturbation can be
expressed in terms of a constant proportional to the number of particles, and
a geometric phase term. Below an interpretation of the phase term is given.
It is interesting to note that a finite persistent current is in principle
possible for an unperturbed system (the case $\Phi=0$).
The next question to address is the actual calculation of this quantity. We
can construct a scheme which is in the spirit of the total position operator
proposed by Resta~\cite{Resta98,Resta99} to calculate the polarization. We
consider the case $\Phi=0$ (and suppress the notation), without loss of
generality. We first rewrite the Berry phase appearing in the expression for
the current in terms of its discretized analog as~\cite{Resta96}
\begin{equation}
\label{eqn:JBP}
J(0) = \lim_{\Delta X \rightarrow 0} \frac{1}{mL} \mbox{Im} \ln \prod_{s=0}^{M-1}
\langle \Psi(s\Delta X) |\Psi((s+1)\Delta X) \rangle.
\end{equation}
The continuous expression can be recovered by Taylor expanding the
wavefunction $|\Psi((s+1)\Delta X)\rangle$ around $s\Delta X$ and taking the
limit as $\Delta X \rightarrow 0$. Indeed
\begin{equation}
J(0) = \lim_{\Delta X \rightarrow 0} \frac{1}{mL} \mbox{Im} \sum_{s=0}^{M-1}
\ln \langle \Psi(s\Delta X) |\left[ |\Psi(s\Delta X) \rangle + \partial_X
|\Psi(s\Delta X) \rangle \Delta X \right]
= \lim_{\Delta X \rightarrow 0} \frac{1}{mL} \mbox{Im} \sum_{s=0}^{M-1}
\ln \big[ 1 + \langle \Psi(s\Delta X) | \partial_X | \Psi(s\Delta X) \rangle \Delta X\big].
\end{equation}
When the limit $\Delta X \rightarrow 0$ is taken the natural logarithm can be
expanded and we obtain
\begin{equation}
J(0) = \frac{1}{mL} \mbox{Im} \left[ \int \mbox{d}X
\langle\Psi(X)|\partial_X|\Psi(X)\rangle \right]
= -\frac{i}{mL} \int \mbox{d}X \langle\Psi(X)|\partial_X|\Psi(X)\rangle .
\end{equation}
The shift in the total position of the system by a value of $\Delta X$ can be
accomplished using the total position shift operator $\hat{U}(\Delta X)$. The
explicit form of this operator will be derived below; for now we assume its
existence. We define it as
\begin{equation}
\hat{U}(\Delta X) |\Psi(X) \rangle = |\Psi(X + \Delta X) \rangle.
\label{eqn:U}
\end{equation}
Using Eq. (\ref{eqn:U}) we can express the product in Eq. (\ref{eqn:JBP}) as
\begin{equation}
\prod_{s=0}^{M-1} \langle \Psi(s\Delta X) |\Psi((s+1)\Delta X) \rangle
= \langle \Psi(0) |\hat{U}(\Delta X)|\Psi(0) \rangle^M.
\end{equation}
Substituting into Eq. (\ref{eqn:JBP}) the expression for the current becomes
\begin{equation}
\label{eqn:JBP_DX}
J(0) = \lim_{\Delta X \rightarrow 0} \frac{1}{m}\frac{1}{\Delta X}
\mbox{Im} \ln \langle \Psi(0) |\hat{U}(\Delta X)|\Psi(0) \rangle.
\end{equation}
The total position shift operator can be constructed using real space
permutation operators. This derivation has been given
elsewhere~\cite{Essler05}; here we emphasize the main results. In second
quantized notation the permutation operator between two positions can be
written as
\begin{equation}
P_{ij} = 1 - (c_i^\dagger - c_j^\dagger)(c_i - c_j).
\end{equation}
This operator has the properties
\begin{equation}
P_{ij}c_j = c_i P_{ij}, P_{ij}c_i = c_j P_{ij},
P_{ij}c_j^\dagger = c_i^\dagger P_{ij}, P_{ij}c_i^\dagger = c_j^\dagger P_{ij}.
\end{equation}
Assuming a grid with spacing $\Delta X$, using $P_{ij}$ we can construct an
operator which shifts all the positions on the grid in a periodic system. The
operator
\begin{equation}
\hat{U}(\Delta X) = P_{L-1L}\cdots P_{23}P_{12},
\end{equation}
where it is assumed that the indices refer to particular grid points and that the rightmost transposition $P_{12}$ acts first, has the
property that
\begin{equation}
\hat{U}(\Delta X) c_i = \left\{ \begin{array}{rl}
c_{i-1}\hat{U}(\Delta X), & i = 2,...,L
\\
c_{L}\hat{U}(\Delta X), & i = 1.
\end{array} \right.
\label{eqn:UU}
\end{equation}
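The shift property in Eq. (\ref{eqn:UU}) can be verified in the single-particle (permutation-matrix) representation. Note that the composition order matters: the transposition $P_{12}$ must act first to produce the shift $c_i\mapsto c_{i-1}$; applying the chain in the opposite order shifts the other way. A small sketch:

```python
import numpy as np

L = 6
def swap(i, j):
    """Permutation matrix exchanging basis vectors e_i and e_j (0-indexed sites)."""
    S = np.eye(L)
    S[[i, j]] = S[[j, i]]
    return S

# Compose P_{12}, P_{23}, ..., P_{L-1 L} with P_{12} acting first (rightmost factor).
U = np.eye(L)
for i in range(L - 1):
    U = swap(i, i + 1) @ U

# Cyclic shift of Eq. (UU): e_i -> e_{i-1}, e_1 -> e_L (0-indexed: e_0 -> e_{L-1}).
Cshift = np.zeros((L, L))
for i in range(L):
    Cshift[(i - 1) % L, i] = 1.0

print(np.array_equal(U, Cshift))
```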
It also holds that
\begin{equation}
\hat{U}(\Delta X) \tilde{c}_k = e^{i \Delta X k} \tilde{c}_k
\hat{U}(\Delta X),
\label{eqn:Uc} \end{equation}
where $\tilde{c}_k$ denotes the annihilation operator in reciprocal space.
Eq. (\ref{eqn:Uc}) can be demonstrated by Fourier transforming $\tilde{c}_k$
and applying (\ref{eqn:UU}). Taking the Fermi sea
\begin{equation}
|FS \rangle = \tilde{c}_{k_1}^\dagger ... \tilde{c}_{k_N}^\dagger |0\rangle,
\end{equation}
as an example one can show that
\begin{equation}
\hat{U}(\Delta X)|FS \rangle = e^{i \Delta X K}|FS \rangle,
\end{equation}
with $K = \sum_{i=1}^N k_i$.
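This eigenvalue relation, together with Eq. (\ref{eqn:JBP_DX}), is easy to verify on a small ring with unit grid spacing ($\Delta X=1$): for a Slater determinant of plane waves the overlap is $\langle FS|\hat{U}(\Delta X)|FS\rangle=\det(\Phi^\dagger T\Phi)$, with $T$ the one-site translation matrix (the occupied momenta and the Fourier sign convention below are our own illustrative choices):

```python
import numpy as np

L = 8
ns = np.array([0, 1])                            # occupied n; k_n = 2 pi n / L
ks = 2.0 * np.pi * ns / L
j = np.arange(L)
Phi = np.exp(1j * np.outer(j, ks)) / np.sqrt(L)  # columns: occupied plane-wave orbitals

T = np.zeros((L, L))
for i in range(L):
    T[i, (i + 1) % L] = 1.0                      # (T psi)(j) = psi(j+1): shift by DX = 1

overlap = np.linalg.det(Phi.conj().T @ T @ Phi)  # <FS| U(DX) |FS> for the determinant
K = ks.sum()                                     # total momentum
print(np.angle(overlap), K)
```

Here $\Phi^\dagger T\Phi=\mathrm{diag}(e^{ik_a})$, so the overlap is a pure phase $e^{iK}$ and Eq. (\ref{eqn:JBP_DX}) returns $J(0)=K/m$: a momentum-imbalanced occupation carries a persistent current.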
As an example we consider again the non-interacting Fermi sea $|FS\rangle$ defined above, where the $k$-vectors are spread symmetrically around zero. Applying a
perturbation $\Phi$ shifts all $k$-vectors by $\Phi$. The resulting current
is
\begin{equation}
J(\Phi) = \frac{2N}{m}\Phi,
\end{equation}
corresponding to a Drude weight of $D_c = 2N/m$. It is interesting to see
that the current is proportional to {\it twice} the number of particles. In a
Fermi sea conduction can occur due to particles as well as holes, of which at
half-filling there are an equal number. For systems with bound particles and
holes, $J(\Phi)$ is reduced, as bound excitons do not participate in
conduction and reduce the effective number of charge carriers. Thus the
geometric phase in Eq. (\ref{eqn:Jphi}) accounts for exciton binding. When
all the particles are bound to holes then the constant term in
Eq. (\ref{eqn:Jphi}) is cancelled by the phase term leading to $J(\Phi)=0$.
An example of bound particles and holes in the same band leading to insulating
behaviour is the Baeriswyl variational wavefunction~\cite{Baeriswyl00}.
\section{Contribution of the geometric phase to the current response of projected wavefunctions}
In this section we provide the response theory of some commonly used projected
wavefunctions~\cite{Gutzwiller63,Gutzwiller65,Baeriswyl86,Baeriswyl00}. We
emphasize that it is the contribution of the phase term to the current
response we calculate, not the Drude weight, which is the first derivative of
the current response with respect to the perturbing phase.
The Gutzwiller wavefunction~\cite{Gutzwiller63,Gutzwiller65} (GWF) was
proposed as a variational wavefunction for the Hubbard model, and it has the
form
\begin{equation}
|\Psi_G(\gamma)\rangle = e^{-\gamma \hat{D}}|FS\rangle,
\end{equation}
where $\hat{D} = \sum_i n_{i\uparrow}n_{i\downarrow}$. Without loss of
generality we consider the one-dimensional case.
Before developing the current response theory of the GWF, we present the
calculation of a quantity which expresses the extent of localization.
Localization has been suggested long ago as a general criterion of
metallicity~\cite{Kohn64}, and the relation of the spread to the DC
conductivity has been shown in a number of
places~\cite{Resta98,Resta99,Hetenyi12}. In particular we calculate the
normalized spread defined as
\begin{equation}
\frac{\langle X^2 \rangle - \langle X \rangle^2}{L^2}.
\label{eqn:X2}
\end{equation}
Due to the ill-defined nature of the position operator in periodic systems we
choose a sawtooth representation which can be written as
\begin{equation}
X = \sum_{\stackrel{m=-L/2}{m\neq0}}^{L/2-1} \left(\frac{1}{2} +
\frac{\hat{W}^m}{\mbox{exp}\left( i\frac{2\pi m}{L}\right)-1} \right),
\label{eqn:X_st}
\end{equation}
where $\hat{W}$ denotes the total momentum shift operator, which has the
property that
\begin{equation}
\hat{W} |\Psi(K)\rangle = |\Psi(K+ (2\pi)/L)\rangle.
\end{equation}
The construction~\cite{Hetenyi09} of this operator is analogous to the total
position shift operator used to define the persistent current in section
\ref{sec:J}. For a state $|\tilde{\Phi}\rangle$ diagonal in the position
representation one can write
\begin{equation}
\hat{W}|\tilde{\Phi}\rangle = e^{i\frac{2\pi}{L}\sum_i \hat{x}_i}|\tilde{\Phi}\rangle,
\end{equation}
where $\hat{x}_i$ denotes the position of particle $i$. Using the sawtooth
representation one can show that for the Fermi sea the spread in position is
\begin{equation}
\frac{\langle X^2 \rangle - \langle X \rangle^2}{L^2} = \lim_{L\rightarrow \infty}
\frac{1}{L^2}\sum_{m=1}^{L-1} \frac{1}{2(1-\mbox{cos}\left(\frac{2\pi
m}{L}\right))}
= \frac{1}{12}.
\label{eqn:sst}
\end{equation}
To show this one needs to substitute Eq. (\ref{eqn:X_st}) into
Eq. (\ref{eqn:X2}), and then use the fact that
\begin{equation}
\langle FS|\hat{W}^m|FS\rangle = \left\{ \begin{array}{rl}
0, & m = 1,...,L-1
\\
1, & m = 0.
\end{array} \right.
\end{equation}
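The limit in Eq. (\ref{eqn:sst}) can be confirmed numerically; the finite-$L$ sum in fact evaluates to $(L^2-1)/12L^2$, which also reproduces the Fermi-sea column of Table \ref{tab:spread_gwf}. A short sketch:

```python
import numpy as np

def normalized_spread(L):
    """Finite-L evaluation of the sum in the spread formula."""
    m = np.arange(1, L)
    return np.sum(1.0 / (2.0 * (1.0 - np.cos(2.0 * np.pi * m / L)))) / L**2

# matches the closed form (L^2 - 1)/(12 L^2), hence 1/12 as L -> infinity
assert abs(normalized_spread(12) - 143.0 / 1728.0) < 1e-12   # 0.08275...
assert abs(normalized_spread(600) - 1.0 / 12.0) < 1e-3
```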
Our results are shown in Table \ref{tab:spread_gwf}. The GWF results for two
different values of the variational parameter were calculated for a
one-dimensional system based on the variational Monte Carlo method of Yokoyama
and Shiba~\cite{Yokoyama87}. The fact that the normalized spread approaches a
constant for large $L$ (system size) indicates that the system is delocalized,
hence metallic. What is surprising in these results, however, is that for
large $L$ the spread of all three examples converges to the same value. The
projecting out of double occupations in the GWF seems to have no effect on the
spread for large $L$, which is identical to the result for the Fermi sea. The
GWF, though, is thought to be representative of ``bad metals'', metals whose
conductivity is reduced due to strong
correlations~\cite{Brinkman70,Fazekas99}.
\begin{table}
\begin{tabular}{|c||c|c|c|}
\hline
$L$ & Fermi sea & $\gamma=1.0$& $\gamma=2.0$ \\ \hline
$12$ & $0.08275$ & $0.079(1)$ & $0.0412(9)$ \\
$24$ & $0.08312$ & $0.0830(6)$ & $0.0682(6)$ \\
$36$ & $0.08327$ & $0.0831(5)$ & $0.0797(5)$ \\
$48$ & $0.08330$ & $0.0833(4)$ & $0.0824(4)$ \\
$60$ & $0.08331$ & $0.0829(3)$ & $0.0830(3)$ \\
$\infty$ & $1/12$ & & \\
\hline
\end{tabular}
\caption{Spread in the total position divided by the square of the system size
for the Fermi sea and the Gutzwiller wavefunction. Two different values of
the variational parameter, $\gamma=1.0$ and $\gamma=2.0$, are shown. As
the system size increases the value $1/12$ is approached by all three
systems. The approach to the limiting value slows down as correlation
effects are introduced; it is slowest for $\gamma=2.0$, the ``most projected''
of the three examples.}
\label{tab:spread_gwf}
\end{table}
It turns out that these results are actually consistent with what one obtains
for the current response. We consider the phase term under a perturbation in
the form in Eq. (\ref{eqn:JBP_DX}). Consider first the action of the operator
$\hat{U}(\Delta X)$ on the GWF:
\begin{equation}
\hat{U}(\Delta X)|\Psi_G(\gamma)\rangle = \hat{U}(\Delta X)e^{-\gamma \sum_i n_{i\uparrow}n_{i\downarrow}}|FS\rangle.
\end{equation}
The operator $\hat{U}(\Delta X)$ shifts the positions of {\it all particles}
by one lattice spacing. Such a shift will not change the total number of
double occupations, hence the Gutzwiller projector and the total position
shift operator commute. We can write
\begin{equation}
\hat{U}(\Delta X)|\Psi_G(\gamma)\rangle = e^{-\gamma \sum_i
n_{i\uparrow}n_{i\downarrow}}\hat{U}(\Delta X)|FS\rangle = e^{i\Delta X
\sum_i k_i}|\Psi_G(\gamma)\rangle,
\end{equation}
where $\sum_i k_i$ denotes the sum over the momenta of the {\it Fermi sea}.
One obtains exactly the same result in the absence of the Gutzwiller
projector. When substituting back into Eq. (\ref{eqn:JBP_DX}) we find that
the current response of the GWF is exactly that of the Fermi sea, and this
result is independent of correlations (whose strength increases monotonically
with the variable $\gamma$). The above derivation can be extended to
projections based on Jastrow-type correlations and the conclusion is valid as
long as the projections are centro-symmetric (considered in
Ref. \cite{Millis91}). It has been shown~\cite{Capello05} that
non-centrosymmetric correlators can produce an insulating state. The current
response in this case will also not follow the derivation above, since a shift
in all the particles can change the contribution to the projector. Another
exception is the case when $\gamma\rightarrow\infty$, i.e. the singular case,
which in general is also known to allow for insulating
behavior~\cite{Capello06}.
For the GWF one can obtain further insight into the current response by
writing it in the position representation as
\begin{equation}
|\Psi_G \rangle = \sum_{\bf R} e^{-\gamma D({\bf R})} \mbox{Det}({\bf
K};{\bf R}) |{\bf R} \rangle.
\label{eqn:Psi_G_r}
\end{equation}
In Eq. (\ref{eqn:Psi_G_r}) ${\bf R}$ indicates the configurations of particles
(both up-spin and down-spin), $D({\bf R})$ indicates the number of double
occupations for a particular configuration of particles, $\mbox{Det}({\bf
K};{\bf R})$ denotes the product of Slater determinants for up-spin and
down-spin electrons, and $|{\bf R} \rangle$ denotes a position space
eigenstate. From Eq. (\ref{eqn:Psi_G_r}), we see that the projection changes
the relative weight of different configurations but leaves their phases {\it
intact}~\cite{Fazekas99}. The fact that the current, a quantity related to
the phase of the wavefunction, is unaltered by the Gutzwiller projection
coincides with the result above, namely that the persistent current for a Gutzwiller
wavefunction is determined exclusively by the Fermi sea. In fact Millis and
Coppersmith~\cite{Millis91} suggest a scheme in which a projector operator of the form
$e^{iS}$, with $S=\frac{1}{U}(H_t^+-H_t^-)$, ($H_t^+$($H_t^-$) raises(lowers)
the number of double occupations) acts on the Fermi sea to produce an
insulating wavefunction. Clearly this scheme would alter the phases of the
Fermi sea.
The above reasoning can be extended to other commonly used projected
variational wavefunctions. The Baeriswyl-Gutzwiller wavefunction can be
written
\begin{equation}
|\Psi_{BG}(\alpha,\gamma)\rangle = e^{-\alpha \hat{T}}e^{-\gamma \hat{D}}
|FS\rangle.
\label{eqn:Psi_BG}
\end{equation}
In this case $\hat{T}$ denotes the kinetic energy, and $\alpha$ denotes the
variational parameter. Since the total position shift operator
$\hat{U}(\Delta X)$ is diagonal in momentum space, it trivially commutes with
the projector $e^{-\alpha \hat{T}}$. We conclude that the current response of
the Baeriswyl-Gutzwiller projected wavefunction is identical to that of the
Fermi sea. The other two commonly used variational wavefunctions are the
Baeriswyl and Gutzwiller-Baeriswyl projected wavefunctions. Their form is
\begin{equation}
\label{eqn:Psi_B}
|\Psi_{B}(\alpha,\gamma)\rangle = e^{-\alpha \hat{T}}
|\Psi_{\infty}\rangle,
\end{equation}
\begin{equation}
\label{eqn:Psi_GB}
|\Psi_{GB}(\alpha,\gamma)\rangle = e^{-\gamma \hat{D}} e^{-\alpha \hat{T}}
|\Psi_{\infty}\rangle.
\end{equation}
In Eqs. (\ref{eqn:Psi_B}) and (\ref{eqn:Psi_GB}) $|\Psi_{\infty}\rangle$
denotes the wavefunction in the limit of infinite interaction. This function
is in general not known. Again one can exploit the fact that the total
position shift commutes with the projector operators and conclude that the
current response in both cases will depend on $|\Psi_{\infty}\rangle$
exclusively. While this function is in general not known, in the
half-filled case one can assume that its current response is zero.
\section{Conclusion}
The current response was investigated in the context of variational theory.
The Drude and superfluid weights have seemingly identical expressions (second
derivative of the ground state energy with respect to the Peierls phase),
however, as was pointed out by Scalapino, White, and Zhang, the meaning of the
derivative differs between the two, one being the adiabatic and the other the
``envelope'' derivative. Assuming their interpretation of the derivative we
derived the expressions for the Drude and superfluid weights appropriate for
variational theory. A key difficulty with the former is the appearance of the
exact eigenstates of the perturbed Hamiltonian, in general not available in
practical situations where variational theory is used. As a partial remedy
the persistent current was shown to consist of a constant term, proportional
to the perturbation and the number of charge carriers, and a geometric phase
term. This expression can be used in practical settings to obtain the Drude
weight by numerically taking the first derivative of the current with respect
to the phase. The current response of several commonly used variational
wavefunctions was also analyzed, and it was shown that variational wavefunctions
which use a Baeriswyl or Gutzwiller projection have a current response
determined by the wavefunction on which the projectors are applied (the Fermi
sea or the solution in the strongly interacting limit).
\section*{Acknowledgements} This work was supported by the Turkish agency for
basic research (T\"UBITAK, grant no. 112T176). Part of the work was carried
out at the Graz University of Technology under a grant from FWF (no.
P21240-N16). The author is grateful to the Physical Society of Japan for
financial support in publication.
\section{Introduction}
\label{introduction}
Evolutionary search has frequently been used to generate artistic images~\cite{Heijer14,correia2013machado,DBLP:journals/tec/KowaliwDM12}. Images are high-dimensional objects. Previous work has either reduced the dimensionality of the search space through programmatic encodings~\cite{correia2013machado,citeulike:12541313} or has constrained the images with priors~\cite{neumann2017evolutionary,DBLP:conf/gecco/AlexanderKN17}. In recent years, Generative Adversarial Networks (\mbox{\em{GAN}}'s)~\cite{goodfellow2014generative} have been used to map a low-dimensional real-valued latent vector into images of the category on which the ${\mbox{\em{GAN}}}$ has been trained. There has been work in using \mbox{\em{GAN}}'s to generate and mix novel images~\cite{nguyen2016a} and perform artistic style transfer~\cite{gatys2016image}, along with many other applications. However, to date there has been no work using evolutionary, or other, methods to explore the latent space of a ${\mbox{\em{GAN}}}$ to generate images according to aesthetic feature measures. In this work we employ \mbox{\em{GAN}}'s to generate novel images by evolving the latent vector to maximise and minimise single aesthetic features and pairs of aesthetic features. We show that the generation of images in this space requires the use of carefully constructed constraints on image realism. We also show that different \mbox{\em{GAN}}'s appear to impose different bounds on the values of aesthetic measures that can be evolved.
The paper is structured as follows. Section \ref{related} outlines related work. Section \ref{basic} presents the methodology used for evolving images. Sections \ref{preliminary} and \ref{results} present our results. Finally, Section \ref{discussion} presents our discussion and future work.
\subsection{Related Work}
\label{related}
Aesthetic feature measures have been often applied to the creation of new artistic images using evolutionary search~\cite{Heijer14,correia2013machado,DBLP:journals/tec/KowaliwDM12,machado2008experiments}.
There has also been significant work in the evolution of existing images~\cite{DBLP:conf/gecco/NeumannSCN17,DBLP:conf/iconip/NeumannAN16}.
This work differs from previous work in its use of a ${\mbox{\em{GAN}}}$ as a mapper from the latent search vector to the image space, and in its use of the discriminator network and feature metrics to constrain these images.
In terms of deep learning,
Gatys~\cite{gatys2016image,DBLP:conf/cvpr/GatysEBHS17,DBLP:conf/nips/GatysEB15} used a convolutional network to transfer artistic style into an existing image. These new approaches in network architectures and training
methods enabled the generation of realistic images~\cite{radford2015unsupervised,dosovitskiy2015learning}.
Recently, Dosovitskiy and Brox~\cite{dosovitskiy2016generating} trained networks to generate images from feature vectors,
combining an auto-encoder-style approach with deep convolutional generative adversarial network training. Furthermore, Nguyen~\cite{nguyen2016a} used priors from a deep generator network to produce image variants that look close to natural images as preferred inputs for neurons.
For our investigation we consider recent work~\cite{DBLP:conf/gecco/AlexanderKN17} on feature-based diversity optimization. That approach applied a $(\mu + \lambda)$-$EA_D$ to evolve diverse image instances.
The algorithm was previously introduced in~\cite{gao2016feature} for the Traveling Salesman Problem (TSP), an NP-hard combinatorial optimization problem with real-world applications.
\begin{figure} [!th]
\centering
\includegraphics[width=7cm]{final_system.png}
\caption{The final setup of the system. The latent vector $Z$ is randomly seeded and sent through the system, mutating until it reaches an optimal solution or the termination condition ($2000$ mutations) is reached.}
\label{model_1}
\end{figure}
\section{Methodology}
\label{basic}
In this section we describe the methodology that we use to evolve images. In our system, images are created by optimising the latent vector of the ${\mbox{\em{GAN}}}$ so that the generated images score high or low on aesthetic feature measures. Further, we try to constrain images to be real; this is done using the discriminator network trained with the ${\mbox{\em{GAN}}}$.
\subsection{Our System}
In this section we describe our system, which is based on Generative Adversarial Networks (\mbox{\em{GAN}}'s)~\cite{goodfellow2014generative}. Figure~\ref{model_1} shows the structure of our system. In the ${\mbox{\em{GAN}}}$, we train two networks: 1) a generator, which generates images from a latent vector $z$ of 100 real numbers, and 2) a discriminator, which scores the images from the generator for realness as both networks are trained. We train these two networks with two image datasets: the CelebFaces Attributes ({\em{CelebA}}) dataset containing more than 150k celebrity faces~\cite{liu2015faceattributes} and the ImageNet~\cite{deng2009imagenet} class of butterflies containing over $45000$ butterfly images.
The ${\mbox{\em{GAN}}}$ is based on the Pytorch ${\mbox{\em{GAN}}}$ tutorial~\cite{pytorchGan2017}.
The generator component of the network consists of $5$ deconvolutional layers. The activation functions for the first $4$ layers are LeakyReLUs, the hidden layer is a ReLU~\cite{nair2010rectified}, and the last deconvolutional layer uses {\em{tanh}}. The discriminator uses LeakyReLUs in its $5$ convolutional layers.
The generator takes the hundred elements of $Z$ as input and generates a $128\times128$ pixel image. The discriminator takes an image and generates a normally distributed {\em{realness}} score, with the most real images scoring zero.
Finally, at the top of Figure~\ref{model_1} is the aesthetic feature
function, as described in~\cite{DBLP:conf/gecco/AlexanderKN17}.
The ${\mbox{\em{GAN}}}$ and the necessary feature functions are linked together to drive evolution. The combined system works as follows. A randomly initialised latent feature vector is sent into the generator, which outputs an image. This image is run through both the chosen artistic feature function and the discriminator, and both contribute to a score. The evolutionary process, guided by the score, mutates $Z$ with the goal of optimising both the realness and the desired artistic feature of the output image.
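The loop just described can be sketched as follows. The generator, discriminator, and feature functions here are toy stand-ins (the real system uses the trained DCGAN networks), the mutation step is a simple (1+1)-style Gaussian perturbation rather than the CMA-ES adopted later, and the realness term is combined additively for illustration rather than multiplicatively as in our actual fitness functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    """Toy stand-in for the trained generator: latent vector -> image in [0, 1]."""
    return (np.tanh(np.outer(z[:50], z[50:])) + 1.0) / 2.0

def realness_penalty(img):
    """Toy stand-in for the discriminator: zero means 'most real'."""
    return abs(img.mean() - 0.5)

def feature(img):
    """Placeholder aesthetic feature (mean intensity)."""
    return img.mean()

def fitness(z):
    # minimise the feature while penalising unrealness (additive for illustration)
    img = generator(z)
    return feature(img) + realness_penalty(img)

z0 = rng.normal(size=100)              # randomly seeded latent vector Z
z, best = z0, fitness(z0)
for _ in range(2000):                  # termination condition used in the text
    child = z + 0.1 * rng.normal(size=100)   # Gaussian mutation of Z
    f = fitness(child)
    if f <= best:                      # keep the child if it is at least as good
        z, best = child, f
```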
\subsection{Aesthetic Features}
This section describes in more detail the features used in our experiments. We denote a function $f$ for an image $I$, representing an artistic feature. This function maps an image $I$ to a scalar value $f(I)$. For our experiments we use the following features: mean hue, saturation, smoothness, reflectional symmetry and Global Contrast Factor. We describe the features as follows.
{\em{Mean-hue}} is the mean value of every pixel's hue in the image. The range of $\mbox{\em{Hue}}$ is $[0,1]$. Note that both $0$ and $1$ represent the colour red.
{\em{Mean-saturation}} is the mean value of every pixel's saturation in the image. The range is $[0,1]$ with $0$ representing low $\mbox{\em{Saturation}}$ and $1$ representing high.
{\em{Smoothness}} of an image is measured, for a given picture $I$ with $N$ pixels, as:
\[
1- \sum\nolimits_{i=1}^{N} \sum\nolimits_{c=1}^{3} \mbox{\em{gradient}}(I_{ic})/3N,
\]
where {\em{gradient}} is the gradient magnitude image produced by MATLAB's {\em{intermediate}} image gradient method, which calculates gradients between adjoining pixel values on each colour channel. From this we see that $\mbox{\em{smoothness}}(I)$ measures the disparity of colour between adjacent pixels and also lies within the range $[0,1]$.
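A sketch of this measure in Python, using numpy gradients as a stand-in for the MATLAB gradient routine; an exact numerical match with the MATLAB implementation is not guaranteed.

```python
import numpy as np

def smoothness(img):
    """Approximate smoothness of an RGB image with values in [0, 1]."""
    n = img.shape[0] * img.shape[1]
    total = 0.0
    for c in range(3):                   # one gradient magnitude per colour channel
        gy, gx = np.gradient(img[:, :, c])
        total += np.sqrt(gx**2 + gy**2).sum()
    return 1.0 - total / (3.0 * n)

flat = np.full((32, 32, 3), 0.5)                       # perfectly smooth image
noisy = np.random.default_rng(0).random((32, 32, 3))   # high-disparity image
assert smoothness(flat) == 1.0
assert smoothness(noisy) < smoothness(flat)
```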
\emph{Reflectional Symmetry} is a measure based on den Heijer's work~\cite{Heijer14} of the degree to which an image reflects itself. It divides an image into four quadrants and measures horizontal, vertical, and diagonal symmetry.
Symmetry is defined for image $I$ as:
\[
\mbox{\em{Symm}}(I) = (S_h(I)+S_v(I)+S_d(I))/3
\]
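The per-axis scores $S_h$, $S_v$ and $S_d$ are not spelled out above; the following sketch only illustrates the flip-and-compare idea behind them for a grayscale image in $[0,1]$, and is not den Heijer's exact quadrant-based formula.

```python
import numpy as np

def reflectional_symmetry(img):
    """Illustrative symmetry score in [0, 1] via flip-and-compare."""
    s_h = 1.0 - np.abs(img - np.fliplr(img)).mean()   # reflection about vertical axis
    s_v = 1.0 - np.abs(img - np.flipud(img)).mean()   # reflection about horizontal axis
    s_d = 1.0 - np.abs(img - img[::-1, ::-1]).mean()  # point reflection (both axes)
    return (s_h + s_v + s_d) / 3.0

symmetric = np.ones((16, 16))
assert reflectional_symmetry(symmetric) == 1.0
rng = np.random.default_rng(1)
assert reflectional_symmetry(rng.random((16, 16))) < 1.0
```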
{\emph{Global Contrast Factor}} ${(\mbox{\em{GCF}})}$ is a measure of mean contrast between neighbouring pixels at different image resolutions. ${\mbox{\em{GCF}}}$ is determined by calculating the local contrast at each pixel at resolution $r$:
\[
lc_r(I_{ij})=\sum\nolimits_{I_{kl} \in N(I_{ij})} |lum (I_{kl}) -lum(I_{ij})|
\]
where $lum(P)$ is the perceptual luminosity of pixel $P$ and $N(I_{ij})$ are the four neighbouring pixels of $I_{ij}$ at resolution $r$. The mean local contrast at the current resolution is defined as:
\[
C_r=(\sum\nolimits_{i=1}^m \sum\nolimits_{j=1}^n lc_r(I_{ij}))/(mn).\] From these local contrasts, ${\mbox{\em{GCF}}}$ is calculated as
\[
\mbox{\em{GCF}} = \sum\nolimits_{r=1}^9 w_r \cdot C_r.
\]
The pixel resolutions correspond to different {\em{superpixel}} sizes of $1,2,4,8,16,25,50,100$, and $200$. Each superpixel is set to the average luminosity of the pixels it contains. The $w_r$ are empirically derived weights of the resolutions from~\cite{matkovic2005global}, giving the highest weight to moderate resolutions. Note that ${\mbox{\em{GCF}}}$'s range is not bounded to $[0,1]$.
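A simplified sketch of the ${\mbox{\em{GCF}}}$ computation on a luminance image; uniform resolution weights are assumed here in place of the empirical weights of~\cite{matkovic2005global}, and boundary handling is approximate.

```python
import numpy as np

def gcf(lum, sizes=(1, 2, 4, 8), weights=None):
    """Sketch of the Global Contrast Factor on a luminance image."""
    if weights is None:
        weights = [1.0] * len(sizes)     # uniform stand-in weights
    total = 0.0
    for w, s in zip(weights, sizes):
        h, wd = (lum.shape[0] // s) * s, (lum.shape[1] // s) * s
        # each superpixel takes the average luminosity of the pixels it contains
        sp = lum[:h, :wd].reshape(h // s, s, wd // s, s).mean(axis=(1, 3))
        # local contrast: |luminosity differences| to neighbouring superpixels
        lc = (np.abs(np.diff(sp, axis=0)).sum() * 2 +
              np.abs(np.diff(sp, axis=1)).sum() * 2) / sp.size
        total += w * lc
    return total

flat = np.full((64, 64), 0.5)
checker = np.indices((64, 64)).sum(axis=0) % 2   # maximal pixel-level contrast
assert gcf(flat) == 0.0
assert gcf(checker) > gcf(flat)
```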
\subsection{Feature Optimisation}
In this work we investigate the use of single- and multi-feature optimization, exploring the optimization space with respect to feature values. For a single feature our system both minimises and maximises that feature. For the minimization process the objective is ${\mbox{\em{feature}}}$, and for the maximization process it is $(1.0 - \mbox{\em{feature}})$\footnote{Note that for the ${\mbox{\em{GCF}}}$ feature we are able to maximize $1/{\mbox{\em{GCF}}}$, whose scale is in the range $[0,1]$.}. For feature pairs $(f,g)$ we have four optimisation targets representing the combinations of minimising and maximising $f$ and $g$.
To maintain realness we penalise the combined aesthetic score with a measure of realness from the discriminator, $\discr$.
Thus our fitness functions for single features are shown in equations~
$(1-2)$ and for multi-features in $(3-6)$.
\begin{gather}
\mbox{\em{feature}} \times \discr \\
(1.0 - \mbox{\em{feature}} ) \times \discr \\
\mbox{\em{feature 1}} \times \mbox{\em{feature 2}} \times \discr \\
\mbox{\em{feature 1}} \times (1.0 - \mbox{\em{feature 2}} ) \times \discr \\
(1.0 - \mbox{\em{feature 1}}) \times \mbox{\em{feature 2}} \times \discr \\
(1.0 - \mbox{\em{feature 1}}) \times (1.0 - \mbox{\em{feature 2}} ) \times \discr
\end{gather}
For multi-feature experiments we use $6$ feature combinations: hue-saturation, hue-symmetry, saturation-symmetry, smoothness-saturation, ${\mbox{\em{GCF}}}$-smoothness and ${\mbox{\em{GCF}}}$-saturation.
These combinations were chosen to produce potentially interesting outputs. ${\mbox{\em{GCF}}}$ + smoothness and ${\mbox{\em{GCF}}}$ + saturation were specifically chosen because related work indicates that ${\mbox{\em{GCF}}}$ and smoothness constrain each other~\cite{DBLP:conf/gecco/AlexanderKN17}, resulting in lower image diversity.
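For reference, the four multi-feature optimisation targets can be enumerated as follows (a hypothetical helper; feature values are assumed to lie in $[0,1]$ and $d$ stands for the discriminator term).

```python
def multi_feature_objectives(f1, f2, d):
    """The four min/max combinations for a feature pair, each weighted by d."""
    return [f1 * f2 * d,
            f1 * (1.0 - f2) * d,
            (1.0 - f1) * f2 * d,
            (1.0 - f1) * (1.0 - f2) * d]

objs = multi_feature_objectives(0.25, 0.75, 1.0)
assert objs[0] == 0.1875       # 0.25 * 0.75
assert len(objs) == 4
```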
\section{Preliminary Experiments}
\label{preliminary}
To ensure meaningful results, we refined the methodology through an experimental process. These refinements are discussed in the following.
Initial experimentation was done using the ${\mbox{\em{(1+1) EA}}}$ and ${\mbox{\em{CMA-ES}}}$~\cite{hansen2003reducing} frameworks to determine the performance of both algorithms. ${\mbox{\em{CMA-ES}}}$ was allowed to run for $2000$ mutations (the equivalent of $80000$ iterations) and the ${\mbox{\em{(1+1) EA}}}$ was allowed to run for $80000$ iterations to allow a valid comparison.
\begin{figure}[th]
\centering
\includegraphics[width=8.12cm]{1+1_EA_CMS_ES}
\caption{Images obtained using the evolutionary algorithms ${\mbox{\em{(1+1) EA}}}$ and ${\mbox{\em{CMA-ES}}}$, with corresponding hue feature values, from left, respectively.}
\label{1+1_EA_CMS_ES_2}
\end{figure}
As illustrated in Figure~\ref{1+1_EA_CMS_ES_2}, $\mbox{\em{CMA-ES}}$ was able to achieve more extreme feature values. This superiority applied to all feature metrics. ${\mbox{\em{CMA-ES}}}$ was also faster, taking 80 minutes per optimisation run compared to 240 minutes for the ${\mbox{\em{(1+1) EA}}}$.
\begin{figure}
\centering
\subfloat[Hue] {
\includegraphics[width=3.1cm]{min_hue.png}
\includegraphics[width=3.1cm]{max_hue.png}}
\subfloat[Symmetry] {
\includegraphics[width=3.1cm]{min_symm.png}
\includegraphics[width=3.1cm]{max_symm.png}}
\caption{Images obtained for single features. The left column corresponds to minimizing the features hue and symmetry, from top, with values 0.0841 and 0.7963, respectively. The right column corresponds to maximizing the features hue and symmetry, from top, with values 0.1275 and 0.9198, respectively.}
\label{single_feature_hue_symmetry_4}
\end{figure}
\subsection{Feature Experiments without Realness Constraint}
It was initially assumed that the ${\mbox{\em{GAN}}}$ would be able to create face-like images from any input vector. Some tests were performed which did not incorporate a constraint on realness as part of the optimisation process. As Figure~\ref{without_realness_sat_hue_3} demonstrates, optimising features without the discriminator produces abstract images.
\begin{figure}[th]
\centering
\includegraphics[width=8.12cm]{without_realness_sat_hue}
\caption{Images obtained without realness constraints by minimizing saturation and hue, from left, respectively.}
\label{without_realness_sat_hue_3}
\end{figure}
To constrain images to be more realistic, three constraining methods were tested: 1) limiting the degree to which ${\mbox{\em{CMA-ES}}}$ was allowed to mutate $Z$; 2) discarding images that failed a certain realness threshold; and 3) incorporating the discriminator's return value into the optimisation function. It was found that the third option, integrating realness as a variable $\discr$, gave the best results. The option of
discarding images resulted in $\mbox{\em{CMA-ES}}$ failing to progress. Restricting the values in $Z$ to near-zero did not preserve realism as well as using the discriminator.
\subsection{Single Dimensional Feature Experiments with Constraint}
Single feature experiments require the fewest variables to optimise and, as such, could be expected to evolve images with the least difficulty. The resulting feature values are shown in Table~\ref{tab:single}.
\begin{table}
\centering
\caption{Single dimensional feature values obtained from experiments with constraint.}\label{tab:single}
\begin{tabular}{l|r|r}
\hline
Feature & Min & Max\\\hline
$\mbox{\em{Hue}}$ & 0.0841 & 0.1275\\\hline
$Saturation$ & 0.3306 & 0.3543 \\\hline
$Smoothness$ & 0.9737 & 0.9843\\\hline
$Symmetry$ & 0.7985 & 0.9198\\\hline
$\mbox{\em{GCF}}$ &0.0276&0.0286 \\\hline
\end{tabular}
\end{table}
As can be seen, the ranges of the features above are very small, with the exception of symmetry. In these runs the $\discr$ term has the strong effect of constraining the feature values. For symmetry, the larger range might be explained by the presence of both symmetric and asymmetric faces in the training dataset.
In line with the small feature ranges, the constrained
images only showed small differences, as seen in Figure~\ref{single_feature_hue_symmetry_4}, corresponding to
the hue and symmetry measures in Table~\ref{tab:single}.
\subsection{Two-Dimensional Feature Experiments with Constraint}
Running the experiments on multiple features gave images similarly constrained to those from the single-feature experiments. Figure~\ref{multi_features_saturat_symm_5} shows images evolved to minimise and maximise saturation and symmetry.
\begin{figure}
\centering
\includegraphics[width=3.1cm]{max_sat_min_sym2.png}
\vspace{0.08cm}
\includegraphics[width=3.1cm]{max_satsym.png}\\
\includegraphics[width=3.1cm]{min_satsym.png}
\includegraphics[width=3.1cm]{min_sat_max_sym.png}
\caption{Images obtained for multi-features. The left column corresponds to minimizing the features saturation and symmetry, from top, respectively. The right column corresponds to maximizing the features saturation and symmetry, from top, respectively.}
\label{multi_features_saturat_symm_5}
\end{figure}
As can be seen in Figure~\ref{multi_features_saturat_symm_5}, there is some success in evolving different amounts of symmetry but not a particularly strong difference in saturation. It appears that, at least for some features, the realness constraint is preventing strong exploration of the feature space.
\subsection{Impact of cut-off function}
In order to maintain a balance between
image realness and exploration, we modify the discriminator term by passing the raw result $\discr=x$ for an image through a cut-off function $f$ defined as follows:
\begin{equation}
f(x)=
\begin{cases}
x &\text{if } x \geq c \\
s &\text{if } x<c
\end{cases}
\end{equation}
with a cut-off $c$ and stable value $s$. In the experiments
that follow we set $s=0$, returning maximum realness, and vary $c$ to test its effect.
With the cut-off function, the search is unaffected until the image reaches a certain threshold of realness. Once the threshold is reached, the system sees no variation in the realness value, thus giving priority to aesthetic features over realness.
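The cut-off function above is straightforward to implement; with the defaults used in our experiments ($c=0.02$, $s=0$):

```python
def cutoff(x, c=0.02, s=0.0):
    """Clamp the raw discriminator score below threshold c to the stable value s."""
    return x if x >= c else s

assert cutoff(0.5) == 0.5      # unaffected above the threshold
assert cutoff(0.01) == 0.0     # clamped to maximum realness below it
```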
A subjective analysis of possible cut-off values was performed in order to determine the optimal value for future experiments.
Figure~\ref{first_cut_off_6} demonstrates the effect of different cut-off values on both image realness and aesthetic feature value.
\begin{figure}[th]
\centering
\includegraphics[width=10.0cm]{first_cut_off}
\caption{The results of minimizing hue with cut-offs at 0.2 (left), 0.05 (middle) and 0.02 (right).}
\label{first_cut_off_6}
\end{figure}
A relationship can be observed from the above images: as the cut-off decreases, so does the degree to which we are able to evolve the feature value. However, even the $0.02$ cut-off was able to create a far lower hue than the constrained results -- while still being realistic enough to be called a face. The $0.02$ cut-off was used in the remaining single feature experiments.
\section{Results}
\label{results}
This section presents the results of evolution in single and paired feature spaces. The previous experiments were all carried out on faces. In the following, \mbox{\em{GAN}}'s trained on both the faces and the butterfly datasets were used.
\begin{figure}
\centering
\subfloat[Hue] {
\includegraphics[width=2.06cm]{min_hue_cutoff.png}\label{pre:A}
\includegraphics[width=2.06cm]{max_hue_cutoff.png}
\includegraphics[width=2.06cm]{min_hue_butterfly.png}
\includegraphics[width=2.06cm]{max_hue_butterfly.png}
}
\vspace{-0.25cm}
\subfloat[Saturation] {
\includegraphics[width=2.06cm]{min_sat_cutoff.png}
\includegraphics[width=2.06cm]{max_sat_cutoff.png}
\includegraphics[width=2.06cm]{min_sat_butterfly.png}
\includegraphics[width=2.06cm]{max_sat_butterfly.png}}
\vspace{-0.25cm}
\subfloat[Smoothness] {
\includegraphics[width=2.06cm]{min_smo_cutoff.png}
\includegraphics[width=2.06cm]{max_smo_cutoff.png}
\includegraphics[width=2.06cm]{min_smo_butterfly.png}
\includegraphics[width=2.06cm]{max_smo_butterfly.png}}
\vspace{-0.25cm}
\subfloat[Symmetry] {
\includegraphics[width=2.06cm]{min_symm_cutoff.png}
\includegraphics[width=2.06cm]{max_symm_cutoff.png}
\includegraphics[width=2.06cm]{min_sym_butterfly.png}
\includegraphics[width=2.06cm]{max_sym_butterfly.png}}
\vspace{-0.25cm}
\subfloat[GCF] {
\includegraphics[width=2.06cm]{min_GCF_0_02.png}
\includegraphics[width=2.06cm]{max_GCF_0_02.png}
\includegraphics[width=2.06cm]{min_GCF_butterfly_0_02.png}
\includegraphics[width=2.06cm]{max_GCF_butterfly_0_02.png}}
\caption{All single feature optimisations with 0.02 cut-off.}
\label{single_features_cut_off_7}
\end{figure}
\subsection{Single Dimensional Feature Experiments with Cut-off Function}
We conducted single feature dimension experiments using the face and butterfly datasets for the following features: hue, saturation, smoothness, symmetry and GCF. For these experiments we use a cut-off of $0.02$ on the discriminator output for both \mbox{\em{GAN}}'s.
The results were obtained for the minimum and maximum feature values from each experiment. Figure~\ref{single_features_cut_off_7} shows the results of the single-dimensional feature experiments, with each row showing the images minimising and maximising the feature for faces, followed by those for butterflies.
Table~\ref{tab:tab1} shows the minimum and maximum values for each feature for the faces and butterfly datasets, respectively. We observe that hue has the largest range of feature values.
The use of the butterfly dataset provides a good way to
see how evolution with aesthetic measures responds to the priors embedded in different \mbox{\em{GAN}}'s.
For the single-dimensional experiments, we observe in Figure~\ref{single_features_cut_off_7} that images generated with the faces dataset appear more realistic than those generated with the butterfly dataset. This is likely due to the more diverse nature of the butterfly dataset.
The images shown in Figure~\ref{single_features_cut_off_7} (a)
have the most variance in the hue dimension. The image with the lowest hue value appears most realistic; in contrast, the image with the higher value appears less realistic. We observe that the images generated from the butterfly dataset achieve
a higher feature range.
Figure~\ref{single_features_cut_off_7} (b) shows that, in spite of the saturation feature for faces extending over only a narrow range, the resulting faces are not very realistic. The butterfly dataset is able to produce higher values of saturation, resulting in a realistic and colourful image.
In Figure~\ref{single_features_cut_off_7} (c) we observe that minimising smoothness produces realistic images with superimposed darker shadows. In contrast, maximising smoothness produces less realistic images.
The images shown in Figure~\ref{single_features_cut_off_7} (d) achieve high values of symmetry for both datasets. These images appear symmetrical but less real, while the images with lower symmetry values are more realistic.
Finally, the images shown in Figure~\ref{single_features_cut_off_7} (e) appear less realistic for both the faces and butterfly datasets when GCF is minimised, and more realistic when it is maximised.
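All feature values reported here lie in $[0,1]$. Assuming hue and saturation are computed as mean HSV channel values over the image (one plausible reading of the ranges in Table~\ref{tab:tab1}, not necessarily the exact implementation), the computation can be sketched as:

```python
import colorsys

def mean_hue_saturation(pixels):
    """Mean hue and saturation over an iterable of (r, g, b) pixels,
    with channel values in [0, 1]; both results also lie in [0, 1]."""
    hs = [colorsys.rgb_to_hsv(r, g, b)[:2] for r, g, b in pixels]
    n = len(hs)
    return (sum(h for h, _ in hs) / n, sum(s for _, s in hs) / n)
```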
\begin{table}
\centering
\begin{tabular}{l|r|r|r|r}
\hline
Feature & Min-F & Max-F &Min-B&Max-B\\\hline
$\mbox{\em{Hue}}$ & 0.0337 & 0.4886 &0.1083&0.5282\\\hline
$Saturation$ & 0.3306 & 0.3543&0.1205&0.5918 \\\hline
$Smoothness$ & 0.9737 & 0.9843 &0.9462&0.9887\\\hline
$Symmetry$ & 0.7985 & 0.9198 &0.5568&0.9428\\\hline
$\mbox{\em{GCF}}$ &0.0106&0.0348&0.0090&0.0417 \\\hline
\end{tabular}
\vspace{0.5 cm}
\caption{Single-feature values with a cut-off of 0.02 for the faces and butterfly datasets, respectively.}\label{tab:tab1}
\end{table}
\subsection{Two-Dimensional Feature Experiments with Cut-off Function}
In our next experiment, we evolve images using the $\mbox{\em{GAN}}$ to minimise and maximise in two feature dimensions.
These experiments aim to give us insight into how features interact with each other and also the impact of the image priors as embedded in the \mbox{\em{GAN}}'s on the extent to which features can be optimised.
After training our $\mbox{\em{GAN}}$ models on the faces and butterfly datasets,
we ran experiments with the following feature combinations: GCF-Saturation; GCF-Smoothness; Hue-Saturation; Hue-Symmetry; Saturation-Symmetry; and Smoothness-Saturation.
The feature-pair values resulting from these experiments are shown in Tables~\ref{tab:tab3} (for faces) and~\ref{tab:tab2} (for butterflies).
The images corresponding to these values are shown in Figures~\ref{multi_features_cut_off_GCF_8} and~\ref{multi_features_cut_off_9}.
The first column of Figures~\ref{multi_features_cut_off_GCF_8} and~\ref{multi_features_cut_off_9} shows images, in clockwise order from the top-left, for the min-max, max-max, max-min and min-min combinations of features for faces; the corresponding pictures for the butterfly dataset are shown in the second column.
The last column plots the positions in feature space of the four face images from the first column (in red) and the four butterfly images from the second column (in blue).
The shape of the quadrilateral in these plots provides an
indication of how feature values are constrained with respect to each other and by the $\mbox{\em{GAN}}$ used to generate them.
Based on our findings from the previous single-dimension feature experiments, we reduced the cut-off to $0.008$ for the multi-feature experiments to try to maintain the realism of the images. The impact of this smaller cut-off can be observed in Figures~\ref{multi_features_cut_off_GCF_8} and~\ref{multi_features_cut_off_9} in terms of the relatively small areas of feature space contained by the plots.
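Each corner of the two-feature quadrilateral can be targeted with a scalarised fitness; the equal weighting of the two features below is an assumption for illustration, not a statement of the exact objective used:

```python
def pair_fitness(f1, f2, mode, disc_output, cutoff=0.008):
    """Scalar fitness for one corner of the two-feature search.

    mode is one of 'minmin', 'minmax', 'maxmin', 'maxmax' and fixes the
    sign of each feature's contribution; the discriminator cut-off again
    rejects unrealistic candidates.
    """
    if disc_output > cutoff:
        return float('-inf')
    s1 = 1.0 if mode[:3] == 'max' else -1.0
    s2 = 1.0 if mode[3:] == 'max' else -1.0
    return s1 * f1 + s2 * f2
```

Running the search four times, once per mode, yields the four images plotted in each quadrilateral.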
Looking at the feature pairs in turn: among the images shown in Figure~\ref{multi_features_cut_off_GCF_8} (a), those with the highest GCF values (the max-min and max-max optimisations) appear most realistic. We also observe in Figure~\ref{multi_features_cut_off_GCF_8} (a) that the images generated on the butterfly dataset achieve higher scores for GCF and permit a higher range of saturation at high GCF.
\begin{figure*}
\centering
\includegraphics[width=0.45cm]{a.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{gcf_sat_f.png}
\includegraphics[width=3.78cm]{gcf_sat_b.png}
\includegraphics[width=5.06cm]{gcfsat_updatedplot_0_008.png}
\includegraphics[width=0.45cm]{b.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{gcf_smo_f.png}
\includegraphics[width=3.76cm]{gcf_smo_b.png}
\includegraphics[width=5.06cm]{gcfsmo_updatedplot_0_008.png}
\vspace{0.1cm}
\caption{Images obtained by multi-feature evolution with a 0.008 cut-off constraint. The rows correspond to the feature pairs GCF-saturation (a) and GCF-smoothness (b), with the faces and butterfly datasets shown from the left, respectively. Note that the images follow their positions on the graph.}
\label{multi_features_cut_off_GCF_8}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.45cm]{a.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{hue_sat_f.png}
\includegraphics[width=3.76cm]{hue_sat_b.png}
\includegraphics[width=5.06cm]{huesat_plot_0_008.png}\\
\includegraphics[width=0.45cm]{b.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{hue_sym_f.png}
\includegraphics[width=3.76cm]{hue_sym_b.png}
\includegraphics[width=5.06cm]{huesym_plot_0_008.png}\\
\includegraphics[width=0.45cm]{c.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{sat_sym_f.png}
\includegraphics[width=3.76cm]{sat_sym_b.png}
\includegraphics[width=5.06cm]{satsym_plot_0_008.png}\\
\includegraphics[width=0.45cm]{d.png}
\hspace*{0.2cm}
\includegraphics[width=3.76cm]{smo_sat_f.png}
\includegraphics[width=3.76cm]{smo_sat_b.png}
\includegraphics[width=5.06cm]{smosat_plot_0_008.png}
\caption{Images obtained by multi-feature evolution with a 0.008 cut-off constraint. The four rows correspond to the feature pairs hue-saturation, hue-symmetry, saturation-symmetry and smoothness-saturation, from top to bottom, respectively. The images follow their positions on the graph.}
\label{multi_features_cut_off_9}
\end{center}
\end{figure*}
The feature plot in Figure~\ref{multi_features_cut_off_GCF_8} (a)
shows that GCF and saturation can vary independently.
In contrast, the plot in Figure~\ref{multi_features_cut_off_GCF_8} (b) indicates some difficulty in minimising both smoothness and GCF; from the plot, it also appears relatively difficult to maximise both of these features.
This result is consistent with the observations in~\cite{DBLP:conf/gecco/AlexanderKN17}, which found that GCF and smoothness, being spatial features, appeared to be in conflict with each other. More specifically, the high contrast required for high GCF scores is in direct conflict with the low contrast required for high smoothness scores. Also notable in Figure~\ref{multi_features_cut_off_GCF_8} (b) is a relative lack of realism in the faces compared to those in part (a).
The images shown in Figure~\ref{multi_features_cut_off_9} (a)
show the relationship between hue and saturation. The butterfly pictures show the most variance in the saturation dimension and the face pictures show marginally more variance in hue. Images high in saturation seem to appear sharper -- with the face image that maximises both features having quite harsh colour, more contrast, and a mask-like appearance.
For Figure~\ref{multi_features_cut_off_9} (b), both sets of images have similar ranges of symmetry, but faces have a much narrower range of hue. Highly symmetric images seem to be less realistic, tending to ovoid shapes, with detail seemingly sacrificed in order to maximise symmetry. In contrast, asymmetric images appear to have more realistic textures and more intense colours.
Figure~\ref{multi_features_cut_off_9} (c) combines saturation and symmetry. As before, highly symmetric images appear less realistic. The evolutionary process seems to have difficulty maximising both saturation and symmetry for both \mbox{\em{GAN}}'s. Clearly it is possible to create artificial images that score highly on both feature dimensions, so this difficulty may reflect the rarity of this feature combination in the training sets for these \mbox{\em{GAN}}'s. As a final observation, the butterfly picture maximising both features resembles an insect's face, perhaps an interesting consequence of having diverse images in the training set.
Finally, the images in Figure~\ref{multi_features_cut_off_9} (d)
show difficulty in minimising both smoothness and saturation.
There is a smaller corresponding problem in maximising both features. In both datasets the most realistic images are produced by the minimisation of smoothness and the maximisation of saturation, perhaps indicating that the priors in the dataset are biased toward rougher and more colourful images.
\begin{table*}
\centering
\begin{tabular}{l|r|r|r|r}
\hline
Feature pairs & Min.f1-Min.f2 & Min.f1-Max.f2 & Max.f1-Min.f2 & Max.f1-Max.f2 \\\hline
$Hue-Sat.$ & 0.0426 - 0.3318 & 0.0457 - 0.3671 & 0.2900 - 0.2727 & 0.4651 - 0.4257 \\\hline
$Hue-Sym.$ & 0.0480 - 0.7233 & 0.0549 - 0.9577& 0.2523 - 0.6448 & 0.0935 - 0.9722 \\\hline
$Sat.-Sym.$ & 0.2573 - 0.7020 & 0.3190 - 0.9654 & 0.4256 - 0.6336 & 0.3582 - 0.9679 \\\hline
$Smooth.-Sat.$ & 0.9814 - 0.2378 & 0.9751 - 0.4876 & 0.9901 - 0.2912 & 0.9903 - 0.3670 \\\hline
$\mbox{\em{GCF}}-Sat. $ & 0.0164 - 0.2930 & 0.0126 - 0.4048 & 0.0310 - 0.2796 & 0.0332 - 0.4160 \\\hline
$\mbox{\em{GCF}}-Smooth.$ & 0.0166 - 0.9890 & 0.0166 - 0.9890 & 0.0326 - 0.9762& 0.0290 - 0.9893 \\\hline
\end{tabular}
\caption{Dual-feature values with cut-off for the faces dataset.}\label{tab:tab3}
\vspace{3mm}
\begin{tabular}{l|r|r|r|r}
\hline
Feature pairs & Min.f1-Min.f2 & Min.f1-Max.f2 & Max.f1-Min.f2 & Max.f1-Max.f2 \\\hline
$Hue-Sat.$ & 0.1622 - 0.1956 & 0.1218 - 0.4772 & 0.4659 - 0.1402 & 0.3458 - 0.5912 \\\hline
$Hue-Sym.$ & 0.1286 - 0.7232 & 0.1744 - 0.9558& 0.5067 - 0.6701& 0.3075 - 0.9614 \\\hline
$Sat.-Sym.$ & 0.1447 - 0.7133 & 0.1594 - 0.9554 & 0.5760 - 0.6512 & 0.3014 - 0.9647\\\hline
$Smooth.-Sat.$ & 0.9840 - 0.1469 & 0.9706 - 0.6177 & 0.9852 - 0.1291 & 0.9868 - 0.4661 \\\hline
$\mbox{\em{GCF}}-Sat.$ & 0.0100 - 0.1743 & 0.0102 - 0.3820 & 0.0334 - 0.1983 & 0.0357 - 0.5205 \\\hline
$\mbox{\em{GCF}}-Smooth.$ & 0.0094 - 0.9826 & 0.0093 - 0.9840 & 0.0378 - 0.9638 & 0.0319 - 0.9838 \\\hline
\end{tabular}
\caption{Dual-feature values with cut-off for the butterfly dataset.}\label{tab:tab2}
\end{table*}
\section{Discussion and future work}
\label{discussion}
Evolutionary search can be a powerful technique for creating novel images.
In this work, we have shown how to apply \mbox{\em{GAN}}'s in order to generate images scoring high or low for given aesthetic feature values. We used evolutionary search to maximise and minimise single features and pairs of features for two datasets, faces and butterflies.
We have shown how to explore the latent space of a ${\mbox{\em{GAN}}}$ to create semi-realistic images that sample different regions of aesthetic feature spaces.
Additionally, we studied the effects of different values of the cut-off function on the aesthetic appearance of the images.
For future research, it would be interesting to explore
intermediate points in the feature space to gain more insight into the relationships between features and
to explore additional constraints and their effect on the process of generating novel images.
\section{Introduction}
\label{sec:intro}
Due to its potential to double network capacity at the physical (PHY) layer and to provide many other benefits at higher layers, full-duplex (FD) wireless has drawn significant attention~\cite{sabharwal2014band,bharadia2013full,zhou2017integrated}. The major challenge associated with FD is the extremely strong self-interference (SI) on top of the desired signal, requiring more than $\SI{90}{dB}$ of self-interference cancellation (SIC) across both the RF and digital domains.
Our work on FD transceivers/systems within the Columbia FlexICoN project~\cite{flexicon} focuses on integrated circuit (IC) implementations that are appropriate for mobile and small-form-factor devices~\cite{zhou2017integrated,Zhou_NCSIC_JSSC14,krishnaswamy2016full}.
In~\cite{fd_demo_mobihoc16}, we presented the FlexICoN Gen-1 FD transceiver and an FD wireless link, featuring $\SI{40}{dB}$ RF SIC across $\SI{5}{MHz}$. The implemented Gen-1 RF SI canceller emulates its RFIC counterpart that we presented in~\cite{Zhou_NCSIC_JSSC14} and modeled and analyzed in~\cite{marasevic2017resource}.
However, there is no existing open-access wireless testbed with FD-capable nodes, which is crucial for experimental evaluations of FD-related algorithms at the higher layers. Therefore, to facilitate research in this area and to allow the broader community to experiment with FD wireless, we integrated an improved version of the Gen-1 RF canceller presented in~\cite{fd_demo_mobihoc16} with a National Instruments (NI)/Ettus Research USRP N210 SDR in the open-access ORBIT wireless testbed~\cite{orbit}. Since interfacing an RFIC canceller with an SDR presents numerous technical challenges, we implemented the RF canceller on a printed circuit board (PCB) to facilitate the cross-layered experiments with an SDR platform.
In this technical report, we present our cross-layered (hardware and software) implementation of an open-access FD transceiver integrated with the ORBIT testbed, including the design and implementation of the customized Gen-1 RF canceller box and an FD transceiver baseline program. We also present two example FD experiments that run remotely in the ORBIT testbed, where SIC is performed across both the RF and digital domains, demonstrating the FD capability in the ORBIT testbed. The first example is based on UHD~\cite{uhd}, where $\SI{90}{dB}$ overall SIC is achieved for a simple waveform. The second example is based on GNU Radio~\cite{gnuradio}, where $\SI{85}{dB}$ overall SIC is achieved for PSK modulated signals. The code for the baseline program and a tutorial for the FD transceiver are available at~\cite{flexicon_github,flexicon_orbit_gen1}. The implemented FD transceiver and the baseline program, which can be further extended to more complicated communication networking scenarios, can allow the broader community to experiment with FD wireless.
\section{The FlexICoN Gen-1 RF Canceller Box}
\label{sec:canceller}
\begin{figure}[!t]
\centering
\includegraphics[width=0.98\columnwidth]{figs/diagram_orbit_node.png}
\caption{Block diagram of the implemented FD transceiver.}
\label{fig:orbit-node-diagram}
\vspace{-0.5\baselineskip}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[]{
\label{fig:orbit-node-canc}
\includegraphics[width=0.49\columnwidth]{figs/gen1_canc_box.png}}
\hfill
\subfloat[]{
\label{fig:orbit-node-sdr}
\includegraphics[width=0.465\columnwidth]{figs/gen1_orbit_node.png}}
\caption{(a) The Columbia FlexICoN Gen-1 RF canceller box, and (b) the FD-capable node installed in the ORBIT wireless testbed.}
\label{fig:orbit-node}
\vspace{-0.5\baselineskip}
\end{figure}
Fig.~\ref{fig:orbit-node-diagram} shows the block diagram of the implemented FD transceiver, in which a Gen-1 RF canceller box (as depicted in Fig.~\ref{fig:orbit-node}\subref{fig:orbit-node-canc}) is connected to an Apex II multi-band antenna (at the ANT port) and a USRP (at the TX IN and RX OUT ports). Fig.~\ref{fig:orbit-node}\subref{fig:orbit-node-sdr} shows the FD transceiver installed in the ORBIT testbed. Specifically, a circulator is used at the antenna interface so that a single antenna can be shared between the TX and RX. To alleviate the RX front-end linearity and the analog-to-digital converter (ADC) dynamic range requirements, sufficient SI isolation and cancellation in the RF domain are needed before digital SIC is engaged.
In the FD transceiver, the RF SI suppression is achieved by the circulator and the RF SI canceller in the Gen-1 RF canceller box, where the circulator has a TX/RX isolation of around $\SI{20}{dB}$ and the RF SI canceller can provide $20$-$\SI{30}{dB}$ RF SIC. As Fig.~\ref{fig:orbit-node}\subref{fig:orbit-node-canc} shows, the RF canceller box contains four components: (i) a frequency-flat amplitude- and phase-based RF canceller, which is an improved version of that presented in~\cite{fd_demo_mobihoc16}\footnote{The implemented RF canceller includes a variable gain attenuator with higher resolution and an SPI compared with that presented in~\cite{fd_demo_mobihoc16}.}, (ii) a coaxial circulator, (iii) a custom-designed antenna tuner, and (iv) a SUB-20 controller. Fig.~\ref{fig:meas-canc-box} shows an example of the measured TX/RX isolation (measured between TX IN and RX OUT ports of the canceller box), where $\SI{40}{dB}$ RF SIC is achieved across $\SI{5}{MHz}$ bandwidth.
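Since the suppression stages are cascaded, the overall SIC is simply the sum of the per-stage figures in dB; a small sketch with representative mid-range values from the measurements above:

```python
def sic_budget_db(circulator_db=20, rf_canceller_db=25, digital_db=45):
    """Overall SIC in dB for cascaded suppression stages (representative
    mid-range values; the RF canceller alone contributes 20-30 dB)."""
    return circulator_db + rf_canceller_db + digital_db
```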
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\columnwidth]{figs/meas/meas_canc_orbit_onsite.eps}
\caption{Measured TX/RX isolation of the RF canceller box with and without turning on the RF canceller. The RF canceller box with the circulator and the RF canceller provides $\SI{40}{dB}$ RF SIC across $\SI{5}{MHz}$ bandwidth.}
\label{fig:meas-canc-box}
\vspace{-0.5\baselineskip}
\end{figure}
\begin{figure}[t]
\centering
\subfloat{
\label{fig:meas-canc-no-circ-amp}
\includegraphics[width=0.48\columnwidth]{figs/meas/meas_canc_tx_rx_no_circ_amp.eps}}
\hspace{-12pt} \hfill
\subfloat{
\label{fig:meas-canc-no-circ-phase}
\includegraphics[width=0.48\columnwidth]{figs/meas/meas_canc_tx_rx_no_circ_phase.eps}}
\caption{Measured amplitude and phase of the RF canceller with varying attenuation \mytexttt{ATT} values (left) and phase shift \mytexttt{PS} values (right).}
\label{fig:meas-canc-no-circ}
\vspace{-0.5\baselineskip}
\end{figure}
\subsection{The Amplitude- and Phase-based RF Canceller}
The amplitude- and phase-based RF canceller is implemented using discrete components on a PCB and is optimized around $\SI{900}{MHz}$ operating frequency.\footnote{In this implementation, we select $\SI{900}{MHz}$ operating frequency but this approach can be easily extended to other frequencies (e.g., $2.4/\SI{5}{GHz}$).} The RF canceller taps a reference signal from the output of the power amplifier (PA) at the TX side (through a $\SI{6}{dB}$ Mini-Circuits ADC-6-13+ directional coupler) and adjusts its amplitude and phase. Then, SIC is performed at the input of the low-noise amplifier (LNA) at the RX side.
For amplitude adjustment, a $7$-bit SKY12343-364LF digital attenuator~\cite{SKY12343} is used, in which the attenuation can be adjusted within a $\SI{31.75}{dB}$ range with a resolution of $\SI{0.25}{dB}$. As a result, the RF canceller has an amplitude tuning range between $-\SI{48}{dB}$ and $-\SI{17}{dB}$. For phase adjustment, a Mini-Circuits passive SPHSA-152+ phase-shifter~\cite{SPHSA152} is used, which covers full $\SI{360}{deg}$ and is controlled by an $8$-bit TI-DAC081S101 digital-to-analog converter (DAC)~\cite{DAC081S101}. Both the attenuator and phase shifter are programmed through the SUB-20 controller serial-to-parallel interface (SPI) with code values \mytexttt{ATT} (\underline{ATT}uation) and \mytexttt{PS} (\underline{P}hase \underline{S}hift), respectively, and the parameter configuration ranges are
\begin{align*}
\mytexttt{ATT} \in \{0,1,\cdots,127\},\ \mytexttt{PS} \in \{0,1,\cdots,255\}.
\end{align*}
The attenuator and DAC have $\SI{3}{V}$ supply voltage and the phase shifter has a reference voltage of $\SI{12}{V}$.
Fig.~\ref{fig:meas-canc-no-circ} shows the amplitude and phase measurements of the RF canceller with varying \mytexttt{ATT} values (under fixed $\mytexttt{PS} = 0$) and with varying \mytexttt{PS} values (under fixed $\mytexttt{ATT} = 0$). As Fig.~\ref{fig:meas-canc-no-circ} shows, the RF canceller has an amplitude tuning range of $\SI{29}{dB}$ (from $-\SI{46.5}{dB}$ to $-\SI{17.5}{dB}$) and a phase tuning range of full $\SI{360}{deg}$.
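A rough model of the configured tap, assuming the nominal $\SI{0.25}{dB}$ attenuator step starting from the measured $-\SI{17.5}{dB}$ maximum amplitude and an idealised linear DAC-code-to-phase mapping (the measured phase response is not exactly linear in \mytexttt{PS}), is:

```python
def canceller_setting(att_code, ps_code):
    """Approximate canceller tap amplitude (dB) and phase (degrees)
    for given ATT (7-bit) and PS (8-bit) codes; idealised model only."""
    assert 0 <= att_code <= 127 and 0 <= ps_code <= 255
    amplitude_db = -17.5 - 0.25 * att_code  # nominal step; measured range is ~29 dB
    phase_deg = 360.0 * ps_code / 256.0     # assumes a linear phase response
    return amplitude_db, phase_deg
```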
\subsection{The Coaxial Circulator}
An RF-CI RFCR3204 coaxial circulator is used, whose operating frequency is between $860$-$\SI{960}{MHz}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figs/tuner.jpg}
\caption{Circuit diagram and PCB implementation of the programmable antenna tuner.}
\label{fig:tuner}
\vspace{-0.5\baselineskip}
\end{figure}
\subsection{The Programmable Antenna Tuner}
In order for the circulator to better match with varying impedance of the antenna due to environmental changes (around $\SI{900}{MHz}$ operating frequency), we also designed and implemented a programmable antenna tuner. Fig.~\ref{fig:tuner} shows the circuit diagram and the PCB implementation of the antenna tuner. In particular, a $\pi$-network with lossless inductor ($L$) and digitally tunable capacitors ($C_i$) is used for impedance transformation. In our implementation, we use a fixed chip inductor with inductance $L_{\rm fixed} = \SI{5.1}{nH}$ and the Peregrine Semiconductor $5$-bit PE64909 digitally tunable capacitors~\cite{PE64909} for $C_i$ ($i=1,2,3$). By programming the capacitors with code values \mytexttt{CAPi} ($i=1,2,3$), different antenna interface impedance matching can be achieved. The corresponding configuration ranges of the tunable capacitors are
\begin{align*}
\mytexttt{CAPi} \in \{0,1,\cdots,31\},\ \forall i = 1,2,3.
\end{align*}
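To illustrate the transformation, the input impedance of an idealised $\pi$-network (simplified here to two shunt capacitors and the series inductor; the board's third capacitor and the per-code capacitance values are omitted) can be computed as:

```python
import cmath

def pi_network_zin(z_load, c1, c2, l=5.1e-9, freq=900e6):
    """Input impedance of a shunt-C1 / series-L / shunt-C2 pi-network
    terminated in z_load, with ideal lossless components."""
    w = 2 * cmath.pi * freq

    def par(a, b):  # parallel combination of two impedances
        return a * b / (a + b)

    z_c1 = 1 / (1j * w * c1)
    z_c2 = 1 / (1j * w * c2)
    z_l = 1j * w * l
    return par(z_c1, z_l + par(z_c2, z_load))
```

Sweeping the capacitances over the tunable range then maps each capacitor code pair to a match at the circulator port.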
\subsection{The SUB-20 Controller}
As Fig.~\ref{fig:orbit-node-diagram} shows, a DIMAX SUB-20 multi-interface USB adapter~\cite{SUB20} connected to the host PC is used to program the attenuator and DAC (on the RF SI canceller) and the capacitors (on the antenna tuner) through SPI. The SUB-20 SPI is configured to operate at the maximal master clock of $\SI{8}{MHz}$. At this clock rate, programming one \mytexttt{ATT} or \mytexttt{PS} value (a $2$-byte word including the address fields, etc.) takes $\SI{2}{us}$, and programming one \mytexttt{CAPi} value (a $1$-byte word) takes $\SI{1}{us}$. We note that other controller platforms with higher SPI clock rates can also be used to improve the performance.
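The quoted word-transfer times follow directly from shifting 8 bits per byte at one bit per master-clock cycle (framing overhead ignored):

```python
def spi_word_time_us(num_bytes, clock_hz=8e6):
    """Time in microseconds to shift an SPI word of num_bytes bytes."""
    return num_bytes * 8 / clock_hz * 1e6
```

so a 2-byte \mytexttt{ATT}/\mytexttt{PS} word takes $\SI{2}{us}$ and a 1-byte \mytexttt{CAPi} word takes $\SI{1}{us}$.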
\section{Integration with the ORBIT Testbed \\ and an FD Transceiver Baseline Node Image}
\label{sec:integration}
An ORBIT node equipped with the Gen-1 RF canceller box is depicted in Fig.~\ref{fig:orbit-node}\subref{fig:orbit-node-sdr}. We use \mytexttt{node11-10} in the ORBIT main \mytexttt{grid} with a USRP N210 SDR. In particular, the RF canceller box TX IN/RX OUT ports are connected to the USRP TX/RX ports, respectively, and the RF canceller box ANT port is connected to an Apex II multi-band antenna (see Figs.~\ref{fig:orbit-node-diagram} and~\ref{fig:orbit-node}).
We developed an FD transceiver baseline node image, which contains two example FD experiments running on the host PC (i.e., the yellow box in Fig.~\ref{fig:orbit-node}\subref{fig:orbit-node-sdr}): (i) a UHD-based example with a simple waveform, and (ii) a GNU Radio-based example with modulated signals using Phase-Shift Keying (PSK) modulation scheme. Throughout the experiments, the USRP has a receiver noise floor of $-\SI{85}{dBm}$.\footnote{This USRP receiver noise floor is limited by the existence of environmental interference at $\SI{900}{MHz}$ frequency. The USRP has a true noise floor of around $-\SI{95}{dBm}$ at the same receiver gain setting, when not connected to an antenna.}
To facilitate the experiments with the RF canceller box and FD wireless, the customized FD transceiver baseline node image named \mytexttt{flexicon-orbit-v2.ndz} with the required software was created and stored in the ORBIT testbed. The code for the FD transceiver baseline program is available at \href{https://github.com/Wimnet/flexicon_orbit}{\mytexttt{https://github.com/Wimnet/flexicon\_orbit}}. The detailed tutorial and instructions containing the steps for running the example FD experiments can be found at~\cite{flexicon_orbit_gen1, flexicon_github}.
\section{An Example FD Experiment based on UHD}
\label{sec:exp-simple}
\begin{lstlisting}[float=tp,basicstyle=\ttfamily\footnotesize,caption={Representative output of the FD transceiver SUB-20 \mytexttt{C} program.},label={fig:orbit-output-sub20},captionpos=b,language=c++,frame=single,belowskip=-.5\baselineskip,linewidth=0.96\columnwidth,xleftmargin=0.02\columnwidth]
$ ./rf_canc_gen1_config 30 110 16 6 6
Sub20 device found... Device opened!
Finished programming ATT with value 30
Finished programming PS with value 110
Finished programming CAP1 with value 16
Finished programming CAP2 with value 6
Finished programming CAP3 with value 6
\end{lstlisting}
\begin{lstlisting}[float=tp,basicstyle=\ttfamily\footnotesize,caption={Representative output of the FD transceiver UHD program.},label={fig:orbit-output-sic},captionpos=b,language=c++,frame=single,belowskip=-.5\baselineskip,linewidth=0.96\columnwidth,xleftmargin=0.02\columnwidth]
$ ./fd_transceiver_simple --rate 5e6 --freq 900e6
--tx-gain 10 --rx-gain 10 --wave-freq 200e3
...
TX Signal: 0.00 dBm
RX Signal after RF SIC: -45.21 dBm
Amount of RF SIC: 45.21 dB
RX Signal after Digital SIC: -87.87 dBm
Amount of Digital SIC: 42.66 dB
TX Signal: 0.00 dBm
RX Signal after RF SIC: -45.28 dBm
Amount of RF SIC: 45.28 dB
RX Signal after Digital SIC: -88.53 dBm
Amount of Digital SIC: 43.25 dB
...
\end{lstlisting}
\begin{figure}[!t]
\centering
\vspace{-.5\baselineskip}
\subfloat[]{
\label{fig:exp-simple-node}
\includegraphics[width=0.48\columnwidth]{figs/exp/sic_simple_node.eps}}
\hspace{-12pt} \hfill
\subfloat[]{
\label{fig:exp-simple-link}
\includegraphics[width=0.48\columnwidth]{figs/exp/sic_simple_link.eps}}
\vspace{-0.5\baselineskip}
\caption{Power spectrum of the received signal at the FD transceiver at $\SI{0}{dBm}$ TX power: (a) without the desired signal, (b) with the desired signal.}
\label{fig:exp-simple}
\vspace{-0.5\baselineskip}
\end{figure}
In this section, we present an example FD experiment using the FD transceiver and the baseline program, where the FD transceiver transmits and receives simultaneously at $\SI{900}{MHz}$ carrier frequency with $\SI{5}{MHz}$ sampling rate. Different from regular UHD programs that are designed for half-duplex applications, the FD UHD program includes three parallel threads for performance optimization: the TX/RX streaming threads running on the same frequency channel and a third thread for executing the digital SIC algorithm. In particular, the digital SIC algorithm is based on a Volterra series and a least-squares problem and is similar to that presented in~\cite{bharadia2013full,krishnaswamy2016full}. Moreover, the \mytexttt{Eigen C++} library is included for computations in the digital SIC algorithm (e.g., matrix operations and FFT).
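While the implemented canceller uses a Volterra series to capture transmitter nonlinearity, its least-squares structure is easiest to see in the linear special case. The following sketch (not the \mytexttt{Eigen C++} implementation itself) fits an FIR model of the residual SI channel from the known transmitted samples and subtracts the reconstructed SI:

```python
import numpy as np

def linear_digital_sic(tx, rx, num_taps=4):
    """Linear least-squares digital SIC sketch.

    tx: known transmitted baseband samples; rx: received samples
    containing residual SI. Returns rx minus the reconstructed SI.
    """
    n = len(rx)
    # Regressor matrix of delayed TX samples, shape (n, num_taps).
    cols = [np.concatenate([np.zeros(d, dtype=tx.dtype), tx[:n - d]])
            for d in range(num_taps)]
    A = np.column_stack(cols)
    h, *_ = np.linalg.lstsq(A, rx, rcond=None)  # least-squares channel estimate
    return rx - A @ h
```

In the real canceller, the regressors also include nonlinear (higher-order Volterra) terms of the TX samples.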
In this example FD experiment, the FD transceiver (\mytexttt{node11-10}) sends a single tone with frequency offset $\SI{200}{kHz}$ at $\SI{0}{dBm}$ TX power level. Fig.~\ref{fig:orbit-output-sub20} shows an example output of the FD transceiver SUB-20 program, where the RF canceller box is configured with parameters
\begin{align*}
\mytexttt{ATT} = 30,\ \mytexttt{PS} = 110,\ \mytexttt{CAP1} = 16,\ \mytexttt{CAP2} = 6,\ \mytexttt{CAP3} = 6,
\end{align*}
through the \mytexttt{C} program \mytexttt{rf\_canc\_gen1\_config}.\footnote{The optimal configuration of the RF canceller box may change due to factors such as the antenna being re-tightened or rotated. Please refer to the detailed tutorial~\cite{flexicon_orbit_gen1} for updates.} Fig.~\ref{fig:meas-canc-box} shows the TX/RX isolation of the RF canceller box under this configuration. Fig.~\ref{fig:orbit-output-sic} shows an example output of the FD transceiver UHD program, where $\SI{90}{dB}$ overall SIC is achieved, with $\SI{45}{dB}$ from the RF domain and $\SI{45}{dB}$ from the digital SIC algorithm, and the SI signal is canceled to the receiver noise floor. Fig.~\ref{fig:exp-simple} shows the power spectrum of the residual SI after RF and digital SIC, plotted using an offline MATLAB script.
In addition, another ORBIT node (\mytexttt{node13-8}) serves as a second radio that sends a single tone with frequency offset $\SI{400}{kHz}$ using the UHD \mytexttt{tx\_waveforms} program~\cite{uhd}, i.e.,
\begin{lstlisting}[basicstyle=\ttfamily\small,language=c++]
$ ./tx_waveforms --rate 5e6 --freq 900e6
--wave-type SINE --wave-freq 400e3
\end{lstlisting}
Fig.~\ref{fig:exp-simple} presents the power spectrum of the signal received at the FD transceiver after RF and digital SIC. As Fig.~\ref{fig:exp-simple} shows, the SI at the FD transceiver (with frequency offset $\SI{200}{kHz}$) is canceled to the receiver noise floor after SIC in both the RF and digital domains, and the digital SIC algorithm introduces minimal SNR loss to the desired signal (with frequency offset $\SI{400}{kHz}$).
\section{An Example FD Experiment based on GNU Radio}
\label{sec:exp-psk}
In this section, we present another example FD experiment based on GNU Radio, where the FD transceiver transmits a wideband PSK-modulated signal. Compared with the UHD-based example, GNU Radio provides both a user-friendly implementation and a graphical user interface (GUI), but it also has performance limitations, as explained below.
To integrate the RF canceller configuration with the main GNU Radio program, we implemented a customized GNU Radio out-of-tree (OOT) SUB-20 module. Given the relatively stable wireless environment in the ORBIT testbed, the OOT SUB-20 module is implemented with fixed\footnote{Users can change \mytexttt{CAPi} using the SUB-20 \mytexttt{C} program (see Section~\ref{sec:exp-simple}).} $\mytexttt{CAP1} = 16$, $\mytexttt{CAP2} = \mytexttt{CAP3} = 6$, and users can vary the values of \mytexttt{ATT} and \mytexttt{PS} to observe different RF SIC performance. This example FD experiment contains three parts:
\begin{enumerate}[leftmargin=*,label=\arabic*.]
\item \textbf{Data Generation}: The baseband samples encoding raw bits modulated using a PSK scheme (e.g., BPSK or QPSK) are generated using \mytexttt{gen\_data\_psk};
\item \textbf{Data Transmission and RF SIC}: The FD transceiver transmits the modulated samples and receives samples over-the-air using \mytexttt{usrp\_txrx\_psk}. The RF canceller can be configured in \emph{real-time} to observe different RF SIC performance;
\item \textbf{Digital SIC}: The digital SIC is performed offline using \mytexttt{dig\_sic\_on} and the received baseband samples.
\end{enumerate}
Due to the software and timing limitations of GNU Radio, baseband samples are recorded using the file option and digital SIC is performed offline (part 3). We remark that other implementations (e.g., UHD- or FPGA-based) may be able to support digital SIC in real-time.
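As a sketch of that offline step (function names here are illustrative, not those of the released scripts), the recorded samples can be loaded from the GNU Radio file-sink format (raw interleaved 32-bit float I/Q) and the achieved cancellation quantified as a mean-power ratio:

```python
import numpy as np

def load_gr_complex(path, count=-1):
    """Load samples written by a GNU Radio file sink (complex64)."""
    return np.fromfile(path, dtype=np.complex64, count=count)

def sic_amount_db(before, after):
    """Cancellation in dB: mean-power ratio of signal before/after SIC."""
    power = lambda x: np.mean(np.abs(x) ** 2)
    return 10 * np.log10(power(before) / power(after))
```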
A GNU Radio-based example experiment was demonstrated in~\cite{fd_demo_infocom18}, where the FD transceiver (\mytexttt{node11-10}) transmits a $\SI{2.5}{MHz}$ QPSK-modulated signal stream at $\SI{10}{MHz}$ sampling rate and $\SI{0}{dBm}$ average TX power level. Fig.~\ref{fig:exp-psk}\subref{fig:exp-psk-node} shows the power spectrum of the received signal at the FD transceiver, where $\SI{85}{dB}$ overall SIC is achieved, with $\SI{43}{dB}$ from the RF domain and $\SI{42}{dB}$ from the digital domain. The SI signal is canceled to the USRP receiver noise floor at $\SI{0}{dBm}$ TX power. Another ORBIT node (\mytexttt{node13-8}) is then used to serve as a second radio that transmits a single tone with a frequency offset of $\SI{1}{MHz}$. As Fig.~\ref{fig:exp-psk}\subref{fig:exp-psk-link} shows, the desired signal is recovered after SIC (in both the RF and digital domains) is performed at the FD transceiver.
\begin{figure}[!t]
\centering
\subfloat[]{
\label{fig:exp-psk-node}
\includegraphics[width=0.48\columnwidth]{figs/exp/sic_psk_node.eps}}
\hspace{-12pt} \hfill
\subfloat[]{
\label{fig:exp-psk-link}
\includegraphics[width=0.48\columnwidth]{figs/exp/sic_psk_link.eps}}
\vspace{-0.5\baselineskip}
\caption{Power spectrum of the received signal at the FD transceiver, which transmits a $\SI{2.5}{MHz}$ QPSK signal at $\SI{0}{dBm}$ average TX power level: (a) without the desired signal, (b) with the desired signal.}
\label{fig:exp-psk}
\vspace{-0.5\baselineskip}
\end{figure}
\section{Other Potential FD Wireless Experiments}
Some potential FD experiments that can be conducted using the presented FD transceiver are listed below:
\begin{itemize}[leftmargin=*]
\item[-] Hands-on experiments with FD wireless on an SDR platform in a teaching/lab course;
\item[-] Studying different RF SIC performance and its relation to the antenna interface response by tuning the RF canceller box (the SUB-20 \mytexttt{C} program or the OOT SUB-20 module);
\item[-] Studying the performance of the digital SIC algorithm by tuning its parameters (digital SIC part of the GNU Radio/UHD program);
\item[-] Development and evaluation of different digital SIC algorithms (digital SIC part of the GNU Radio/UHD program);
\item[-] Incorporation of modulated signals, such as OFDM, with different bandwidth (the GNU Radio/UHD program);
\item[-] Experimentation and evaluation of medium access control (MAC) algorithms in a heterogeneous network with an FD access point/client (e.g., modifying the GNU Radio/UHD program and adding a MAC layer).
\end{itemize}
\section{Conclusion}
\label{sec:conclusion}
In this report, we presented our cross-layer (hardware and software) design and implementation of the first open-access, remotely accessible FD transceiver, which is integrated with the ORBIT wireless testbed. An FD transceiver baseline program and an example FD experiment were provided to facilitate experimentation with the FD transceiver. We also discussed other potential FD experiments that can be developed and conducted using the FD transceiver.
Our future work includes the integration of the Gen-2 canceller box in both the ORBIT testbed and the PAWR COSMOS testbed. In particular, we demonstrated the Gen-2 RF canceller in~\cite{fd_demo_infocom17}, which can achieve wideband RF SIC via frequency-domain equalization. The Gen-2 RF SI canceller implemented on a PCB emulates the RFIC counterpart we presented in~\cite{Zhou_WBSIC_JSSC15}. We plan to install more FD transceivers in the ORBIT and COSMOS testbeds with both Gen-1 and Gen-2 RF canceller boxes, and to develop more advanced FD-related software and applications.
\section*{Acknowledgments}
This work was supported in part by NSF grants ECCS-1547406 and CNS-1827923, DARPA RF-FPGA program, DARPA SPAR program, a Qualcomm Innovation Fellowship, Texas Instruments, Intel, and a National Instruments Academic Research Grant. We thank Steven Alfano, Jelena Diakonikolas, Aishwarya Rajen, Jinhui Song, Mingyan Yu for their contributions to various aspects of the project. We thank Ivan Seskar, Jakub Kolodziejski, and Prasanthi Maddala from WINLAB, Rutgers University, for their help on the integration with the ORBIT testbed. We also thank Kira Theuer and Kendall Ruiz from NI and the NI technical support team for their help.
\scriptsize
\bibliographystyle{IEEEtran}
\interlinepenalty=10000
\subsection{The RF Canceller PCB}
To achieve SIC in the RF domain, the RF canceller taps a reference signal from the output of the power amplifier (PA) at the TX side, adjusts its amplitude and phase, and then performs SIC at the input of the low-noise amplifier (LNA) at the RX side. The implemented RF canceller PCB is optimized around the $\SI{900}{MHz}$ operating frequency.\footnote{In this implementation, we select a $\SI{900}{MHz}$ operating frequency, but this approach can be easily extended to other frequencies (e.g., $2.4/\SI{5}{GHz}$).} For amplitude adjustment, a $7$-bit SKY12343-364LF digital attenuator is used, in which the attenuation can be adjusted within a $\SI{31.75}{dB}$ range at $\SI{0.25}{dB}$ step size. For phase adjustment, a passive phase shifter is used, which covers the full $\SI{360}{deg}$ with a resolution of $\SI{0.5}{deg}$ and is controlled by an $8$-bit TI-DAC081S101 digital-to-analog converter (DAC). Both the attenuator and the phase shifter are programmed via a serial-to-parallel interface (SPI), with code values represented by \texttt{ATT} (\underline{ATT}enuation) and \texttt{PS} (\underline{P}hase \underline{S}hift), respectively, with tuning ranges
\begin{align}
\texttt{ATT} \in \{0,1,\cdots,127\},\ \texttt{PS} \in \{0,1,\cdots,255\}. \nonumber
\end{align}
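For intuition, the nominal code-to-value mappings can be sketched as below. The linear $\SI{0.25}{dB}$-per-code attenuator mapping follows the description above; the $360/256$ degree-per-code phase mapping and the fixed base insertion loss are idealized assumptions for illustration only (the measured responses are not perfectly linear).

```python
import cmath
import math

ATT_STEP_DB = 0.25            # 7-bit attenuator: 0.25 dB steps, 31.75 dB range
PS_STEP_DEG = 360.0 / 256.0   # assumed linear 8-bit code-to-phase mapping

def att_db(code):
    """Nominal attenuation in dB for a 7-bit ATT code."""
    if not 0 <= code <= 127:
        raise ValueError("ATT code must be in 0..127")
    return ATT_STEP_DB * code

def ps_deg(code):
    """Nominal phase shift in degrees for an 8-bit PS code (idealized)."""
    if not 0 <= code <= 255:
        raise ValueError("PS code must be in 0..255")
    return PS_STEP_DEG * code

def canceller_tap(att_code, ps_code, base_loss_db=17.0):
    """Complex gain of the cancellation path for given codes.

    base_loss_db is a placeholder for the fixed insertion loss of the
    canceller (the measured amplitude range is roughly -17 to -48 dB).
    """
    mag = 10.0 ** (-(base_loss_db + att_db(att_code)) / 20.0)
    return mag * cmath.exp(1j * math.radians(ps_deg(ps_code)))
```

Sweeping \texttt{ATT} and \texttt{PS} in such a model mimics the manual tuning procedure: the goal is a tap whose magnitude and phase best oppose the SI leakage through the circulator.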
Fig.~\ref{fig:meas-canc-no-circ} shows the measured amplitude and phase of the implemented conventional RF canceller without the antenna interface. The RF canceller has an amplitude tuning range between $-\SI{48}{dB}$ and $-\SI{17}{dB}$.
\begin{figure}[t]
\centering
\subfloat{
\label{fig:meas-canc-no-circ-amp}
\includegraphics[width=0.8\columnwidth]{figs/meas/meas_canc_tx_rx_no_circ_amp.eps}} \\
\subfloat{
\label{fig:meas-canc-no-circ-phase}
\includegraphics[width=0.8\columnwidth]{figs/meas/meas_canc_tx_rx_no_circ_phase.eps}}
\vspace{-0\baselineskip}
\caption{Measured amplitude and phase of the implemented conventional RF canceller with (a) different attenuation codes, and (b) different phase shift codes.}
\label{fig:meas-canc-no-circ}
\vspace{-\baselineskip}
\end{figure}
\subsection{The Antenna Tuner}
In order to better match with the antenna interface, we also designed and implemented a programmable antenna tuner, which compensates for the varying impedance of the antenna due to environmental changes around the $\SI{900}{MHz}$ operating frequency. In particular, a $\pi$-network with a lossless inductor ($L$) and digitally tunable capacitors ($C_i$) is used for impedance transformation. In our implementation, we use a $\SI{5.1}{nH}$ chip inductor and the Peregrine $5$-bit PE64909 digitally tunable capacitors for $C_i$ ($i=1,2,3$). By programming the capacitors with codes \texttt{CAPi} ($i=1,2,3$), different antenna interface impedance matching can be achieved. The corresponding tuning range of the tunable capacitors is
\begin{align*}
\texttt{CAPi} \in \{0,1,\cdots,31\},\ \forall i = 1,2,3.
\end{align*}
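To illustrate how the \texttt{CAPi} codes affect matching, the sketch below evaluates the input impedance of one possible $\pi$-network arrangement at $\SI{900}{MHz}$. The capacitance tuning range and the exact placement of $C_2$ are assumptions made for illustration, not values taken from the PE64909 datasheet or our PCB layout.

```python
import math

F0 = 900e6                        # operating frequency (Hz)
L_H = 5.1e-9                      # series inductor from the text (H)
C_MIN, C_MAX = 0.6e-12, 4.7e-12   # assumed 5-bit capacitor range (placeholder)

def cap_from_code(code):
    """Map a 5-bit CAPi code to a capacitance (assumed linear mapping)."""
    if not 0 <= code <= 31:
        raise ValueError("CAP code must be in 0..31")
    return C_MIN + (C_MAX - C_MIN) * code / 31.0

def z_cap(c, f=F0):
    return 1.0 / (1j * 2 * math.pi * f * c)

def z_ind(l, f=F0):
    return 1j * 2 * math.pi * f * l

def parallel(a, b):
    return a * b / (a + b)

def tuner_input_impedance(cap1, cap2, cap3, z_load=50.0):
    """Input impedance of one possible pi-network arrangement:
    shunt C1 -- (L parallel C2) series branch -- shunt C3 -- load.
    """
    c1, c2, c3 = (cap_from_code(c) for c in (cap1, cap2, cap3))
    z = parallel(z_cap(c3), z_load)            # load-side shunt capacitor
    z = z + parallel(z_ind(L_H), z_cap(c2))    # series branch
    return parallel(z_cap(c1), z)              # input-side shunt capacitor
```

Sweeping the three codes over their $32^3$ combinations in such a model gives a feel for the reachable impedance region; the real tuner is characterized by measurement rather than by this idealized lossless ladder.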
\begin{figure}[t]
\centering
\subfloat{
\label{fig:tuner-diagram}
\includegraphics[width=0.48\columnwidth]{figs/tuner_diagram.png}}
\subfloat{
\label{fig:tuner-pcb}
\includegraphics[width=0.48\columnwidth]{figs/tuner_pcb.png}}
\vspace{-0\baselineskip}
\caption{Diagram and PCB implementation of the programmable antenna tuner board.}
\label{fig:tuner}
\vspace{-\baselineskip}
\end{figure}
\subsection{Integration with the ORBIT Testbed}
Fig.~\ref{fig:canc-orbit-node} depicts an FD ORBIT node equipped with the implemented conventional RF canceller. The node used is \texttt{node11-10} in the ORBIT \texttt{grid} with a USRP N210. In particular, the RF canceller TX/RX ports are connected to the USRP TX/RX ports, respectively, and the RF canceller CIRC port is connected to an Apex II-Multi-Band Antenna.
\begin{figure}[t]
\centering
\subfloat{
\label{fig:meas-canc-onsite}
\includegraphics[width=0.8\columnwidth]{figs/meas/meas_canc_orbit_onsite.eps}}
\vspace{-0\baselineskip}
\caption{Measured TX/RX isolation with and without using the conventional RF canceller. The $\SI{40}{dB}$ RF SIC bandwidth is $\SI{5}{MHz}$.}
\label{fig:meas-canc}
\vspace{-\baselineskip}
\end{figure}
\section{Introduction}
For safety reasons, trains currently operate strictly signal-based.
Therefore, each track is divided into multiple block sections delimited by
signals. The signals are coordinated by a central safety logic which guarantees
that only one train can occupy a block section at the same time. The necessary
train-position information is gathered by sensors installed at the tracks.
Although this system has proven to be safe and reliable, it suffers
either from high costs due to the large number of sensors, or from low track
capacity due to longer block sections
\cite{my_references:TheegVlasenko2009Railwaysignalling}. To overcome this
undesirable trade-off in the near future, trains have to become intelligent
vehicles which are able to localize themselves continuously without any
track-side installations.
The challenge when developing such a train-borne localization system is to
fulfill the high demands in terms of \ul{r}eliability, \ul{a}vailability,
\ul{m}aintainability, and \ul{s}afety (RAMS) in the sense of EN 50126
\cite{my_references:en50126}.
Although the development of train-borne localization systems has gained
interest in recent years, there is currently no sensor configuration available
fulfilling all demands
\cite{my_references:OteguiBahilloEtAl2017SurveyTrainPositioning}.
In this paper, we want to focus on the purpose of digital track-maps in
train-borne localization systems and how they can help to fulfill the RAMS
demands in the near future. To this end, we will investigate how track maps are
utilized to improve the positioning accuracy, availability and integrity of
train-borne localization systems. After that, we will give a brief overview over
the methods commonly used to generate track maps. From this overview it will
become apparent that there are hardly any adequate methods to generate suitable
track maps for the purposes described above. Therefore, we will present a new
approach to generate compact geometric track-maps based on the results of a
localization filter we presented in our previous work
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain} motivated by
\cite{my_references:SchreierWillertEtAl2014Gridmappingdynamic}.
\section{Track Maps in Train-Borne Localization}
\label{sec:track_maps_for_localization}
We start with a brief overview on the different types of track maps and the
methods used to generate them. Afterwards, we briefly discuss the shortcomings
of these methods which motivated us to come up with a novel approach to
generate compact geometric track-maps.
\subsection{Map Types}
\label{sec:track_maps_types}
There are three different categories of track maps used for train-borne
localization:
\subsubsection{Topological Track-Maps}
This is the most basic track-map type. It only stores the topology and mileage
of the railway network. This is sufficient due to the fact that the position $p$
of a train can unambiguously be defined in railway coordinates by $p =
\{t,\,s\}$, where $t$ represents a unique track ID and $s$ being a
continuous track-length parameter. These maps are widely used in the railway
system today since an additional absolute position information is not needed
for its safe operation.
Theoretically, it is possible to realize a train-borne localization system with
these maps, provided that the start point and the pre-set route of a specific
train are known.
Then a train can localize itself by measuring its traveled distance relative to
its start point and thereby determining its position on this pre-set route
\cite{my_references:SchneiderTroelsen2000Introducingdigitalmap}.
Unfortunately, the pre-set route is normally not known on the train itself.
This makes the localization result ambiguous: After a switch the position can
no longer be clearly determined. To solve this ambiguity, maps holding
additional information have been introduced as described next.
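The switch ambiguity can be made concrete with a minimal sketch of a topological map and a dead-reckoning update that advances the railway-coordinate position $p = \{t,\,s\}$ by a traveled distance. The network topology and all numbers below are purely illustrative.

```python
# Minimal topological track-map: track ID -> (length in m, successor IDs).
# An illustrative switch at the end of track 1 leads to tracks 2 and 3.
NET = {1: (500.0, [2, 3]), 2: (300.0, []), 3: (400.0, [])}

def advance(track_id, s, ds, net=NET):
    """Propagate p = {t, s} by the traveled distance ds.

    Returns ALL positions consistent with the odometry, which becomes
    ambiguous once a switch is passed and the pre-set route is unknown.
    """
    length, successors = net[track_id]
    if s + ds <= length:
        return [(track_id, s + ds)]
    remainder = s + ds - length
    hypotheses = []
    for nxt in successors:
        hypotheses.extend(advance(nxt, 0.0, remainder, net))
    return hypotheses
```

For instance, advancing from $(1, 450.0)$ by \valunit{100}{m} yields the two hypotheses $(2, 50.0)$ and $(3, 50.0)$, which the topological map alone cannot resolve.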
\subsubsection{Topographic and Geometric Track-Maps}
Compared to the topological track-maps described before, topographic track-maps
additionally store the track-course in absolute coordinates. Furthermore, if
they hold track-characteristic information like the specific track element type
(straight, circular arc or transitional arc), orientation, curvature, or
something similar, they can be additionally named geometric track-maps. The
additional information stored in these maps allows to apply different
map-matching techniques with more track-selective localization approaches
compared to topological track-maps.
The map-matching approaches vary depending on the used sensor configuration.
Many approaches utilize global navigation satellite system (GNSS) data and
inertial measurement unit (IMU) data together with course and curvature
information from a track map to realize track-selective map-matching
\cite{my_references:Saab2000mapmatchingapproachb,
my_references:LueddeckeRahmig2011EvaluatingmultipleGNSS,
my_references:BroquetasComeronEtAl2012Trackdetectionrailway,
my_references:CrespilloHeirichEtAl2014BayesianGNSSIMUtight,
my_references:RothBaaschEtAl2018MapSupportedPositioning}. However, the
additional map information can not only be used to improve the localization
accuracy. It can also be used to increase the availability and integrity of the
localization system itself
\cite{my_references:NeriPalmaEtAl2013TrackconstrainedPVT,
my_references:YuEun2017Sensorattackdetection,
my_references:JinCaiEtAl2018DTMaidedadaptive}.
\subsubsection{Feature Track-Maps}
This type of track-map also stores information on features or landmarks along
the track. Features directly used for train-borne localization are for example
ferromagnetic inhomogeneities of the rails
\cite{my_references:SpindlerLauer2018HighAccuracyEstimation} or characteristic
distortions of the earth magnetic-field along the railway track
\cite{my_references:SieblerHeirichEtAl2018TrainLocalizationParticle}. Other
features may be characteristic infrastructure elements like bridges, tunnels or
stations, as suggested in
\cite{my_references:GerlachHoerste2009precisedigitalmap} which can help to
increase the accuracy and availability of GNSS positioning results.
\subsection{Generation Methods}
\label{sec:track_maps_generation}
There are basically four main approaches to create digital track-maps
\cite{my_references:GerlachHoerste2009precisedigitalmap}:
\begin{itemize}
\item Extraction from existing site plans, available as paper drawings,
Computer Aided Design (CAD) plans, or Geographic Information System (GIS)
databases,
\item direct surveying of the tracks, e.\,g.\xspace by GNSS measurements or the
application of tachymetry,
\item analysis of orthophotos, or
\item the application of simultaneous localization and mapping (SLAM)
methods.
\end{itemize}
Although track maps are indispensable for train-borne localization, it is often
not described in detail how the necessary maps are created. The probably most
commonly used maps consist of previously recorded position data-points which
are available from the localization sensors. If additional geometric track
information is needed, it is mostly referred to the possibility to extract this
data from existing site plans. To our knowledge there are currently two
SLAM approaches available which are
especially designed for railway vehicles
\cite{my_references:HeirichRobertsonStrang2013RailSLAMLocalizationrail,
my_references:HasbergHenselEtAl2012Simultaneouslocalizationmapping}. Both
methods create data-point based track maps. In the following, we present a new
method that is not based on data points but on a concatenation of geometric
entities.
\subsection{Conclusions for Track-Maps and their
Generation}\label{sec:maps_discussion}
Based on the explanations in \pref{sec:track_maps_types} it becomes
obvious how important track-maps are for train-borne localization. They help to
increase the positioning accuracy, availability and integrity of the
localization system. Thus, track maps act like an additional passive sensor
helping to meet the RAMS requirements. This especially applies to geometric and
feature track-maps. However, it should also be stressed that an inaccurate track
map can also pose a single point of failure in the overall localization process.
The usability of map information for localization purposes is largely
influenced by the map representation and the map quality.
For a map to be suitable for train-borne localization it has to fulfill at least
two basic requirements\footnote{Some further conclusions on the requirements
for digital track-maps as well as some modeling schemes can be found in
\cite{my_references:BikkerKlingeEtAl1998ConceptsIntelligentRoute,
my_references:BoehringerGeistler2006Locationrailwaytraffic,
my_references:GerlachHoerste2009precisedigitalmap}.}:
\begin{itemize}
\item Track-length accuracy: It is essential to consistently
assign all stored information with respect to the track-length $s$ since all
localization algorithms somehow rely on this assignment.
\item Compactness: All information must be accessible in a
computationally efficient way, as the map is often directly used in the
localization algorithm itself, which has to run in real-time. Furthermore,
it is advantageous if the map consumes as little memory as possible in
order to be easily transferable.
\end{itemize}
All current generation methods directly utilizing measurement data store the map
in a data-point format. Between neighboring data points interpolation techniques
are applied. To avoid large interpolation errors the tracks are normally densely
sampled, i.\,e.\xspace with a sample distance between \valunit{1}{m} and \valunit{30}{m}.
Due to the necessary interpolation, such maps are not computationally efficient
and the resulting map representation is neither easily accessible nor
memory-saving. Thus, these maps are not optimal in the sense of the compactness
requirement mentioned above
\cite{my_references:LiuCaiEtAl2013Generatingelectronictrack}. A more suitable
track map representation would be a direct description of the geometric
properties of each track element in a list. This would result in geometric
track-maps easily fulfilling the compactness requirement. Those maps may be
extracted from existing site plans. However, these site plans can differ
significantly from the real track situation
\cite{my_references:LiPuEtAl2019MethodAutomaticallyRereating}. Two possible ways
to create compact geometric track-maps based on measurement data are presented in
\cite{my_references:TaoCaiEtAl2017Digitaltrackmap,
my_references:LiPuEtAl2019MethodAutomaticallyRereating}.
In the remainder of this paper an alternative mapping approach is presented,
which generates a compact geometric track-map with a much simpler method.
It advantageously incorporates the results of our previously
published localization filter
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain}, and is
moreover better suited for train-borne localization applications.
\section{Map Generation}
\label{sec:problem_formulation}
The aim of our map generation procedure is to create compact geometric
track-maps like the example listed in \pref{tab:ref_track_map}. This table
fully represents the track shown in \pref{fig:ref_track}. The compactness
results from the fact that railway tracks always consist of a continuous
sequence of well-described geometric shapes (straight, transitional arc, and
circular arc) \cite{my_references:HaldorEtAl2017Planungvonbahnanlagen}.
Therefore, a railway track can unambiguously be described by a single starting
point, the direction of the track at the starting point, the sequence of
geometric shapes, and the geometric parameters for each shape (c.\,f.\xspace
\pref{tab:ref_track_map}).
\begin{table}[ht]
\centering
\caption{Compact geometric track-map for the track shown in
\pref{fig:ref_track}.}
\label{tab:ref_track_map}
%
\input{tables/ref_track_map}
%
\end{table}
\begin{figure}[ht]
\centering
\tikzsetnextfilename{ref_track}
\input{figures/ref_track}
\caption{Exemplary track consisting of the three standard track
geometries: straight, transitional arc and circular arc. A compact
geometric track-map representation of this track is given in
\pref{tab:ref_track_map}.}
\label{fig:ref_track}
\end{figure}
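Such a compact geometric track-map can be expanded back into absolute coordinates by integrating heading and curvature along the element sequence. The sketch below uses simplified element tuples \texttt{(kind, length, end curvature)} rather than the exact format of \pref{tab:ref_track_map}; the midpoint integration scheme is one of several reasonable choices.

```python
import math

def build_polyline(x0, y0, phi0, elements, step=1.0):
    """Expand a list of track elements into a sampled polyline.

    elements: list of (kind, L, kappa_end) with kind in
    {'straight', 'arc', 'clothoid'}; a clothoid's curvature varies
    linearly from the previous element's end curvature to kappa_end.
    """
    pts = [(x0, y0)]
    x, y, phi, kappa = x0, y0, phi0, 0.0
    for kind, L, k_end in elements:
        n = max(1, int(L / step))
        ds = L / n
        for i in range(n):
            if kind == 'straight':
                k = 0.0
            elif kind == 'arc':
                k = k_end
            else:  # 'clothoid': curvature linear in the path length
                k = kappa + (k_end - kappa) * ((i + 0.5) / n)
            phi_mid = phi + 0.5 * k * ds  # midpoint heading for this step
            x += ds * math.cos(phi_mid)
            y += ds * math.sin(phi_mid)
            phi += k * ds
            pts.append((x, y))
        kappa = 0.0 if kind == 'straight' else k_end
    return pts
```

Because only a start pose and one tuple per element are stored, this representation stays compact regardless of how finely the track is sampled for plotting or map-matching.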
\subsection{Initial Situation}\label{sec:initial}
We assume to start with the results of the localization filter we presented in
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain}. Along with
the position solution this filter estimates some of the track's geometric
parameters which conveniently serve as initialization for the map generation
process. Moreover, the filter delivers assignments between measurement
data and identified track geometries which vastly simplifies the formulation of
the mapping error that is derived later in this section.
A summary of the available parameters from the localization filter is
listed in \pref{tab:initial_track_parameters}. A visualization of the resulting
discontinuous track is shown in \pref{fig:initial_track_geometry}.
For this example, the input measurements used for the filter have been generated
by simulation. The used ground-truth track is shown in \pref{fig:ref_track}. The
detailed simulation procedure and parameters are described in
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain}.
All further explanations are illustrated using this example data.
\begin{table}[ht]
\centering
\caption{Initially available geometric track-map parameters}
\label{tab:initial_track_parameters}
%
\input{tables/initial_track_map}
%
\end{table}
\begin{figure}[ht]
\centering
\tikzsetnextfilename{initial_map}
\input{figures/initial_map}
%
\caption{Initial track-elements identified by the localization
filter presented in
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain}. These
elements do not constitute a continuous track-map.}
\label{fig:initial_track_geometry}
%
\end{figure}
\subsection{Mapping Procedure}
The initial track depicted in \pref{fig:initial_track_geometry} is not usable
for localization since it is discontinuous. It has some gaps at the points
where no track-geometry has been identified (c.\,f.\xspace
\pref{tab:initial_track_parameters}, track-IDs: 2, 4, 6, and 8). Therefore, the
task of the map generation procedure is to connect the initially identified
track-elements to a continuous track which also has to fit the available GNSS
measurement data. To solve this task, first, missing track-geometries have to be
identified. Afterwards, the geometric parameters of the individual
track-elements can be estimated.
\subsubsection{Track Geometry Identification}
We assume that the missing track-geometries can be inferred from the following
knowledge \cite{my_references:HaldorEtAl2017Planungvonbahnanlagen,
my_references:WinterWillertEtAl2018IncreasingAccuracyTrain}: Railway tracks only
consist of three basic geometric shapes which are straights, transitional arcs
and circular arcs. A straight can only be connected to a circular-arc with the
help of a transitional-arc and vice versa. Since the localization
filter already identified straights and circular-arcs it can be concluded that
the unknown geometries have to be transitional arcs.
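This decision rule can be sketched directly; the list-of-kinds representation below is a simplification of the filter's actual output (c.\,f.\xspace \pref{tab:initial_track_parameters}).

```python
def fill_gaps(elements):
    """Fill unidentified gaps in an ordered element-kind list.

    elements: list of 'straight', 'arc', or None (an unidentified gap).
    By the design rules, a gap between a straight and a circular arc
    (in either order) must be a transitional arc (clothoid).
    """
    out = []
    for i, e in enumerate(elements):
        if e is not None:
            out.append(e)
            continue
        prev = elements[i - 1] if i > 0 else None
        nxt = elements[i + 1] if i + 1 < len(elements) else None
        kinds = {prev, nxt} - {None}
        out.append('clothoid' if kinds == {'straight', 'arc'} else 'unknown')
    return out
```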
\subsubsection{Geometry Parameter Identification}
After all track geometries have been identified, the corresponding geometric
parameters have to be tuned such that the continuous concatenation of all
track-elements fits best to the available GNSS measurement data.
Although there is already a lot of information available from the localization
filter, it is necessary to revise the parameters altogether.
This can be seen when we try to simply concatenate all identified tracks. The
parameters of the transitional arcs are inferred from the neighboring
elements%
\footnote{This is possible because transitional arcs are built as clothoids
\cite{my_references:HaldorEtAl2017Planungvonbahnanlagen}. They are clearly
defined by their length, their radius at the end point, and their orientation
at either the start or the end point.}.
\begin{figure}[ht]
\centering
\tikzsetnextfilename{naive_track_map}
\input{figures/naive_track_map}
%
\caption{Track-map resulting from the simple concatenation of the
track-elements identified by the localization filter described in
\cite{my_references:WinterWillertEtAl2018IncreasingAccuracyTrain} (c.\,f.\xspace
\pref{tab:initial_track_parameters} and \pref{fig:initial_track_geometry}).}
\label{fig:naive_track_map}
%
\end{figure}
The resulting track-map for this simple approach is shown in
\pref{fig:naive_track_map}. Obviously, it is a quite poor fit to the GNSS data.
This is a result of the continuity condition of railway tracks. Slight parameter
inaccuracies of one track element are propagated on all succeeding elements.
Therefore, it is necessary to tune the parameters in a joint optimization. To
achieve this, we establish an optimization problem for the whole track by
defining an appropriate error function that incorporates all given measurement
information.
In order to solve this optimization problem we furthermore have to reformulate
the track parameters in a more suitable representation and have to choose an
optimization method. The essential aspects of the optimization are
described in the following paragraphs.
\paragraph{Error-Function Definition}
The error introduced by a track element is given by the perpendicular distances
between the track and the GNSS measurements related to this track element.
Let $\vec x_t$ be a vector representing the parameters of track element $t$.
Furthermore, let $\vec{z}_{t,i}$ be the $i$-th GNSS position measurement
assigned to this track element $t$ by the localization filter. With
$\hat{\vec{z}}_{t,i}(\vec{x}_t)$, the dropped perpendicular point of
$\vec{z}_{t,i}$ on the track element $t$, the error for this measurement is then
defined as
\begin{equation}
\vec{e}_{t,i}(\vec{x}_t)
=
\vec{z}_{t,i} - \hat{\vec{z}}_{t,i}(\vec{x}_t)\, .
\end{equation}
The sum of these measurement errors over all track-elements
$\mathcal T$ yields the total error of the whole track map, i.\,e.\xspace
\begin{equation}\label{eq:error}
F\left(\vec{\mathcal X} {=} \lbrace \vec{x}_1,\ldots,\vec{x}_N\rbrace\right)
=
\sum_{t \in \mathcal T}\sum_{i \in {\mathcal C}_t}\vec{e}_{t,i}^\mathrm{T} \vec\Omega_{t,i} \vec{e}_{t,i}\, ,
\end{equation}
where $N$ is the number of identified track-geometries, $\mathcal C_t$ is the
set of all measurements assigned to track $t$ and $\vec\Omega_{t,i}$ is the
information matrix corresponding to measurement $\vec{z}_{t,i}$. The
information matrix is the inverse of the covariance matrix which is often
provided by the GNSS receiver. If no adequate uncertainty information is
available $\vec\Omega_{t,i}$ should be chosen according to the assumed receiver
uncertainty.
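For a straight parameterized by its start and end point, the perpendicular residual and its weighted contribution to \pref{eq:error} can be sketched as follows (using a plain $2{\times}2$ information matrix):

```python
def residual_to_straight(z, p0, p1):
    """Residual e = z - z_hat, with z_hat the perpendicular foot of the
    GNSS measurement z on the line through p0 and p1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t = ((z[0] - p0[0]) * dx + (z[1] - p0[1]) * dy) / (dx * dx + dy * dy)
    foot = (p0[0] + t * dx, p0[1] + t * dy)
    return (z[0] - foot[0], z[1] - foot[1])

def weighted_cost(measurements, p0, p1, omega):
    """Sum of e^T Omega e over all measurements assigned to this straight;
    omega is the 2x2 information matrix (inverse GNSS covariance)."""
    c = 0.0
    for z in measurements:
        ex, ey = residual_to_straight(z, p0, p1)
        c += ex * (omega[0][0] * ex + omega[0][1] * ey) \
           + ey * (omega[1][0] * ex + omega[1][1] * ey)
    return c
```

Analogous residual functions for circular and transitional arcs require the perpendicular foot on the respective curve, which has no closed form for clothoids and is typically found numerically.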
\paragraph{Parameter Representation}
The parameterization of the track map presented in \pref{tab:ref_track_map} is
not very suitable for an optimization. With this parameterization the whole
track would be very sensitive to changes in specific parameters, e.\,g.\xspace small
changes in $\varphi_0$ or $L$, would rotate, respectively move, major parts of
the track. Therefore, an alternative representation is chosen with less
sensitivity. All straights are now parameterized by their start and end point
whereas transitional arcs and circular arcs are parameterized by a minimal set
of geometric parameters. For our example track
(c.\,f.\xspace \pref{tab:initial_track_parameters}) this new parameterization is given in
\pref{tab:reparam_track_map} and the corresponding parameter vector is
\begin{gather}
\vec{\mathcal X}
=
\lbrace
\vec{x}_1, \vec{x}_2, \vec{x}_3, \vec{x}_4, \vec{x}_5, \ldots
\rbrace,
\quad
\text{with}
\qquad\qquad\\
\begin{aligned}
\vec{x}_1 &= \mylvec{\xi_{0,1} & \eta_{0,1} & \xi_{e,1} & \eta_{e,1}},\\
\vec{x}_2 &= L_2,\quad
\vec{x}_3 = \mylvec{r_3 & L_3},\quad
\vec{x}_4 = L_4,\\
\vec{x}_5 &= \mylvec{\xi_{0,5} & \eta_{0,5} & \xi_{e,5} & \eta_{e,5}},
\quad\ldots\quad.
\end{aligned}\nonumber
\end{gather}
\begin{table}[ht]
\centering
\caption{Track-map from \pref{tab:initial_track_parameters} reparameterized
for the optimization.}
\label{tab:reparam_track_map}
%
\input{tables/reparam_track_map}
%
\end{table}
\paragraph{Optimization Method}
The objective is to minimize the error function
$F(\vec{\mathcal X})$ given in \pref{eq:error}. Although the initial parameters
given by the localization filter yield a poor track map when being simply
concatenated (c.\,f.\xspace \pref{fig:naive_track_map}), they are still a good initial
guess $\vec{\mathcal X}_0$ for the track-map parameters. Therefore, we can
start the optimization with this initial parameter set which is presumably
close to the global optimum and it is sufficient to use the Levenberg-Marquardt
algorithm \cite{my_references:More1978LevenbergMarquardtalgorithm} to find that
optimum.
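To convey the damped Gauss-Newton idea behind the Levenberg-Marquardt algorithm, the sketch below implements a minimal variant with a numeric Jacobian and applies it to a much smaller surrogate problem, fitting a circular arc (center and radius) to points. It is not the solver used for the full parameter vector $\vec{\mathcal X}$.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def levenberg_marquardt(residual_fn, x0, n_iter=100):
    """Minimal LM: numeric Jacobian, multiplicative damping on the normal
    equations (J^T J + lam * diag) dx = -J^T r."""
    x, lam = list(x0), 1e-3
    cost = sum(r * r for r in residual_fn(x))
    for _ in range(n_iter):
        r = residual_fn(x)
        m, n, eps = len(r), len(x), 1e-6
        J = []
        for j in range(n):  # forward-difference Jacobian, column j
            xp = list(x)
            xp[j] += eps
            rp = residual_fn(xp)
            J.append([(rp[i] - r[i]) / eps for i in range(m)])
        A = [[sum(J[a][i] * J[b][i] for i in range(m)) for b in range(n)]
             for a in range(n)]
        g = [sum(J[a][i] * r[i] for i in range(m)) for a in range(n)]
        for a in range(n):
            A[a][a] *= (1.0 + lam)  # blend Gauss-Newton and gradient descent
        dx = solve(A, [-gi for gi in g])
        x_new = [x[k] + dx[k] for k in range(n)]
        new_cost = sum(t * t for t in residual_fn(x_new))
        if new_cost < cost:            # accept the step, relax damping
            x, cost, lam = x_new, new_cost, lam * 0.5
        else:                          # reject the step, increase damping
            lam *= 5.0
    return x

def circle_residuals(points):
    """Radial residuals of points to a circle x = (cx, cy, R)."""
    def fn(x):
        cx, cy, R = x
        return [math.hypot(px - cx, py - cy) - R for px, py in points]
    return fn
```

As in the track-map problem, a reasonable initial guess keeps the iteration near the global optimum, so the damped Gauss-Newton steps converge in a handful of iterations.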
\section{Evaluation}
\label{sec:analysis}
In this section the performance of the presented mapping method will be
evaluated with the help of simulated data and real measurement data.
\subsection{Simulation Results}
First, an evaluation based on simulations is carried out to gain some
principal insights into the behavior of the presented mapping algorithm. This is
advantageous, as in simulations ground-truth data is directly available.
Throughout all simulations the example track described in \pref{sec:initial} is
used.
\subsubsection{Optimization Process}
The progress of the residual $\lVert F(\vec{\mathcal X}) \rVert$ during the
optimization is shown in \pref{fig:optim_progress}. It can be seen that the
optimization converges very fast. After eight iterations the stopping criterion
(relative step size limit of \num{1e-6}) is reached.
The biggest change in $\lVert F(\vec{\mathcal X})\rVert$ and the parameter set
occurs in the first iteration. This confirms our hypothesis that the initial
parameter set $\vec{\mathcal X}_0$ provided by the localization filter, is
already a good guess and the Levenberg-Marquardt algorithm can quickly find a
good solution.
\begin{figure}[ht]
\centering
\tikzsetnextfilename{optim_progress}
\input{figures/optim_progress}
%
\caption{Simulation result: Progress of the residual during the
optimization.}
\label{fig:optim_progress}
%
\end{figure}
\subsubsection{Absolute Accuracy}
A visualization of the resulting track is shown in \pref{fig:optim_map}. A good
qualitative match with the GNSS measurements and the reference track becomes
evident.
Figure \ref{fig:abs_mapping_error} allows investigating the map's quality in
even more detail. The plot shows the absolute position deviation from the
reference track over the path length $s$.
The generated map is compared to a typically used data-point
based map which has been sampled from the virtually generated GNSS data with a
spacing of \valunit{1}{m}. Intermediate points are calculated by a linear
interpolation.
The deviation of our optimized geometric track-map from the reference track is on
average \valunit{1.8}{m}. The error of the data-point based map varies
strongly over the whole track length and the average deviation is
\valunit{10.3}{m}. This value corresponds to the simulated GNSS measurement
noise which has a standard deviation of \valunit{10}{m}.
The better performance of the new approach results from the joint
incorporation of all measurements in the optimization process. Thereby, the
error induced by the GNSS measurement noise can be reduced significantly.
\begin{figure}[ht]
\centering
\tikzsetnextfilename{optim_map}
\input{figures/optim_map}
%
\caption{Simulation result: Final geometric track-map resulting from the
here presented mapping approach. Due to the optimization procedure the
final map fits very well to the GNSS data, compared to the initial map
resulting from the simple concatenation of the identified track elements.}
\label{fig:optim_map}
%
\end{figure}
\begin{figure}[ht]
\centering
\tikzsetnextfilename{abs_mapping_error}
\input{figures/abs_mapping_error}
%
\caption{Simulation result: Absolute mapping error $|\epsilon|$ plotted
against the track length $s$. The error of the geometric track-map
generated with the here presented approach is significantly smaller
compared to a typically used data-point based track map.}
\label{fig:abs_mapping_error}
%
\end{figure}
\subsubsection{Geometric Accuracy}
An often used metric to evaluate the geometric similarity of two paths is the
Fréchet distance
\cite{my_references:KubickaCelaEtAl2018ComparativeStudyApplication}. The
Fréchet distance of the optimized geometric track-map to the reference track is
\valunit{3.6}{m}. In comparison, the Fréchet distance of the data-point based
map is \valunit{26.9}{m}. Consequently, the generated geometric track-map
represents the geometric characteristics of the track significantly better.
Furthermore, the optimized geometric track-map allows efficient access to
useful geometric information, e.\,g.\xspace the curvature at an arbitrary path length.
For the data-point based map, this is only possible with additional calculations.
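The discrete Fréchet distance on sampled paths can be computed with the standard dynamic-programming recursion of Eiter and Mannila. The sketch below is illustrative and not the implementation used for the numbers above (function and variable names are ours).

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two sampled paths P, Q (n x 2 arrays)."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    F = np.full((n, m), np.inf)
    F[0, 0] = d[0, 0]
    for i in range(1, n):
        F[i, 0] = max(F[i - 1, 0], d[i, 0])
    for j in range(1, m):
        F[0, j] = max(F[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            F[i, j] = max(min(F[i - 1, j], F[i, j - 1], F[i - 1, j - 1]), d[i, j])
    return F[-1, -1]

# toy check: identical paths give 0, a constant 3 m lateral offset gives 3 m
P = np.stack([np.linspace(0, 100, 51), np.zeros(51)], axis=1)
Q = P + np.array([0.0, 3.0])
print(discrete_frechet(P, P))  # 0.0
print(discrete_frechet(P, Q))  # 3.0
```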
\subsubsection{Map Size}
To compare the sizes of the maps, we express the storage demand as the number of
necessary data fields. For example, a position specification
$
\vec{p} = \left(
\begin{IEEEeqnarraybox*}[][c]{,c/c,}
\xi & \eta%
\end{IEEEeqnarraybox*}
\right)^T
$
requires two data fields. The data-point based map, which is \valunit{4.4}{km}
long and sampled with a spacing of \valunit{1}{m}, therefore consists of more
than $8000$ data fields, whereas the generated track-map only consists of $38$
data fields. This clearly shows how compact the optimized geometric track-map
is, compared to a simple data-point based map.
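The field count above follows from simple arithmetic; the short sketch below is purely illustrative.

```python
# Storage comparison (illustrative): a 2-D position needs two data fields.
track_len_m = 4400                          # 4.4 km
spacing_m = 1.0
points = int(track_len_m / spacing_m) + 1   # sampled data points
fields_point_map = 2 * points               # xi and eta per point
fields_geometric_map = 38                   # from the optimized track-map
print(fields_point_map)                     # 8802 -> "more than 8000 data fields"
print(fields_point_map / fields_geometric_map)  # compression factor of roughly 230
```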
\subsection{Evaluation on Real Measurement Data}
The mapping performance is also evaluated with real GNSS and IMU
measurement data. The data has been recorded on a \valunit{24}{km} long
test drive on a secondary line in the Erzgebirge in Germany. The results
presented next refer to a \valunit{5.7}{km} long section of this track.
\subsubsection{Absolute Accuracy}
The absolute accuracy of the optimized geometric track-map is evaluated with the
help of OpenStreetMap (OSM) data \cite{my_references:2019opnstreetmap} since no
other reference is available.
In \pref{fig:pd_mapping_error} the cumulative distribution function (CDF) of the
absolute mapping error $|\epsilon|$, i.\,e.\xspace the
perpendicular distance between the track from the optimized geometric track-map
and the OSM map, is shown.
\begin{figure}[ht]
\centering
\tikzsetnextfilename{pd_mapping_error}
\input{figures/pd_mapping_error}
%
\caption{Result on real measurement data: Cumulative distribution
function (CDF) of the absolute mapping error $|\epsilon|$ between the
optimized geometric track-map and the OSM map.}
\label{fig:pd_mapping_error}
%
\end{figure}
The mean error is \valunit{1.8}{m} and the maximum mapping error is
\valunit{8.7}{m}. For the optimized geometric track-map a mapping error of less
than \valunit{2}{m} is achieved on \valunit{68}{\%} of the track (c.\,f.\xspace
\pref{fig:pd_mapping_error}). It can be concluded that the
generated map is quite accurate and that the presented mapping method is also
applicable to real data.
However, when projecting the OSM map on a satellite image, it can be seen that
the OSM map sometimes gives a poor fit to the visible course of the rails.
Interestingly, the biggest deviations between the optimized geometric track-map
and the OSM map occur at these sections. Thus, it is very likely that the real
error of the optimized geometric track-map is even smaller than stated above.
\subsubsection{Geometric Accuracy}
The geometric accuracy of the optimized geometric track-map is evaluated with
the help of satellite images as shown in \pref{fig:tgc_map_result}.
\begin{figure}[ht]
\centering
\tikzsetnextfilename{tgc_map_result}
\input{figures/tgc_map_result}
%
\caption
[
Result on real measurement data: Visualization of the final generated
map on a satellite image. The OSM map is not visible on this scale as
it is completely covered by the generated map.
]
{
Result on real measurement data: Visualization of the generated map on
a satellite image\footnotemark. The OSM map is not visible on this scale
as it is completely covered by the generated map.
}
\label{fig:tgc_map_result}
%
\end{figure}
\footnotetext{Image \textcopyright{ }2019 Google, Maps \textcopyright{ }2019
GeoBasis-DE/BKG (\textcopyright{ }2009), Google}
As in the simulative evaluation, it can be seen that the initial map,
resulting from the simple concatenation of all identified track elements,
yields a rather poor fit to the real track. Only the first track elements are
near the course of the real track (see \pref{fig:tgc_map_result} considering
the direction of travel from east to west). Due to the continuity
condition, these small errors in the first track elements accumulate for the
track elements farther from the start.
In contrast, the final map resulting from the optimization corresponds
very well to the course of the rails visible on the satellite image (c.\,f.\xspace
\pref{fig:tgc_map_result}). Therefore, we assume that the geometric shape of the
track has been mapped in a suitable way for train-borne localization
applications.
\section{Conclusions}
\label{sec:conclusions}
In this paper we presented an approach to generate geometric track-maps for
train-borne localization applications. After a brief overview of the
existing track-map types and map generation methods, we argued how important
track maps are for trains to become intelligent vehicles that are able to
localize themselves in the near future. Furthermore, the overview revealed that
there are hardly any adequate methods to generate suitable track maps for this
purpose.
We presented an optimization method that finds the geometric parameters of a
continuous track representation that best fits the position measurements and is
much more compact. The method uses information provided by a localization
filter for initialization, shape identification and data association.
Through a simulative evaluation we demonstrated that the presented method is
able to generate geometric track-maps which are more accurate than the typically
used data-point based maps. Furthermore, the generated map provides additional
geometric track information and is much more compact. Finally, we demonstrated
on a \valunit{5.7}{km} long real track-section that the approach is also
applicable to real measurement data and that the resulting map corresponds very well
to the track visible in satellite images.
Consequently, the presented method is capable of generating compact geometric
track-maps, which can help to introduce train-borne localization systems in the
near future.
\section*{Acknowledgment}
We kindly thank Deutsche Bahn for supporting this research project. Furthermore,
we would like to thank Thales for allowing us to collect the raw data with their
test vehicle LUCY, and the group of Geodetic Measurement Systems and Sensors at
TU Darmstadt for providing the IMU/GNSS sensor platform.
\section{Introduction}
\label{intro}
As one of the two competing theories for hiding extra dimensions, braneworlds have received a lot of attention, starting from the first attempts to get matter to stick on a field theory domain wall \cite{Akama:1982jy,Rubakov:1983bb}, and then to the understanding of how gravity can become localized on a hypersurface \cite{Randall:1999vf}. This naturally led to braneworld cosmological models \cite{Langlois:2002bb} with the possibility that the big-bang may simply correspond to the collision of two brane-worlds \cite{Khoury:2001wf,Bucher:2001it,Gen:2001bx,Langlois:2001uq}. With this in mind there have been a number of numerical studies examining both how matter on the branes reacts to such collisions \cite{Gibbons:2006ge,Saffin:2007ja,Saffin:2007qa,Takamizu:2004rq}, and how the spacetime geometry itself deals with the collisions \cite{Takamizu:2006gm,Takamizu:2007ks}. As is to be expected, if the collision of the walls is energetic enough then a singularity will form due to gravitational collapse; this was the main focus of \cite{Takamizu:2007ks} and is what we concern ourselves with here, in particular the global structure of collisions where a curvature singularity forms. The symmetry of the problem is that of parallel branes with three flat, extended spatial directions, so any singularity that forms will have the same symmetry, leading Takamizu {\it et al} \cite{Takamizu:2007ks} to conclude that the end state of the collision process would be the black hole with these symmetries. In this paper we re-examine the same system, but we claim instead that the end state of a collision process that forms a curvature singularity is in fact a big crunch: there are no asymptotic regions for any observers to hide in.
\section{field theory domain wall}
\label{dwModel}
The model we use to examine the collision of domain walls is that of a single real scalar, canonically coupled to gravity in five dimensions.
\begin{eqnarray}
{\cal L}&=&\frac{m_p^3}{2}R-\frac{1}{2}\ensuremath{\partial}_a\phi\ensuremath{\partial}^a\phi-{\cal V}(\phi),
\end{eqnarray}
where we require the potential to have at least two distinct vacua in order for a domain wall solution to exist that can interpolate between them. A rather nice way to achieve this, and one that allows for explicit analytic solutions, is to write the scalar potential in a form inspired by supergravity \cite{Chamblin:1999cj},
\begin{eqnarray}
{\cal V}&=&\frac{1}{2}\left[(\ensuremath{\partial} W/\ensuremath{\partial} \phi)^2-\frac{4}{3m_p^3}W^2\right]
\end{eqnarray}
where $W(\phi)$ is termed the superpotential. With this restriction on the form of the potential one finds that a line element and scalar field ansatz of the form
\begin{eqnarray}
\label{eq:staticLineElement}
ds^2&=&e^{2U(r)}\eta_{\mu\nu}dx^\mu dx^\nu+dr^2,\qquad\phi=\phi(r),
\end{eqnarray}
leads directly to the following BPS system of equations
\begin{eqnarray}
\label{eq:BPSeqns}
\phi'&=&\pm\ensuremath{\partial} W/\ensuremath{\partial}\phi,\\\nonumber
U'&=&\mp\frac{1}{3m_p^3}W.
\end{eqnarray}
where the $\pm$ gives us our kinks or anti-kinks. To be specific we need to make a choice of superpotential, and we pick the sine-Gordon model
\begin{eqnarray}
W&=&\mu^4-\frac{4m}{\beta^2}\cos(\beta\phi/2)
\end{eqnarray}
giving a potential of the form
\begin{eqnarray}
{\cal V}&=&\left[\frac{2m^2}{\beta^2}-\frac{2\mu^8}{3m_p^3}\right]+\frac{16m\mu^4}{3m_p^3\beta^2}\cos(\beta\phi/2)\\\nonumber
&~&-\left[\frac{2m^2}{\beta^2}+\frac{32m^2}{3m_p^3\beta^4}\right]\cos^2(\beta\phi/2).
\end{eqnarray}
The three parameters of the superpotential, $m$, $\beta$, $\mu$ have the following effect: firstly, $m$ is the mass of the scalar field if we switch off gravity, i.e. $m_p\rightarrow \infty$, and so $m$ controls the curvature of the potential in the minima; $\beta$ gives the separation of the vacua in field space, with the vacua being located at $\phi_{vac}=n\pi/\beta$; finally, $\mu$ controls the additive constant to the potential, and we shall tune it so that one set of the vacua has vanishing potential, leading to a Minkowski geometry. To see how to achieve a Minkowski minimum we note that
\begin{eqnarray}
{\cal V}(\beta\phi/2=2\pi n)&=&-\frac{2m^2}{3m_p^3\beta^4}\left(1-\frac{\mu^4\beta^2}{4m}\right)^2,
\end{eqnarray}
and so by choosing $\mu^4=4m/\beta^2$ we have a set of Minkowski minima, leaving us with
\begin{eqnarray}\nonumber
&~&{\cal V}=\frac{2m^2}{\beta^2}\left[1-\cos^2(\beta\phi/2)-\frac{16}{3\beta^2m_p^3}[1-\cos(\beta\phi/2)]^2\right]\\
&~&{\cal V}(\beta\phi/2=\pi+2\pi n)=-6m_p^3\left(\frac{8m}{3\beta^2m_p^3}\right)^2=m_p^3\Lambda
\end{eqnarray}
and a form of potential shown in Fig. \ref{fig:potential}, where A, C, E refer to minima that are Minkowski vacua, and B, D are the AdS$^5$ vacua. In the simulations we perform we shall be using $\beta^2 m_p^3=100$, corresponding to the upper curve of Fig. \ref{fig:potential}.
\begin{figure}
\centering
\includegraphics[width=7cm]{Vpot}
\caption{\label{fig:potential}The potential for $\beta^2 m_p^3=10$ (lower curve) and $\beta^2 m_p^3=100$ (upper curve). The labels A, B, C, D, E indicate the various minima, with B and D being AdS vacua, and A, C, E being Minkowski.}
\end{figure}
The setup we focus on is the same as that of Takamizu {\it et al} \cite{Takamizu:2007ks}, where we have two parallel domain walls, with a geometry that asymptotes to AdS$^5$ and contains a Minkowski region sandwiched in between. The field is then taken to interpolate from the B-vacuum, through the C-vacuum, and then on to the D-vacuum. In order to accomplish this we need the profiles of the BC-kink and the CD-kink.
BC kinks are given by the BPS solutions (lower sign of (\ref{eq:BPSeqns}))
\begin{eqnarray}\label{eq:kink}
\beta\phi/2&=&2\tan^{-1}\left[\tanh[m(r-r_0)/2]\right]-\pi/2\\\nonumber
U&=&-\frac{4}{3\beta^2m_p^3}\left\{\ln[\cosh[m(r-r_0)]]-\frac{\beta^2\mu^4}{4}(r-r_0)\right\},
\end{eqnarray}
while CD anti-kinks are given by the anti-BPS solutions (upper sign of (\ref{eq:BPSeqns}))
\begin{eqnarray}\label{eq:akink}
\beta\phi/2&=&2\tan^{-1}\left[\tanh[m(r-r_0)/2]\right]+\pi/2\\\nonumber
U&=&-\frac{4}{3\beta^2m_p^3}\left\{\ln[\cosh[m(r-r_0)]]+\frac{\beta^2\mu^4}{4}(r-r_0)\right\}
\end{eqnarray}
What we have just presented are the solutions for the single-kink systems, but we need double-kink initial conditions. Although the analytic solution is not available, we are able to add together the BC and the CD kink profiles for $\phi$ and $U$ to provide an excellent approximate solution, so long as the kinks are far enough apart. This is still not quite what we need, as we want to be able to boost the kinks at will, in order to collide them at various speeds; we shall address this when we write down the dynamical equations.
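As a consistency check of the profiles above, one can verify numerically that the BC-kink solves the lower-sign BPS equation (\ref{eq:BPSeqns}). Setting $m=\beta=1$ for simplicity and writing $\theta=\beta\phi/2$, the equation $\phi'=\mp\ensuremath{\partial} W/\ensuremath{\partial}\phi$ reduces to $\theta'=-\sin\theta$. The sketch below is ours, not the paper's code:

```python
import numpy as np

# Dimensionless check (m = beta = 1): with theta = beta*phi/2, the lower-sign
# BPS equation phi' = -dW/dphi reduces to theta' = -sin(theta).
x = np.linspace(-10.0, 10.0, 2001)
theta = 2.0 * np.arctan(np.tanh(x / 2.0)) - np.pi / 2.0   # BC-kink profile

dtheta = np.gradient(theta, x)                 # numerical derivative
residual = np.max(np.abs(dtheta + np.sin(theta)))
print(residual)   # small residual -> the profile solves the BPS equation
```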
\section{asymptotic structure}
\label{sec:asStruct}
From the kink solutions (\ref{eq:kink}) (\ref{eq:akink}) we see that the line element (\ref{eq:staticLineElement}) has the asymptotic limit
\begin{eqnarray}
ds^2(r\rightarrow+\infty)&\rightarrow& \exp\left[-2\alpha r\right]\eta_{\mu\nu}dx^\mu dx^\nu+dr^2\\
\alpha&=&\frac{8m}{3\beta^2m_p^3}
\end{eqnarray}
and by defining $\alpha Z=\exp\left[\alpha r \right]$ we see that the asymptotic region of the domain wall is given by
\begin{eqnarray}
ds^2(Z\rightarrow+\infty)&\rightarrow&\frac{1}{\alpha^2 Z^2}\left[\eta_{\mu\nu}dx^\mu dx^\nu+dZ^2\right]
\end{eqnarray}
which we recognize as a portion of AdS$^5$; in particular, it does not contain the AdS boundary, $Z=0$.
Now let us consider a possible end-state for a singular system generated by the collision of two domain walls. The natural choice is the AdS$^5$ black-brane given by
\begin{eqnarray}
ds^2_{bb}&=&-f(R)dT^2+f^{-1}(R)dR^2+R^2\delta_{ij}dx^idx^j,\\
f(R)&=&-MR^{-2}-\Lambda R^2/6.
\end{eqnarray}
Indeed, a version of Birkhoff's theorem tells us that this is the unique solution with these symmetries \cite{Charmousis:2002rc,Zegers:2005vx} in AdS$^5$ spacetime.
Now note that the asymptotic limit is reached at large $R$, and in order to compare it to the brane case we perform the following co-ordinate transformations, $t=\sqrt{-\Lambda/6}T$, $R=1/(\sqrt{-\Lambda/6}Z)$ and find that the asymptotic region is given by
\begin{eqnarray}
ds^2(Z\rightarrow0)&\rightarrow&-\frac{6}{\Lambda Z^2}\left[\eta_{\mu\nu}dx^\mu dx^\nu+dZ^2\right]
\end{eqnarray}
confirming that the asymptotic region of the domain wall and the asymptotic region of the black brane cover different portions of AdS$^5$. It would therefore be surprising if the collision of domain walls ended up with a final state that was a black brane.
Before we move on to the dynamical system, we briefly note that we use dimensionless variables $\tilde x$, $\tilde\phi$ defined by
\begin{eqnarray}
x&=&\tilde x/m,\qquad\phi=\tilde\phi/\beta,
\end{eqnarray}
meaning that we are left with the single physical parameter $\beta^2m_p^3$. From now on we work with the dimensionless variables but drop the tildes. This is analogous to measuring distances and the Planck mass in units of $m$, and measuring $\phi$ in units of $\beta$.
\section{\label{sec:dynamics}the dynamical set-up}
Having found the solutions for isolated, static kinks, we need to know how to get them to move. The simplest way to achieve this is to change to co-ordinates in which the spatial direction defining the wall (the co-dimension one direction) and the time co-ordinate are on an equal footing, in which case there is an explicit SO(1,1) Lorentz symmetry. The metric suited to dynamics is therefore of the form
\begin{eqnarray}
ds^2&=&e^{2A(t,z)}(-dt^2+dz^2)+e^{2B(t,z)}\delta_{ij}dx^idx^j,
\end{eqnarray}
with $z$ and $r$ related by $e^{A}dz=\pm dr$ in the static case. The equations of motion using these co-ordinates may be written in a form that highlights the SO(1,1) symmetry as
\begin{eqnarray}
\ensuremath{\partial}_{\tilde\mu}\ensuremath{\partial}^{\tilde\mu}\phi+3\ensuremath{\partial}_{\tilde\mu}B\ensuremath{\partial}^{\tilde\mu}\phi&=&e^{2A}\frac{\ensuremath{\partial} {\cal V}}{\ensuremath{\partial} \phi},\\
\ensuremath{\partial}_{\tilde\mu}\ensuremath{\partial}^{\tilde\mu} A-3\ensuremath{\partial}_{\tilde\mu} B\ensuremath{\partial}^{\tilde\mu} B&=&
\frac{1}{m_p^3}\left[-\frac{1}{2}\ensuremath{\partial}_{\tilde\mu}\phi\ensuremath{\partial}^{\tilde\mu}\phi+\frac{1}{3}e^{2A}{\cal V}\right],\\
\ensuremath{\partial}_{\tilde\mu}\ensuremath{\partial}^{\tilde\mu}B+3\ensuremath{\partial}_{\tilde\mu}B\ensuremath{\partial}^{\tilde\mu}B&=&-\frac{2}{3m_p^3}e^{2A}{\cal V},
\end{eqnarray}
where the $\tilde\mu$ index runs over the $t$, $z$ directions; the constraint equations become
\begin{eqnarray}
&~&\ensuremath{\partial}_{\tilde\mu}\ensuremath{\partial}_{\tilde\nu}B
+\ensuremath{\partial}_{\tilde\mu}B\ensuremath{\partial}_{\tilde\nu}B
+\eta_{\tilde\mu\tilde\nu}\ensuremath{\partial}_{\tilde\rho}B\ensuremath{\partial}^{\tilde\rho}B\\\nonumber
&~&-\ensuremath{\partial}_{\tilde\mu}A\ensuremath{\partial}_{\tilde\nu}B-\ensuremath{\partial}_{\tilde\mu}B\ensuremath{\partial}_{\tilde\nu}A
+\eta_{\tilde\mu\tilde\nu}\ensuremath{\partial}_{\tilde\rho}A\ensuremath{\partial}^{\tilde\rho}B\\\nonumber
&=&-\frac{1}{3m_p^3}\left[\ensuremath{\partial}_{\tilde\mu}\phi\ensuremath{\partial}_{\tilde\nu}\phi-\frac{1}{2}\eta_{\tilde\mu\tilde\nu}\ensuremath{\partial}_{\tilde\rho}\phi\ensuremath{\partial}^{\tilde\rho}\phi
+\eta_{\tilde\mu\tilde\nu}e^{2A}{\cal V}\right].
\end{eqnarray}
It is now clear that $A(t,z)$, $B(t,z)$ and $\phi(t,z)$ are all Lorentz scalars under this SO(1,1), and so it is easy to boost to different Lorentz frames ${\cal O}$ and ${\cal O}'$ using
\begin{eqnarray}\nonumber
t'&=&\gamma(t-vz),\quad z'=\gamma(z-vt),\quad\Psi'(x')=\Psi(x)
\end{eqnarray}
for generic scalars $\Psi$.
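Because $A$, $B$ and $\phi$ are scalars, boosted initial data follows by evaluating the static profiles at the boosted argument: at $t'=0$ one has $z=\gamma z'$, so $\Psi'(0,z')=\Psi(\gamma z')$ and $\ensuremath{\partial}_{t'}\Psi'|_{t'=0}=\gamma v\,\ensuremath{\partial}_z\Psi(\gamma z')$. The sketch below illustrates this with a generic kink-like profile (our illustrative functions; in practice the static solutions, given in terms of $r$, must first be re-expressed in the $z$ co-ordinate via $e^{A}dz=\pm dr$):

```python
import numpy as np

# Boosted initial data for a static Lorentz-scalar profile Psi(z):
# with t' = gamma*(t - v z), z' = gamma*(z - v t), at t' = 0 one has z = gamma*z',
# so Psi'(0, z') = Psi(gamma*z') and dPsi'/dt'|_{t'=0} = gamma*v*dPsi/dz(gamma*z').
def boosted_initial_data(profile, dprofile, zp, v):
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    psi = profile(gamma * zp)                 # Lorentz-contracted profile
    psi_t = gamma * v * dprofile(gamma * zp)  # time derivative of the moving kink
    return psi, psi_t

# illustrative static kink and its z-derivative (dimensionless units)
profile  = lambda z: 2.0 * np.arctan(np.tanh(z / 2.0))
dprofile = lambda z: 1.0 / np.cosh(z)

zp = np.linspace(-10.0, 10.0, 401)
psi, psi_t = boosted_initial_data(profile, dprofile, zp, v=0.5)
```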
Now, because we are interested in studying the global structure of the system, and that system has a singularity, it is actually more convenient to use a double-null co-ordinate system \cite{Takamizu:2007ks,Burko:1997tb} given by
\begin{eqnarray}
u&=&\frac{1}{\sqrt 2}(t-z),\quad v=\frac{1}{\sqrt 2}(t+z)
\end{eqnarray}
because then the null geodesics are simply $45^\circ$ lines, and the causal structure is easy to picture. The precise details of the numerical method we use may be found in \cite{Burko:1997tb}, but for another approach see \cite{Martin:2003yh,Frolov:2004rz}.
\section{simulations}
\label{sec:sims}
Having described the system we now move on and give an overview of the collisions that lead to a singularity. To orient ourselves we start with Fig. \ref{fig:phi}, which shows the evolution of the scalar field, where we only simulate the region $z\geq0$, as the $z<0$ region follows by symmetry. Recall that the vacua are $\phi=0,\;2\pi$, and that the $\phi=2\pi$ vacuum corresponds to AdS$^5$, and the $\phi=0$ vacuum is Minkowski. Fig. \ref{fig:phi} therefore shows two walls coming together, interacting, and then moving apart.
\begin{figure}
\centering
\includegraphics[width=7cm]{phiZoom}
\caption{\label{fig:phi}Evolution of the scalar field; $\phi=0$ corresponds to the Minkowski minima, and $\phi=2\pi$ to the AdS minima. The upper boundary of the solid-shaded and the white region marks the location of the curvature singularity.}
\end{figure}
In the example shown we found that a curvature singularity was forming, and so we cut off the evolution when the curvature became too large. A benefit of the double-null co-ordinates is that one can carry on simulating and map out the region where the curvature gets cut off; in Fig. \ref{fig:phi} this region is given by the upper boundary of the solid-shaded region.
\section{Horizon structure}
\label{sec:horizon}
Given that a singularity has formed it is natural to ask about the horizon structure of the spacetime, and for this we need to know about the behaviour of null geodesics. It is clear from Fig. \ref{fig:phi} that there is a region inside which timelike geodesics are doomed to end on the singularity, and so we may expect a horizon. However, given the dynamic nature of the system it is actually more convenient to work with objects that have a local definition, namely trapping surfaces. Hayward \cite{Hayward:1993wb} defined trapping surfaces in terms of the expansion of outgoing and ingoing null geodesics, which may be measured without reference to the global properties of the geometry. We start with the co-ordinate vectors $N_+=\ensuremath{\partial}_u$, $N_-=\ensuremath{\partial}_v$ which are ingoing and outgoing respectively (for $z>0$), and introduce their dual one-forms $n_+=-e^{2A}dv$, $n_-=-e^{2A}du$. Normalized outgoing and ingoing null vectors are then defined by $u_\pm=e^{-2A}N_\mp$ such that $n_\pm(u_\pm)=-1$, and an induced three-metric, $h$, is given by $h=g+e^{-2A}n_+\otimes n_-+e^{-2A}n_-\otimes n_+$, where $g$ is the full metric. The expansions are then defined as
\begin{eqnarray}
\label{eq:expansions}
\Theta_{\pm}&=&\frac{1}{2}h^{ab}{\cal L}_\pm h_{ab}
\end{eqnarray}
where the Lie derivatives ${\cal L}_\pm$ are taken along $u_\pm$.
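For the metric used here the induced three-metric is $h_{ij}=e^{2B}\delta_{ij}$, so the definition of $\Theta_\pm$ reduces to $\Theta_+=3e^{-2A}\partial_v B$ and $\Theta_-=3e^{-2A}\partial_u B$. A minimal finite-difference sketch of this evaluation on a $(u,v)$ grid follows (ours, not the production code; conventions should be checked against the simulation):

```python
import numpy as np

# Sketch (ours): expansions on a double-null (u, v) grid for the metric
# ds^2 = e^{2A}(-dt^2+dz^2) + e^{2B} delta_ij dx^i dx^j. With h_ij = e^{2B} delta_ij
# and u_+ = e^{-2A} d/dv, u_- = e^{-2A} d/du, the expansions reduce to
# Theta_+ = 3 e^{-2A} dB/dv and Theta_- = 3 e^{-2A} dB/du.
def expansions(A, B, du, dv):
    dB_du = np.gradient(B, du, axis=0)   # axis 0 indexes u
    dB_dv = np.gradient(B, dv, axis=1)   # axis 1 indexes v
    theta_plus = 3.0 * np.exp(-2.0 * A) * dB_dv
    theta_minus = 3.0 * np.exp(-2.0 * A) * dB_du
    return theta_plus, theta_minus

# sanity check: flat space (A = B = 0) has vanishing expansions
u = np.linspace(0.0, 1.0, 101)
v = np.linspace(0.0, 1.0, 101)
A = np.zeros((u.size, v.size))
B = np.zeros_like(A)
tp, tm = expansions(A, B, u[1] - u[0], v[1] - v[0])
print(np.max(np.abs(tp)), np.max(np.abs(tm)))   # both 0
```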
A {\it marginal surface} is then a surface where one of the expansions vanishes, say $\Theta_-$. A marginal surface for us is a three-surface, and will be a single point on the $u-v$ plane diagrams such as Fig. \ref{fig:phi}. A trapping horizon is then the four-surface found by sticking together all these marginal surfaces; for us, they correspond to a line on the $u-v$ plane. Having found the trapping surface we can characterize it according to the sign of $\Theta_+$ (the trapping horizon is {\it future} if $\Theta_+<0$ and {\it past} if $\Theta_+>0$), and the sign of ${\cal L}_+\Theta_-$ (the trapping horizon is {\it outer} if ${\cal L}_+\Theta_-<0$ and {\it inner} if ${\cal L}_+\Theta_->0$). In Figs. \ref{fig:thetaMinus} and \ref{fig:thetaPlus} we show the expansions, where we clearly see a trapping horizon that separates regions where $\Theta_-$ changes sign; moreover, we see from Fig. \ref{fig:thetaPlus} that along this curve $\Theta_+<0$, making it a future trapping horizon, i.e. once you pass this surface your future is determined: you hit the singularity. To see whether the trapping surface is inner or outer we evaluate ${\cal L}_+\Theta_-$, i.e. just see whether $\Theta_-$ increases or decreases along $\ensuremath{\partial}_v$; it increases. That $\Theta_+<0$ and ${\cal L}_+\Theta_->0$ makes the trapping horizon a future inner horizon, which is the same type as one finds in a cosmological big crunch, as opposed to black hole trapping horizons which are future-outer.
\begin{figure}
\centering
\includegraphics[width=7cm]{thetaMinusZoom}
\caption{\label{fig:thetaMinus}The expansion scalar $\Theta_-$, see (\ref{eq:expansions}).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{thetaPlusZoom}
\caption{\label{fig:thetaPlus}The expansion scalar $\Theta_+$, see (\ref{eq:expansions}).}
\end{figure}
Another signature of future inner trapping horizons is that its area is non-increasing, which we confirm by measuring the value of $B$ along the trapping horizon in Fig. \ref{fig:B}.
To really check the claim that what we have is a big crunch, with no asymptotic region, we should examine how the singularity behaves in the large-$v$ region. This is clearly a challenging task numerically, but what we can show is the location of level surfaces of the Ricci scalar, with the aim of showing that the singularity cuts across any putative asymptotic region. In Fig. \ref{fig:ricci} we give the location of one level set (in this example it is $R=-5$) and we see that it is consistent with the line hitting $u=0$, albeit rather slowly in these co-ordinates. If this behaviour is repeated for larger values of the Ricci scalar, in particular the singular value, we see that the spacelike singularity cuts off the asymptotic region, ending the spacetime in a big crunch.
\begin{figure}
\centering
\includegraphics[width=7cm]{Bhor}
\caption{\label{fig:B}The value of the metric parameter $B$ as measured on the trapping horizon.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm]{RicciLevelSet}
\caption{\label{fig:ricci}The curved line is the location of the Ricci level set $R=-5$, and the straight line is a fit of the form $1/v^{5.7\times10^{-3}}$.}
\end{figure}
\section{conclusions}
\label{sec:concs}
We have re-considered the analysis of Takamizu {\it et al} \cite{Takamizu:2007ks} with the aim of understanding the global structure of domain wall collisions that form a curvature singularity. By examining the asymptotic regions of domain walls and black branes, and by measuring the behaviour of null rays in the dynamical geometry, we conclude that the horizon structure is more consistent with that of a big-crunch, rather than a black-brane, end-state. Moreover, by following the location of a level set of the Ricci scalar we find tentative agreement with this picture. The rather slow fall-off makes it difficult to track the level sets to sufficient distance in $v$ using these co-ordinates, before numerical error becomes a problem. This can be compared to a prediction of \cite{Chamblin:1999cj}, which is that the AdS$^5$ Cauchy horizon generically gets replaced by a pp singularity when the AdS region is perturbed. Here we claim that in the cases where the collisions form a curvature singularity, that curvature singularity closes off the geometry and no pp singularity would form. However, in less violent cases where no curvature singularity is observed, the Cauchy horizon could still be expected to form a pp singularity.
\begin{acknowledgments}
The authors would like to acknowledge support from STFC.
\end{acknowledgments}
\section{Introduction}
\label{intro}
Helical magnetic flux ropes (MFRs) are conventionally related to filaments (or prominences), which are one of the fundamental structures in the standard solar flare (or CSHKP) model \citep{Carmichael1964,Sturrock1966,Hirayama1974,KP1976}. Erupting MFRs and filaments are sometimes observed to experience a rotation as they rise \citep [e.g.,][]{Vrsnak1980,Kuro1987,Zhou2006,Green2007,Muglach2009,Bemporad2011,Su2013,Yan2013}. Some tornado-like rotational movements are also detected in filaments with high-resolution observations \citep [e.g.,][]{Li2012,Su2012,Wedemeyer2013,Panesar2013,Su2014}. These rotations are usually interpreted as a supply of twist into MFRs or a transformation of twist into writhe of MFRs. Note that twist is an inherent property of MFRs, which is strongly linked to the magnetic free energy and filament eruptions \citep[e.g.,][]{Prior2020ar,MacTaggart2020}.
In a manner of speaking, one of the key processes determining the stability and eruption behaviors of MFRs is untwisting or unwinding. The untwisting motion together with the mass flow that reveals a twisted structure was commonly observed in many events as reported by \cite{McCauley2015}. This process is often closely associated with eruptions of filaments and flares, which are usually accompanied by coronal mass ejections (CMEs) \citep [e.g.,][] {Sakurai1976,Hood1992,Torok2005,Kliem2006,Dere2009}. There are also some reports about the untwisting process of erupting MFRs that are unrelated to CMEs \citep [e.g.,][] {Ji2003,Alexander2006,Yan2020a}. In a rough classification, the untwisting motions fall into symmetric \citep [e.g.,][] {Martin2003} and asymmetric \citep [e.g.,][] {Tripathi2006,Bi2013} types, depending on whether they erupt at the top or footpoint of the filaments \citep [e.g.,][] {McCauley2015}. These investigations indicate that the untwisting of MFRs plays a pivotal role in solar eruptions. However, how the MFRs are untwisted is still under discussion, and more observations are needed to explore this.
Some researchers have investigated the untwisting process of MFRs associated with solar jets \citep{Patsourakos2008,Nistic2009,Chen2012,Curdt2012,Morton2012,Shen2012,Lee2013,Liu2014,Zhang2014,Zhu2017}. \cite{Chen2017} also reported rapid rotating and spinning of magnetic field structures, which were triggered by an interaction between EUV jets and filaments. Moreover, 3D magnetohydrodynamic (MHD) models of jets \citep[e.g.,][]{Pariat2009,Pariat2010,Rachmeler2010,Pariat2015,Karpen2017} likewise produce helical and untwisting structures that are similar to the observed ones. In recent high-resolution observations, the untwisting of MFRs in filament eruptions has been reported \citep[e.g.,][] {Kumar2012,Li2013,Cheng2016,Duchlev2016,Xu2017,Chen2019,Yan2020b}. Some researchers also utilized the multi-viewpoint observations from the Solar Dynamics Observatory \citep[SDO;][]{Pesnell2012} and/or the Solar Terrestrial Relations Observatory \citep[STEREO;][]{Howard2008,Kaiser2008} to investigate the continuous evolution of MFRs \citep[e.g.,][]{Bemporad2011,Thompson2011,Joshi2011,Su2013,Zhou2017,Wang2019}. Nevertheless, they have rarely focused on the untwisting process along with the thermal properties of MFRs.
The thermal property of MFRs during the eruption can be studied via the differential emission measure (DEM) method \citep[e.g.,][]{Golub2004,Weber2004,HK2012,Cheung2015dem,Su2018} which is a powerful tool for extracting plasma parameters such as emission measure (EM), DEM- or EM-weighted temperature, and electron density. For example, \cite{cheng2012} estimated the temperature and density of the multi-structure components of CMEs (or MFRs) using the DEM method. Their results show that the core regions of CME are dramatically heated, presumably via magnetic reconnection, and the DEM-weighted temperature of the MFR centroid increases from $\sim$8.0 MK to $\sim$10.0 MK during the eruption. \cite{Krucker2014} deduced the DEM of the above-the-loop-top source that shows both cold and hot components. They claimed that the hot component is most likely connected with an M7.7 flare.
In this paper, we investigate two homologous filament eruptions associated with flares as well as two successive MFRs with multi-wavelength and dual-perspective imaging observations from SDO and STEREO-A. The two MFRs exhibit a similar morphological evolution in the field of view (FOV) of STEREO-A but a different one in the FOV of SDO. These comprehensive observations will give us a better understanding of the morphological evolution as well as the kinematic and thermal properties of the two MFRs in their erupting processes. This paper is organized as follows. In Section \ref{data}, we describe the observational data and method. The analysis and results are shown in Section \ref{res}. Section \ref{sum} gives the summary and discussions.
\section{Observational Data and Method}
\label{data}
The two homologous MFRs from NOAA active region 11515 \citep{Louis2014,Wangya2018} and their related eruptions of filaments and flares were simultaneously observed by SDO and STEREO-A that had a separation angle of $\sim$119.7\degree\ during 2012 July 8--9. The imaging data from the Atmospheric Imaging Assembly \citep[AIA;][]{Lemen2012} on board SDO and the Extreme Ultraviolet Imager \citep[EUVI;][]{Wuelser2004} on board STEREO-A are used in this study. AIA provides full-disk EUV and UV images in multiple channels with a high spatial resolution of 0.6\arcsec\ pixel$^{-1}$ and a high temporal resolution of 12 s or 24 s. The multiple-wavelength images from AIA are sensitive to temperatures ranging from 0.05--20 MK. Here we mainly use the images from one UV channel centered at 1600 \AA\ (\ion{C}{4}, $\sim$0.1 MK) and seven EUV channels centered at 94 \AA\ (\ion{Fe}{18}, $\sim$7 MK), 131 \AA\ (\ion{Fe}{8}, $\sim$0.4 MK; \ion{Fe}{21}, $\sim$11 MK), 171 \AA\ (\ion{Fe}{9}, $\sim$0.6 MK), 193 \AA\ (\ion{Fe}{12}, $\sim$1.3 MK; \ion{Fe}{24}, $\sim$20 MK), 211 \AA\ (\ion{Fe}{14}, $\sim$2 MK), 304 \AA\ (\ion{He}{2}, $\sim$0.05 MK), and 335 \AA\ (\ion{Fe}{16}, $\sim$2.5 MK). The 1.5 level data of AIA are analyzed. EUVI obtains full-disk EUV images in four channels (sensitive to plasmas at 0.1--20 MK) with a spatial resolution of $\sim$1.6\arcsec\ and a temporal resolution of 3--6 minutes. Here we only use the 304 \AA\ (\ion{He}{2}, $\sim$0.05 MK) and 195 \AA\ (\ion{Fe}{12}, $\sim$1.3 MK; \ion{Fe}{24}, $\sim$20 MK) images from EUVI to show the evolution of two eruption events.
Using the six AIA EUV channels (excluding the one at 304 \AA), we employ the DEM method as introduced in \cite{Su2018} to diagnose the thermal property of the two MFRs. This method is developed based on the one from \cite{Cheung2015dem} and can well constrain the DEMs at high temperatures by using AIA data only. We have binned the imaging data by 2 $\times$ 2 pixels to improve the signal-to-noise ratio when constructing the EM and EM-weighted temperature maps.
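For illustration, the $2\times2$ rebinning step can be sketched as a plain block summation; this is a toy pure-Python sketch (the actual analysis operates on AIA image arrays, and the function name and sample image below are ours):

```python
def bin2x2(img):
    """Sum each 2x2 pixel block to raise the signal-to-noise ratio
    before the DEM inversion (assumes even image dimensions)."""
    h, w = len(img), len(img[0])
    return [[img[2 * r][2 * c] + img[2 * r][2 * c + 1]
             + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]
             for c in range(w // 2)] for r in range(h // 2)]

# A 4x4 toy image rebins to 2x2; each output pixel sums four inputs.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(bin2x2(img))  # [[14, 22], [46, 54]]
```

Summing (rather than averaging) quadruples the counts per output pixel, which improves the photon statistics entering the DEM inversion at the cost of halved spatial resolution.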
For the temperature $T$ (in units of K), we adopt a bin width of 0.05 in logarithmic scale. The EM (in units of cm$^{-5}$) is defined by
\begin{equation}
EM= \int DEM(T)\,dT= \int n_{e}n_{H}\,dl\propto n_{e}^{2},
\end{equation}
where $n_{e}$ and $n_{H}$ are the number densities of electrons and hydrogen, respectively, and $l$ is the path length along the line of sight (LOS).
The DEM and EM-weighted temperature are defined by
\begin{equation}
DEM_{i} =\frac{EM_{i}}{\Delta T_{i}}=\frac{EM_{i}}{T_{i}\ln 10\,\Delta \log T}
\end{equation}
and
\begin{equation}
\bar{T}=\frac{\sum_{i} EM_{i}T_{i}}{\sum_{i} EM_{i}},
\end{equation}
where $i$ denotes the $i$th bin of $\log T$ (see also the formulas in \citealt{sun2014,Su2018,Xue2020}).
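As a numerical illustration of the three formulas above, the sketch below evaluates the EM and the EM-weighted temperature on a $\log T$ grid with the 0.05 dex bin adopted here; the DEM profile is a toy Gaussian of our own choosing, not a measured distribution:

```python
import math

# Toy DEM (cm^-5 K^-1) on a log T grid with bin width 0.05 dex;
# the Gaussian peak at log T = 6.2 and its amplitude are illustrative only.
dlogT = 0.05
logT = [5.6 + dlogT * i for i in range(30)]      # log10(T/K) bin centers
T = [10.0 ** lt for lt in logT]
dem = [1e21 * math.exp(-((lt - 6.2) / 0.2) ** 2) for lt in logT]

# EM_i = DEM_i * dT_i, with dT_i = T_i * ln(10) * dlogT (inverting the DEM_i formula)
em_bins = [d * t * math.log(10.0) * dlogT for d, t in zip(dem, T)]
em_total = sum(em_bins)                          # total EM (cm^-5)
t_bar = sum(e * t for e, t in zip(em_bins, T)) / em_total  # EM-weighted T (K)

print(f"EM = {em_total:.3e} cm^-5, T_bar = {t_bar / 1e6:.2f} MK")
```

For this toy profile the EM-weighted temperature comes out slightly above the temperature of the DEM peak, since hotter bins carry more emission-measure weight.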
\section{Analysis and Results}
\label{res}
\subsection{Overview of the Two Homologous Events}
\label{overview}
Figure \ref{f1} gives an overview of the two homologous eruption events. On 2012 July 8--9, NOAA 11515 was located near the southwest limb from the perspective of SDO (i.e., side view; Figures \ref{f1}(c) and (f)) but appeared on the solar disk in the FOV of STEREO-A (face view; Figures \ref{f1}(a), (b), (d), and (e)), which enables us to study the two events from dual perspectives. From the EUVI 304 \AA\ images (Figures \ref{f1}(a) and (d)), we can see that two filaments (referred to as F1 for the filament on July 8 and F2 for the one on July 9, indicated by the white dashed curves) show up at an early time. Tens of minutes later, the two filaments erupt (see the accompanying animation of Figure \ref{f1}), accompanied by two flares, an M6.9 one and a C6.0 one, respectively (as revealed by the EUVI 195 \AA\ and AIA 131 \AA\ images in Figures \ref{f1}(b), (c), (e), and (f) and also by the GOES soft X-ray light curves in Figures \ref{f1}(g) and (h)). In the meantime, two helical structures, i.e., two twisted MFRs (called MFR-1 and MFR-2 hereafter), appear and exhibit eruption and expansion motions, as clearly shown in Figures \ref{f2} and \ref{f6}, respectively (also see the accompanying animations). The two MFRs also show untwisting motions that last for a few tens of minutes (Figures \ref{f3} and \ref{f7}). During the erupting and untwisting processes, some of the MFR plasmas are heated to high temperatures (Figures \ref{f4}, \ref{f5}, \ref{f8}, and \ref{f9}). At a late stage, some materials fall back along the magnetic structures. It should be noted that the two homologous MFRs appear in a similar magnetic environment, e.g., both are wrapped by magnetic field lines with a large spiral arm (marked by the magenta arrow in Figures \ref{f1}(b) and (e)). 
They also share many behaviors, e.g., exhibiting helical motions and being closely associated with filament eruptions and flares, as shown in the EUVI images in Figure \ref{f1} and the accompanying animation. There are, however, some differences between them, which can be clearly seen in the AIA images, as described below.
\subsection{The MFR-1 on 2012 July 8}
\subsubsection{Morphological evolution}
\label{Morphological evolution}
The first eruption event took place on 2012 July 8. Figure \ref{f2} shows snapshots of its onset, development, and disintegration at EUVI 304 \AA\ as well as AIA 304, 1600, and 131 \AA\ from 16:16 UT to 17:36 UT. Before the eruption, there exists a reversed-C shape filament (F1), as indicated by the white dashed curve in Figure \ref{f2}(a1). At $\sim$16:24 UT, F1 erupts, displaying a reversed-$\gamma$ shape (indicated by the blue dashed curve in Figures \ref{f2}(b2)--(d2)) and brightening up, which implies a possible heating of its materials. Meanwhile, an M6.9 flare (denoted by the blue arrow in Figures \ref{f2}(a2)--(d2)) occurs near F1. Afterwards, F1 rises rapidly with an inclination to the north, showing a helical structure, i.e., MFR-1, as clearly revealed in the multi-temperature AIA images (Figures \ref{f2}(b3)--(d3) and \ref{f3}(b)). At this moment, a cluster of spiral arms (outlined by the dashed curve in Figures \ref{f2}(a3) and (a4)) forms in the face view of EUVI. In particular, a clockwise swirling motion appears in the northern part of MFR-1 in the side view of AIA (marked by a white box in Figure \ref{f2}(c3)), which is clearly shown in Figure \ref{f3}(b). Later on, MFR-1 stops its northward motion but expands and rotates counterclockwise into a higher altitude (Figures \ref{f2}(b4)--(d4)), which may be caused by an interaction with the wrapping magnetic fields. Simultaneously, the spiral arms become longer and more evident in the EUVI 304 \AA\ images (Figure \ref{f2}(a4)). When MFR-1 is erupting, expanding, swirling or rotating (i.e., untwisting), and disintegrating, its materials are injected into the upper atmosphere (Figures \ref{f2}(a5)--(d5) and (a6)--(d6)). The untwisting motion of MFR-1 lasts for about 20 minutes. At a late stage, some materials fall down along helical trajectories (see the accompanying animation of Figure \ref{f2}).
\subsubsection{Kinematic motions}
In order to study the kinematic motions of MFR-1 from two viewpoints, we cut three slices (S1, S2, and S3 as indicated in Figures \ref{f2}(a4) and (b4)) in AIA and EUVI images to track the temporal evolution of MFR-1. We also utilize the Fourier local correlation tracking (FLCT) method \citep[]{Welsch2004,Fisher2008} to study the swirling or untwisting motion of MFR-1 using AIA images.
Figure \ref{f3}(a) shows the time-slice map along S1, which is located at a lower part of MFR-1 in the side view. It is seen that MFR-1 moves towards the north with a speed of about 123 km s$^{-1}$ at $\sim$16:28 UT, which indicates that MFR-1 is erupting. After that, materials are intermittently ejected into the atmosphere from the northern footpoint of MFR-1 (see the yellow dashed lines). At the northernmost part of MFR-1, the materials show some clockwise swirling motions, which can be clearly seen from the FLCT map in Figure \ref{f3}(b). The swirling speeds range from a few tens to more than one hundred km s$^{-1}$, with an average of about 55 km s$^{-1}$ at $\sim$16:40 UT. At $\sim$16:44 UT, some materials are injected into the upper atmosphere with a typical speed of $\sim$100 km s$^{-1}$, as seen from the FLCT map in Figure \ref{f3}(d) (marked by the yellow box). Besides the swirling motions at the northernmost part of MFR-1, some similar but counterclockwise rotations can be seen in the main body of MFR-1 afterwards. The time-slice map along S2 in Figure \ref{f3}(c) displays a swaying pattern (marked by some red dashed curves), indicating that MFR-1 is rotating about its axis. One can also see that the width of MFR-1 increases at a speed of 12--18 km s$^{-1}$ (see the two white arrows), which demonstrates that MFR-1 is expanding. The swirling, rotating, and expanding motions suggest that MFR-1 is untwisting. From the disk view of EUVI (Figure \ref{f3}(e)), we can see the eruption and expansion of MFR-1 as well. Starting from $\sim$16:25 UT, MFR-1 undergoes a rapid motion towards the north with a speed of about 108 km s$^{-1}$. At a later time ($\sim$16:40 UT), one can instead notice a slow motion with a speed of about 35 km s$^{-1}$. Note that these speeds are measured in the plane of the sky, and the real speeds should be somewhat larger due to the projection effect. 
The rapid motion at an earlier time is supposed to correspond to the eruption stage of MFR-1, and the slow motion after that basically corresponds to its expansion and untwisting stage. The fast-eruption stage lasts for about 10 minutes and the slow-untwisting stage lasts for a longer time (about 20 minutes).
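The plane-of-sky speeds quoted above are obtained from the slopes of moving features in the time-slice maps. A minimal sketch of such a slope measurement is given below; the positions are synthetic, and the $\sim$725 km per arcsec conversion (for a source near 1 AU) is our assumption:

```python
# Fit a straight line to the feature position along the slit versus time;
# the slope, converted from arcsec/s to km/s, is the plane-of-sky speed.
ARCSEC_KM = 725.0  # approximate km per arcsec at 1 AU (assumed conversion)

times_s = [0, 60, 120, 180, 240]             # seconds from eruption onset
pos_arcsec = [0.0, 10.2, 20.1, 30.3, 40.0]   # synthetic positions along slit

n = len(times_s)
tm = sum(times_s) / n
pm = sum(pos_arcsec) / n
slope = sum((t - tm) * (p - pm) for t, p in zip(times_s, pos_arcsec)) \
    / sum((t - tm) ** 2 for t in times_s)    # least-squares slope, arcsec/s
speed_kms = slope * ARCSEC_KM

print(f"plane-of-sky speed ~ {speed_kms:.0f} km/s")
```

As noted above, such a value is only a lower bound on the true speed because of the projection effect.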
\subsubsection{Thermal Property}
To investigate the thermal property of MFR-1 during its eruption and untwisting processes, we visually track some ejected materials and plot the DEM distributions together with the EM and temperature evolutions in Figure \ref{f4}. From the AIA 131 \AA\ images (top panels), we can see that the tracked materials (enclosed by the red box) move northwards during $\sim$16:30--16:35 UT (Figures \ref{f4}(a1)--(a3)). When MFR-1 is obviously expanding and untwisting (after $\sim$16:40 UT, Figure \ref{f4}(a4)), these materials are fragmented and finally become hard to track (see Figure \ref{f4}(a5)). The DEM distributions for the red box region are shown in the middle panels of Figure \ref{f4} (see the red curves). For comparison, we also plot the DEM distributions for the same region but before the eruption ($\sim$25 minutes earlier, black curves), serving as a background or reference. Moreover, we give the DEM results (green curves) for a nearby quiet coronal region (marked by the green box) as another reference, which are actually similar to the background curves. It can be seen that, relative to the background or quiet coronal region, the MFR-1 plasmas mainly show DEM enhancements peaking at $\sim$0.4--0.5 MK (log $T\approx\,$5.6--5.7) and $\sim$7--8 MK (log $T\approx\,$6.8--6.9) throughout the evolution. Note that these ejected plasmas also exhibit a much hotter component at some particular times, say, a component peaking at $\sim$20 MK (log $T\approx\,7.3$) around 16:31 UT (see Figure \ref{f4}(b2)). These results indicate that the plasmas of MFR-1 mainly consist of a cold component and a hot component and that there is some localized heating during the dynamic evolution. The evolutions of the EM and EM-weighted temperature in the red box region are plotted in the bottom panel of Figure \ref{f4} (see the solid curves with error bars). 
One can see that both the EM and the temperature first increase and then decrease, finally returning to the background level (dashed curves). This also demonstrates that the MFR-1 plasmas are heated during the evolution, especially in the early period. Note that the temperature reaches its maximum ($\sim$7 MK) a little later than the EM does, which might suggest that the plasmas are heated locally or in situ.
In addition, we show the spatial maps of the EM and EM-weighted temperature during the untwisting process of MFR-1 in Figure \ref{f5}. From $\sim$16:35--16:40 UT, MFR-1 stops its northward motion but exhibits some swirling motions, as seen from the AIA 131 \AA\ images in the top panels of Figure \ref{f5}. The EM maps in the middle panels show structures similar to those in the AIA 131 \AA\ images. This can be expected, since both the EM and the EUV emissions are closely related to the electron density. The temperature maps overlaid with contours at 4.0 and 3.5 MK are given in the bottom panels of Figure \ref{f5}. The temperature contours are also overplotted on the AIA 131 \AA\ and EM maps. It is interesting to see that hot emissions mainly show up at the northern edge (denoted by the magenta arrow). Some hot emissions can also be found in the center of or within MFR-1 (see the black arrow). These hot plasmas may be heated when MFR-1 interacts with the ambient magnetic fields, say, via magnetic reconnection. Magnetic reconnection could also occur within MFR-1 and heat the plasmas therein. Another possibility is that the plasmas are compressed and thus heated when they are swirling or interacting with the ambient magnetic structures.
\subsection{The MFR-2 on 2012 July 9}
\subsubsection{Morphological evolution}
The second eruption event took place on 2012 July 9, about 12 hours after the first event. Figure \ref{f6} shows snapshots of its onset, development, and disintegration at EUVI 304 \AA\ as well as AIA 304, 1600, and 131 \AA\ from 05:16 UT to 06:46 UT. Similar to the first event, a reversed-C shape filament (F2) pre-exists in the active region, which is supposed to be re-formed after F1 erupts. At $\sim$05:25 UT, F2 starts to erupt and brighten up. It is related to a C6.0 flare, as denoted by the blue arrow in Figures \ref{f6}(a2) and (d2). After its eruption, F2 moves towards the north (Figures \ref{f6}(a3)--(d3)). During this process, F2 begins to expand and displays a clear helical structure, or a $\gamma$-like shape, i.e., MFR-2 (see Figures \ref{f6}(b4)--(d4)). A few minutes later, we can see a cluster of spiral arms (outlined by the dashed curve in Figure \ref{f6}(a5)) showing up in the face view of the EUVI images. From the side view of the AIA images, it is seen that MFR-2 then stops its northward motion and exhibits a tornado-like structure that swirls or rotates clockwise (see the accompanying animation of Figure \ref{f6} and Figure \ref{f7}(b)). This suggests that MFR-2 is untwisting. In the following, MFR-2 continues its clockwise rotation or untwisting motion, with some of its plasmas ejected to the upper atmosphere (indicated by the yellow arrows in Figures \ref{f6}(a6) and (b6)). The untwisting motion of MFR-2 lasts for about 20 minutes. At a late stage, some materials fall down along the magnetic structures (see the accompanying animation). It should be pointed out that all of these dynamic behaviors are very similar to those of the first eruption event, except that MFR-2 does not change its rotating direction from clockwise to counterclockwise before and after the plasma ejection.
\subsubsection{Kinematic motions}
Similarly, we cut three slices (S4, S5, and S6, as indicated in Figures \ref{f6}(a5) and (b5)) in the AIA and EUVI images to study the kinematic motions of MFR-2 from two perspectives. We also utilize the FLCT method to study the swirling or untwisting motion of MFR-2 using AIA images.
Figure \ref{f7}(a) shows the time-slice map along S4, which is located at the bottom part of the tornado-like structure of MFR-2 in the side view. We can see that MFR-2 erupts towards the north with a speed of about 62 km s$^{-1}$ at $\sim$05:44 UT. Later, it stops its northward motion and mainly shows a swirling motion, as clearly seen from the FLCT map in Figure \ref{f7}(b). This may be caused by an interaction with the surrounding magnetic fields, similar to the case of MFR-1. The swirling motion of MFR-2 is clockwise, with an average speed of about 40 km s$^{-1}$ at 06:04 UT. From the time-slice map along S5 in Figure \ref{f7}(c), it is seen that after $\sim$06:00 UT, MFR-2 continues rotating clockwise but expands at a speed of about 12--28 km s$^{-1}$. At $\sim$06:10 UT, the plasmas in MFR-2 are ejected to the upper atmosphere with a typical speed of a few tens of km s$^{-1}$ (see the yellow box in Figure \ref{f7}(d)). The eruption and expansion (untwisting) are also reflected in the disk view of the EUVI images. From the time-slice map along S6 in Figure \ref{f7}(e), one can see a fast eruption of MFR-2 with a speed of about 53 km s$^{-1}$ before $\sim$06:00 UT. After that, the speed changes to $\sim$12 km s$^{-1}$ when MFR-2 untwists and ejects plasmas to a higher altitude. Note that all of these speeds of MFR-2 are smaller than the corresponding ones of MFR-1. Both the fast-eruption stage and the slow-expansion stage of MFR-2 last for about 15 minutes. After $\sim$06:15 UT, MFR-2 disintegrates and some ejected materials fall down along the magnetic structures.
\subsubsection{Thermal Property}
To study the thermal property of MFR-2, particularly during its untwisting process, we plot the DEM distribution as well as the EM and EM-weighted temperature maps in Figures \ref{f8} and \ref{f9}. Firstly, we select a region (marked by the red box in Figure \ref{f8}(a)) at the northern edge of MFR-2 at $\sim$06:04 UT and show its DEM distribution (red curve) in Figure \ref{f8}(b). For comparison, we also plot the DEM curve (black) for the red box but before the eruption ($\sim$30 minutes earlier), serving as a background or reference. Moreover, we provide the DEM curve (green) from a nearby quiet coronal region (denoted by the green box in Figure \ref{f8}(a)) as a second reference. One can see that, compared with the background and quiet coronal region, the DEM distribution in the red box has two main components peaking at $\sim$0.6 MK (log $T\approx\,$5.8) and $\sim$7 MK (log $T\approx\,$6.8), i.e., a cold component plus a hot component. This is similar to the result for MFR-1, though the temperature of the hot component is slightly lower. Secondly, we show the evolutions of the EM and temperature along S5 in Figures \ref{f8}(c) and (d), respectively. It is seen that the EM diagram basically exhibits the expansion (or untwisting) motion of MFR-2, similar to that shown in the AIA 1600 \AA\ images in Figure \ref{f7}(c). Interestingly, the temperature feature does not quite match the EM feature; say, a relatively high temperature (\textgreater2.0 MK) appears fragmentarily in the center of MFR-2 where the EM is not so high (see the red contours overlaid on both the temperature and EM diagrams). We further plot the spatial maps of the EM and temperature in Figure \ref{f9}, both of which are overlaid with temperature contours at levels of 3.5 and 2.0 MK. One can see that during the untwisting process, relatively high temperatures show up mainly in the bottom part of MFR-2. 
In particular, the high temperature region extends to the center of MFR-2 (marked by the black arrow) where EM is relatively low. All these results suggest that some heating processes are working within MFR-2, probably due to internal magnetic reconnection or plasma compression.
\section{Summary and Discussions}
\label{sum}
In this paper, we study two homologous MFRs in NOAA 11515 using multi-waveband and dual-perspective imaging observations from SDO/AIA and STEREO-A/EUVI. The morphological evolution, kinematic motions, and thermal property of the two MFRs are analyzed. The observational features of the two MFRs are summarized in Table \ref{tab1}. Our main findings are as follows. (1) Both MFRs show up in multi-wavelength passbands, and their DEM distributions mainly consist of a cold component peaking at $\sim$0.4--0.6 MK and a hot component peaking at $\sim$7--8 MK. (2) The two MFRs exhibit erupting, expanding, and untwisting motions, and their evolution can be divided into two stages: a fast-eruption stage with speeds of a few tens to hundreds of km s$^{-1}$ and a slow-expansion (or untwisting) stage with speeds of several tens of km s$^{-1}$. (3) During the two-stage evolution, hot plasmas show up at the edge and in the center of the two MFRs, indicating that some local heating takes place there via magnetic reconnection and/or plasma compression.
\begin{table}[htb]
\begin{center}
\small
\caption{Summary of the observational features of the two homologous MFRs}
\label{tab1}
\begin{tabular}{lcc}
\hline
\hline
MFRs & MFR-1 & MFR-2 \\
\hline
Observing Dates & 2012-07-08 & 2012-07-09 \\
Associated Filaments & F1 (pre-existing) & F2 (pre-existing) \\
Related Flares & M6.9 & C6.0 \\
Erupting Motions$^*$ & northward (asymmetric) & northward (asymmetric) \\
& 123 km s$^{-1}$ (AIA) & 62 km s$^{-1}$ (AIA) \\
& 108 km s$^{-1}$ (EUVI) & 53 km s$^{-1}$ (EUVI) \\
Expanding Motions$^*$ &12--18 km s$^{-1}$ (AIA) & 12--28 km s$^{-1}$ (AIA) \\
& 35 km s$^{-1}$ (EUVI) & 12 km s$^{-1}$ (EUVI) \\
Swirling/Untwisting & $\sim$55 km s$^{-1}$ (AIA) & $\sim$40 km s$^{-1}$ (AIA) \\
Motions$^*$ & clockwise first & clockwise first \\
& then counterclockwise & then still clockwise \\
Plasma Emission & multi-temperature & multi-temperature \\
& $\sim$0.4--0.5 MK \& $\sim$7--8 MK & $\sim$0.6 MK \& $\sim$7 MK \\
High Temperature & at the northern edge & at the bottom edge \\
Features & and in the center & and in the center \\
\hline
Additional Comments & \multicolumn{2}{c}{in a similar magnetic environment before eruption} \\
& \multicolumn{2}{c}{with plasmas ejected during eruption and falling back at late stage} \\
\hline
\hline
\end{tabular}
\end{center}
$^*$ All the speeds are measured in the plane of the sky.
\end{table}
\subsection{General Remarks on the Observational Features of the Two MFRs}
The two homologous MFRs show some similarities as well as differences in their observational features, as seen from Table \ref{tab1}. More specifically, both MFRs are associated with filaments and flares and show erupting, expanding, and untwisting motions, though the flare magnitudes and the speeds of the motions are somewhat different. The morphological evolutions of the two MFRs are almost identical when viewed from the EUVI perspective. In addition, the DEMs of both MFRs consist of a cold component plus a hot component, with the high-temperature features mainly appearing in the interface region between the MFRs (their northern or bottom edge) and the ambient magnetic structures as well as in the center of the MFRs. We ascribe these similarities to the similar magnetic environment in which the two MFRs are rooted, as manifested by the similar reversed-C shape filaments and especially by the surrounding magnetic fields with a large spiral arm before the eruptions. On the other hand, some noticeable differences, including the morphology of the two MFRs and the directions of rotation before and after the plasma ejection, can be clearly seen in the AIA images. These differences can only be distinguished with multi-viewpoint observations, which are thus necessary to fully understand the eruption process of MFRs.
\subsection{Roles of the Ambient Magnetic Fields in MFR Eruptions}
In both events, the MFRs move northward (i.e., asymmetric eruptions) at an early stage. During this process, the MFRs are supposed to interact with the ambient magnetic fields, especially on the north side, based on the following signatures. (1) Before the eruptions, some large-scale helical structures (as background magnetic fields) are clearly seen in the EUVI 195 \AA\ images. (2) During the eruptions, the MFRs stop moving towards the north, probably constrained by the ambient magnetic fields. (3) MFR-1 changes its rotating direction from clockwise to counterclockwise before and after the plasmas are ejected to the upper atmosphere along the helical trajectories. Note that MFR-2 does not change its rotating direction during the eruption. (4) The two MFRs show some heating at the interface region, i.e., at the northern or bottom edge, which is probably caused by magnetic reconnection between the MFRs and the ambient magnetic fields during the eruptions. In other words, the ambient magnetic fields play an important role in MFR eruptions, say, constraining the MFRs, changing the directions of MFR motions, increasing the twist of the MFRs, heating the MFRs, and transferring twist and materials from the MFRs to the ambient magnetic loops, as has been reported in previous studies \citep[e.g.,][]{Ji2003,Liu2007,Cohen2010,Bi2013,Yanglh2019,Yan2020b,Yan2020a}. It should be noted that in the two events under study, the strength of the interaction between the MFRs and the ambient magnetic fields is supposed to be different, probably depending on the magnetic field strength (or magnetic topology) as well as the erupting speeds of the MFRs. The interaction of MFR-1 with the ambient magnetic structures could be more intense than that of MFR-2, as MFR-1 has a larger erupting speed. 
A stronger constraining force from the ambient magnetic fields might also play a role in MFR-1, while in MFR-2 the constraining force could become weaker or the magnetic topology has somewhat changed after MFR-1 erupts. Therefore, MFR-1 changes its rotating direction during the eruption while MFR-2 does not.
\subsection{Eruption of the Two MFRs}
There are some interesting topics or questions related to the eruption of MFRs such as trigger mechanisms and eruption characteristics. Here we discuss some of them for the two homologous MFRs in the present study.
Firstly, what led the two MFRs to erupt, i.e., what were the trigger mechanisms? It has been widely accepted that magnetic reconnection, such as the tether-cutting and breakout types, as well as ideal MHD processes, including the kink or torus instabilities, can trigger solar eruptions \citep[e.g.,][]{Moore2001,Torok2005,Kliem2006,Aulanier2010,Shen2012breakout,Zuccarello2014}. In the two events studied here, we speculate that tether-cutting reconnection and the kink instability likely play roles in the MFR eruptions, according to the following observational features. Tens of minutes before the eruption, both filaments display separate curved structures, and some brightenings simultaneously appear nearby (as shown in Figures \ref{f10}(a1)--(a3) and (b1)--(b3)). We then clearly see a reversed-C shape filament, namely F1 or F2, just before the eruption (Figures \ref{f10}(a4) and (b4)). This may suggest that tether-cutting reconnection takes place, which can help form the filaments as well as push them into an unstable state (say, via increasing the twist). When the filaments start to erupt, they show a $\gamma$-like shape or a writhed structure, which may indicate that a kink instability is occurring. Note that we cannot rule out other trigger mechanisms here. Secondly, what is the role of the reversed-C shape filaments in the eruption? As mentioned above, when the reversed-C shape filaments are formed via tether-cutting reconnection, their twist can increase during the process. Once the twist increases to a certain value, the filament will writhe through the conversion of twist into writhe \citep[e.g.,][]{Kliem2010}. As a result, we see the $\gamma$-like structure in the eruption as well as a subsequent rotation of the MFR caused by the kink instability. The reversed-C shape of the filaments also determines the initial clockwise direction of the MFR rotation. Thirdly, why do the MFRs show a fast-eruption stage followed by a slow-expansion stage? 
This could be explained in two aspects. On the one hand, both MFRs experience a kink instability and show an asymmetric eruption toward the north. During this course, the MFRs can be accelerated, at least initially. As they move farther northward, the MFRs are constrained by the ambient magnetic fields, which can change their kinematic motions, say, causing them to enter a relaxation (or expansion) stage after the eruption phase. On the other hand, when the MFRs interact or reconnect with the ambient magnetic fields, their twist could increase further. When the twist of an MFR exceeds a certain threshold, the MFR would become unstable \citep[e.g.,][]{HP1981,Baty2001,Torok2005,Williams2005,Srivastava2010} and an untwisting process could happen \citep[e.g.,][]{Alexander2006,Li2015}. Therefore, the MFRs show a fast-eruption stage first and then a slow expansion/untwisting stage during the eruption.
\subsection{Untwisting of the Two MFRs}
The two MFRs studied here exhibit prominent untwisting motions, which could provide some information on their magnetic structures, such as the twist. It is generally accepted that MFRs consist of helical magnetic fields and that the toroidal current dominates the twist, which is related to the untwisting of MFRs. In practice, one can estimate the twist number of an MFR from its untwisting motion. Some previous studies have reported untwisting motions of MFRs and estimated their twist numbers using high-resolution observations. For example, \cite{Yan2014b} detected counterclockwise untwisting motions of an active region filament and derived a total twist of at least 5$\pi$ using the time-slice method. \cite{Li2015} also estimated the total twist (about 4$\pi$) of an MFR from its untwisting motion. For the two MFRs in the present study, the twist is estimated to be at least one turn (i.e., 2$\pi$) from the time-slice diagrams (see the red dashed curves in Figures \ref{f3}(c) and \ref{f7}(c), each of which represents half a turn, so that a pair from the same location gives one turn). In addition, from the accompanying animations we can see the MFRs rotate by more than one turn. Note that one turn is just a lower limit of the twist number for the two MFRs, and the real twist number could be much greater, say, reaching the strongly kink-unstable threshold of 5$\pi$ \citep{Kliem2012}. Unfortunately, we cannot obtain the twist number accurately with the time-slice method here, since the MFR emissions are somewhat weak during the untwisting process, especially at greater heights.
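As a rough consistency check on this lower limit, the twist can also be estimated kinematically as $N \approx v\tau/(2\pi r)$, with $v$ the swirling speed, $\tau$ the untwisting duration, and $r$ the rotation radius. The sketch below uses the $\sim$55 km s$^{-1}$ speed and $\sim$20-minute untwisting duration reported above for MFR-1; the 10 Mm radius is our assumption, not a measured value:

```python
import math

# Order-of-magnitude twist from the untwisting motion:
# turns ~ (rotational speed x duration) / (2 * pi * rotation radius).
v_rot_kms = 55.0        # average swirling speed (km/s), from the FLCT result
duration_s = 20 * 60.0  # untwisting duration (~20 minutes)
radius_km = 10_000.0    # assumed rotation radius of 10 Mm (our assumption)

turns = v_rot_kms * duration_s / (2 * math.pi * radius_km)
print(f"estimated twist ~ {turns:.1f} turns, i.e. ~{2 * turns:.1f} pi")
```

With these numbers the estimate is about one turn, consistent with the lower limit read off the time-slice diagrams; a smaller radius or a longer duration would raise it accordingly.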
\subsection{Heating of the Two MFRs}
Through the DEM analysis, we find that the DEMs of both MFRs contain a hot component peaking at $\sim$7--8 MK and that the hot plasmas are mainly located in the interface region between the MFRs and the ambient magnetic fields, as well as in the center of the MFRs. This result indicates some heating of the MFR plasmas during the eruption and untwisting processes. Considering that the twist of the MFRs seems to increase at first (and is released later) when the MFRs interact with the ambient magnetic fields, we speculate that magnetic reconnection takes place between the MFRs and the ambient magnetic fields and heats the plasmas at the northern or bottom edge of the MFRs \citep[e.g.,][]{Yan2013,Yanglh2019}. The plasmas in the center of the MFRs could be heated by internal magnetic reconnection occurring within the MFRs \citep[e.g.,][]{Galsgaard1997,Gibson2006,Gibson2008,Fermo2014,Yang2015,Mei2020ApJ}. Another possibility is that the MFR plasmas are compressed, and thus heated, when the MFRs interact with the ambient magnetic fields and rotate about their axes. Note that in the two MFRs studied here, the plasmas are more likely heated locally during, rather than before, their eruptions, based on the result that the temperature of some tracked plasmas reaches its maximum a little later than the EM.
\acknowledgments
This work greatly benefits from the high quality data from SDO and STEREO. We acknowledge the use of DEM and FLCT codes. We thank Dr. Ying-na Su, Dr. Yang Guo, Dr. Zhi-xin Mei, and Dr. Jin-cheng Wang for their valuable discussions. We also thank the anonymous referee for the very constructive comments and suggestions that improve the manuscript. The authors are supported by NSFC under grants 11873095, 11733003, 11961131002, 11921003, and U1731241, and by the CAS Strategic Pioneer Program on Space Science under grants XDA15052200, XDA15320103, and XDA15320301. Y.L. is also supported by the CAS Pioneer Talents Program for Young Scientists.
\section{Introduction}
We consider only simple graphs. Let $G$ be a graph with vertex set
$V_G$ and edge set $E_G$. Let $\mathbb{F}$ be a field. We adopt the
notation and terminology from \cite{AB} and \cite{We}.
An $n\times n$ matrix $A$ over $\mathbb{F}$ is skew-symmetric
(respectively, symmetric) if $A^T=-A$ (respectively, $A^T = A$),
where $A^T$ denotes the transpose of $A$.
For an $n\times n$ symmetric or skew-symmetric matrix $A$, the
graph of $A$, denoted $G(A)$, is the graph with vertex set
$\{v_{1},v_{2},\dots, v_{n}\}$ and edge set $\{v_{i}v_{j}:
a_{ij}\neq 0, 1\leq i <j \leq n\}$.
The classic minimum rank problem involving symmetric matrices has
been studied extensively; see, e.g.,~\cite{FH}.
The minimum skew rank problem involves skew-symmetric matrices, and
its study began recently in \cite{AB}.
If the characteristic of $\mathbb{F}$ is $2$, then a
skew-symmetric matrix over $\mathbb{F}$ is also symmetric.
Thus it is assumed throughout this
paper that the characteristic of $\mathbb{F}$ is not $2$.
For a field $\mathbb{F}$ and a graph $G$, let
$S^{-}(\mathbb{F},G)=\{A\in \mathbb{F}^{n\times n}: A^{T}=-A,
G(A)=G\}$ be the set of skew-symmetric matrices over $\mathbb{F}$
described by $G$. The minimum skew rank of $G$ over $\mathbb{F}$ is
defined as
\[
mr^{-}(\mathbb{F},G) = \min\{\text{rank}(A): A\in
S^{-}(\mathbb{F},G)\}.
\] The
corresponding maximum skew nullity of $G$ is defined as
\[ M^{-}(\mathbb{F},G)
= \max\{\text{nullity}(A): A\in S^{-}(\mathbb{F},G)\}.
\]
Obviously, $mr^{-}(\mathbb{F},G)+M^{-}(\mathbb{F},G)=|V_G|$.
Let $K_n$ be the complete graph with $n$ vertices, and $K_{n_1,n_2,
\dots, n_t}$ the complete $t$-partite graph with $n_i$ vertices in
the $i$th partite sets for $i=1, 2, \dots, t$.
Note that the rank of a skew-symmetric matrix over $\mathbb{F}$ is
always even. Thus $mr^{-}(\mathbb{F},G)$ is even for any field
$\mathbb{F}$ and any graph $G$. As observed in \cite{AB},
$mr^{-}(\mathbb{F},G)=0$ if and only if $G$ is an empty graph, and
if $\mathbb{F}$ is infinite and $G$ is a connected graph with at
least two vertices, then $mr^{-}(\mathbb{F},G)=2$ if and only if
$G$ is a complete multipartite graph $K_{n_{1},n_{2},\ldots , n_{t}}$ for some $t\geq 2$,
$n_{i}\geq 1$ for $i=1,\ldots, t$. The authors \cite{AB} posed an
open question (Question 5.2) to characterize the graphs $G$ such
that $mr^{-}(\mathbb{F},G)=4$. We characterize the graphs $G$ with cut vertices over
the infinite field $\mathbb{F}$ such that $mr^{-}(\mathbb{F},G)=4$.
The class of $k$-trees is defined recursively as follows \cite{Ro}:
(i) The complete graph $K_{k+1}$ is a $k$-tree; (ii) A $k$-tree $G$
with $n+1$ vertices ($n\ge k+1$) can be constructed from a $k$-tree
$H$ on $n$ vertices by adding a vertex adjacent to all vertices of a
$k$-clique of $H$.
A $k$-path is a $k$-tree which
is either $K_{k+1}$ or has exactly two vertices of degree $k$. We determine
the minimum skew rank of $k$-paths over an infinite field $\mathbb{F}$. The $k$-th power $G^{k}$ of a graph $G$ is the graph
whose vertex set is $V_G$, two distinct vertices being adjacent in
$G^{k}$ if and only if their distance in $G$ is at most $k$. Let
$P_n=v_1v_2\dots v_n$ be the path on $n$ vertices.
If
$k\le n-1$, then $P_{n}^{k}$ is a
$k$-path (see below). As a corollary, we obtain the minimum skew rank
of the $k$-th power of a path over the real field $\mathbb{R}$, which was already given in \cite{DKT}.
The maximum skew rank $MR^{-}(\mathbb{F},G)$ of a graph $G$ over a
field $\mathbb{F}$ is defined as
\[
MR^{-}(\mathbb{F},G) = \max\{\text{rank}(A): A\in
S^{-}(\mathbb{F},G)\}.
\]
Let $match(G)$ be the matching number of $G$. It was shown
in \cite{AB} that
$mr^{-}(\mathbb{F},G)=2match(G)=MR^{-}(\mathbb{F},G)$ for a tree (a connected graph with no cycles) $G$
and a field $\mathbb{F}$. We extend this by showing that the above conclusion holds also for
a
connected graph $G$ with no even cycles.
\section{Preliminaries}
Let $G$ be a graph. For $v\in V_G$, $G-v$ denotes the graph obtained
from $G$ by deleting vertex $v$ (and all edges incident with $v$).
For $X\subseteq V_G$, $G[X]$ denotes the
subgraph of $G$ induced by vertices in $X$.
We give some lemmas that we will use in our proof.
\begin{Lemma} \label{lm2.5}
\cite{AB} Let $G$ be a connected graph with at least two vertices
and let $\mathbb{F}$ be an infinite field.
Then $mr^{-}(\mathbb{F},G)=2$ if and only if
$G$ is a complete multipartite graph.
\end{Lemma}
For a field $\mathbb{F}$ and a graph $G$ with $v\in V_G$, let
$r_{v}^{-}(\mathbb{F},
G)=mr^{-}(\mathbb{F},G)-mr^{-}(\mathbb{F},G-v)$.
The union of graphs $G_i$, $i=1,2, \dots,h$, denoted by
$\cup_{i=1}^{h}G_i$, is the graph with vertex set
$\cup_{i=1}^{h}V_{G_i}$ and edge set $\cup_{i=1}^{h}E_{G_i}$.
\begin{Lemma} \cite{AB,De}
\label{lm2.7}
Let $G$ be a graph with cut vertex $v$ and $\mathbb{F}$ a field, where
$G=\cup_{i=1}^{h}G_i$ and $\cap_{i=1}^{h}V_{G_i}=\{v\}$. Then
$mr^{-}(\mathbb{F},G)=\sum_{i=1}^{h}mr^{-}(\mathbb{F},G_{i}-v)+ \min
\{\sum_{i=1}^{h}r_{v}^{-}(\mathbb{F},G_{i}),2\}$.
\end{Lemma}
\begin{Lemma} \label{lm2.1} \cite{AB}
Let $G$ be a graph and let $\mathbb{F}$ be an infinite field. If $G
=G_1\cup G_2$, then $mr^{-}(\mathbb{F},G) \leq
mr^{-}(\mathbb{F},G_1)+mr^{-}(\mathbb{F},G_2)$.
\end{Lemma}
Let $G$ be a graph. A subset $Z\subset V_G$ defines an initial
coloring by coloring all vertices in $Z$ black and all the vertices
outside $Z$ white. The color change rule says: If a black vertex $u$
has exactly one white neighbor $v$, then change the color of $v$ to
black. In this case we write $u\rightarrow v$. The derived set of an
initial coloring $Z$ is the set of vertices colored black until no
more changes are possible. A zero forcing set is a subset $Z\subset
V_G$ such that the derived set of $Z$ is $V_G$. The zero forcing
number of $G$, denoted by $Z(G)$, is the minimum size of a zero
forcing set of $G$.
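The color change rule is easy to simulate. Below is a minimal Python sketch (our own illustration, not from the paper) that computes the derived set of an initial coloring and tests whether it is a zero forcing set; the adjacency-dictionary representation and function names are our choices.

```python
def derived_set(adj, Z):
    """Repeatedly apply the color change rule: a black vertex with a
    unique white neighbor forces that neighbor to become black."""
    black = set(Z)
    changed = True
    while changed:
        changed = False
        for u in list(black):
            white = [v for v in adj[u] if v not in black]
            if len(white) == 1:
                black.add(white[0])
                changed = True
    return black

def is_zero_forcing(adj, Z):
    return derived_set(adj, Z) == set(adj)

# Example: for the path P5, one endpoint forces the whole path, so Z(P5) = 1;
# an interior vertex alone forces nothing.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 4} for i in range(5)}
assert is_zero_forcing(path, {0})
assert not is_zero_forcing(path, {2})
```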
\begin{Lemma} \label{lm2.4}
\cite{AB} Let $G$ be a graph and
$\mathbb{F}$ a field. Then $M^{-}(\mathbb{F},G)\leq Z(G)$.
\end{Lemma}
\begin{Lemma}\label{lm2.8} \cite{AB}
Let $G$ be a graph and $\mathbb{F}$ a
field. Then $MR^{-}(\mathbb{F},G)=2match(G)$.
\end{Lemma}
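Lemma~\ref{lm2.8} can be checked numerically: a skew-symmetric matrix described by $G$ with generic entries attains the maximum rank $2match(G)$. The following Python sketch (our illustration, using floating-point Gaussian elimination) samples random matrices for the star $K_{1,3}$, whose matching number is $1$.

```python
import random

def rank(M, eps=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, r = len(A), 0
    for c in range(n):
        if r == n:
            break
        piv = max(range(r, n), key=lambda i: abs(A[i][c]))
        if abs(A[piv][c]) < eps:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, n):
            f = A[i][c] / A[r][c]
            for j in range(c, n):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

# Star K_{1,3}: center 0, leaves 1, 2, 3; match(K_{1,3}) = 1, so MR^- = 2.
edges, n = [(0, 1), (0, 2), (0, 3)], 4
random.seed(1)
best = 0
for _ in range(20):
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = random.uniform(0.5, 2.0)
        A[j][i] = -A[i][j]          # skew-symmetric, described by K_{1,3}
    best = max(best, rank(A))
assert best == 2                    # equals 2 * match(K_{1,3})
```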
\begin{Lemma}\label{lm2.9} \cite{AB} Let $G$ be a graph and $\mathbb{F}$ a
field. If $H$ is an induced subgraph of $G$,
$mr^{-}(\mathbb{F},H)\leq $ $ mr^{-}(\mathbb{F},G)$.
\end{Lemma}
\begin{Lemma} \label{lm2.6}
\cite{AB} Let $G$ be a graph with a unique perfect matching and
$\mathbb{F}$ a field. Then $mr^{-}(\mathbb{F},G) =|V_G|$.
\end{Lemma}
\section{Results}
We first give a partial answer to the question in \cite{AB} of
characterizing the graphs $G$ with $mr^{-}(\mathbb{F},G)=4$. We consider
graphs with cut vertices.
\begin{Theorem} Let $G$
be a graph with cut vertex $v$ and $\mathbb{F}$ an infinite field.
Then $mr^{-}(\mathbb{F},G)=4$ if and only if one of the following
conditions holds:\\
$(i)$ $G =G_1\cup G_2$ and $V_{G_1}\cap V_{G_2}=\{v\}$, where $G_1$,
$G_2$ are complete multipartite graphs such that $G_{1}-v$,
$G_{2}-v$ are
nonempty, and\\
$(ii)$ $G-v$ consists of a nonempty complete multipartite component
and isolated vertices.
\end{Theorem}
\begin{proof}
Suppose first that (i) holds. Note that $G_{i}-v$ is still a
complete multipartite graph for $i=1,2$. By Lemma~\ref{lm2.5},
$mr^{-}(\mathbb{F},G_{1})=mr^{-}(\mathbb{F},G_{2})=mr^{-}(\mathbb{F},G_{1}-v)=mr^{-}(\mathbb{F},G_{2}-v)=2$.
Then $r_{v}^{-}(\mathbb{F},G_{1})+r_{v}^{-}(\mathbb{F},G_{2})=0$.
Thus by Lemma~\ref{lm2.7},
$mr^{-}(\mathbb{F},G)=mr^{-}(\mathbb{F},G_{1}-v)+mr^{-}(\mathbb{F},G_{2}-v)+\min\{0,2\}=4$.
Now suppose that (ii) holds. Let $W$ be the unique complete
multipartite component, and $a$ the number of isolated vertices in
$G-v$. By Lemma~\ref{lm2.5}, $mr^{-}(\mathbb{F},W)=2$. Note that
$r_v^{-}(\mathbb{F},K_{2})=2$. Then by Lemma~\ref{lm2.7},
$mr^{-}(\mathbb{F},G)=mr^{-}(\mathbb{F},W)+a\cdot
mr^{-}(\mathbb{F},K_{1})+2=4$.
Conversely, suppose that $mr^{-}(\mathbb{F},G)=4$. Let $p$ be the
number of nonempty complete multipartite components, and $q$ the
number of isolated vertices in $G-v$. Let $m$ be the number of the
remaining components.
Note that the minimum skew rank of a connected graph that is neither a
complete multipartite graph nor an empty graph is at least $4$.
\noindent {\bf Case 1.} $q=0$. By Lemma~\ref{lm2.7},
$4=mr^{-}(\mathbb{F},G)\geq 2p+4m$. If $m=1$, then $p=0$, a
contradiction to the fact that $v$ is a cut vertex of $G$. Thus
$m=0$, implying that $p=2$. Let $W_{1}$, $W_{2}$ be the vertex sets
of the two complete multipartite components of $G-v$ and let
$G_{1}$, $G_{2}$ be the subgraph induced by $\{v\}\cup W_{1}$,
$\{v\}\cup W_{2}$. By Lemma~\ref{lm2.5},
$mr^{-}(\mathbb{F},G_{1}-v)=mr^{-}(\mathbb{F},G_{2}-v)=2$. By
Lemma~\ref{lm2.7},
$4=mr^{-}(\mathbb{F},G)=mr^{-}(\mathbb{F},G_{1}-v)+mr^{-}(\mathbb{F},G_{2}-v)+\min\{r_{v}^{-}(\mathbb{F},G_{1})+r_{v}^{-}(\mathbb{F},G_{2}),2\}=2+2+\min\{r_{v}^{-}(\mathbb{F},G_{1})+r_{v}^{-}(\mathbb{F},G_{2}),2\}$.
Then $r_{v}^{-}(\mathbb{F},G_{1})=r_{v}^{-}(\mathbb{F},G_{2})=0$.
Thus $mr^{-}(\mathbb{F},G_{1})=mr^{-}(\mathbb{F},G_{2})=2$. By
Lemma~\ref{lm2.5}, $G_1$ and $G_2$ are complete multipartite graphs,
and then (i) follows.
\noindent {\bf Case 2.} $q\neq0$. Note that
$r_v^{-}(\mathbb{F},K_{2})=2$. By Lemma~\ref{lm2.7},
$4=mr^{-}(\mathbb{F},G)\geq 2p+4m+2$. Then $m=0$ and $p=1$, and thus
(ii) follows.
\end{proof}
Now we consider the minimum skew rank of $k$-paths. Note that a
$k$-path with at least $k+2$ vertices has at least two vertices of
degree $k$, and no two vertices of degree $k$ are adjacent. The
following lemma follows directly from the definition of $k$-path.
\begin{Lemma} \label{base} Let $G$ be a $k$-path with at least $k+2$
vertices, and $v$ a vertex of $G$ with degree $k$. Then $G-v$ is
also a $k$-path.
\end{Lemma}
Let $G$ be a $k$-path with $n\ge k+2$ vertices. By Lemma~\ref{base},
the vertices of $G$ may be labeled as follows: Choose a vertex
of degree $k$, labeled as $v_n$, and label its unique neighbor of
degree $k+1$ in $G$ with $v_{n-1}$. Then $v_{n-1}$ is a vertex of
degree $k$ in the $k$-path $G-v_n$. Repeating the process above, we
may label $n-(k+1)$ vertices of $G$ as $v_n$, $v_{n-1}, \dots,
v_{k+2}$. Obviously, $G-v_n-v_{n-1}-\cdots -v_{k+2}=K_{k+1}$ and it
contains a vertex of degree $k$ in $G$, which is labeled as $v_1$,
and the remaining vertices are labeled as $v_2$, $v_3, \dots,
v_{k+1}$ such that $v_2$ is the unique neighbor of $v_1$ with degree
$k+1$ in $G$. Note that in our labelling, $v_i$ is not adjacent to
$v_{j+1}, v_{j+2},\dots, v_n$ if $v_i$ is not adjacent to $v_j$ for
$j\ge \max\{i+1,k+2\}$. Recall that a $k$-tree is a chordal graph. The above labeling is
actually the ``perfect elimination'' labeling inherent to chordal graphs \cite{Sh}.
\begin{Theorem} \label{th1}
Let $G$ be a $k$-path on $n$ vertices and $\mathbb{F}$ an
infinite field. Then
\begin{equation*}
mr^{-}(\mathbb{F},G)=\begin{cases}n-k &\text{if $n-k$ is even},\\
n-k+1 &\text{if $n-k$ is odd}.
\end{cases}
\end{equation*}
\end{Theorem}
\begin{proof}
Let $Z=\{v_{1},v_{2},\ldots , v_{k}\}$. Color all vertices in $Z$
black and all the vertices outside $Z$ white. We will show that $Z$
is a zero forcing set of $G$. Since all neighbors of $v_{1}$
different from $v_{k+1}$ are black, we have $v_{1}\rightarrow
v_{k+1}$. Note that $v_{2}$ is adjacent to $v_{k+2}$ but not
adjacent to $v_{k+3},v_{k+4},\ldots, v_{n}$.
Since all neighbors of $v_{2}$
different from $v_{k+2}$ are black, we have $v_{2}\rightarrow
v_{k+2}$. Let $G_1=G[\{v_{1},v_{2},\ldots , v_{k+3}\}]$ and
$G_2=G[\{v_{1},v_{2},\ldots , v_{k+4}\}]$. If each neighbor of
$v_{k+3}$ in $G_1$ is adjacent to $v_{k+4}$ in $G$, then $v_{k+4}$
is of degree $k+1$ in $G_2$, a contradiction. Thus there is a
neighbor, say $w$, of $v_{k+3}$ in $G_1$ such that $wv_{k+4}\not\in
E_G$, and then $wv_i\not\in E_G$ for $i\ge k+5$, implying that
$w\rightarrow v_{k+3}$.
Repeating the process above, we may finally color all vertices of $G$ black. Thus $Z$ is
a zero forcing set of $G$. By Lemma~\ref{lm2.4},
$M^{-}(\mathbb{F},G)\leq Z(G)\leq k$, and then
$mr^{-}(\mathbb{F},G)=n-M^{-}(\mathbb{F},G)\geq n-k$. Note that the
rank of a skew-symmetric matrix is even. It follows that
\begin{eqnarray*}
mr^{-}(\mathbb{F},G)\geq\begin{cases}n-k &\text{if $n-k$ is even},\\
n-k+1 &\text{if $n-k$ is odd}.
\end{cases}
\end{eqnarray*}
To prove the result, we need only to show
\begin{equation}\label{ee}
mr^{-}(\mathbb{F},G)\leq\begin{cases}n-k &\text{if $n-k$ is even},\\
n-k+1 &\text{if $n-k$ is odd}.
\end{cases}
\end{equation}
We prove this by induction on $n$. If $n=k+1$, then $G=K_{k+1}$,
which is a complete multipartite graph, and thus by
Lemma~\ref{lm2.5}, $mr^{-}(\mathbb{F},G)=2=n-k+1$. If $n=k+2$, then
$G=K_{k+2}-e$ is also a complete multipartite graph, where $e\in
E_{K_{k+2}}$, and thus by Lemma \ref{lm2.5},
$mr^{-}(\mathbb{F},G)=2=n-k$. Thus (\ref{ee}) is true for $n=k+1,
k+2$. Suppose that $n\geq k+3$ and for a $k$-path $H$ on $m$
vertices with $k+1\leq m\leq n-1$, we have
\begin{equation*}
mr^{-}(\mathbb{F},H)\leq\begin{cases}m-k &\text{if $m-k$ is even},\\
m-k+1 &\text{if $m-k$ is odd}.
\end{cases}
\end{equation*}
Let $G$ be a $k$-path on $n$ vertices. Let
$$G_{1}=G[\{v_{1},v_{2},\ldots ,v_{k+2}\}] \mbox{ and }
G_{2}=G[\{v_{3},v_{4},\ldots ,v_{n}\}].$$ Then $G_{1}$ is a $k$-path
on $k+2$ vertices, and $G_{2}$ is a $k$-path on $n-2$ vertices.
Obviously, $mr^{-}(\mathbb{F},G_{1})=2$, and by the induction
hypothesis,
\begin{equation*}
mr^{-}(\mathbb{F},G_{2})\leq\begin{cases}n-k-2 &\text{if $n-k-2$ is even},\\
n-k-1 &\text{if $n-k-2$ is odd},
\end{cases}
\end{equation*}
i.e.,
\begin{equation*}
mr^{-}(\mathbb{F},G_{2})\leq\begin{cases}n-k-2 &\text{if $n-k$ is even},\\
n-k-1 &\text{if $n-k$ is odd}.
\end{cases}
\end{equation*}
Note that $G=G_{1}\cup G_{2}$. By Lemma~\ref{lm2.1},
\begin{eqnarray*}
mr^{-}(\mathbb{F},G)&\le& mr^{-}(\mathbb{F},G_{1})+ mr^{-}(\mathbb{F},G_{2})\\
&\leq& 2+\begin{cases}n-k-2 &\text{if $n-k$ is even}\\
n-k-1 &\text{if $n-k$ is odd}
\end{cases}\\
&=&\begin{cases}n-k &\text{if $n-k$ is even},\\
n-k+1 &\text{if $n-k$ is odd}.
\end{cases}
\end{eqnarray*}
This proves (\ref{ee}).
\end{proof}
Obviously, $P_n^k$ is a complete graph if $k\ge n$. Suppose that
$k\le n-1$. Obviously, $P_n^k[\{v_1,v_2, \dots, v_{k+1}\}]=K_{k+1}$,
and if $k\le n-2$, then for $j=2,3,\dots,n-k$,
$P_n^k[\{v_{j},v_{j+1}, \dots, v_{k+j-1}\}]=K_{k}$, and $v_{k+j}$ is
adjacent to $v_{j},v_{j+1}, \dots, v_{k+j-1}$. Thus $P_{n}^{k}$ is a
$k$-path. Now by Lemma~\ref{lm2.5} and Theorem
\ref{th1} we have the following result, which was proved in \cite{DKT} when $\mathbb{F}$ is the real field $\mathbb{R}$.
\begin{Corollary} Let $\mathbb{F}$ be an
infinite field. Then
\begin{eqnarray*}
mr^{-}(\mathbb{F},P_{n}^{k})=\begin{cases}n-k &\text{if $1\le k\le n-1$ and $n-k$ is even},\\
n-k+1 &\text{if $1\le k\le n-1$ and $n-k$ is
odd},\\
2 &\text{if $k\ge n$}.
\end{cases}
\end{eqnarray*}
\end{Corollary}
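The zero-forcing argument in the proof of Theorem~\ref{th1}, specialized to $P_n^k$, can be verified computationally: coloring the first $k$ vertices black forces the whole graph, so $Z(P_n^k)\le k$ and hence $mr^{-}(\mathbb{F},P_n^k)\ge n-k$. A Python sketch follows (our own illustration; vertices are $0,\dots,n-1$, with $ij$ an edge if and only if $|i-j|\le k$).

```python
def forces(n, k, Z):
    """Run the zero-forcing color change rule on P_n^k starting from Z;
    return True if every vertex ends up black."""
    adj = {i: {j for j in range(n) if j != i and abs(i - j) <= k}
           for i in range(n)}
    black = set(Z)
    changed = True
    while changed:
        changed = False
        for u in list(black):
            white = [v for v in adj[u] if v not in black]
            if len(white) == 1:
                black.add(white[0])
                changed = True
    return black == set(range(n))

# The first k vertices form a zero forcing set of P_n^k, so Z(P_n^k) <= k
# and hence mr^-(F, P_n^k) >= n - k; the first k-1 vertices do not suffice here.
for n, k in [(9, 2), (10, 3), (12, 5)]:
    assert forces(n, k, set(range(k)))
assert not forces(10, 3, set(range(2)))
```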
Finally, we give an observation.
\begin{Theorem} \label{add}
Let $G$ be a connected graph with no even cycles and $\mathbb{F}$
a field. Then
$mr^{-}(\mathbb{F},G)=2match(G)=MR^{-}(\mathbb{F},G)$.
\end{Theorem}
\begin{proof}
By Lemma \ref{lm2.8}, $mr^{-}(\mathbb{F},G)\leq
MR^{-}(\mathbb{F},G)=2match(G)$. Let $M$ be a maximum matching of
$G$ and $\{v_{1},\cdots,v_{k}\}$ the vertices covered by $M$. Then $M$ is a
perfect matching of $H=G[\{v_{1},\cdots,v_{k}\}]$. This perfect
matching is unique. Otherwise, the graph induced by the vertices of
the symmetric difference of two (different) perfect matchings of $H$
consists of even cycles, which is impossible because $G$ contains no
even cycles. By Lemmas \ref{lm2.9} and \ref{lm2.6},
$mr^{-}(\mathbb{F},G)\geq mr^{-}(\mathbb{F},H)=2match(G)$. The
result follows.
\end{proof}
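The key step in the proof, that the subgraph induced by the vertices covered by a maximum matching has a unique perfect matching when $G$ has no even cycles, can be checked by brute force on small graphs. A Python sketch (our illustration) does this for a $5$-cycle with a pendant edge and contrasts it with the even cycle $C_4$.

```python
from itertools import combinations

def matchings(edges):
    """All matchings (sets of pairwise vertex-disjoint edges)."""
    out = [frozenset()]
    for r in range(1, len(edges) + 1):
        for combo in combinations(edges, r):
            verts = [v for e in combo for v in e]
            if len(verts) == len(set(verts)):
                out.append(frozenset(combo))
    return out

def perfect_matchings(edges, vertex_set):
    return [m for m in matchings(edges)
            if {v for e in m for v in e} == vertex_set]

# 5-cycle 0-1-2-3-4 with pendant vertex 5 attached to 0: no even cycles.
G = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 5)]
all_m = matchings(G)
mnum = max(len(m) for m in all_m)                    # matching number = 3
for m in (m for m in all_m if len(m) == mnum):
    covered = {v for e in m for v in e}
    H = [e for e in G if e[0] in covered and e[1] in covered]
    assert len(perfect_matchings(H, covered)) == 1   # unique, as the proof uses

# Contrast: the even cycle C4 has two perfect matchings.
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert len(perfect_matchings(C4, {0, 1, 2, 3})) == 2
```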
Note that a tree has no (even) cycles. By the previous theorem we have the following result.
\begin{Corollary} \cite{AB}
Let $G$ be a tree and $\mathbb{F}$ a field. Then
$mr^{-}(\mathbb{F},G)=2match(G)$ $=MR^{-}(\mathbb{F},G)$.
\end{Corollary}
Let $G$ be a connected unicyclic graph with a unique cycle $C$. If $C$ is odd, then by Theorem \ref{add},
$mr^{-}(\mathbb{F},G)=2match(G)$. Recall that it was shown in \cite{LMD} that if $C$ is odd, then
$mr^{-}(\mathbb{R},G)=2match(G)$, and if $C$ is even, then
$mr^{-}(\mathbb{R},G)=2match(G)$ or $2match(G)-2$.
\bigskip
{\bf Acknowledgment.} This work was supported by the National
Natural Science Foundation of China (No.~11071089) and the Guangdong
Provincial Natural Science Foundation of China (No.~S2011010005539).
\section{Introduction}
The existence of a gluon self-coupling in QCD suggests that, in addition to
the conventional $q\bar{q}$ states, there may be non-$q\bar{q}$ mesons: bound
states including gluons (gluonia and glueballs, and $q\bar{q}g$ hybrids) and
multiquark states \cite{1}. Since the theoretical guidance on the properties
of unusual states is often contradictory, models that agree in the $q\bar{q}$
sector differ in their predictions about new states. Among the naively
expected signatures for gluonium are \hfil\break
i) no place in $q\bar{q}$ nonet, \hfil\break
ii) flavor-singlet coupling, \hfil\break
iii) enhanced production in gluon-rich channels such as $J/\Psi (1S)$ decay,
\hfil\break iv) reduced $\gamma \gamma $ coupling, \hfil\break v) exotic
quantum numbers not allowed for $q\bar{q}$ (in some cases). \hfil\break
Points iii) and iv) can be summarized by the Chanowitz $S$ parameter \cite{Cha}
$$S=\frac{\Gamma (J/\Psi (1S)\rightarrow \gamma X)}{{\rm PS} (J/\Psi (1S)
\rightarrow \gamma X)}\times \frac{{\rm PS} (X\rightarrow \gamma \gamma )}{
\Gamma (X\rightarrow \gamma \gamma )},$$
where PS stands for phase space. $S$ is expected to be larger for gluonium
than for $q\bar{q}$ states. Of course, mixing effects and other dynamical
effects such as form-factors can obscure these simple signatures. Even if the
mixing is large, however, simply counting the number of
observed states remains a clear signal for non-exotic non-$q\bar{q}$ states.
Exotic quantum number states $(0^{--},0^{+-},1^{-+},2^{+-},\ldots )$ would be
the best signatures for non-$q\bar{q}$ states. It should be also emphasized
that no state has yet unambiguously been identified as gluonium, or as a
multiquark state, or as a hybrid.
In this paper we shall discuss $D$-wave meson states, the interpretation of
which as members of conventional quark model $q\bar{q}$ nonets encounters
difficulties \cite{enigmas}. We shall be concerned with the four meson nonets
which have the following $q\bar{q}$ quark model assignments, according to the
most recent Review of Particle Physics \cite{pdg}:\hfil\break
1) $\;1\; ^1D_2$ $J^{PC}=2^{-+},$ $\;\pi _2(1670),\;\;\eta _2^{'}($ ? $),\;\;
\eta _2($ ? $),\;\;K_2(1770)$\hfil\break
2) $\;1\; ^3D_1$ $J^{PC}=1^{--},$ $\;\rho (1700),\;\;\omega (1600),\;\;\phi ($ ? $),\;\;\;K^\ast (1680)$\hfil\break
3) $\;1\; ^3D_2$ $J^{PC}=2^{--},$ $\;\rho _2($ ? $),\;\;\;\omega _2($ ? $),\;\;
\;\phi _2($ ? $),\;\;K_2^{'}(1820)$\hfil\break
4) $\;1\; ^3D_3$ $J^{PC}=3^{--},$ $\;\rho _3(1690),\omega _3(1670),\phi _3(
1850),K_3^\ast (1780),$\hfil\break
and start with a discussion of the corresponding two problems associated with
the isodoublet channel of these nonets. One of them is related to the $K^\ast (
1410)-K^\ast (1680)$ problem, the other to possible $^1D_2-^3D_2$ mixing in
the $I=1/2$ channel.
\\ \\
The two mesons, $K^\ast (1680)$ (with mass $1714\pm 20$ MeV and width $323\pm
110$ MeV) and $K^\ast (1410)$ $(1412\pm 12$ MeV, $227\pm 22$ MeV) are currently
assigned to the 1 $^3D_1$ and 2 $^3S_1$ nonets, respectively (the latter, 2 $^
3S_1$ $J^{PC}=1^{--},$ $\rho (1450),$ $\omega (1420),$ $\phi (1680),$ $K^\ast
(1410),$ has the same flavor quantum numbers as the former), although, as the
Particle Data Group (PDG) states, ``the $K^\ast (1410)$ could be replaced by
the $K^\ast (1680)$ as the 2 $^3S_1$ state'' \cite{pdg1}. The problem with
these mesons is that the $K^\ast (1410)$ seems too light to be the 2 $^3S_1$
state, even if one takes into account possible $2\;^3S_1-1\;^3D_1$ mixing.
Similarly, the $K^\ast (1680)$ seems too light to be the 1 $^3D_1.$ One may
doubt even the existence of the $K^\ast (1410),$ as suggested first by
T\"{o}rnqvist \cite{To}, since it (as well as the $K^\ast (1680))$ has been
observed by only one group, LASS \cite{LASS}, although with superior
statistics, in partial wave analyses under the much stronger $K_2^\ast (1430)$
and $K_0^\ast (1430).$ Two older experiments \cite{Etkin,older} quote a
considerably higher mass, $\simeq 1500$ MeV. In addition, its $K\pi $
branching ratio is suspiciously small, only $(6.6\pm 1.3)$\%. On the other
hand, the $K^\ast (1680)$ has a suspiciously large total width $(\sim 400)$
MeV, much larger than typical hadron widths, and a natural suspicion would be
that it is really composed of two states of normal width $(\sim 150-200$ MeV)
\cite{To}, quite analogously to what has been suggested to be the case for the
$\rho (1600)$ and $\omega (1600)$ which have been resolved into $\rho (1450)$
plus $\rho (1700)$ and $\omega (1420)$ plus $\omega (1600)$ \cite{split}. The
masses of the two states contained in the $K^\ast (1680)$ were determined in
ref. \cite{To} to be 2 $^3S_1(\approx \!1608)$ and 1 $^3D_1(\approx \!
1784),$ from the requirement that the both fit the corresponding Regge
trajectories. This is in agreement with the values obtained by Godfrey and
Isgur in a relativized quark model \cite{GI}, 2 $^3S_1(1580),$ 1 $^3D_1(1780).$
An older experiment on the $K^\ast (1680)$ quotes a mass of the same order,
$\sim 1800$ MeV \cite{Etkin}.
\\ \\
Theoretically, for the four $(n,L)$-wave meson nonets, the isoscalar and
isovector members of the $n\;^3L_L$ and $n\;^1L_L$ nonets with the same
charge cannot mix, since they have opposite $C$- and $G$-parity, as long as
one neglects $SU(2)_I$ breaking. However, their isodoublet counterparts
(strange, charmed, ... mesons) do not possess definite $C$-parity and,
therefore, can in principle mix when only $SU(3)$ flavor symmetry is broken.
This type of mixing can take place for all $L\geq 1$ mesons, as follows,
\beq
\left( \begin{array}{c}
Q_{high} \\
Q_{low}
\end{array} \right) =\left( \begin{array}{cc}
\cos \theta _{nL} & \sin \theta _{nL} \\
-\sin \theta _{nL} & \cos \theta _{nL}
\end{array} \right) \left( \begin{array}{cc}
n\;^1L_L \\
n\;^3L_L
\end{array} \right) ,
\eeq
where $Q$ stands for the $K,D,D_s,$ ... . It is known that this mixing actually
takes place for the $P$-wave mesons where the $I=1/2$ $K_{1A}$ and $K_{
1B}$ states of the 1 $^3P_1$ and 1 $^1P_1$ nonets, respectively, mix, leading
to the physical $K(1270)$ and $K(1400)$ states \cite{K1,Lip}. If such a mixing
is also the case for the $D$-wave mesons, a question suggests itself regarding
the physical masses of the $I=1/2$ states of the $^3D_2$ and $^1D_2$ nonets,
which we call $K_{2A}$ and $K_{2B},$ respectively, in the following.
\\
If the assumption of T\"{o}rnqvist about the $K^\ast (1680)$ \cite{To} is
correct, one would have simultaneous mass near-degeneracy of the 1 $^3D_1$ and
1 $^3D_3$ meson nonets in the isovector and isodoublet channels, since in this
case $M(\rho (1700))\approx M(\rho _3(1690)),$ $M(K^\ast (1780))\approx M(K_3^
\ast (1780)).$ As shown in our previous paper \cite{prev}, similar degeneracy
of the 1 $^3P_0$ and 1 $^3P_2$ nonets is an intrinsic property of $P$-wave
meson spectroscopy and may be straightforwardly understood in a nonrelativistic
constituent quark model. We now wish to apply this model to the $D$-wave mesons
in order to show that near-degeneracy of the $^3D_3$ and $^3D_1$ nonets
mentioned above also takes place. We note that this result is a direct
consequence of the nonrelativistic constituent quark model which we discuss
below; this mass near-degeneracy of the two nonets does not depend on the
values of the input parameters, and cannot be considered as a numerical
coincidence, as the results of, e.g., Godfrey and Isgur \cite{GI}, may be
viewed (their model finds the values $M(K^\ast )=1780$ MeV, $M(K_3^\ast )=1790$
MeV for the $I=1/2$ 1 $^3D_1$ and 1 $^3D_3$ meson masses). We also expect our
model to provide relevant information on possible $K_{2A}-K_{2B}$ mixing.
\section{Nonrelativistic constituent quark model}
In the constituent quark model, conventional mesons are bound states of a spin
1/2 quark and spin 1/2 antiquark bound by a phenomenological potential which
has some basis in QCD \cite{LSG}. The quark and antiquark spins combine to
give a total spin 0 or 1 which is coupled to the orbital angular momentum $L.$
This leads to meson parity and charge conjugation given by $P=(-1)^{L+1}$ and
$C=(-1)^{L+S},$ respectively. One typically assumes that the $q\bar{q}$ wave
function is a solution of a nonrelativistic Schr\"{o}dinger equation with the
generalized Breit-Fermi Hamiltonian\footnote{The most widely used potential
models are the relativized model of Godfrey and Isgur \cite{GI} for the
$q\bar{q}$ mesons, and Capstick and Isgur \cite{CI} for the $qqq$ baryons.
These models differ from the nonrelativistic quark potential model only in
relatively minor ways, such as the use of $H_{kin}=\sqrt{m_1^2+{\bf p}_1^2}+
\sqrt{m_2^2+{\bf p}_2^2}$ in place of that given in (2), the retention of the
$m/E$ factors in the matrix elements, and the introduction of coordinate
smearing in the singular terms such as $\delta ({\bf r}).$}, $H_{BF},$
\beq
H_{BF}\;\psi _n({\bf r})\equiv \left( H_{kin}+V({\bf p},{\bf r})\right) \psi _
n({\bf r})=E_n\psi _n({\bf r}),
\eeq
where $H_{kin}=m_1+m_2+{\bf p}^2/2\mu -(1/m_1^3+1/m_2^3){\bf p}^4/8,$ $\mu =m_
1m_2/(m_1+m_2),$ $m_1$ and $m_2$ are the constituent quark masses, and to
first order in $(v/c)^2={\bf p}^2c^2/E^2\simeq {\bf p}^2/m^2c^2,$ $V({\bf p},
{\bf r})$ reduces to the standard nonrelativistic result,
\beq
V({\bf p},{\bf r})\simeq V(r)+V_{SS}+V_{LS}+V_T,
\eeq
with $V(r)=V_V(r)+V_S(r)$ being the confining potential which consists of a
vector and a scalar contribution, and $V_{SS},V_{LS}$ and $V_T$ the spin-spin,
spin-orbit and tensor terms, respectively, given by \cite{LSG}
\beq
V_{SS}=\frac{2}{3m_1m_2}\;{\bf s}_1\cdot {\bf s}_2\;\triangle V_V(r),
\eeq
$$V_{LS}=\frac{1}{4m_1^2m_2^2}\frac{1}{r}\left( \left\{ [(m_1+m_2)^2+2m_1m_2]\;
{\bf L}\cdot {\bf S}_{+}+(m_2^2-m_1^2)\;{\bf L}\cdot {\bf S}_{-}\right\}
\frac{dV_V(r)}{dr}\right. $$
\beq
\left. -\;[(m_1^2+m_2^2)\;{\bf L}\cdot {\bf S}_{+}+(m_2^2-m_1^2)\;{\bf L}\cdot
{\bf S}_{-}]\;\frac{dV_S(r)}{dr}\right) ,
\eeq
\beq
V_T=\frac{1}{12m_1m_2}\left( \frac{1}{r}\frac{dV_V(r)}{dr}-\frac{d^2V_V(r)}{
dr^2}\right) S_{12}.
\eeq
Here ${\bf S}_{+}\equiv {\bf s}_1+{\bf s}_2,$ ${\bf S}_{-}\equiv {\bf s}_1-
{\bf s}_2,$ and
\beq
S_{12}\equiv 3\left( \frac{({\bf s}_1\cdot {\bf r})({\bf s}_2\cdot {\bf r})}{
r^2}-\frac{1}{3}{\bf s}_1\cdot {\bf s}_2\right).
\eeq
For constituents with spin $s_1=s_2=1/2,$ $S_{12}$ may be rewritten in the form
\beq
S_{12}=2\left( 3\frac{({\bf S}\cdot {\bf r})^2}{r^2}-{\bf S}^2\right),\;\;\;
{\bf S}={\bf S}_{+}\equiv {\bf s}_1+{\bf s}_2.
\eeq
Since $(m_1+m_2)^2+2m_1m_2=6m_1m_2+(m_2-m_1)^2,$ $m_1^2+m_2^2=2m_1m_2+(m_2-m_
1)^2,$ the expression for $V_{LS},$ Eq. (5), may be rewritten as follows,
$$V_{LS}=\frac{1}{2m_1m_2}\frac{1}{r}\left[ \left( 3\frac{dV_V(r)}{dr}-
\frac{dV_S(r)}{dr}\right) + \frac{(m_2-m_1)^2}{2m_1m_2}\left(\frac{dV_V(r)}{d
r}-\frac{dV_S(r)}{dr}\right) \right] {\bf L}\cdot {\bf S}_{+}$$
\beq
+\frac{m_2^2-m_1^2}{4m_1^2m_2^2}\;\frac{1}{r}\left( \frac{dV_V(r)}{dr}-\frac{
dV_S(r)}{dr}\right) {\bf L}\cdot {\bf S}_{-}\equiv V_{LS}^{+}+V_{LS}^{-}.
\eeq
Since two terms corresponding to the derivatives of the potentials with respect
to $r$ are of the same order of magnitude, the above expression for
$V_{LS}^{+}$ may be rewritten as
\beq
V_{LS}^{+}=\frac{1}{2m_1m_2}\frac{1}{r}\left( 3\frac{dV_V(r)}{dr}-\frac{dV_
S(r)}{dr}\right) {\bf L}\cdot {\bf S}\left[ 1+\frac{(m_2-m_1)^2}{2m_1m_2}\;O(
1)\right] .
\eeq
\section{$D$-wave spectroscopy}
We now wish to apply the Breit-Fermi Hamiltonian to the $D$-wave mesons. By
calculating the expectation values of different terms of the Hamiltonian
defined in Eqs. (4),(8),(9), taking into account the corresponding matrix
elements $\langle {\bf s}_1\cdot {\bf s}_2\rangle ,$ $\langle {\bf L}\cdot
{\bf S}\rangle $ and $S_{12}$ \cite{LSG}, one obtains relations similar to
those for the $P$-wave mesons \cite{prev,BGP},
\bqryn
M(^3D_1) & = & M_0+\frac{1}{4}\langle V_{SS}\rangle -3\langle V_{LS}^{+}
\rangle -\frac{1}{2}\langle V_T\rangle , \\
M(^3D_3) & = & M_0+\frac{1}{4}\langle V_{SS}\rangle +2\langle V_{LS}^{+}\rangle
-\frac{1}{7}\langle V_T\rangle , \\
M(\rho _2) & = & M_0+\frac{1}{4}\langle V_{SS}\rangle -\langle V_{LS}^{+}
\rangle +\frac{1}{2}\langle V_T\rangle , \\
M(\pi _2) & = & M_0-\frac{3}{4}\langle V_{SS}\rangle ,
\eqryn
$$\left( \begin{array}{c}
M(K_2^{'}) \\ M(K_2) \end{array} \right) =\left( \begin{array}{cc}
M_0+\frac{1}{4}\langle V_{SS}\rangle -\langle V_{LS}^{+}\rangle +\frac{1}{2}
\langle V_T\rangle & \sqrt{2}\langle V_{LS}^{-}\rangle \\
\sqrt{2}\langle V_{LS}^{-}\rangle & M_0-\frac{3}{4}\langle V_{SS}\rangle
\end{array} \right) \left( \begin{array}{c}
K_{2A} \\ K_{2B} \end{array} \right) ,$$
where $M_0$ stands for the sum of the constituent quark masses in either case.
The $V_{LS}^{-}$ term acts only on the $I=1/2$ singlet and triplet states
giving rise to the spin-orbit mixing between these states\footnote{The
spin-orbit $^3D_2-^1D_2$ mixing is a property of the model we are considering;
the possibility that another mechanism contributes to this mixing, such as
mixing via common decay channels \cite{Lip} should not be ruled out, but is not
included here.}, and is responsible for the physical masses of the $K_2$ and
$K_2^{'}.$ Let us assume, for simplicity, that $$\sqrt{2}\langle V_{LS}^{-}
\rangle (K_{2B})\simeq -\sqrt{2}\langle V_{LS}^{-}\rangle (K_{2A})\equiv
\Delta .$$ The masses of the $K_{2A},\;K_{2B}$ are then determined by
relations similar to those for the $\pi _2,\;\rho _2$ above, and $M(K_2^{'})
\simeq M(K_{2A})+\Delta ,$ $M(K_2)\simeq M(K_{2B})-\Delta ,$ or\footnote{
Actually, as follows from Eq. (28) below, $$\frac{M(K_2^{'})-M(K_{2A})}{M(K_{
2B})-M(K_2)}=\frac{M(K_2)+M(K_{2B})}{M(K_2^{'})+M(K_{2A})}\simeq \frac{2M(K_{
2B})}{2M(K_{2A})}\simeq 1,$$ when both the deviations $M(K_{2B})-M(K_2),$ $M(
K_2^{'})-M(K_{2A})$ and the mass difference $M(K_{2A})-M(K_{2B})$ are small
compared to $M(K_{2A}),\;M(K_{2B}).$}
\beq
\Delta \simeq M(K_2^{'})-M(K_{2A})\simeq M(K_{2B})-M(K_2).
\eeq
We thus obtain the following formulas for the masses of all eight $I=
1,1/2$ $D$-wave mesons, $\pi _2,\rho ,\rho _2,\rho _3,K_{2B},K^\ast ,K_{2A},K_
3^\ast :$
\bqry
M(^1D_2) & = & m_1+m_2-\frac{3}{4}\frac{a}{m_1m_2}, \\
M(^3D_1) & = & m_1+m_2+\frac{1}{4}\frac{a}{m_1m_2}-\frac{3b}{m_1m_2}-\frac{c}{2
m_1m_2}, \\
M(^3D_2) & = & m_1+m_2+\frac{1}{4}\frac{a}{m_1m_2}-\frac{b}{m_1m_2}+\frac{c}{2
m_1m_2}, \\
M(^3D_3) & = & m_1+m_2+\frac{1}{4}\frac{a}{m_1m_2}+\frac{2b}{m_1m_2}-\frac{c}{7
m_1m_2},
\eqry
where $a,b$ and $c$ are related to the matrix elements of $V_{SS},$ $V_{LS}$
and $V_T$ (see Eqs. (4), (6), (10)) and assumed to be the same for all of
the $D$-wave states, and we have ignored the correction to $V_{LS}^{+}$ in the
formula (10) that is due to the difference in the masses of the $n$ and $s$
quarks. These masses, as calculated from (12)-(15), are
(in the following, $\pi _2$ stands for the mass of the $\pi _2,$ etc., and we
assume $SU(2)$ flavor symmetry, $n\equiv m_u=m_d,$ $s\equiv m_s)$
\beq
n=\frac{5\pi _2+3\rho +5\rho _2+7\rho _3}{40},
\eeq
\beq
s=\frac{10K_{2A}+6K^\ast +10K_{2B}+14K_3^\ast -5\pi _2-3\rho -5\rho _2-7\rho _
3}{40}.
\eeq
With the physical values of the meson masses (in GeV), $\pi _2\cong 1.67,$
$\rho \simeq \rho _2\simeq \rho _3\cong 1.70,$ $K_{2A}\simeq K_{2B}\cong 1.80,$
$K^\ast \simeq K_3^\ast \cong 1.77,$ the above relations give $$n\simeq 850\;
{\rm MeV,}\;\;\;s\simeq 940\;{\rm MeV,}$$ so that the abovementioned
correction, according to (10), is $\sim 90^2/(2\cdot 850\cdot 940)\simeq
0.5$\%, i.e., completely negligible. It follows from (12)-(15) that
\bqry
\frac{15a}{m_1m_2} & = & 3M(^3D_1)+5M(^3D_2)+7M(^3D_3)-15M(^1D_2), \\
\frac{60b}{m_1m_2} & = & 14M(^3D_3)-5M(^3D_2)-9M(^3D_1), \\
\frac{30c}{7m_1m_2} & = & 5M(^3D_2)-2M(^3D_3)-3M(^3D_1).
\eqry
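As a numerical illustration (ours, not from the paper), one can evaluate Eqs. (16),(17) at the quoted physical masses and confirm that Eqs. (18)-(20) are algebraic consequences of the mass formulas (12)-(15); the parameter values $m_1,m_2,a,b,c$ below are arbitrary test inputs.

```python
# Physical masses in GeV, as quoted in the text.
pi2, rho, rho2, rho3 = 1.67, 1.70, 1.70, 1.70
K2A, Kst, K2B, K3st = 1.80, 1.77, 1.80, 1.77

n = (5*pi2 + 3*rho + 5*rho2 + 7*rho3) / 40                      # Eq. (16)
s = (10*K2A + 6*Kst + 10*K2B + 14*K3st
     - 5*pi2 - 3*rho - 5*rho2 - 7*rho3) / 40                    # Eq. (17)
assert abs(n - 0.846) < 1e-3 and abs(s - 0.939) < 1e-3          # ~850, ~940 MeV

# Eqs. (18)-(20) follow identically from the mass formulas (12)-(15);
# m1, m2, a, b, c are arbitrary (hypothetical) test values.
m1, m2, a, b, c = 0.85, 0.94, 0.02, 0.015, 0.01
M1D2 = m1 + m2 - 0.75*a/(m1*m2)
M3D1 = m1 + m2 + 0.25*a/(m1*m2) - 3*b/(m1*m2) - 0.5*c/(m1*m2)
M3D2 = m1 + m2 + 0.25*a/(m1*m2) - b/(m1*m2) + 0.5*c/(m1*m2)
M3D3 = m1 + m2 + 0.25*a/(m1*m2) + 2*b/(m1*m2) - c/(7*m1*m2)
assert abs(3*M3D1 + 5*M3D2 + 7*M3D3 - 15*M1D2 - 15*a/(m1*m2)) < 1e-12
assert abs(14*M3D3 - 5*M3D2 - 9*M3D1 - 60*b/(m1*m2)) < 1e-12
assert abs(5*M3D2 - 2*M3D3 - 3*M3D1 - 30*c/(7*m1*m2)) < 1e-12
```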
By expressing the ratio $n/s$ in four different ways, viz., directly from
(16),(17) and dividing the expressions (18)-(20) for the $I=1/2$ and $I=1$
mesons by each other, one obtains the three relations,
$$\frac{5\pi _2+3\rho +5\rho _2+7\rho _3}{10K_{2A}+6K^\ast +10K_{2B}+14K_3^
\ast -5\pi _2-3\rho -5\rho _2-7\rho _3}$$
\beq
=\frac{3K^\ast +5K_{2A}+7K_3^\ast -15K_{2B}}{3\rho +5\rho _2+7\rho _3-15\pi _
2},
\eeq
\beq
\frac{3K^\ast +5K_{2A}+7K_3^\ast -15K_{2B}}{3\rho +5\rho _2+7\rho _3-15\pi _
2}=\frac{14K_3^\ast -5K_{2A}-9K^\ast }{14\rho _3-5\rho _2-9\rho },
\eeq
\beq
\frac{14K_3^\ast -5K_{2A}-9K^\ast }{14\rho _3-5\rho _2-9\rho }=\frac{5K_{2A}-
2K_3^\ast -3K^\ast }{5\rho _2-2\rho _3-3\rho }.
\eeq
First consider Eq. (23) which may algebraically be rewritten as
\beq
(K_3^\ast -K^\ast )(\rho _3-\rho _2)=(K_3^\ast -K_{2A})(\rho _3-\rho ).
\eeq
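That Eq. (23) is equivalent to Eq. (24) can be confirmed numerically: cross-multiplying (23) and expanding gives exactly $-60$ times the difference of the two sides of (24), so one vanishes if and only if the other does. A short Python check (our illustration, with random mass values):

```python
import random

random.seed(0)
for _ in range(100):
    # Random stand-in values for the six masses in Eqs. (23), (24).
    Kst, K3, K2A = (random.uniform(1.5, 2.0) for _ in range(3))
    rho, rho2, rho3 = (random.uniform(1.5, 2.0) for _ in range(3))
    # Cross-multiplied form of Eq. (23).
    cross = ((14*K3 - 5*K2A - 9*Kst) * (5*rho2 - 2*rho3 - 3*rho)
             - (5*K2A - 2*K3 - 3*Kst) * (14*rho3 - 5*rho2 - 9*rho))
    # Difference of the two sides of Eq. (24).
    eq24 = (K3 - Kst)*(rho3 - rho2) - (K3 - K2A)*(rho3 - rho)
    assert abs(cross + 60*eq24) < 1e-9   # cross-multiplied (23) = -60 * (24)
```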
Since the $\rho $ and $\rho _3$ states are mass near-degenerate, $\rho \approx
\rho _3$ (their masses are $1700\pm 20$ MeV and $1691\pm 5$ MeV, respectively
\cite{pdg}), it then follows from (24) that either $\rho _2\approx \rho
\approx \rho _3,$ or $K^\ast \approx K_3^\ast .$ The first possibility leads,
through the relations (19),(20) applied to the $I=1$ mesons, to $b\approx c
\approx 0,$ which would in turn, from the same relations for the $I=1/2$
mesons, imply $K^\ast \approx K_{2A}\approx K_3^\ast .$ Although this case may
not be excluded on the basis of current experimental data on the meson masses,
we consider simultaneous disappearance of both the spin-orbit and tensor terms
as dubious. We believe, therefore, that the physical case corresponds to
\beq
K^\ast \approx K_3^\ast ,
\eeq
so that the mass near-degeneracy of the 1 $^3D_1$ and 1 $^3D_3$ meson nonets
in the $I=1$ channel, $\rho \approx \rho _3,$ implies a similar near-degeneracy
in the $I=1/2$ channel. This result is a direct consequence of the model
we are considering; the equality $K^\ast =K_3^\ast $ follows from Eq. (24),
independent of the values of the input parameters $a,b,c,n,s,$ with the
proviso that the result $\rho =\rho _3$ is borne out experimentally.
With $K^\ast =K_3^\ast $ and $\rho =\rho _3,$ Eqs. (21) and (22) may be
rewritten as
\beq
(\rho -\rho _2+K^\ast -K_{2A})(\pi _2+\rho _2+2\rho )=2(K^\ast -K_{2A})(
K_{2A}+K_{2B}+2K^\ast ),
\eeq
\beq
(K_{2A}-K_{2B})(\rho -\rho _2)=(K^\ast - K_{2A})(\rho _2-\pi _2).
\eeq
One now has to determine the values of $\rho _2,$ $K_{2A}$ and $K_{2B}.$ The
remaining equation is obtained from the mixing of the $K_{2A}$ and $K_{2B}$
states which results in the physical $K_2$ and $K_2^{'}$ mesons. Independent
of the mixing angle,
\beq
K_{2A}^2+K_{2B}^2=K_2^2+K_2^{'2}.
\eeq
With (in MeV) $\pi _2=1670\pm 20,$ $\rho=\rho _3\cong 1690,$ $K^\ast =K_3^\ast
\cong 1780,$ $K_2=1773,$ $K_2^{'}=1816,$ the solution to (26)-(28) is
\beq
\rho _2=1741\mp 19\;{\rm MeV,}\;\;\;K_{2A}=1827\mp 17\;{\rm MeV,}\;\;\;
K_{2B}=1762\pm 18\;{\rm MeV.}
\eeq
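As a consistency check (an illustration, not part of the derivation), one can plug the central values of the solution (29) back into Eqs. (26)-(28) in a few lines of Python:

```python
# Central values in MeV, as quoted in the text.
pi2, rho, Kst = 1670.0, 1690.0, 1780.0
K2, K2p = 1773.0, 1816.0
rho2, K2A, K2B = 1741.0, 1827.0, 1762.0

checks = {
    # Eq. (26)
    "(26)": ((rho - rho2 + Kst - K2A) * (pi2 + rho2 + 2*rho),
             2 * (Kst - K2A) * (K2A + K2B + 2*Kst)),
    # Eq. (27)
    "(27)": ((K2A - K2B) * (rho - rho2),
             (Kst - K2A) * (rho2 - pi2)),
    # Eq. (28)
    "(28)": (K2A**2 + K2B**2, K2**2 + K2p**2),
}
for eq, (lhs, rhs) in checks.items():
    print(f"Eq. {eq}: {lhs:.0f} vs {rhs:.0f} (rel. diff {abs(lhs - rhs)/abs(rhs):.2%})")
```

All three relations are satisfied to better than one percent, consistent with the rounding of the input masses.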
For this solution, we observe the sum rule
\beq
K_{2A}^2-\rho _2^2=0.307\;{\rm GeV}^2\simeq K_{2B}^2-\pi _2^2=0.316\;{\rm GeV}^2,
\eeq
which may be further generalized to include the near-degenerate $\rho \approx
\rho _3\cong 1690$ MeV and $K^\ast \approx K_3^\ast \cong 1780$ MeV:
\beq
K^{\ast 2}-\rho ^2\approx K_3^{\ast 2}-\rho _3^2\cong 0.312\;{\rm GeV}^2.
\eeq
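The sum rule can be checked for all four nonets at once; the following minimal sketch uses the masses quoted above (in GeV):

```python
# M^2(I=1/2) - M^2(I=1) for each D-wave nonet; masses in GeV.
nonets = {
    "1 1D2": (1.670, 1.762),   # (pi_2, K_2B)
    "1 3D1": (1.690, 1.780),   # (rho, K*)
    "1 3D2": (1.741, 1.827),   # (rho_2, K_2A)
    "1 3D3": (1.690, 1.780),   # (rho_3, K_3*)
}
diffs = {name: m_half**2 - m_one**2 for name, (m_one, m_half) in nonets.items()}
for name, d in diffs.items():
    print(f"{name}: M^2(1/2) - M^2(1) = {d:.3f} GeV^2")
```

All four differences cluster around 0.31 GeV$^2$.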
Relations of the type (30),(31) could have been expected by analogy with the
formulas $$K^{\ast 2}-\rho ^2=K^2-\pi ^2,\;\;\;K_2^{\ast 2}-a_2^2=K^2-\pi ^2,
\;\;\;{\rm etc.,}$$ provided by either the algebraic approach to QCD \cite{OT}
or phenomenological formulas $$m_1^2=2Bn+C,\;\;\;m_{1/2}^2=B(n+s)+C$$ (where
$B$ is related to the quark condensate, and $C$ is a constant within a given
meson nonet) motivated by the linear mass spectrum of a nonet and the
collinearity of Regge trajectories of the corresponding $I=1$ and $I=1/2$
states, as discussed in ref. \cite{linear}.
Note from (29) that the $K_{2A}$ and $K_{2B}$ masses lie in the mass
intervals provided by current experimental data on the $K_2^{'}$ and $K_2$
states, respectively. This simply means that the mixing between these states is
negligible (within uncertainties provided by data), or
$\sqrt{2}\langle V_{LS}^{-}\rangle \ll K_{2A}-K_{2B}.$ As we will see in
Eqs. (32)-(34) below, this is entirely consistent with the reasonable
expectation based on the decrease of such matrix elements with increasing
partial wave (see the corresponding $P$-wave results \cite{prev}).
Thus, the nonrelativistic constituent quark model we are considering suggests
the following $q\bar{q}$ assignments for the isovector and isodoublet states
of the $D$-wave meson nonets:
\bqryn
\pi _2 & \simeq & 1680\;{\rm MeV,}\;\;\;K_{2B}\;\simeq \;1770\;{\rm MeV,} \\
\rho & \simeq & 1690\;{\rm MeV,}\;\;\;K^\ast \;\;\simeq \;\;1780\;{\rm MeV,} \\
\rho _2 & \simeq & 1730\;{\rm MeV,}\;\;\;K_{2A}\;\simeq \;1820\;{\rm MeV,} \\
\rho _3 & \simeq & 1690\;{\rm MeV,}\;\;\;K_3^\ast \;\simeq \;1780\;{\rm MeV.}
\eqryn
Let us now extract the matrix elements of the spin-spin,
spin-orbit, and tensor interaction in our model. As follows from (18)-(20) and
the above relations for the masses of the $I=1,1/2$ mesons,
\bqry
\langle V_{SS}\rangle & \simeq & \frac{a}{n^2}\;\simeq \;\frac{a}{ns}\;\cong
\;23.3\;{\rm MeV}, \\
\langle V_{LS}^{+}\rangle & \simeq & \frac{b}{n^2}\;\simeq \;\frac{b}{ns}\;
\cong -3.3\;{\rm MeV}, \\
\langle V_T\rangle & \simeq & \frac{c}{n^2}\;\simeq \;\frac{c}{ns}\;\cong \;
46.7\;{\rm MeV}.
\eqry
Also, $\langle V_{LS}^{-}\rangle \cong 0,$ since the $K_{2A}-K_{2B}$ mixing
angle is close to zero. Therefore, the spin-spin and tensor terms of the
Hamiltonian (2) are of the same order of magnitude, and the spin-orbit terms
are negligibly small.
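The three matrix elements follow directly from Eqs. (18)-(20) with the $I=1$ masses of the assignments above; a minimal numerical sketch:

```python
# I=1 masses (MeV) from the assignments above.
M_1D2, M_3D1, M_3D2, M_3D3 = 1680.0, 1690.0, 1730.0, 1690.0

V_SS = (3*M_3D1 + 5*M_3D2 + 7*M_3D3 - 15*M_1D2) / 15   # a/(m1 m2), Eq. (18)
V_LS = (14*M_3D3 - 5*M_3D2 - 9*M_3D1) / 60             # b/(m1 m2), Eq. (19)
V_T  = 7 * (5*M_3D2 - 2*M_3D3 - 3*M_3D1) / 30          # c/(m1 m2), Eq. (20)

print(f"<V_SS>   = {V_SS:+.1f} MeV")
print(f"<V_LS^+> = {V_LS:+.1f} MeV")
print(f"<V_T>    = {V_T:+.1f} MeV")
```

This reproduces the values quoted in Eqs. (32)-(34).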
One may now estimate the masses of the isoscalar mesons of the four nonets
assuming that they are pure $s\bar{s}$ states. Applying (12)-(15) with $m_1=m_
2=s,$ we find
\beq
\eta _2\simeq 1860\;{\rm MeV},\;\;\;\phi \approx \phi _3\simeq 1870\;{\rm MeV},
\;\;\;\phi _2\simeq 1910\;{\rm MeV}.
\eeq
The value 1870 is within 1\% of the physical value of the $\phi _3$ mass,
$1854\pm 7$ MeV \cite{pdg}. There exists an experimental candidate for the
$\eta _2(1860),$ but it was omitted from the recent Meson Summary Table as it
``needs confirmation''. This state, listed in PDG as the $\eta _2(1870)$
\cite{pdg}, has been seen by the Crystal Ball collaboration in the final state
$\eta \pi ^0\pi ^0$ of a $\gamma \gamma $ reaction as a resonant structure
with mass $1881\pm 32\pm 40$ MeV and width $221\pm 92\pm 44$ MeV
\cite{Karch}, and as a similar structure in $\gamma \gamma
\rightarrow \eta \pi ^{+}\pi ^{-}$ by the CELLO collaboration, with mass
$1850\pm 50$ MeV and width $\sim 360$ MeV \cite{Feindt}.
The masses of the remaining isoscalar $n\bar{n}$ states of the four nonets may
be calculated by assuming that all four nonets are ideally mixed and using the
Sakurai mass formula for an ideally mixed nonet \cite{Sak},
\beq
M^2(I=1)+M^2(I=0,n\bar{n})+2M^2(I=0,s\bar{s})=4M^2(I=1/2).
\eeq
In this way, one obtains
\beq
\eta _2^{'}\simeq 1670\;{\rm MeV,}\;\;\;\omega \approx \omega _3\simeq 1680\;
{\rm MeV,}\;\;\;\omega _2\simeq 1720\;{\rm MeV.}
\eeq
The value 1680 is within 1\% of the physical value of the $\omega _3$ mass,
$1667\pm 4$ MeV, and within 2\% of that of the $\omega ,$ $1649\pm 24$ MeV
\cite{pdg}.
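The $n\bar{n}$ masses in (36) follow from the Sakurai formula (35) solved for $M(I=0,n\bar{n})$; a short numerical sketch using the nonet masses quoted above:

```python
import math

# (M(I=1), M(I=1/2), M(I=0, s-sbar)) in GeV for the three distinct mass sets.
nonets = {
    "eta_2'":    (1.680, 1.770, 1.860),
    "omega(_3)": (1.690, 1.780, 1.870),
    "omega_2":   (1.730, 1.820, 1.910),
}
masses = {}
for name, (m1, m_half, m_ss) in nonets.items():
    # Eq. (35): M^2(I=1) + M^2(nn) + 2 M^2(ss) = 4 M^2(I=1/2)
    masses[name] = math.sqrt(4*m_half**2 - m1**2 - 2*m_ss**2)
    print(f"{name}: {1000*masses[name]:.0f} MeV")
```

This reproduces the values $\simeq 1670$, $1680$ and $1720$ MeV of Eq. (36).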
\section{Concluding remarks}
We have shown that a nonrelativistic constituent quark model displays a common
mass near-degeneracy of the 1 $^3D_1$ and 1 $^3D_3$ meson nonets in the
isovector and isodoublet channels, and suggests therefore that the $K^\ast (
1680)$ cannot be the $I=1/2$ member of the 1 $^3D_1$ nonet. The mass of the
true member of the latter is estimated to be $\simeq 1780$ MeV. This may
support the assumption of T\"{o}rnqvist that the $K^\ast (1680)$ should resolve
into two separate resonances which are the $I=1/2$ members of the 1 $^3D_1$
and 2 $^3S_1$ nonets. The analysis of the LASS data on the reaction $K^{-}p
\rightarrow \bar{K}^0\pi ^{-}p$ done by Bird \cite{Bird} reveals a resonant
structure with mass $1678\pm 64$ MeV and a huge width of $454\pm 270$ MeV; the
two abovementioned states may be associated with its upper- and lower-mass
parts, respectively.
The conclusion that the $K^\ast (1410)$ does not belong to the 2 $^3S_1$
nonet agrees with the results obtained by one of the authors in ref. \cite{LB}
on the basis of the linear spectrum of a meson nonet discussed in \cite{linear},
which does not support the $K^\ast (1410)$ meson being the member of
the 2 $^3S_1$ nonet. (In \cite{LB}, out of the two states, $K^\ast (1410)$ and
$K^\ast (1680),$ the preference for being the 2 $^3S_1$ $I=1/2$ state was given
to the latter.) If this is actually the case, and the true member of the 2 $^3S_1$
nonet is, e.g., the low-mass part of the broad $K^\ast (1680),$ in agreement
with T\"{o}rnqvist, the question immediately arises as to what the real nature
of this state is, if it does exist. A possible answer to this question may be
the subject of subsequent investigation. \\
We close with briefly summarizing our findings: \\
1. A nonrelativistic constituent quark model displays a common mass
near-degeneracy of the 1 $^3D_1$ and 1 $^3D_3$ meson nonets in the $I=1$ and
$1/2$ channels, and suggests therefore that the $K^\ast (1680)$ cannot be the
$I=1/2$ member of the 1 $^3D_1$ nonet. \\
2. When matched to current experimental data on the meson masses, this model
shows no mixing between the $I=1/2$ states of the 1 $^3D_2$ and $^1D_2$ nonets.
The spin-orbit terms of the Hamiltonian appear to be negligibly small. \\
3. The results suggest a sum rule $$M^2(I=1/2)-M^2(I=1)\approx \;{\rm const}
\simeq 0.31\;{\rm GeV}^2,$$ which holds for all four $D$-wave meson nonets. \\
4. The results also suggest that the $\eta _2(1870),$ which is at present
omitted from the Meson Summary Table, is the $I=0$ $s\bar{s}$ state of the 1
$^1D_2$ nonet. \\
5. The $q\bar{q}$ assignments for the $D$-wave nonets obtained on the basis
of the results of this work are
1 $^1D_2$ $J^{PC}=2^{-+},$ $\;\pi _2(1680),$ $\eta _2^{'}(1670),$ $\eta _2(1860),$ $K_{2B}(1770)$ \\
1 $^3D_1$ $J^{PC}=1^{--},$ $\;\rho (1690),$ $\;\;\omega (1680),$ $\;\phi (1870),$ $\;K^\ast (1780)$ \\
1 $^3D_2$ $J^{PC}=2^{--},$ $\;\rho _2(1730),$ $\omega _2(1720),$ $\phi _2(1910),$ $K_{2A}(1820)$ \\
1 $^3D_3$ $J^{PC}=3^{--},$ $\;\rho _3(1690),$ $\omega _3(1680),$ $\phi _3(1870),$ $\;K_3^\ast (1780)$
\section*{Acknowledgments}
Correspondence of one of the authors (L.B.) with L.P. Horwitz during the
preparation of this work is gratefully acknowledged.
\bigskip
\bigskip
\section{Introduction}
In \cite{B13} Example 1, A. Borisov devised a polynomial map called the \textit{additive trap} $F_{at}:\mathbb{A}_\mathbb{Z}^2\to \mathbb{A}_\mathbb{Z}^2$ by defining $F_{at}(x,y)=(x^2y,x^2y+xy^2)$. This polynomial map satisfies the following properties:
\begin{enumerate}[label=(\alph*)]
\item $F_{at}$ and its reductions modulo $p$ are dominant for all primes $p$.
\item The only fixed point of $F_{at}$ and any reduction of it modulo $p$ for all primes $p$ is (0,0).
\item $F_{at}^{(p)}(x,y)\equiv (0,0)$ (mod $p$) for every $(x,y)\in \mathbb{A}^2_{\mathbb{F}_p}$ and for all primes $p$, where $F_{at}^{(p)}$ is the $p$-th iteration of $F_{at}$.
\end{enumerate}
Note that all points $(x,y)\in\mathbb{A}_k^2$ with either $x=0$ or $y=0$ are taken to (0,0) by $F_{at}$. Let $p$ be any prime and $x\in \mathbb{F}_p^*$. Then for any $y\in\mathbb{F}_p$, we get $$\frac{x^2y+xy^2}{x^2y}=\frac{y}{x}+1.$$ So after at most $p-1$ iterations the second coordinate becomes 0 and thus applying $F_{at}$ once more we reach (0,0). Since $p$ is arbitrary, we get (c). For the proofs of (a) and (b) and more details the reader is referred to \cite{B13}. \\
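The argument above is easy to check by brute force; the following Python sketch iterates $F_{at}$ on every point of $\mathbb{A}^2_{\mathbb{F}_p}$ for a few small primes and confirms property (c):

```python
def F_at(x, y, p):
    """The additive trap F(x,y) = (x^2 y, x^2 y + x y^2) reduced modulo p."""
    return (x * x * y % p, (x * x * y + x * y * y) % p)

for p in [2, 3, 5, 7, 11]:
    for x in range(p):
        for y in range(p):
            u, v = x, y
            for _ in range(p):          # p iterations suffice by property (c)
                u, v = F_at(u, v, p)
            assert (u, v) == (0, 0)
print("F_at^(p)(x, y) = (0, 0) mod p for all points, p = 2, 3, 5, 7, 11")
```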
Upon further analysis of the discussion above we notice that the $p$-th iteration of $F_{at}$ modulo $p$ is the zero map, which follows from the fact that the polynomial $u(x)=x+1$ has the following property: for every $n\in \mathbb{N}$, $u^{(n)}(x)=x+n$, so that, in particular, for every prime $p$, $u^{(p-1)}(1)=p\equiv 0$ (mod $p$). In fact, we can define the following: suppose that $u:=u(x)\in\mathbb{Z}[x]$ is of degree $d$, $r\in\mathbb{Z}$ and $A$ is a finite subset of the set of prime numbers. If, for every prime $p$ not contained in $A$, we have some $m_p\in\mathbb{N}$ such that $u^{(m_p)}(r)\equiv 0$ (mod $p$), then we will say that \textit{$u$ is weakly locally nilpotent at $r$ outside $A$}. The set of all weakly locally nilpotent polynomials at $r$ of degree $d$ will be denoted by $L_{r,A}^d$ (see other definitions in Section 2 below) and $L_{r,A}$ is the union of all such $L_{r,A}^d$, where the union is taken over $d\in\mathbb{N}$. When $A=\emptyset$, we drop the terms ``weakly" and ``outside". If $u$ is such that $u^{(n)}(r)=0,$ for some $n\in\mathbb{N}$, then we will say that $u$ is \textit{nilpotent at $r$} and the \textit{nilpotency index} is the least of all such $n$'s. We denote the set of all nilpotent polynomials at $r$ of degree $d$ and nilpotency index $i$ by $N_{r,i}^d$ and $N_r$ is the union of all such $N_{r,i}^d$, where the union is taken over $i,d\in\mathbb{N}$. Thus, for example, $u(x)=x+1\in L_{1,\emptyset}^1$ and $u(x)=x-1\in N_{1,1}^1$. Ideally, we would like to classify all weakly locally nilpotent polynomials of all possible degrees.
The paper has three main results:
\begin{enumerate}[label=(\arabic*)]
\item complete classification of all polynomials in $L_{1,\emptyset}$ and $L_{-1,\emptyset}$. This can be found in \textbf{Theorem 1} of Section 4.
\item complete classification of all polynomials in $L_{1,A}^1$ and $L_{-1,A}^1$ for a given finite subset $A$ of the set of prime numbers. This can be found in \textbf{Theorem 4} of Section 5. To prove this we needed the result of C. Corrales-Rodrig\'a\~nez and R. Schoof (see Theorem 1 \cite{CS97}).
\item complete classification of all polynomials in $S_r^1$ and $S_{-r}^1$ for $r\in \mathbb{Z}\setminus\{0,\pm 1\}$. This can be found in \textbf{Theorem 6} of Section 5. Here also we needed the result of C. Corrales-Rodrig\'a\~nez and R. Schoof.
\end{enumerate}
Note that \textit{Theorem }1 has been proven using only tools from elementary number theory. But as we move to $r\in \mathbb{Z}\setminus\{\pm 1\}$, classifying these polynomials requires stronger machinery. Even though many of these tools seem to work in general, we were only able to carry out the argument for degree 1 polynomials when $r\in \mathbb{Z}\setminus\{\pm 1\}$. The main tools that we have used here are (see Section 3 for details)
\begin{enumerate}[label=(\arabic*)]
\item \textit{Fact }1,
\item \textit{Lemma }1, which is a consequence of the aforementioned theorem of C. Corrales-Rodrig\'a\~nez and R. Schoof, and
\item the reduction of polynomials.
\end{enumerate}
This paper has 5 sections in total. Section 1 is the introduction. In Sections 2 and 3 we formalize the definitions and introduce the main tools, respectively. Section 4 contains two theorems, the first of which is the first main result, which classifies $L_{1,\emptyset}$ and $L_{-1,\emptyset}$, and the second of which classifies $N_0$. The last section has three subsections. In the first subsection we state and prove two theorems, the first of which classifies $L_{0,\emptyset}^1$ and the second of which is the second main result of this paper. In the second subsection we treat the case when $r$ is a prime, followed by a few examples to illustrate \textit{Theorem }5, and in the third subsection we state and prove the final (main) result of this paper.
We list below a couple of conjectures which showed up during our computations; it is quite likely that they require some strong results in algebraic number theory or class field theory in order to be understood better:
\begin{itemize}
\item (\textbf{Conjecture 1.}) Let $r\in\mathbb{Z}\setminus \{\pm 1\}$. Then $S_r^1=S_r$, where $S_r$ is the set of all locally nilpotent, non-nilpotent polynomials at $r$ of all possible degrees and $S_r^1$ is the set of all linear locally nilpotent, non-nilpotent polynomials at $r$.
\item (\textbf{Conjecture 2.}) Let $r\in \mathbb{Z}$. Then the set of all weakly locally nilpotent, non-nilpotent polynomials of all possible degrees outside some finite subset $A$ of the set of prime numbers are actually just the linear polynomials.
\end{itemize}
The reader should note that although extensive research has been done to understand polynomial iterations modulo primes (see \cite{RWKO84},\cite{RWKO85},\cite{IS09},\cite{J07} for example), those works mostly deal with the density of polynomials which do not have such properties. But here we are trying to classify all such polynomials and hence those aforementioned results cannot be directly applied to this paper. Any comments and/or suggestions from the reader on how to answer (or partially answer) \textit{Conjectures 1 and 2} would be greatly appreciated.
\newpage
\section{Terminology and Definitions}
We will start by formally defining the polynomials mentioned in the introduction and fixing some basic terminologies that we wish to use throughout this paper. Let $\mathcal{P}$ be the set of all primes in $\mathbb{Z}$. For a finite subset $A$ of $\mathcal{P}$ and for $a\in \mathbb{Z}$ we define
$$\mathcal{P}_A:=\mathcal{P}\setminus A\;\;\textup{and}\;\; P_A(a):=\{p\in \mathcal{P}_A~|~p \text{ divides }a\}\;\; \textup{and}\;\; P(a):=P_{\emptyset}(a).$$
So $P(a)$ is the set of all primes that divides $a$.
For $u=u(x)\in\mathbb{Z}[x]$, we define the polynomial $u^{(1)}(x):=u(x)$ and $u^{(n+1)}(x):=u(u^{(n)}(x))$, $n\in\mathbb{N}$. Having fixed $r$ in $\mathbb{Z}$ and $A$ (as above) with $d\in\mathbb{N}\cup \{0\}$ a degree and $i\in\mathbb{N}$, an index, we define the following:
\begin{enumerate}[label=(\arabic*)]
\item We will say that $u(x)$ is a \textbf{\textit{weakly locally nilpotent polynomial}} at $r$ outside $A$ if for each $p\in \mathcal{P}_A$, there exists $m\in\mathbb{N}$ (possibly depending on $p$) such that $u^{(m)}(r)\equiv 0$ (mod $p$). For each $p\in \mathcal{P}_A$, we will denote by $m_p$ the least of all such $m$'s. We fix the following notation for weakly locally nilpotent polynomials at $r$ outside $A$:\\
$L_{r,A}^d:=\{u=u(x)\in\mathbb{Z}[x]~|~u\textup{ of degree } d\textup{ is weakly locally nilpotent at }r$\\
$\textup{ outside }A\},$\\
$L_{r,A}:=\sqcup_{d=0}^\infty L_{r,A}^d~$.
\item If $A=\emptyset$ in a), then we will just drop the terms ``weakly" and ``outside $A$".
\item We will say that $u(x)\neq 0$ is a \textbf{\textit{nilpotent polynomial}} at $r$ if $\exists~ n\in\mathbb{N}$ such that $u^{(n)}(r)=0$. We will call the smallest of all such $n$'s the \textbf{\textit{nilpotency index/index of nilpotency}} of $u(x)$ at $r$. If $u^{(n)}(r)\neq 0$ for all $n\in\mathbb{N}$, we will say that $u$ is \textbf{\textit{non-nilpotent}} at $r$. By convention, the zero polynomial has index of nilpotency 1 at $r$. We fix the following notation for nilpotent polynomials at $r$:\\ $N_{r,i}^d:=\{u\in\mathbb{Z}[x]~|~u\textup{ is nilpotent at }r \textup{ of nilpotency index }i\textup{ and degree }d\},$\\
$N_{r,i}:=\sqcup_{d=0}^\infty N_{r,i}^d~,$\\
$N_r:=\sqcup_{i=1}^\infty N_{r,i}~.$
\item The rest of the notation that we will be using are as follows :\\
$S_r^d:=L_{r,\emptyset}^d\setminus N_r~\textup{and}$\\
$S_r:=\sqcup_{d=1}^\infty S_r^d$.\\
For integers $a,b,c\in\mathbb{Z} \textup{ with }c\neq 0$ we will write $a\equiv_c b$ to mean $a\equiv b$ (mod $c$).
\end{enumerate}
\begin{remark}
It should be noted at this stage that $N_r\subset L_{r,\emptyset}$. But the reverse inclusion never holds; in other words for each given $r\in\mathbb{Z}$, $S_r$ is non-empty (see \textit{Example }(c) below).
\end{remark}
\subsection{Some examples}
\begin{enumerate}[label=(\alph*)]
\item Let $r\in\mathbb{Z}$. For each $q(x)\in\mathbb{Z}[x]\setminus \{0\}$, $(x-r)q(x)\in N_{r,1}$.
\item If $u(x)=-2x-4$, then $u(-1)=-2,~u(-2)=0$. So $u\in N_{-1,2}^1$. If $r\in\mathbb{Z}\setminus\{-1\}$, $u_r(x):=-(r+1)x+(r+1)^2\in N_{r,2}^1$ and if $r\in\mathbb{Z}\setminus\{0\}$, $u_r(x):=-2x+4r\in N_{r,2}^1$. Also if $r\in\mathbb{N}, u(x)=x-1\in N_{r,r}^1$.
\item \textit{This example shows the existence of non-nilpotent, locally nilpotent polynomials at 1.} Let $u(x)=x+1$. Then by induction we see that $u^{(n)}(1)=n+1$, for every $n\in\mathbb{N}$ and hence $u\notin N_{1}$. For each $p\in \mathcal{P}$, $u^{(p-1)}(1)=p\equiv_p 0$. Thus $u(x)\in S_1^1$. In \textit{Corollary }1 we will see that $S_1=\{x+1\}$.
\item Let $u(x)=-2x^2+7x-3$. Then $u(1)=2,~u(2)=3$ and $u(3)=0$. So, $u(x)\in N_{1,3}^2$. From this and \textit{Fact }1 (stated and proved below) it follows that $v(x):=2x^2+7x+3\in N_{-1,3}^2$.
\item For every $a\in\mathbb{Z}\setminus\{0\}$, let $u_a=u_a(x):=x+a$. By induction, we get $u_a^{(n)}(0)=na$. So it is clear that $u_a\notin N_0$. For each prime $p$, $u_a^{(p)}(0)=pa\equiv_p 0$. Thus $u_a\in S_0^1$.
\item Let $u(x)=4x-2$. Then $u(1)=2$ and $u(2)=6\equiv_5 1$. This means that $u^{(n)}(1)$ is either 1 or 2 \textit{modulo} 5, for every $n\in\mathbb{N}$. This shows that $u(x)\notin L_{1,A}$, for every finite subset $A\subset \mathcal{P}_{\{5\}}$.
\item Let $u(x)$ be as in \textit{Example }(f). Then by induction we have $u^{(n)}(0)=\frac{2}{3}(1-4^n)$,
which cannot be zero for any $n\in \mathbb{N}$, and so the above polynomial is not contained in $N_0$. Note that $m_2=1, ~m_3=3$. For every prime $p\in \mathcal{P}\setminus\{2,3\}$ we have that $u^{(p-1)}(0)\equiv_p 0$ by \textit{Fermat's little theorem} and so $u(x)\in S_0^1$.
\item The polynomial $u(x)=-x^3+9x^2-25x+25\in N_{2,4}^3$.
\end{enumerate}
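Several of the examples above can be verified mechanically; the helper `first_zero_iterate` below is a hypothetical brute-force check written for this illustration, not part of the text:

```python
def first_zero_iterate(u, r, max_iter=50):
    """Return the least n with u^(n)(r) = 0, or None if none is found."""
    v = r
    for n in range(1, max_iter + 1):
        v = u(v)
        if v == 0:
            return n
    return None

# Example (b): u(x) = -2x - 4 has nilpotency index 2 at r = -1.
assert first_zero_iterate(lambda x: -2*x - 4, -1) == 2
# Example (d): u(x) = -2x^2 + 7x - 3 has nilpotency index 3 at r = 1.
assert first_zero_iterate(lambda x: -2*x*x + 7*x - 3, 1) == 3
# Example (h): u(x) = -x^3 + 9x^2 - 25x + 25 has nilpotency index 4 at r = 2.
assert first_zero_iterate(lambda x: -x**3 + 9*x**2 - 25*x + 25, 2) == 4
# Example (c): u(x) = x + 1 satisfies u^(p-1)(1) = p = 0 (mod p).
for p in [2, 3, 5, 7, 11, 13]:
    v = 1
    for _ in range(p - 1):
        v = (v + 1) % p
    assert v == 0
print("examples (b), (c), (d), (h) verified")
```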
\begin{remark}
Computation of polynomial iterations is very complicated. In general, all we can say is that for a degree $d$ polynomial $u(x)$, $u^{(n)}(x)$ is a polynomial of degree $d^n$. But this fact does not help when it comes to determining whether a polynomial is nilpotent/locally nilpotent at some $r$. But the linear case has a nice and easy to understand iteration formula. Let $u(x)=ax+b$ be a linear polynomial, i.e., $a\in\mathbb{Z}\setminus\{0\}$. Then
by induction it follows that for every $n\geq 1$, $$u^{(n)}(x)=a^nx+b\left(\sum\limits_{i=0}^{n-1}a^i\right),~~n\in\mathbb{N}.$$ So, $u^{(n)}(r)=a^nr+b\left(\sum\limits_{i=0}^{n-1}a^i\right)$, for each $n\in\mathbb{N}$. Throughout this paper we will refer to
this formula as the \textit{\textbf{iteration formula.}}
\end{remark}
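The iteration formula is easy to confirm against direct iteration; a small randomized sketch (the coefficient ranges are arbitrary choices for this test):

```python
import random

random.seed(1)
for _ in range(200):
    a = random.choice([i for i in range(-5, 6) if i != 0])   # keep u linear
    b = random.randint(-5, 5)
    r = random.randint(-5, 5)
    v = r
    for n in range(1, 8):
        v = a * v + b                                        # direct iteration
        closed = a**n * r + b * sum(a**i for i in range(n))  # iteration formula
        assert v == closed
print("iteration formula verified on 200 random linear polynomials")
```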
\section{The main tools}
\textit{In this section we will develop some necessary tools. We start with the following fact, which indicates that it is enough to study the locally nilpotent polynomials at non-negative $r$'s. In fact, it shows that there is a one-to-one correspondence between $S_r$ and $S_{-r}$.}
\begin{fact}
Let $u(x)=a_dx^d+\cdots+a_0\in\mathbb{Z}[x]\setminus\{0\}$ be a polynomial of degree $d$ and let $r\in\mathbb{Z}\setminus\{0\}$. Define $v(x):=-u(-x)$. Then $u(x)\in L_{r,\emptyset}^d \iff v(x)\in L_{-r,\emptyset}^d$. Similarly $u(x)\in N_{r,n}^d \iff v(x)\in N_{-r,n}^d$ and $u(x)\in S_r\iff v(x)\in S_{-r}$.
\end{fact}
\textit{Proof.} Since $v(-x)=-u(x)$, by induction it follows that $v^{(n)}(-r)=-u^{(n)}(r)$, from which the fact follows.$\hfill \blacksquare${}
\vspace{3mm}
Before moving on with the other tools it is imperative that we formally introduce the result of C. Corrales-Rodrig\'a\~nez and R. Schoof (\textit{Theorem }1 \cite{CS97}): Let $K$ be a number field and $x,y\in K^*$. If for almost all prime ideals $\mathfrak{p}$ of $K$ we have $$\{n\in\mathbb{N}~|~y^n\equiv 1~(\textup{mod }\mathfrak{p})\}\supseteq \{n\in\mathbb{N}~|~x^n\equiv 1~(\textup{mod }\mathfrak{p})\},$$ then $\exists~ m\in\mathbb{Z}$ such that $y=x^m$.
Now we will state and prove the \textit{Lemma }1 which was mentioned in the introduction and we will also explain why it is one of the main tools to understanding linear locally nilpotent polynomials.
\begin{lemma}
Let $\alpha,\beta,\gamma\in\mathbb{Z}\setminus \{0\}$ be such that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a non-negative power of $\alpha$. Then $\mathcal{P}\setminus \cup_{n\in\mathbb{N}}P(\gamma \alpha^n-\beta)$ is an infinite set.
\end{lemma}
\textit{Proof.} Suppose, if possible, that $\mathcal{P}\setminus \cup_{n\in\mathbb{N}}P(\gamma \alpha^n-\beta)$ is a finite set. This means that the set $\cup_{n\in\mathbb{N}}P(\gamma \alpha^n-\beta)$ contains all but finitely many primes. Then $\cup_{n\in\mathbb{N}}P(\gamma \alpha^n-\beta)\setminus P(\gamma)$ also contains all but finitely many primes. So, for almost all $p\in \mathcal{P}_{P(\gamma)}, ~\alpha^{n_p}\equiv_p \beta\gamma^{-1}$ for some $n_p\in \mathbb{N}$ (the choice of $n_p$ possibly depends on $p$). So, if $k\in\mathbb{N}$ is such that $\alpha^k\equiv_p 1$, then $(\beta \gamma^{-1})^k\equiv_p (\alpha^{n_p})^k\equiv_p 1$. Taking $x=\alpha$ and $y=\beta\gamma^{-1}$ in Theorem 1 \cite{CS97}, we arrive at a contradiction! Thus $\mathcal{P}\setminus \cup_{n\in\mathbb{N}}P(\gamma \alpha^n-\beta)$ is an infinite set. $\hfill \blacksquare${}
\begin{remark}
A natural question for the reader to ask at this point is how does the above lemma relate to the polynomials that we have defined above. We can answer this question now. Let $r\in \mathbb{Z}\setminus\{0\}$ and $u=u(x)=ax+b\in L_{r,\emptyset}^1$ with $a\neq \pm 1$. By the \textit{iteration formula}, we have
$$u^{(n)}(r)=\frac{a^n(r-ar-b)+b}{1-a}.$$ Since $u\in L_{r,\emptyset}^1$, we can say that $\mathcal{P}\setminus \cup_{n\in\mathbb{N}} P(\gamma \alpha^n-\beta)$ is a finite set (in fact it is an empty set), where $\alpha=a,\beta=-b$ and $\gamma=r-ar-b$. Then it follows from the above \textit{lemma} that either $\frac{\beta}{\gamma}$ or $\frac{\gamma}{\beta}$ is a power of $\alpha$, i.e., $b=-a^m(r-ar-b),$ for some $m\in\mathbb{Z}$. Moreover, if $m\in\mathbb{N}$ we can say that $u\in N_r$. So, to summarize if $u$ is in $S_r^1$ (with $a\notin\{ \pm 1\}$), then $\exists~m\in\mathbb{N}\cup \{0\}$ such that $a^mb=b+ar-r$.
\end{remark}
\textbf{Reduction of polynomials.} Let $r$ be a non-zero integer and $u(x)\in\mathbb{Z}[x]$ such that $r~|~u(0)$. Define $v(x):=\frac{1}{r}u(rx)$. Note that $v$ is indeed a polynomial over $\mathbb{Z}$ of the same degree as $u(x)$ and $rv(1)=u(r)$. Then it follows that $u(x)$ is weakly locally nilpotent at $r$ outside $A$ iff $v(x)$ is weakly locally nilpotent at 1 outside $A\cup P(r)$ and also $u(x)$ is nilpotent at $r$ iff $v(x)$ is nilpotent at 1. Thus we can reduce any polynomial $u(x)$ in $L_{r,\emptyset}^d$ with $r~|~u(0)$ to the polynomial $v(x)$ in $L_{1,P(r)}^d$. We will call this the \textit{reduction of $u(x)$ to $v(x)$}.
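On coefficient lists the reduction is a one-liner; the helper names `reduce_poly` and `evaluate` below are hypothetical, chosen for this illustration:

```python
def reduce_poly(coeffs, r):
    """Given u(x) = sum(coeffs[i] x^i) with r | coeffs[0], return v(x) = u(r x)/r."""
    assert r != 0 and coeffs[0] % r == 0
    return [coeffs[0] // r] + [c * r**(i - 1) for i, c in enumerate(coeffs) if i > 0]

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

# u(x) = 4x - 2 reduced at r = 2 gives v(x) = u(2x)/2 = 4x - 1,
# and r v(1) = u(r) as claimed in the text.
u = [-2, 4]
v = reduce_poly(u, 2)
assert v == [-1, 4]
assert 2 * evaluate(v, 1) == evaluate(u, 2)
```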
\section{Arbitrary $d$ and $r\in\{0,1,-1\}$}
We will now formally state and prove our first main result.
\begin{theorem}[Main result 1]
The following is the list of all polynomials in $L_{1,\emptyset}$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $ (x-1)p(x)$ with $p(x)\in\mathbb{Z}[x]\setminus\{0\}$.
\item $ -2x+4+p(x)(x-1)(x-2)$, with $p(x)\in\mathbb{Z}[x]$.
\item $ -2x^2+7x-3+p(x)(x-1)(x-2)(x-3)$, with $p(x)\in\mathbb{Z}[x]$.
\item $ x+1$.
\end{enumerate}
\end{theorem}
\textit{Proof.} Let $u=u(x):=a_0+a_1x+\cdots+a_dx^d\in L_{1,\emptyset}^d$. We will consider the following three cases:
\subsubsection*{Case 1. $u(1)-1\not \in\{\pm 1\}$.}
Then $P(u(1)-1)\neq\emptyset$ and for each $p\in P(u(1)-1)$ we have $u(1)\equiv_p 1$, i.e., $m_p$ does not exist, a contradiction to $u\in L_{1,\emptyset}^d$!
\subsubsection*{Case 2. $u(1)-1=-1$.}
These are just the polynomials in $N_{1,1}^d$.
\subsubsection*{Case 3. $u(1)-1=1$.}
This means $u(1)=2$. Now we will explore the possibilities for $u(2)$. If $u(2)=0$, then $u(x)=-2x+4+p(x)(x-1)(x-2)$, for a suitable $p(x)\in\mathbb{Z}[x]$. So suppose that $u(2)\neq 0$. Of course $u(2)\notin\{1,2\}$ as otherwise we get $u^{(n)}(1)=$ 1 or 2, for every $n\in\mathbb{N}$ and hence it cannot be in $L_{1,\emptyset}^d$. Thus $u(2)$ is either $\le -1$ or $\ge 3$, i.e., $|u(2)-1|\ge 2$. In other words, $P(u(2)-1)\neq \emptyset$. Let $p\in P(u(2)-1)$. Then $u(2)\equiv_p 1$. As $u$ is locally nilpotent at $1$, $p$ must be 2 and so $u(2)-1$ must be of the form $\pm 2^t$ for some $t\in\mathbb{N}$. To arrive at a contradiction, suppose that $u(2)\neq 3$. That means $u(2)$ is either $\geq 4$ or $\leq -1$. Let us consider the possibilities one by one.
\textbf{Possibility 1}. $u(2)\ge 4$. We know that $u(2)-1=2^t$ and so $u(2)$ is odd. So, in fact, $u(2)\ge 5$. Then there exists $p\in \mathcal{P}_{\{2\}}$ such that $p\in P(u(2)-2)$. Hence $u^{(n)}(2)\equiv_p 2$, for every $n\in\mathbb{N}$, a contradiction to the fact that $u\in L_{1,\emptyset}^d$!
\textbf{Possibility 2}. $u(2)\le -1$. We know that $u(2)-1=-2^t$ and so $u(2)$ is odd, which implies that $u(2)-2$ is odd as well and less than or equal to $-3$. Using the same logic as in Possibility 1 we get a contradiction!
So $u(2)$ must be 3. Next we look at $u(3)$. If $u(3)=0$, then $u(x)=-2x^2+7x-3+p(x)(x-1)(x-2)(x-3)$, for a suitable $p(x)\in\mathbb{Z}[x]$. So suppose that $u(3)\neq 0$. For the same reason as above, $u(3)\notin\{1,2,3\}$. Thus $u(3)$ is either $\le -1$ or $\ge 4$. To arrive at a contradiction, suppose that $u(3)\neq 4$. Then either $u(3)-3\le -4$ or $u(3)-3\geq 2$. In any case, $P(u(3)-3)\neq\emptyset$. Let $p\in P(u(3)-3)$. Then $u(3)\equiv_p 3$ and so $p\in \{2,3\}$. If $p=2$ then $u(1)\equiv_p 0$. Since $3\equiv_p 1$, we must have $u(3)\equiv_p u(1)$ and so $u(3)\equiv_p 3\equiv_p 1\not\equiv_p u(1)$, which is an impossibility! So $p=3$ and $u(3)-3=\pm 3^s$, for some $s\in\mathbb{N}$.
Again, by similar reasoning, $P(u(3)-1)\neq \emptyset$. For each $p\in P(u(3)-1)$, we have $u(3)\equiv_p 1$ which implies $p\in\{2,3\}$. But $p~|~u(3)-1=2\pm 3^s$ and so $p$ cannot be 2 or 3, which is absurd! So $u(3)=4$.
Next we look at $u(4)$. We claim that no further iteration of $u$ at $1$ can be zero and we would like to prove this by showing that $u(n-1)=n$, $\forall ~n\ge 4$ and that would mean $u(x)=x+1$. We want to use \textit{mathematical induction} to prove this claim. Let $u(q-1)=q$, for every $2\le q\le n$, for some $n\ge 4$ and we want to show that $u(n)=n+1$. Since $u(1)=2, ~u(2)=3, ~u(3)=4,~\ldots, u(n-1)=n$, there is a polynomial $p(x)$ such that $u(x)=x+1+p(x)(x-1)(x-2)(x-3)\cdots (x-n+1)$. So $u(n)=n+1+p(n)\cdot (n-1)!\neq 0$, as $n\ge 4$. If $u(n)=i$ for some $i\in\{1,\ldots,n\}$, then the iterations $u^{(m)}(1)\in\{1,\ldots,n\}$, for every $m\in\mathbb{N}$ and that means $m_p$ exists for only finitely many primes $p$. Thus $u$ cannot be locally nilpotent at $1$ and $u(n)\not \in \{0,\ldots,n\}$. This means $u(n)$ is either $\ge n+1$ or $\le -1$. For a contradiction, suppose that $u(n)\neq n+1$. Then, either $u(n)-n\ge 2$ or $u(n)-n\le -(n+1)$. In any case, we get $P(u(n)-n)\neq\emptyset$. For each $p\in P(u(n)-n)$, we have $u(n)\equiv_p n$, which is an impossibility unless $p\le n$. Suppose, if possible, $p<n$. So $n\equiv_p a$ for some $a\in \{0,1,\ldots,p-1\}$. Note that $a$ cannot be zero, as otherwise $u(n)\equiv_p n\equiv_p 0$ and also $u(n-1)=n\equiv_p 0$. This means $p~|~u(0)=a_0$ and so $p~|~u(p)=p+1$, an impossibility! But by the induction hypothesis we have $u(a)=a+1$ and also $a\equiv_p n\equiv_p u(n)\equiv_p u(a)$, i.e., $u(a)\equiv_p a$, which is absurd as this means $m_p$ does not exist! So $p=n$, i.e., $n$ is prime and $u(n)=n\pm n^s$, for some $s\in\mathbb{N}$.
Again, for similar reasoning as above, $P(u(n)-1)\neq \emptyset$. So for every $q\in P(u(n)-1)$, $u(n)\equiv_q 1$ and that implies $q$ is less than or equal to $n$. But if $q=n$, then $n=q~|~u(n)-1=(n-1)\pm n^s$ and so $n~|~-1$, an impossibility ! So, in fact, we have $q\le n-1$. We can choose $b\in\{0,\ldots,q-1\}$ such that $n\equiv_q b+1$. By the induction hypothesis $u(b+1)=b+2$ and also $u(b+1)\equiv_q u(n)\equiv_q 1$. These two relations together imply $b+1\equiv_q 0$, i.e., $q~|~n$. But, since $n$ is a prime, $n=q$, which is a absurd as $q\le n-1$. Thus $u(n)=n+1$.$\hfill \blacksquare${}
\begin{remark}
It follows from \textit{Fact }1 and \textit{Theorem }1 that the following polynomials are in $L_{-1,\emptyset}$.
\begin{enumerate}[label=(\arabic*)]
\item $ (x+1)p(x)$, with $p(x)\in\mathbb{Z}[x]\setminus\{0\}$.
\item $ -2x-4+p(x)(x+1)(x+2)$, with $p(x)\in\mathbb{Z}[x]$.
\item $ 2x^2+7x+3+p(x)(x+1)(x+2)(x+3)$, with $p(x)\in\mathbb{Z}[x]$.
\item $ x-1$.
\end{enumerate}
\end{remark}
\begin{corollary}
The sets $S_1$ and $S_{-1}$ are singleton sets.
\end{corollary}
\textit{Proof.} Let $u(x)\in S_1$. Then by \textit{Theorem }1, $u(x)$ must be $x+1$ as all the other polynomials in the list (i)-(iv) in \textit{Theorem }1 are in $N_1$. Now by \textit{Fact }1, it follows that $S_{-1}=\{x-1\}$. Thus $S_1=S_1^1$ and $S_{-1}=S_{-1}^1$ (see Conjecture 1). $\hfill \blacksquare${}
We end this section with a theorem about possible nilpotency indices for nilpotent polynomials at 0.
\begin{theorem}
The only nilpotent polynomials at 0 are the polynomials with nilpotency indices 1 and 2, i.e., $N_{0}=N_{0,1}\sqcup N_{0,2}$.
\end{theorem}
\textit{Proof.} Let $u(x)\in N_{0,m}$, for some $m\in\mathbb{N}$. If $u(0)=0$, then $m=1$ and $u(x)=xp(x),$ for some $p(x)\in\mathbb{Z}[x]$. So suppose that $u(0)\neq 0$. Define $$u_0:=u(0),~u_n:=u^{(n+1)}(0)-u^{(n)}(0),~n\in\mathbb{N}.$$ Then $u_{n+1}=u^{(n+2)}(0)-u^{(n+1)}(0)= u(u^{(n+1)}(0))-u(u^{(n)}(0)).$ Since $a-b$ divides $u(a)-u(b)$ for all integers $a$ and $b$, this means that $u_n$ divides $u_{n+1},\forall~n\in\mathbb{Z}_{\ge 0}$. We also have $u^{(m)}(0)=0$ and so $u_m=u^{(m+1)}(0)-u^{(m)}(0)=u^{(m+1)}(0)=u_0$.
But $u^{(2)}(0)-u(0)=u_1~|~u_m=u_0$. This means $u_1=\pm u_0$. Similarly, we can show that $u_n=\pm u_0,\forall n\in\mathbb{N}.$
Note that $u_0+\cdots+u_{m-1}=u^{(m)}(0)=0$. This means $m$ must be even and half these integers are positive and the other half are negative (since $|u_n|=|u_0|,\forall~n\in\mathbb{N}$). So there exists $k\in\{1,\ldots,m-1\}$ such that $u_{k-1}=-u_k,$ i.e., $u^{(k)}(0)-u^{(k-1)}(0)=u^{(k)}(0)-u^{(k+1)}(0),$ i.e., $u^{(k+1)}(0)=u^{(k-1)}(0)$. Thus $u^{(n+2)}(0)=u^{(n)}(0),\forall ~n\ge k-1$ and so in particular, we have $0=u^{(m)}(0)=u^{(m+2)}(0)=u^{(2)}(0)$. Hence, $m=2$ and $u(x)=(x-\alpha)p(x)$, with $\alpha\in\mathbb{Z}\setminus\{0\}$ and $p(x)\in\mathbb{Z}[x]$ with $p(0)=-1$. $\hfill \blacksquare${}
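A hedged brute-force companion to this theorem (an experiment, not a proof): among all integer polynomials $a_0+a_1x+a_2x^2$ with coefficients in $[-3,3]$, every one whose orbit at $0$ returns to $0$ does so with nilpotency index $1$ or $2$, exactly as the theorem predicts.

```python
# Brute-force check of Theorem 3 on small quadratics: any polynomial that is
# nilpotent at 0 has nilpotency index 1 or 2.
from itertools import product

def nilpotency_index_at_zero(coeffs, max_iter=50):
    # least n with u^{(n)}(0) = 0 for u(x) = sum(c * x**i), or None
    x = 0
    for n in range(1, max_iter + 1):
        x = sum(c * x ** i for i, c in enumerate(coeffs))
        if x == 0:
            return n
        if abs(x) > 10 ** 9:   # orbit has escaped to infinity
            return None
    return None

indices = set()
for coeffs in product(range(-3, 4), repeat=3):
    if any(coeffs):            # skip the zero polynomial
        m = nilpotency_index_at_zero(coeffs)
        if m is not None:
            indices.add(m)
print(indices)                 # expected {1, 2} by Theorem 3
```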
\section{Linear case $d=1$}
\subsection{The case when $r=\{0,-1,1\}$}
We start with a theorem about $r=0$ case.
\begin{theorem}
The following is the list of all polynomials in $L_{0,\emptyset}^1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $ ax$, with $a\in\mathbb{Z}\setminus\{0\}$.
\item $ \pm x+b$, with $b\in\mathbb{Z}\setminus \{0\}$.
\item $ ax+b$ with $\mathcal{P}\supsetneq P(b)\supseteq P(a)\neq \emptyset$.
\end{enumerate}
\end{theorem}
\textit{Proof.} Let $u=u(x):=ax+b\in L_{0,\emptyset}^1$. Note that if $b=0$, then $u(x)=ax$ and $u(0)=0$, so $ax\in N_{0,1}^1$. Now suppose $b\neq 0$. When $a=1$, we have $u(x)=x+b\in S_0^1$ : in fact, by the \textit{iteration formula}, $u^{(n)}(0)=b(1+\cdots+1)=bn$, which is non-zero for every $n\in\mathbb{N}$, and for each prime $p$, $u^{(p)}(0)=bp\equiv_p 0$. When $a=-1$, $u^{(2)}(0)=0$. So $-x+b\in N_{0,2}^1$.\\
So we can assume that $|a|\geq 2$, i.e., $P(a)$ is a non-empty, finite set. Again, using the \textit{iteration formula}, we get $u^{(n)}(0)=b(1+\cdots+a^{n-1}),~n\in\mathbb{N}$. Suppose, if possible, $p\in P_{P(b)}(a)$. Then $u^{(n)}(0)\equiv_p b$, for every $n\in\mathbb{N}$ and so $u(x)$ cannot be locally nilpotent. This means that $P(b)\supseteq P(a)\neq\emptyset$. If $p\in P(b)$, it can be checked that $m_p=1$.\\
If $p\notin P(b)\cup P(a-1)$,
then $u^{(p-1)}(0)=\frac{b}{a-1}(a^{p-1}-1)\equiv_p 0$.\\
Finally if $p\in P(a-1)$, then $u^{(p)}(0)=b(1+\cdots+a^{p-1})\equiv_p b(1+\cdots+1)\equiv_p 0$. Thus $m_p$ exists for every $p\in\mathcal{P}$. $\hfill \blacksquare${}
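The case analysis above is easy to test numerically. The following hedged sketch (an illustration, with a small helper `m_p` introduced here, not taken from the paper) checks that $u(x)=6x+12$, which satisfies $P(b)=\{2,3\}\supseteq P(a)=\{2,3\}$, has $m_p$ for every prime $p<50$, while $u(x)=6x+3$ already fails at $p=2$ because its orbit mod $2$ is constantly $1$.

```python
# Illustration of the theorem on L^1_{0,\emptyset}: search for m_p by iterating
# u(x) = a*x + b modulo p; a repeated nonzero residue means no m_p exists.
def m_p(a, b, p, r=0):
    # least n >= 1 with u^{(n)}(r) = 0 (mod p), or None
    x, n, seen = r % p, 0, set()
    while True:
        x, n = (a * x + b) % p, n + 1
        if x == 0:
            return n
        if x in seen:          # orbit mod p is now periodic and avoids 0
            return None
        seen.add(x)

primes = [p for p in range(2, 50) if all(p % d for d in range(2, p))]
assert all(m_p(6, 12, p) is not None for p in primes)  # P(b) >= P(a) holds
assert m_p(6, 3, 2) is None                            # 2 in P_{P(b)}(a)
assert all(m_p(1, 5, p) is not None for p in primes)   # u(x) = x + 5, case (2)
```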
For the rest of the paper we will only consider the cases when $r\neq 0$. In order to fully use the \textit{reduction of polynomials} we state and prove the following theorem :
\begin{theorem}[Main result 2]
Let $q_1,\ldots,q_k$ be $k$ distinct primes and $A=\{q_1,\ldots,q_k\}$. Then the following is the list of all the polynomials in $L_{1,A}^1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $ x\pm q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$.
\item $ \alpha(x-1)$, $\alpha\in\mathbb{Z}\setminus\{0\}$.
\item $ \pm q_1^{s_1}\cdots q_k^{s_k} x+1$, where $s_i\in\mathbb{N}\cup\{0\}$ such that $\sum s_i\ge 1$.
\item $ -2x-1$ (only when $2\in A$).
\item $ -2x+4$.
\end{enumerate}
\end{theorem}
\textit{Proof.} Let $u=u(x):=ax+b\in L_{1,A}^1$. It is clear that we can assume $b\neq 0$. By the \textit{iteration formula}, $u^{(n)}(1)=a^n+b(1+\cdots+a^{n-1})$, for every $n\in\mathbb{N}$.
\begin{align*}
\text{Note that if } a=1, ~u^{(m_p)}(1)=& 1+bm_p\equiv_p 0, \text{ for every prime }p\notin A,\\
\implies & bm_p\equiv_p -1, \text{ for every prime }p\notin A,\\
\implies & b\text{ is invertible in }\mathbb{F}_p, \text{ for every prime }p\notin A,\\
\implies & b=\pm q_1^{s_1}\cdots q_k^{s_k}, \text{ for some }s_i's \textup{ in }\mathbb{N}\cup \{0\}.
\end{align*}
If $a=-1$, $u(x)=-x+b$ and $u^{(2)}(x)=x$. So $u$ cannot be in $L_{1,A}^1$ unless $b=1$ and in that case it is in fact in $N_{1,1}^1$. Thus we can assume that $|a|\ge 2$. Similar to \textit{Theorem }1 we can break down these polynomials into the following three cases :
\subsubsection*{Case 1. $u(1)-1\notin \{\pm 1\}$.}
This means that $P(u(1)-1)\neq \emptyset$. So $a+b=1 \pm q_1^{s_1}\cdots q_k^{s_k}$, i.e., $b=1-a \pm q_1^{s_1}\cdots q_k^{s_k}$, for some $s_i\in\mathbb{N}\cup\{0\}$ with $\sum s_i\ge 1$. Then by the \textit{iteration formula}, we have $$u^{(n)}(1)=\frac{b\pm a^n(1-a-b)}{1-a}$$ and it follows from \textit{remark} 3 that $\exists ~m\in\mathbb{Z}$ such that $b=\pm a^m(1-a-b)$. If $m=0$, then $b=\pm(1-a-b)$, i.e., $a+2b=1$ or $a=1$. Since $|a|\ge 2$, we deduce that $a\neq 1$ and so $a+2b=1$. Also we have $a+b=1\pm q_1^{s_1}\cdots q_k^{s_k}$. Solving $a$ and $b$ from these two equations we get $a=1\pm 2q_1^{s_1}\cdots q_k^{s_k},~b=\mp q_1^{s_1}\cdots q_k^{s_k}$. So, $u(x)=(1\pm 2q_1^{s_1}\cdots q_k^{s_k})x\mp q_1^{s_1}\cdots q_k^{s_k}$ and so $u^{(n)}(1)=\frac{1+(1\pm 2q_1^{s_1}\cdots q_k^{s_k})^n}{2},~n\in\mathbb{N}$. Letting $\alpha=1\pm 2q_1^{s_1}\cdots q_k^{s_k},\beta=-1$ and $\gamma =1$, it is clear that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a power of $\alpha$. Thus it follows from \textit{Lemma }1 that $(1\pm 2q_1^{s_1}\cdots q_k^{s_k})x\mp q_1^{s_1}\cdots q_k^{s_k}\notin L_{1,A}^1$. So $|m|\ge 1$.
\underline{If $m\in\mathbb{N}$ and $b=a^m(1-a-b)$}, we get $b(1+a^m)=a^m(1-a)$. Since $\gcd(a^m,a^m+1)=1$, we must have $a^m+1~|~1-a$ which is only possible if $m=1$.
\underline{If $m\in\mathbb{N}$ and $b=-a^m(1-a-b)$}, we get $b(1-a^m)=-a^m(1-a)$, i.e., $b(1+\cdots+a^{m-1})=-a^m$. Since $\gcd(1+\cdots+a^{m-1},a^m)=1$, we must have $1+\cdots+a^{m-1}=\pm 1$ which is only possible if $m=1,2$.
\underline{If $m=-n,\text{ with }n\in\mathbb{N}$ and $b=a^m(1-a-b)$}, we get $ba^n=(1-a-b)$, i.e., $b(a^n+1)=1-a$. It follows from above that this is only possible if $n=1$.
\underline{If $m=-n,\text{ with }n\in\mathbb{N}$ and $b=-a^m(1-a-b)$}, we get $ba^n=-(1-a-b)$, i.e., $b(a^n-1)=a-1$. Again using the same logic as above, we conclude that $n=1,2$.
Thus we only need to look at the following four subcases :
\textit{Subcase 1.} $m=-1$.\\
Here we have $ba=\pm(1-a-b)$. First suppose that $ba=1-a-b$, i.e., $b(a+1)=1-a$. This means that $a+1~|~a-1$ and this is only possible if $a=-2$ or $a=-3$. These values generate the polynomials $u(x)=-2x-3$ and $u(x)=-3x-2$, respectively. When $u(x)=-2x-3$, the \textit{iteration formula} gives $$u^{(n)}(1)=2(-2)^n-1.$$ Letting $\alpha=-2,\beta=1$ and $\gamma=2$, it is clear that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a power of $\alpha$. So, by \textit{Lemma }1, $-2x-3\notin L_{1,A}^1$. Similarly we can show that $-3x-2\notin L_{1,A}^1.$
Now suppose $ba=-(1-a-b)$. This gives $b=1$ and hence $a=\pm q_1^{s_1}\cdots q_k^{s_k}$. Thus $u(x)=\pm q_1^{s_1}\cdots q_k^{s_k}x+1$ and it follows from the \textit{iteration formula} that $$u^{(n)}(1)=(\pm q_1^{s_1}\cdots q_k^{s_k})^n+[1+\cdots+(\pm q_1^{s_1}\cdots q_k^{s_k})^{n-1}]=\frac{1-(\pm q_1^{s_1}\cdots q_k^{s_k})^{n+1}}{1-(\pm q_1^{s_1}\cdots q_k^{s_k})}, ~n\in\mathbb{N}.$$ If $p\in P_A(1-(\pm q_1^{s_1}\cdots q_k^{s_k}))$, then $u^{(p)}(1)\equiv_p p\equiv_p 0$. So let $p\notin P_A(1-(\pm q_1^{s_1}\cdots q_k^{s_k}))$. Now, if $2\in A$, then existence of $m_2$ is not a concern and if $2\notin A$, then $2\in P_A(1-(\pm q_1^{s_1}\cdots q_k^{s_k}))$ which was covered above.\\
Finally, if $p\notin \mathcal{P}_{A\cup \{2\}}$, then $u^{(p-2)}(1)\equiv_p 0$, by \textit{Fermat's Little Theorem}.
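The claims of this subcase for the family $\pm q_1^{s_1}\cdots q_k^{s_k}x+1$ can be spot-checked numerically. The hedged sketch below (an illustration, not part of the proof) takes $u(x)=3x+1$ at $r=1$, verifies the closed form $u^{(n)}(1)=\frac{3^{n+1}-1}{2}$, and confirms the Fermat step $u^{(p-2)}(1)\equiv_p 0$ for primes $p\notin\{2,3\}$.

```python
# Spot check of Subcase 1 for u(x) = 3x + 1 at r = 1:
# u^{(n)}(1) = (3^{n+1} - 1)/2 and u^{(p-2)}(1) = 0 (mod p) for p not in {2, 3}.
u = lambda x: 3 * x + 1
x, vals = 1, []
for n in range(1, 60):
    x = u(x)
    vals.append(x)
    assert x == (3 ** (n + 1) - 1) // 2      # iteration formula
for p in [5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]:
    assert vals[p - 3] % p == 0              # vals[k] = u^{(k+1)}(1), so k = p - 3
```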
\textit{Subcase 2.} $m=1$.\\
Here we have $b=\pm a(1-a-b)$. First suppose that $b=a(1-a-b)$, i.e., $b(a+1)=a(1-a)$. The same reasoning as above implies $a+1~|~a-1$, so the only possibilities are $a=-2,~b=6$ or $a=-3,~b=6$. These values produce the polynomials $u(x)=-2x+6$ and $u(x)=-3x+6$, respectively. When $u(x)=-2x+6$, the \textit{iteration formula} gives $$u^{(n)}(1)=-(-2)^n+2.$$ Letting $\alpha=-2,\beta=-2$ and $\gamma=-1$, it is clear that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a power of $\alpha$. So, by \textit{Lemma }1, $-2x+6\notin L_{1,A}^1$. Similarly, we can show that $-3x+6\notin L_{1,A}^1$.
\textit{Subcase 3.} $m=-2$.\\
Here we have $ba^2=\pm (1-a-b)$. First suppose $ba^2=1-a-b$, i.e., $b(a^2+1)=1-a$. This means that $a^2+1~|~a-1$ which is not possible as $|1-a|\le 1+|a|<1+a^2$. Thus $ba^2=-(1-a-b)$, i.e., $b(a+1)=1$, i.e., $b=a+1=\pm 1$. So $u(x)=-2x-1$. It follows from the \textit{iteration formula} that $$u^{(n)}(1)=(-2)^n-[1+\cdots+(-2)^{n-1}]=\frac{(-2)^{n+2}-1}{3},~n\in\mathbb{N}.$$ It is easy to see that $m_2$ does not exist, $m_3=1$ and for all $p\in \mathcal{P}_{\{2,3\}}$, $u^{(p-3)}(1)\equiv_p 0$. So $-2x-1$ is in $L_{1,A}^1$ iff $2\in A$.
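As a hedged numerical companion to this subcase (an illustration, not part of the proof), one can verify for $u(x)=-2x-1$ the closed form $u^{(n)}(1)=\frac{(-2)^{n+2}-1}{3}$, the failure of $m_2$ (the orbit stays odd), the value $m_3=1$, and the claim $u^{(p-3)}(1)\equiv_p 0$ for $p\in\mathcal{P}_{\{2,3\}}$.

```python
# Spot check of Subcase 3 for u(x) = -2x - 1 at r = 1.
u = lambda x: -2 * x - 1
x, vals = 1, []
for n in range(1, 60):
    x = u(x)
    vals.append(x)
    assert 3 * x == (-2) ** (n + 2) - 1       # iteration formula
assert all(v % 2 == 1 for v in vals)          # orbit is always odd: no m_2
assert vals[0] % 3 == 0                       # m_3 = 1
for p in [5, 7, 11, 13, 17, 19, 23, 29]:
    assert vals[p - 4] % p == 0               # u^{(p-3)}(1) = 0 (mod p)
```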
\textit{Subcase 4.} $m=2$.\\
Here we have $b=\pm a^2(1-a-b)$. First suppose that $b=a^2(1-a-b)$, i.e., $b(1+a^2)=a^2(1-a)$. Since $\gcd(1+a^2,a^2)=1$, $1+a^2~|~1-a$ which is impossible (see the above subcase). Now suppose that $b=-a^2(1-a-b)$, i.e., $b(a+1)=-a^2$ which means $a+1=\pm 1$ and $b=\pm a^2$. Since $|a|\ge 2$, this means $a=-2$ and $b=4$. But then $u(1)-1=1\in \{\pm 1\},$ an impossibility in this \textit{case} !
\subsubsection*{Case 2. $u(1)-1=-1$, i.e., $u(1)=0$.}
These are the polynomials in $N_{1,1}^1$.
\subsubsection*{Case 3. $u(1)-1=1$, i.e., $u(1)=2$.}
If $u(2)=0$, then $u(x)=-2x+4$. So suppose that $u(2)\notin\{0,1,2\}$ (note that if $u(2)\in\{1,2\}$, then $u^{(n)}(1)\in\{1,2\}$ for every $n\in\mathbb{N}$, so $m_p$ cannot exist for any prime $p>2$), i.e., $u(2)$ is either $\le -1$ or $\ge 3$, i.e., $|u(2)-1|\ge 2,$ i.e., $P(u(2)-1)\neq \emptyset$. If $u(2)=3$, then $u(x)=x+1\in S_1^1$. So we can further assume that $u(2)\neq 3$. Since $u(1)=2,~b=2-a$ and so $u(x)=ax+(2-a)$. Then by the \textit{iteration formula}, we get
$$u^{(n)}(1)=\frac{2-a-a^n}{1-a},~~n\in\mathbb{N}.$$
Since $u\in L_{1,A}^1$ it follows from \textit{Lemma }1 that $2-a=a^m$, for some $m\in\mathbb{Z}$.
If $m=0$, then $2-a=1$, i.e., $a=3$ and $b=-1$. So $u(x)=3x-1$ and $u^{(n)}(1)=\frac{3^n+1}{2},~n\in\mathbb{N}.$ Letting $\alpha=3,\beta=-1$ and $\gamma=1$, it is clear that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a power of $\alpha$. So, by \textit{Lemma }1, $3x-1\notin L_{1,A}^1$. Also note that if $m=-n$ for some $n\in\mathbb{N}$, then $a^n(2-a)=1$. But this is an impossibility as $|a|\ge 2$. Thus $m\in\mathbb{N}$ and $2=a(1+ a^{m-1})$. Therefore $a=\pm 2$ and $1+a^{m-1}=\pm 1,$ i.e., $a^{m-1}=-2$, i.e., $a=-2$. But $a=-2$ implies that $b=4$ and hence $u(2)=2a+b=0$, which cannot happen as we have already shown that $u(2)\notin\{0,1,2,3\}$. Thus $ax+(2-a)\notin L_{1,A}^1$. $\hfill \blacksquare${}
The following corollary follows directly from the computations in \textit{Theorem }4 :
\begin{corollary}
Let $q_1,\ldots,q_k$ be $k$ distinct primes and $A=\{q_1,\ldots,q_k\}$. Then the following is the list of all polynomials in $L_{1,A}^1\setminus N_1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $ x+ q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$.
\item $ x- q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$ such that $\sum s_i\ge 1$.
\item $ \pm q_1^{s_1}\cdots q_k^{s_k} x+1$, where $s_i\in\mathbb{N}\cup\{0\}$ such that $\sum s_i\ge 1$.
\item $ -2x-1$ (only when $2\in A$).
\end{enumerate}
\end{corollary}
\subsection{The case when $r$ is a prime}
Let us now try to understand $L_{r,\emptyset}^1$ when $r$ is a prime number $q$.
\begin{theorem}
Let $q\in \mathcal{P}$. Then the following is the list of all but finitely many polynomials in $L_{q,\emptyset}^1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $ x\pm q^s$ with $s\in \mathbb{N}$.
\item $ \alpha(x-q)$ with $\alpha\in\mathbb{Z}\setminus\{0\}$.
\item $ \pm q^sx+q$ with $s\in\mathbb{N}$.
\item $ -qx-q$ (only when $q=2$).
\item $ -2x+4q$.
\item $ x-1$.
\item $ x+1$.
\end{enumerate}
The finitely many polynomials that are missing from the above list must be of the form $ax+b$ with either (i) $a~|~q-1$ and $b=q-aq-1$ or (ii) $a~|~q+1$ and $b=q-aq+1$. Note that this is a necessary condition but not a sufficient one and this point is illustrated through the examples that follow after \textit{Remark }5 below.
\end{theorem}
\textit{Proof.} Let $u=u(x):=ax+b\in L_{q,\emptyset}^1$. We can assume that $b\neq 0$. It follows from the \textit{iteration formula} that $u^{(n)}(q)=a^nq+b(1+\cdots+a^{n-1})$, for every $n\in\mathbb{N}$.
\begin{align*}
\text{Note that if } a=1, ~u^{(m_p)}(q)=&q+bm_p\equiv_p 0, \text{ for every prime }p\neq q,\\
\implies & bm_p\equiv_p -q, \text{ for every prime }p\neq q,\\
\implies & b\text{ is invertible in }\mathbb{F}_p, \text{ for every prime }p\neq q,\\
\implies & b=\pm q^{s}, \text{ for some }s\in\mathbb{N}\cup \{0\}.
\end{align*}
If $a=-1$, $u(x)=-x+b$ and $u^{(2)}(x)=x$. So, $u$ cannot be in $L_{q,\emptyset}^1$ unless $b=q$ and in that case it is in fact in $N_{q,1}^1$. So from this point forward we will assume $|a|\ge 2$. Similar to \textit{Theorem} 1 we can break down these polynomials into the following three cases :
\subsubsection*{Case 1. $u(q)-q\notin \{\pm 1\}$.}
This means $u(q)=q\pm q^s,$ for some $s\in\mathbb{N}$. So we can use the reduction of polynomials. Define $v=v(x):=\frac{1}{q}u(qx)$. Then $v(1)=\frac{1}{q}u(q)=\frac{1}{q}(q\pm q^s)=1\pm q^{s-1}$. We can use \textit{Theorem }4 here with $A=\{q\}$ to see that there are only 4 possibilities for $v$ :
\begin{enumerate}[label=(\roman*)]
\item $v(x)=\alpha(x-1)$, for some $\alpha\in\mathbb{Z}\setminus\{0,\pm 1\}$. Then $u(x)=\alpha(x-q)\in N_{q,1}^1$.
\item $v(x)=\pm q^{s-1}x+1$ with $s\ge 2$. Then $u(x)=\pm q^{s-1}x+q$ and it can be checked easily that this is in $S_q^1$.
\item $v(x)=-2x-1$ and $q=2$, i.e., $u(x)=-2x-2$. Again applying the \textit{iteration formula} it is clear that this is in $S_2^1$.
\item $v(x)=-2x+4$. Then $u(x)=-2x+4q\in N_{q,2}^1$.
\end{enumerate}
\subsubsection*{Case 2. $u(q)=q-1$.}
This means that $b=q-aq-1$, i.e., $u(x)=ax+(q-aq-1)$. Applying the \textit{iteration formula} we get $$u^{(n)}(q)=\frac{a^n+q-aq-1}{1-a}=\frac{a^n+b}{1-a},~n\in \mathbb{N}.$$ From \textit{remark} 3 it follows that $q-aq-1=-a^m$, for some $m\in\mathbb{Z}$. It is clear that $m\neq 0$ as otherwise $q-aq=0,$ i.e., $q(1-a)=0,$ i.e., either $q=0$ or $a=1$, which is absurd !
If $m=-n$ for some $n\in\mathbb{N}$, then $a^n(q-aq-1)=-1,$ again an impossibility as $|a|\ge 2$ ! Thus $m\in\mathbb{N}$ and $q(1-a)=1-a^m$, i.e., $q=1+\cdots+a^{m-1}$. So $m\ge 2$ and $a~|~q-1$. This means we can only have finitely many possibilities for $a$ (and hence finitely many possibilities for $b$).
\subsubsection*{Case 3. $u(q)=q+1$.}
This means that $b=q-aq+1$, i.e., $u(x)=ax+(q-aq+1)$. Proceeding exactly as in \textit{case }2 above, we can show that $m\ge 2$ and $a~|~q+1$ and thereby end up with finitely many possible values for $a$ as well.
Thus we can compute $L_{q,\emptyset}^1$ for every given $q\in \mathcal{P}$.$\hfill \blacksquare${}
\begin{remark}
It should be noted that in \textit{cases }2 and 3 of \textit{Theorem }5, we can only get \textit{nilpotent polynomials} and also these cases do not depend on $q$ being a prime. So, for a given $r\in\mathbb{N}\setminus \{1\}$ and $u(x)=ax+b\in S_r^1$ with $|a|\ge 2$, we only need to analyze the case $u(r)-r\notin \{\pm 1\}$. So it follows from the above theorem that given a prime $q$, the following is the list of all polynomials in $S_q^1$ :
\begin{enumerate}[label=(\arabic*)]
\item $ x+q^s,$ $s\in \mathbb{N}\cup \{0\}$.
\item $x-q^s,$ $s\in\mathbb{N}\setminus\{1\}$.
\item $ \pm q^sx+q,$ $s\in\mathbb{N}$.
\item $ -qx-q$ (only when $q=2$).
\end{enumerate}
\end{remark}
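A hedged spot check of this list for $q=3$ (an illustration, with the small helper `m_p` introduced here): representatives of each family are locally nilpotent at $r=3$, i.e., $m_p$ exists for every prime $p<50$, yet their integer orbits never actually reach $0$, while the excluded polynomial $2x-2$ gets stuck modulo $7$.

```python
# Spot check of the remark's list of S_3^1 on small representatives.
def m_p(a, b, p, r):
    # least n >= 1 with u^{(n)}(r) = 0 (mod p) for u(x) = a*x + b, or None
    x, n, seen = r % p, 0, set()
    while True:
        x, n = (a * x + b) % p, n + 1
        if x == 0:
            return n
        if x in seen:
            return None
        seen.add(x)

primes = [p for p in range(2, 50) if all(p % d for d in range(2, p))]
reps = [(1, 1), (1, 3), (1, 9), (1, -9), (3, 3), (-3, 3), (9, 3), (-9, 3)]
for a, b in reps:
    assert all(m_p(a, b, p, 3) is not None for p in primes)
    x = 3
    for _ in range(40):        # the integer orbit itself never vanishes
        x = a * x + b
        assert x != 0
assert m_p(2, -2, 7, 3) is None   # 2x - 2 is excluded (see Example 2 below)
```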
We would like to illustrate how \textit{Theorem }5 works with $r=2,3,5$ and 7.
\textbf{Example 1 ($r=2$).} Let $u=u(x):=ax+b\in L_{2,\emptyset}^1$. It follows from \textit{Theorem }5 that it is enough to consider the cases $u(2)=1$ and $u(2)=3$ with $|a|\ge 2$.
If $u(2)=1$, then it follows from \textit{Theorem }5 \textit{case }2, that $a~|~1$, which is an impossibility as $|a|\ge 2$!
If $u(2)=3$, then it follows from \textit{Theorem }5 \textit{case }3 that $a~|~3$ and $b=2-2a+1$, i.e., $a=\pm 3$ and $b=3-2a$. So we have only two possibilities for $u$ :
\begin{enumerate}[label=(\roman*)]
\item $u(x)=3x-3$. But $u(2)=3,~u(3)=6,~u(6)\equiv_{13} 2$. So $u\notin L_{2,\emptyset}^1$.
\item $u(x)=-3x+9\in N_{2,2}^1$.
\end{enumerate}
So the following is the list of all polynomials in $L_{2,\emptyset}^1$ : \begin{enumerate}[label=(\arabic*)]
\item $ \alpha(x-2)$, with $\alpha\in\mathbb{Z}\setminus \{0\}$.
\item $ -2x+8$.
\item $ x\pm 2^s$, with $s\in\mathbb{N}$.
\item $ \pm 2^s x+2$, with $s\in\mathbb{N}$.
\item $ -2x-2$.
\item $ x-1$.
\item $ -3x+9$.
\item $ x+1$.
\end{enumerate}
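The list above can also be checked by machine. The hedged sketch below (an illustration, with the helper `m_p` introduced here) verifies that one representative from each family has $m_p$ for every prime $p<50$, while the rejected candidate $3x-3$ cycles modulo $13$ exactly as computed in (i).

```python
# Spot check of the list of L^1_{2,\emptyset} on representatives.
def m_p(a, b, p, r):
    # least n >= 1 with u^{(n)}(r) = 0 (mod p) for u(x) = a*x + b, or None
    x, n, seen = r % p, 0, set()
    while True:
        x, n = (a * x + b) % p, n + 1
        if x == 0:
            return n
        if x in seen:
            return None
        seen.add(x)

primes = [p for p in range(2, 50) if all(p % d for d in range(2, p))]
listed = [(3, -6), (-2, 8), (1, 2), (1, 4), (1, -4), (2, 2), (-2, 2),
          (4, 2), (-2, -2), (1, -1), (-3, 9), (1, 1)]
for a, b in listed:
    assert all(m_p(a, b, p, 2) is not None for p in primes)
assert m_p(3, -3, 13, 2) is None   # the rejected candidate from (i)
```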
\textbf{Example 2 ($r=3$).} Let $u=u(x):=ax+b\in L_{3,\emptyset}^1$. It follows from \textit{Theorem }5 that it is enough to consider the cases $u(3)=2$ and $u(3)=4$ with $|a|\ge 2$.
If $u(3)=2$, then it follows from \textit{Theorem }5 \textit{case }2 that $a~|~2$ and $b=3-3a-1$, i.e., $a=\pm 2$ and $b=2-3a$. So we have two possibilities for $u$ here :
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-4\in N_{3,2}^1$.
\item $u(x)=-2x+8\in N_{3,3}^1$.
\end{enumerate}
If $u(3)=4$, then it follows from \textit{Theorem }5 \textit{case }3 that $a~|~4$ and $b=3-3a+1$, i.e., $a=\pm 2,\pm 4$ and $b=4-3a$. So we have four possibilities for $u$ here:
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-2$. But $u(3)=4,~u(4)=6,~u(6)\equiv_{7} 3$. So $u\notin L_{3,\emptyset}^1$.
\item $u(x)=-2x+10$. But $u(3)=4,~u(4)=2,~u(2)\equiv_5 1,~u(1)\equiv_5 3$. So $u\notin L_{3,\emptyset}^1$.
\item $u(x)=4x-8$. But $u(3)=4,~u(4)\equiv_7 1,~u(1)\equiv_7 3 $. So $u\notin L_{3,\emptyset}^1$.
\item $u(x)=-4x+16\in N_{3,2}^1$.
\end{enumerate}
So the following is the list of all polynomials in $L_{3,\emptyset}^1$ : \begin{enumerate}[label=(\arabic*)]
\item $ \alpha(x-3)$, $\alpha\in\mathbb{Z}\setminus \{0\}$.
\item $ x\pm 3^s$, $s\in\mathbb{N}$.
\item $ \pm 3^s x+3$, $s\in\mathbb{N}$.
\item $ -2x+12$.
\item $ -4x+16$.
\item $ 2x-4$.
\item $ -2x+8$.
\item $ x-1$.
\item $ x+1$.
\end{enumerate}
\textbf{Example 3 ($r=5$).} Let $u=u(x):=ax+b\in L_{5,\emptyset}^1$. It follows from \textit{Theorem }5 that it is enough to consider the cases $u(5)=4$ and $u(5)=6$ with $|a|\ge 2$.
If $u(5)=4$, then it follows from \textit{Theorem }5 \textit{case }2 that $a~|~4$ and $b=5-5a-1$, i.e., $a=\pm 2,~\pm 4$ and $b=4-5a$. So we have four possibilities for $u$ here:
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-6$. But $u(5)\equiv_3 1,~u(1)\equiv_3 5$. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=-2x+14$. But $u(5)=4,~u(4)=6,~u(6)=2,~u(2)\equiv_7 3,~u(3)\equiv_7 1,~u(1)\equiv_7 5$. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=4x-16\in N_{5,2}^1$.
\item $u(x)=-4x+24$. But $u(5)\equiv_3 1,~u(1)\equiv_3 5$. So $u\notin L_{5,\emptyset}^1$.
\end{enumerate}
If $u(5)=6$, then it follows from \textit{Theorem }5 \textit{case }3 that $a~|~6$ and $b=5-5a+1$, i.e., $a=\pm 2,\pm 3,\pm 6$ and $b=6-5a$. So we have six possibilities for $u$ here:
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-4$. But $u(5)=6,~u(6)\equiv_7 1,~u(1)\equiv_7 5$. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=-2x+16\in N_{5,4}^1$.
\item $u(x)=3x-9$. But $u(5)=6,~u(6)=9,~u(9)\equiv_{13} 5$. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=-3x+21$. But $u(5)=6,~u(6)=3,~u(3)\equiv_7 5 $. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=6x-24$. But $u(5)=6,~u(6)\equiv_7 5 $. So $u\notin L_{5,\emptyset}^1$.
\item $u(x)=-6x+36\in N_{5,2}^1$.
\end{enumerate}
So the following is the list of all polynomials in $L_{5,\emptyset}^1$ : \begin{enumerate}[label=(\arabic*)]
\item $ \alpha(x-5)$, $\alpha\in\mathbb{Z}\setminus \{0\}$.
\item $ x\pm 5^s$, $s\in\mathbb{N}$.
\item $ \pm 5^sx+5$, $s\in\mathbb{N}$.
\item $ -6x+36$.
\item $ -2x+20$.
\item $ -2x+16$.
\item $ 4x-16$.
\item $ x-1$.
\item $ x+1$.
\end{enumerate}
\textbf{Example 4 ($r=7$).} Let $u=u(x):=ax+b\in L_{7,\emptyset}^1$. It follows from \textit{Theorem }5 that it is enough to consider the cases $u(7)=6$ and $u(7)=8$ with $|a|\ge 2$.
If $u(7)=6,$ then it follows from \textit{Theorem }5 \textit{case }2 that $a~|~6$ and $b=7-7a-1$, i.e., $a=\pm 2,~\pm 3,~\pm 6$ and $b=6-7a$. So we have six possibilities for $u$ here:
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-8\in N_{7,3}^1$.
\item $u(x)=-2x+20$. But $u(7)\equiv_5 1,~u(1)\equiv_5 3,~u(3)\equiv_5 4,~u(4)\equiv_5 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=3x-15$. But $u(7)\equiv_5 1,~u(1)\equiv_5 3,~u(3)\equiv_5 4,~u(4)\equiv_5 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=-3x+27\in N_{7,3}^1$.
\item $u(x)=6x-36\in N_{7,2}^1$.
\item $u(x)=-6x+48$. But $u(7)=6,~u(6)=12,~u(12)\equiv_{31} 7$. So $u\notin L_{7,\emptyset}^1$.
\end{enumerate}
If $u(7)=8$, then it follows from \textit{Theorem }5 \textit{case }3 that $a~|~8$ and $b=7-7a+1$, i.e., $a=\pm 2,\pm 4,\pm 8$ and $b=8-7a$. So we have six possibilities for $u$ here:
\begin{enumerate}[label=(\roman*)]
\item $u(x)=2x-6$. But $u(7)\equiv_3 2,~u(2)\equiv_3 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=-2x+22$. But $u(7)=8,~u(8)=6,~u(6)=10,~u(10)=2,~u(2)\equiv_{11} 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=4x-20$. But $u(7)\equiv_5 3,~u(3)\equiv_5 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=-4x+36$. But $u(7)\equiv_3 2,~u(2)\equiv_3 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=8x-48$. But $u(7)\equiv_3 2,~u(2)\equiv_3 7$. So $u\notin L_{7,\emptyset}^1$.
\item $u(x)=-8x+64\in N_{7,2}^1$.
\end{enumerate}
So the following is the list of all polynomials in $L_{7,\emptyset}^1$ : \begin{enumerate}[label=(\arabic*)]
\item $ \alpha(x-7)$, $\alpha\in\mathbb{Z}\setminus \{0\}$.
\item $ x\pm 7^s$, $s\in\mathbb{N}$.
\item $ \pm 7^sx+7$, $s\in\mathbb{N}$.
\item $ -2x+28$.
\item $2x-8$.
\item $-3x+27$.
\item $6x-36$.
\item $-8x+64$.
\item $ x-1$.
\item $ x+1$.
\end{enumerate}
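All of the exclusion certificates quoted in Examples 1-4 can be re-checked in one sweep. The hedged sketch below (an illustration only) confirms that for each quadruple $(a,b,r,p)$ the orbit of $r$ under $u(x)=ax+b$ cycles modulo $p$ without ever reaching $0$, so the corresponding $u$ is not locally nilpotent at $r$.

```python
# Re-check of the exclusion certificates from Examples 1-4.
def hits_zero_mod_p(a, b, r, p):
    # does the orbit of r under u(x) = a*x + b ever reach 0 modulo p?
    x, seen = r % p, set()
    while True:
        x = (a * x + b) % p
        if x == 0:
            return True
        if x in seen:          # periodic from here on, 0 is unreachable
            return False
        seen.add(x)

certificates = [
    (3, -3, 2, 13), (2, -2, 3, 7), (-2, 10, 3, 5), (4, -8, 3, 7),
    (2, -6, 5, 3), (-2, 14, 5, 7), (-4, 24, 5, 3), (2, -4, 5, 7),
    (3, -9, 5, 13), (-3, 21, 5, 7), (6, -24, 5, 7), (-2, 20, 7, 5),
    (3, -15, 7, 5), (-6, 48, 7, 31), (2, -6, 7, 3), (-2, 22, 7, 11),
    (4, -20, 7, 5), (-4, 36, 7, 3), (8, -48, 7, 3),
]
assert all(not hits_zero_mod_p(a, b, r, p) for a, b, r, p in certificates)
```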
\subsection{The case when $r$ is arbitrary}
Now we present and prove the final main result of this section.
\begin{theorem}[Main result 3]
Let $r=q_1^{a_1}\cdots q_k^{a_k}$ be the prime decomposition of $r$. Then the following is the list of all polynomials in $S_r^1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $x+ q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$.
\item $x-q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$ with at least one $j\in\{1,\ldots,k\}$ such that $s_j>a_j$.
\item $\pm q_1^{s_1}\cdots q_k^{s_k}x+r$, where $s_i\in\mathbb{N}\cup\{0\}$ with $\sum\limits_{i} s_i\ge 1$.
\item $-2x-r$ (only when $r$ is even).
\end{enumerate}
\end{theorem}
\textit{Proof.} Let $u=u(x):=ax+b\in S_r^1$ and $A:=P(r)$. First we will look at the instances when $a=\pm 1$.
\begin{align*}
\text{Note that if } a=1, ~u^{(m_p)}(1)=&1+bm_p\equiv_p 0, \text{ for every prime }p\notin \{q_1,\ldots q_k\},\\
\implies & bm_p\equiv_p -1, \text{ for every prime }p\notin \{q_1,\ldots q_k\},\\
\implies & b\text{ is invertible in }\mathbb{F}_p, \text{ for every prime }p\notin \{q_1,\ldots q_k\},\\
\implies & b=\pm q_1^{s_1}\cdots q_k^{s_k}, \text{ for some }s_i's \textup{ in }\mathbb{N}\cup \{0\}.
\end{align*}
Thus the only polynomials $u(x)=x+b$ in $L_{r,\emptyset}^1$ must be of the form $x \pm q_1^{s_1}\cdots q_k^{s_k}$. First suppose that $u(x)=x+q_1^{s_1}\cdots q_k^{s_k}$. Then by the \textit{iteration formula}, $u^{(n)}(r)=q_1^{a_1}\cdots q_k^{a_k}+n\cdot q_1^{s_1}\cdots q_k^{s_k}$, which is always non-zero for every $n\in\mathbb{N}$. That means $x+q_1^{s_1}\cdots q_k^{s_k}\in S_r^1$. Now suppose that $u(x)=x-q_1^{s_1}\cdots q_k^{s_k}$. If for all $i\in\{1,\ldots,k\}$, $s_i\le a_i$, then $u^{(q_1^{a_1-s_1}\cdots q_k^{a_k-s_k})}(r)=0$, which is a contradiction as $u$ is non-nilpotent ! That means we must have at least one $j\in\{1,\ldots,k\}$ such that $a_j<s_j$. Then it can be easily checked that $u^{(n)}(r)$ can never be zero for any $n\in\mathbb{N}$.
If $a=-1$, $u(x)=-x+b$ and $u^{(2)}(x)=x$. So, $u$ cannot be in $L_{r,\emptyset}^1$ unless $b=r$ and in that case it is in fact in $N_{r,1}^1$. So $a\neq -1$ and thus we can assume $|a|\ge 2$. It follows from \textit{remark }3 that $\exists~m\in\mathbb{N}\cup \{0\}$ such that $a^mb=b+ar-r$. If $m=0$ then $r(1-a)=0$, which is an impossibility as $r\neq 0$ and $|a|\ge 2$. That means that $m\in\mathbb{N}$. Thus $b(a^m-1)=r(a-1),$ i.e., $b(1+\cdots+a^{m-1})=r$. This means $b~|~r$.
It follows from \textit{remark} 5 that $u(r)-r\notin \{\pm 1\}$. That means that $u(r)=r\pm q_1^{s_1}\cdots q_k^{s_k}$, for a suitable collection of $s_i's$ in $\mathbb{N}\cup\{0\}$ with $\sum\limits_i s_i\ge 1$. So $b=r-ar\pm q_1^{s_1}\cdots q_k^{s_k}$. But then $b~|~r$ implies that $b~|~q_1^{s_1}\cdots q_k^{s_k}$, i.e., $\exists~t_i\in\mathbb{N}\cup \{0\}$ with $s_i\ge t_i$ for every $i$ such that $b=\pm q_1^{t_1}\cdots q_k^{t_k}$. From $b=r-ar\pm q_1^{s_1}\cdots q_k^{s_k}$ we get $ra=r-b\pm q_1^{s_1}\cdots q_k^{s_k}=r-b(\pm 1\pm q_1^{s_1-t_1}\cdots q_k^{s_k-t_k})$, i.e., $r~|~b(\pm 1\pm q_1^{s_1-t_1}\cdots q_k^{s_k-t_k})$.\\
Suppose, if possible, all the $t_i's$ are zero. Then $b=\pm 1$ and so $r$ must divide $\pm 1\pm q_1^{s_1}\cdots q_k^{s_k}$ but this is clearly absurd as $\gcd(r,\pm 1\pm q_1^{s_1}\cdots q_k^{s_k})=\gcd(q_1^{a_1}\cdots q_k^{a_k},\pm 1\pm q_1^{s_1}\cdots q_k^{s_k})=1$. So $\sum\limits_i t_i\ge 1$. All this now boils down to the following two cases :
\subsubsection*{Case 1. $\exists ~j\in\{1,\ldots,k\}$ such that $s_j>t_j$.}
Since $\gcd(r,\pm 1\pm q_1^{s_1-t_1}\cdots q_k^{s_k-t_k})=1$, $r~|~b$, i.e., $r=\pm b$ (since we already had $b~|~r$). So $a_i=t_i\le s_i,~\forall~i\in\{1,\ldots,k\}$. So we can use the reduction of polynomials. Define $$v=v(x):=\frac{1}{r}u(rx)=ax\pm 1.$$ Then $v(1)=1\pm q_1^{s_1-a_1}\cdots q_k^{s_k-a_k}$ and $v\in L_{1,A}^1\setminus N_1$. It follows from the list in \textit{Corollary }2 that we have $2$ possibilities for $v$ :
\begin{enumerate}[label=(\roman*)]
\item $v(x)=\pm q_1^{s_1-a_1}\cdots q_k^{s_k-a_k}x+1$. Then $u(x)=\pm q_1^{s_1-a_1}\cdots q_k^{s_k-a_k}x+r$.
\item $v(x)=-2x-1$ (only when $2\in A$). Then $u(x)=-2x-r$.
\end{enumerate}
The reader can check that both (i) and (ii) are indeed in $S_r^1$.
\subsubsection*{Case 2. $s_i=t_i,~\forall~i\in\{1,\ldots k\}$.}
Then $\pm q_1^{s_1}\cdots q_k^{s_k}=b~|~r$, i.e., $a_i\ge s_i$ for each $i$.
From $b=r-ar\pm q_1^{s_1}\cdots q_k^{s_k}$ we get $r(1-a)=\pm 2q_1^{s_1}\cdots q_k^{s_k}=\pm 2b$. Thus either $r=\pm b$ or $r=\pm 2b$. The first possibility has already been taken care of in \textit{case }1. So we can assume that $r=2q_1^{s_1}\cdots q_k^{s_k}=\pm 2b$. But that means $2\in A$. Without loss of generality, we can assume that $q_1=2$ and so $r=2^{s_1+1}\cdots q_k^{s_k}$. Rewriting $b=r-ar\pm q_1^{s_1}\cdots q_k^{s_k}$ gives us $ra=r-2b.$ Since $ra\neq 0,$ $r=-2b$ and so $ra=-4b$, i.e., $r=-2b$ and $a=2$. This means that $u(x)=2x-\frac{r}{2}$. It follows from the \textit{iteration formula} that $$u^{(n)}(r)=r\cdot \frac{2^n+1}{2},~n\in\mathbb{N}.$$ Letting $\alpha=2,\beta =-1, \gamma=1$, we can see that neither $\frac{\beta}{\gamma}$ nor $\frac{\gamma}{\beta}$ is a power of $\alpha$. Thus from \textit{Lemma }1 it follows that $2x-\frac{r}{2}\not\in S_r^1$. This completes the proof of \textit{Theorem }6.$\hfill \blacksquare${}
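As a hedged numerical companion to Case 2 (an illustration, not part of the proof), take $r=12$, so the candidate is $u(x)=2x-\frac{r}{2}=2x-6$. The iteration formula $u^{(n)}(r)=r\cdot\frac{2^n+1}{2}$ can be checked directly, and the orbit of $12$ modulo $7$ cycles without hitting $0$ (since $-1$ is not a power of $2$ modulo $7$), so $m_7$ does not exist and $u\notin S_{12}^1$.

```python
# Companion check for Case 2 with r = 12 and u(x) = 2x - r/2 = 2x - 6.
r = 12
u = lambda x: 2 * x - r // 2
x = r
for n in range(1, 40):
    x = u(x)
    assert 2 * x == r * (2 ** n + 1)      # iteration formula from the proof

residues, y = set(), r % 7                # orbit of r modulo 7
while y not in residues:
    residues.add(y)
    y = (2 * y - 6) % 7
assert 0 not in residues                  # m_7 does not exist
```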
\vspace{5mm}
It follows directly from \textit{Fact }1 and \textit{Theorem }6 that :
\begin{corollary}
If $r=-q_1^{a_1}\cdots q_k^{a_k}$ is the prime decomposition of an integer $r\le -2$, then the following is the list of all polynomials in $S_r^1$ \textup{:}
\begin{enumerate}[label=(\arabic*)]
\item $x- q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$.
\item $x+q_1^{s_1}\cdots q_k^{s_k}$, where $s_i\in\mathbb{N}\cup\{0\}$ with at least one $j\in\{1,\ldots k\}$ s.t $s_j>a_j$.
\item $\pm q_1^{s_1}\cdots q_k^{s_k}x-r$, where $s_i\in\mathbb{N}\cup\{0\}$ with $\sum\limits_{i} s_i\ge 1$.
\item $-2x+r$ (only when $r$ is even).
\end{enumerate}
\end{corollary}
\newpage
\section*{Some open problems}
For $u(x)\in\mathbb{Z}[x]\setminus\{0\}$, let
$$N(u):=\{r\in\mathbb{Z}~|~u\in N_r \} ,~~LN(u):=\{r\in\mathbb{Z}~|~u\in L_{r,\emptyset} \}.$$
\begin{enumerate}[label=Q\arabic*)]
\item Describe all $u's$ such that $N(u)$ is finite.
\item Describe all $u's$ such that $LN(u)$ is finite.
\item Given $r\in\mathbb{Z}$, describe all $u's$ such that $r\in LN(u)$.
\end{enumerate}
\section*{Acknowledgements} The author gratefully acknowledges his advisors Prof. Alexander Borisov and Prof. Adrian Vasiu for their constant support, encouragement and very helpful suggestions. The author would also like to thank Prof. Jeremy Rouse for suggesting \cite{CS97}, which was one of the main tools used to prove Theorems 4 and 6, and Prof. Kiran Kedlaya for maintaining a wonderful archive of William Lowell Putnam Mathematical Competition problems and their solutions, which led to the statement and proof of Theorem 2.
\newpage
A convex body $K$ in ${\mathbb R}^n$ is called isotropic if it has volume $|K|=1$, its center of mass is at the origin
(we call these convex bodies ``centered"), and its inertia matrix is a multiple of the identity matrix: there exists a constant $L_K >0$ such that
\begin{equation}\label{eq:intro-1}\int_K\langle x,\theta\rangle^2dx =L_K^2\end{equation}
for every $\theta $ in the Euclidean unit sphere $S^{n-1}$. For every centered convex body $K$ in ${\mathbb R}^n$
there exists an invertible linear transformation $T\in GL(n)$ such that $T(K)$ is isotropic. This isotropic image of $K$ is
uniquely determined up to orthogonal transformations. A well-known problem in asymptotic convex geometry asks if there exists an absolute constant $C_1>0$ such that
\begin{equation}\label{eq:intro-2}L_n:= \max\{ L_K:K\ \hbox{is isotropic in}\ {\mathbb R}^n\}\leqslant C_1\end{equation}
for all $n\geqslant 1$ (see Section 2 for background information on isotropic convex bodies and log-concave probability
measures). Bourgain proved in \cite{Bourgain-1991} that $L_n\leqslant c\sqrt[4]{n}\log\! n$, and Klartag \cite{Klartag-2006}
improved this bound to $L_n\leqslant c\sqrt[4]{n}$. A second proof of Klartag's bound appears in \cite{Klartag-EMilman-2012}.
Recall that the inradius $r(K)$ of a convex body $K$ in ${\mathbb R}^n$ with $0\in {\rm int}(K)$ is the largest $r>0$ for which $rB_2^n\subseteq K$, while
the radius $R(K):=\max\{\| x\|_2:x\in K\}$ of $K$ is the smallest $R>0$ for
which $K\subseteq RB_2^n$. It is not hard to see that the inradius
and the radius of an isotropic convex body $K$ in ${\mathbb R}^n$ satisfy the bounds $c_1L_K\leqslant r(K)\leqslant R(K)\leqslant c_2nL_K$,
where $c_1,c_2>0$ are absolute constants. In fact, Kannan, Lov\'{a}sz and Simonovits \cite{Kannan-Lovasz-Simonovits-1995} have proved
that
\begin{equation}\label{eq:intro-3}R(K)\leqslant (n+1)L_K.\end{equation}
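As a quick illustration (not from the paper): the unit-volume cube $K=[-1/2,1/2]^n$ is isotropic with $L_K=1/\sqrt{12}$ (since $\int_{-1/2}^{1/2}t^2\,dt=1/12$) and $R(K)=\sqrt{n}/2$, and the Kannan-Lov\'asz-Simonovits bound $R(K)\leqslant (n+1)L_K$ indeed holds for it.

```python
# Sanity check of the KLS bound R(K) <= (n+1) L_K on the isotropic cube.
import math

L_cube = 1 / math.sqrt(12)       # L_K of [-1/2, 1/2]^n, any n
for n in range(1, 1001):
    R = math.sqrt(n) / 2         # radius of the cube of side 1
    assert R <= (n + 1) * L_cube
```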
\smallskip
\noindent {\bf Radius of random sections of isotropic convex bodies.} The first question that we discuss in this article asks for sharp upper bounds on the radius of a random $(n-k)$-dimensional section of $K$. A natural ``guess" is that the following
question has an affirmative answer.
\begin{question}\label{question}There exists an absolute constant $\overline{c}_0>0$ with the following property: for every isotropic convex body $K$ in ${\mathbb R}^n$
and for every $1\leqslant k\leqslant n-1$, a random subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:question}R(K\cap F)\leqslant \overline{c}_0\sqrt{n/k}\,\sqrt{n}L_K.\end{equation}
\end{question}
It was proved in \cite{Litvak-Milman-Pajor-1999} that if $K$ is a symmetric convex body in ${\mathbb R}^n$ then a random
$F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:intro-111}R(K\cap F)\leqslant c(n/k)^{3/2}\tilde{M}(K),\end{equation}
where $c>0$ is an absolute constant and
\begin{equation}\tilde{M}(K):=\frac{1}{|K|}\int_K\|x\|_2dx.\end{equation}
In the case of an isotropic convex body one has $|K|=1$ and
\begin{equation}\tilde{M}(K)\leqslant \left (\int_K\|x\|_2^2dx\right )^{1/2}=\sqrt{n}L_K,\end{equation}
therefore \eqref{eq:intro-111} implies that a random $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:intro-112}R(K\cap F)\leqslant \overline{c}_1(n/k)^{3/2}\sqrt{n}L_K,\end{equation}
where $\overline{c}_1>0$ is an absolute constant.
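The Cauchy-Schwarz step $\tilde{M}(K)\leqslant\sqrt{n}L_K$ can be illustrated by a small Monte Carlo experiment on the isotropic cube $K=[-1/2,1/2]^n$, where $L_K=1/\sqrt{12}$ (a hedged sketch with an arbitrary choice of $n$ and sample size, not from the paper).

```python
# Monte Carlo illustration of \tilde M(K) <= sqrt(n) L_K on the cube.
import math
import random

random.seed(0)                   # fixed seed for reproducibility
n, N = 5, 20000
est = sum(
    math.sqrt(sum(random.uniform(-0.5, 0.5) ** 2 for _ in range(n)))
    for _ in range(N)
) / N                            # estimate of the mean Euclidean norm over K
bound = math.sqrt(n / 12)        # sqrt(n) L_K with L_K = 1/sqrt(12)
assert est <= bound
```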
Our first main result shows that one can have a bound of the order of $\gamma^{-1}\sqrt{n}L_K$ when the codimension $k$ is greater than $\gamma n$.
\begin{theorem}\label{th:intro-2}Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$ and let $1\leqslant k\leqslant n-1$.
A random subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:intro-5}R(K\cap F)\leqslant \frac{\overline{c}_0n}{\max\{k,\sqrt{n}\}}\sqrt{n}L_K\end{equation}
with probability greater than $1-\exp (-\sqrt{n})$, where $\overline{c}_0>0$ is an absolute
constant.
\end{theorem}
The proof is given in Section 3. Note that Theorem \ref{th:intro-2} gives non-trivial information when $k>\sqrt{n}$. In this case, writing $k=\gamma n$
for some $\gamma\in (1/\sqrt{n},1)$ we see that
\begin{equation}\label{eq:intro-55}R(K\cap F)\leqslant \frac{\overline{c}_0}{\gamma }\sqrt{n}L_K\end{equation}
with probability greater than $1-\exp (-\sqrt{n})$ on $G_{n,(1-\gamma )n}$. The result of \cite{Litvak-Milman-Pajor-1999}
establishes a $\gamma^{-3/2}$-dependence on $\gamma =k/n$.
A standard approach to Question \ref{question} would have been to combine the low $M^{\ast }$-estimate with an upper bound for the mean width
\begin{equation}\label{eq:intro-7}w(K):=\int_{S^{n-1}}h_K(x)\,d\sigma (x),\end{equation}
of an isotropic convex body $K$ in ${\mathbb R}^n$, that is, the $L_1$-norm of the support function of $K$ with respect to the Haar
measure on the sphere. This last problem was open for a number of years. The upper bound $w(K)\leqslant cn^{3/4}L_K$ appeared in the Ph.D.
Thesis of Hartzoulaki \cite{Hartzoulaki-thesis}. Other approaches leading to the same bound can be found in Pivovarov \cite{Pivovarov-2010a}
and in Giannopoulos, Paouris and Valettas \cite{Giannopoulos-Paouris-Valettas-2012b}. Recently, E.~Milman showed in \cite{EMilman-2014} that
if $K$ is an isotropic symmetric convex body in ${\mathbb R}^n$ then
\begin{equation}\label{eq:intro-8}w(K)\leqslant c_3\sqrt{n}(\log n)^2L_K.\end{equation}
In fact, it is not hard to see that his argument can be generalized to give the same estimate in the
not necessarily symmetric case. The dependence on $n$ is optimal up to the logarithmic term. From the sharp version of V.~Milman's low $M^{\ast }$-estimate
(due to Pajor and Tomczak-Jaegermann \cite{Pajor-Tomczak-1986}; see \cite[Chapter 7]{AGA-book}
for complete references) one has that, for every $1\leqslant k\leqslant n-1$, a subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:intro-9}R(K\cap F)\leqslant c_4\sqrt{n/k}\,w(K)\end{equation}
with probability greater than $1-\exp (-c_5k)$, where $c_4, c_5>0$ are absolute
constants. Combining \eqref{eq:intro-9} with E.~Milman's theorem we obtain the following estimate:
\begin{quote}{\sl Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$. For every $1\leqslant k\leqslant n-1$, a subspace
$F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:emanuel-bound}R(K\cap F)\leqslant \frac{\overline{c}_2n(\log n)^2L_K}{\sqrt{k}}\end{equation}
with probability greater than $1-\exp (-\overline{c}_3k)$, where $\overline{c}_2, \overline{c}_3>0$ are absolute
constants.}
\end{quote}
\noindent Note that the upper bound of Theorem \ref{th:intro-2} has some advantages when compared to \eqref{eq:emanuel-bound}:
If $k$ is proportional to $n$ (say $k\geqslant\gamma n$ for some $\gamma\in (1/\sqrt{n},1)$) then Theorem \ref{th:intro-2} guarantees that
$R(K\cap F)\leqslant c(\gamma )\sqrt{n}L_K$ for a random $F\in G_{n,n-k}$. More generally, for all $k\geqslant \frac{c_6n}{(\log n)^4}$ we have
\begin{equation}\label{eq:intro-10}\frac{\overline{c}_0n\sqrt{n}}{\max\{k,\sqrt{n}\}}\leqslant \frac{\overline{c}_2n(\log n)^2}{\sqrt{k}},\end{equation}
and hence the estimate of Theorem \ref{th:intro-2} is stronger than \eqref{eq:emanuel-bound}. Nevertheless, we emphasize that our
bound is not optimal, and it would be very interesting to decide whether \eqref{eq:question} holds true; this bound would be optimal for all $1\leqslant k\leqslant n-1$.
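Let us briefly justify the claim about the range $k\geqslant \frac{c_6n}{(\log n)^4}$: for $k\geqslant\sqrt{n}$, the inequality \eqref{eq:intro-10} is (up to the values of the absolute constants $\overline{c}_0$ and $\overline{c}_2$) equivalent to
\begin{equation*}\frac{n\sqrt{n}}{k}\leqslant \frac{n(\log n)^2}{\sqrt{k}}\quad\Longleftrightarrow\quad \sqrt{k}\geqslant \frac{\sqrt{n}}{(\log n)^2}\quad\Longleftrightarrow\quad k\geqslant \frac{n}{(\log n)^4}.\end{equation*}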
\medskip
\noindent {\bf Radius of random sections of $L_q$-centroid bodies and their polars.}
In Section 4 we study the diameter of random sections of the $L_q$-centroid bodies $Z_q(\mu )$ of an isotropic log-concave probability measure
$\mu $ on ${\mathbb R}^n$. Recall that a measure $\mu$ on $\mathbb R^n$ is called log-concave if $\mu(\lambda A+(1-\lambda)B)
\geqslant \mu(A)^{\lambda}\mu(B)^{1-\lambda}$ for any compact subsets $A$ and $B$ of ${\mathbb R}^n$ and any $\lambda \in (0,1)$. A function
$f:\mathbb R^n \rightarrow [0,\infty)$ is called log-concave if its support $\{f>0\}$ is a convex set and the restriction of $\log{f}$ on it is concave.
It is known that if a probability measure $\mu $ is log-concave and $\mu (H)<1$ for every hyperplane $H$, then $\mu $ is absolutely
continuous with respect to the Lebesgue measure and its density
$f_{\mu}$ is log-concave; see \cite{Borell-1974}. Note that if $K$ is a convex body in $\mathbb R^n$ then the Brunn-Minkowski inequality implies that the indicator function
$\mathbf{1}_{K} $ of $K$ is the density of a log-concave measure.
We say that a log-concave probability measure $\mu $ on ${\mathbb R}^n$
is isotropic if its barycenter $\textrm{bar}(\mu )$ is at the origin and
\begin{equation*}\int_{{\mathbb R}^n}\langle x,\theta\rangle^2\,d\mu (x)=1\end{equation*}
for all $\theta\in S^{n-1}$. Note that the normalization is different from the one
in \eqref{eq:intro-1}; in particular, a centered convex body $K$ of volume $1$ in ${\mathbb R}^n$ is isotropic
if and only if the log-concave probability measure $\mu_K$ with density
$x\mapsto L_K^n\mathbf{1}_{K/L_K}(x)$ is isotropic.
The $L_q$-centroid bodies $Z_q(\mu)$, $q\geqslant 1$, are defined through their support function
\begin{equation}\label{eq:intro-11}
h_{Z_q(\mu)}(y):= \|\langle \cdot ,y\rangle \|_{L_q(\mu)} = \left(\int_{{\mathbb R}^n}|\langle x,y\rangle|^qd\mu(x)\right)^{1/q},
\end{equation}
and have played a key role in the study of the distribution of linear functionals with respect to the
measure $\mu$. For every $1\leqslant q\leqslant n$ we obtain sharp upper bounds for the radius of random sections of $Z_q(\mu )$
of dimension proportional to $n$, thus extending a similar result of
Brazitikos and Stavrakakis, which was established only for $q\in [1,\sqrt{n}]$.
\begin{theorem}\label{th:intro-Zq}Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$ and let $1\leqslant q\leqslant n$.
Then:
\begin{enumerate}
\item[{\rm (i)}] If $k=\gamma n$ for some $\gamma \in (0,1)$, then, with probability greater than $1-e^{-\overline{c}_4k}$, a random $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:intro-12}R(Z_q(\mu )\cap F)\leqslant \overline{c}_5(\gamma )\sqrt{q},\end{equation}
where $\overline{c}_4$ is an absolute constant and $\overline{c}_5(\gamma )=O(\gamma^{-2}\log^{5/2}(c/\gamma ))$ is a positive constant depending only on $\gamma $.
\item[{\rm (ii)}] With probability greater than $1-e^{-n}$, a random $U\in O(n)$ satisfies
\begin{equation}\label{eq:intro-13}Z_q(\mu )\cap U(Z_q(\mu ))\subseteq (\overline{c}_6\sqrt{q})\,B_2^n,\end{equation}
where $\overline{c}_6>0$ is an absolute constant.
\end{enumerate}
\end{theorem}
The method of proof is based on estimates (from \cite{EMilman-2014} and \cite{Giannopoulos-EMilman-2014}) for the Gelfand numbers of symmetric convex bodies in terms of their volumetric parameters; combining these general estimates with fundamental (known) properties of the family of the
centroid bodies $Z_q(\mu )$ of an isotropic log-concave probability measure $\mu $ we provide estimates for the {\sl minimal} radius of a $k$-codimensional
section of $Z_q(\mu )$. Then, we pass to bounds for the radius of random $k$-codimensional sections of $Z_q(\mu )$ using known results
from \cite{Giannopoulos-Milman-Tsolomitis-2005}, \cite{Vershynin-2006} and \cite{Litvak-Pajor-Tomczak-2006}. We conclude Section 4 with
a discussion of the same questions for the polar bodies $Z_q^{\circ }(\mu )$ of the centroid bodies $Z_q(\mu )$.
\smallskip
Using the same approach we study the diameter of random sections of convex bodies which have {\it maximal isotropic constant}. Set
\begin{equation}\label{eq:intro-22}L_n^{\prime }:= \max\{ L_K:K\ \hbox{is an isotropic symmetric convex body in}\ {\mathbb R}^n\}.\end{equation}
It is known that $L_n\leqslant cL_n^{\prime }$ for some absolute constant $c>0$ (see \cite[Chapter 3]{BGVV-book}). We prove the following:
\begin{theorem}\label{th:intro-max}Assume that $K$ is an isotropic symmetric convex body in ${\mathbb R}^n$ with $L_K=L_n^{\prime }$.
Then:
\begin{enumerate}
\item[{\rm (i)}] A random $F\in G_{n,n/2}$ satisfies
\begin{equation}\label{eq:max-1}R(K\cap F)\leqslant \overline{c}_7\sqrt{n}\end{equation}
and
\begin{equation}\label{eq:max-2}L_{K\cap F}\leqslant \overline{c}_8\end{equation} with probability greater than $1-e^{-\overline{c}_9n}$, where $\overline{c}_i>0$ are absolute constants.
\item[{\rm (ii)}] A random $U\in O(n)$ satisfies
\begin{equation}K\cap U(K)\subseteq (\overline{c}_{10}\sqrt{n})\,B_2^n,\end{equation} with probability greater than $1-e^{-n}$, where $\overline{c}_{10}>0$ is an absolute constant.
\end{enumerate}
\end{theorem}
The same arguments work if we assume that $K$ has almost maximal isotropic constant, i.e. $L_K\geqslant\beta L_n^{\prime }$ for some (absolute)
constant $\beta\in (0,1)$. We can obtain similar results, with the constants $\overline{c}_i$ now depending only on $\beta $. It should be noted
that Alonso-Guti\'{e}rrez, Bastero, Bernu\'{e}s and Paouris \cite{Alonso-Bastero-Bernues-Paouris-2010} have proved that every convex body $K$
has a section $K\cap F$ of dimension $n-k$ with isotropic constant
\begin{equation}L_{K\cap F}\leqslant c\sqrt{\frac{n}{k}}\log \Big (\frac{en}{k}\Big ).\end{equation}
For the proof of this result they considered an $\alpha $-regular $M$-position of $K$.
In Theorem \ref{th:intro-max} we consider convex bodies in the isotropic position and the estimates \eqref{eq:max-1} and \eqref{eq:max-2} hold
for a random subspace $F$.
\medskip
\noindent {\bf Radius of random projections of $L_q$-centroid bodies and sub-Gaussian subspaces of isotropic convex bodies.}
Let $K$ be a centered convex body of volume $1$ in $\mathbb R^n$. We say that a direction $\theta\in S^{n-1}$ is a
$\psi_{\alpha }$-direction (where $1\leqslant\alpha\leqslant 2$) for $K$ with constant $b>0$ if
\begin{equation}\|\langle\cdot ,\theta\rangle\|_{L_{\psi_{\alpha }}(K)}\leqslant b\|\langle\cdot, \theta\rangle\|_2,\end{equation}
where \begin{equation} \|\langle \cdot ,\theta\rangle \|_{L_{\psi_{\alpha}}(K)}:=\inf \left \{ t>0 :
\int_K\exp \big((|\langle x,\theta\rangle |/t)^\alpha\big) \,
dx\leqslant 2 \right\}.
\end{equation}Markov's inequality implies that if $K$ satisfies a
$\psi_\alpha$-estimate with constant $b$ in the direction of
$\theta$ then for all $t\geqslant 1$ we have $|\{x\in K : |\langle x,\theta
\rangle|\geqslant t \|\langle\cdot ,\theta\rangle\|_2\}|\leqslant
2e^{-t^{\alpha }/b^{\alpha }}$. Conversely, one can check that tail estimates of this form
imply that $\theta $ is a $\psi_{\alpha }$-direction for $K$.
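For completeness, the first of these implications follows by applying Markov's inequality to the function $x\mapsto \exp \bigl((|\langle x,\theta\rangle |/\psi )^{\alpha }\bigr)$, where $\psi :=\|\langle\cdot ,\theta\rangle\|_{L_{\psi_{\alpha }}(K)}\leqslant b\|\langle\cdot ,\theta\rangle\|_2$: for every $t\geqslant 1$,
\begin{equation*}|\{x\in K : |\langle x,\theta\rangle |\geqslant t\|\langle\cdot ,\theta\rangle\|_2\}|\leqslant e^{-(t\|\langle\cdot ,\theta\rangle\|_2/\psi )^{\alpha }}\int_K\exp \bigl((|\langle x,\theta\rangle |/\psi )^{\alpha }\bigr)\,dx\leqslant 2e^{-t^{\alpha }/b^{\alpha }}.\end{equation*}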
It is well-known that every $\theta \in S^{n-1}$ is a $\psi_1$-direction for $K$ with an absolute constant $C$. An open question is
whether there exists an absolute constant $C>0$ such that every $K$ has at least one sub-Gaussian direction ($\psi_2$-direction) with constant $C$.
It was first proved by Klartag in \cite{Klartag-2007} that
for every centered convex body $K$ of volume $1$ in ${\mathbb R}^n$ there exists $\theta \in S^{n-1}$ such that
\begin{equation} |\{ x\in K: |\langle x, \theta \rangle | \geqslant
c t \|\langle\cdot, \theta\rangle\|_2 \}| \leqslant e^{-\frac{t^{2}}{[\log(t+1)]^{2a}}} \end{equation} for all
$t\geqslant 1$, where $a=3$ (equivalently, $\|\langle \cdot ,\theta\rangle\|_{L_{\psi_2}(K)}\leqslant C(\log n)^a\|\langle\cdot ,\theta\rangle\|_2$).
This estimate was later improved by Giannopoulos, Paouris and Valettas in \cite{Giannopoulos-Paouris-Valettas-2011}
and \cite{Giannopoulos-Paouris-Valettas-2012b} (see also \cite{Giannopoulos-Pajor-Paouris-2007})
who showed that the body $\Psi_2(K)$ with support function $y\mapsto \|\langle \cdot ,y\rangle\|_{L_{\psi_2}(K)}$
has volume
\begin{equation}\label{eq:vol-ratio-psi-body-K}c_1\leqslant \left(\frac{|\Psi_{2}(K)|}{|Z_2(K)|}\right)^{1/n}\leqslant
c_2\sqrt{\log n}.\end{equation}
From \eqref{eq:vol-ratio-psi-body-K} it follows that there exists at least one
sub-Gaussian direction for $K$ with constant $b\leqslant C\sqrt{\log n}$.
Brazitikos and Hioni in \cite{Brazitikos-Hioni-2015} proved that if $K$ is isotropic then logarithmic bounds for $\|\langle \cdot ,\theta\rangle\|_{L_{\psi_2}(K)}$
hold true with probability polynomially close to $1$: For any $a>1$ one has
$$\|\langle \cdot ,\theta\rangle\|_{L_{\psi_2}(K)}\leqslant C(\log n)^{3/2}\max\left\{\sqrt{\log n},\sqrt{a}\right\}L_K$$
for all $\theta $ in a subset $\Theta_a $ of $S^{n-1}$ with $\sigma (\Theta_a )\geqslant 1-n^{-a}$, where $C>0$
is an absolute constant.
Here, we consider the question of whether one can have an estimate of this type {\sl for all} directions $\theta $ of a subspace $F\in G_{n,k}$
of dimension $k$ increasing to infinity with $n$. We say that $F\in G_{n,k}$ is a {\sl sub-Gaussian subspace} for $K$ with constant $b>0$
if
\begin{equation}\|\langle\cdot ,\theta\rangle\|_{L_{\psi_{\alpha }}(K)}\leqslant b\|\langle\cdot, \theta\rangle\|_2\end{equation}
for all $\theta \in S_F:=S^{n-1}\cap F$. In Section 5 we show that if $K$ is isotropic then a random subspace of dimension $(\log n)^4$
is sub-Gaussian with constant $b\simeq (\log n)^2$. More precisely, we prove the following.
\begin{theorem}\label{th:1.3}Let $K$ be an isotropic convex body in ${\mathbb R}^n$. If $k\simeq (\log n)^4$ then there exists a
subset $\Gamma $ of $G_{n,k}$ with $\nu_{n,k}(\Gamma )\geqslant 1-n^{-(\log n)^3}$ such that
\begin{equation}\|\langle \cdot ,\theta\rangle\|_{L_{\psi_2}(K)}\leqslant C(\log n)^2L_K\end{equation}
for all $F\in \Gamma $ and all $\theta\in S_F$, where $C>0$ is an absolute constant.
\end{theorem}
An essential ingredient of the proof is the good estimates on the radius of random projections of the $L_q$-centroid bodies $Z_q(K)$ of $K$,
which follow from E.~Milman's sharp bounds on their mean width $w(Z_q(K))$ (see Theorem \ref{th:Emanuel2}).
\section{Notation and preliminaries}
We work in ${\mathbb R}^n$, which is equipped with a Euclidean structure $\langle\cdot ,\cdot\rangle $. We denote the corresponding
Euclidean norm by $\|\cdot \|_2$, and write $B_2^n$ for the Euclidean unit ball, and $S^{n-1}$ for the unit sphere. Volume is
denoted by $|\cdot |$. We write $\omega_n$ for the volume of $B_2^n$ and $\sigma $ for the rotationally invariant probability measure on
$S^{n-1}$. We also denote the Haar measure on $O(n)$ by $\nu $. The Grassmann manifold $G_{n,k}$ of $k$-dimensional subspaces of
${\mathbb R}^n$ is equipped with the Haar probability measure $\nu_{n,k}$. Let $k\leqslant n$ and $F\in G_{n,k}$. We will denote the
orthogonal projection from $\mathbb R^{n}$ onto $F$ by $P_F$. We also define $B_F=B_2^n\cap F$ and $S_F=S^{n-1}\cap
F$.
The letters $c,c^{\prime }, c_1, c_2$ etc. denote absolute positive constants whose value may change from line to line. Whenever we
write $a\simeq b$, we mean that there exist absolute constants $c_1,c_2>0$ such that $c_1a\leqslant b\leqslant c_2a$. Also if $A,D\subseteq
\mathbb R^n$ we will write $A\simeq D$ if there exist absolute constants $c_1, c_2>0$ such that $c_{1}A\subseteq D \subseteq
c_{2}A$.
\medskip
\noindent \textbf{Convex bodies.} A convex body in ${\mathbb R}^n$ is a compact convex subset $A$ of
${\mathbb R}^n$ with nonempty interior. We say that $A$ is symmetric if $A=-A$. We say that $A$ is centered if
the center of mass of $A$ is at the origin, i.e.~$\int_A\langle
x,\theta\rangle \,d x=0$ for every $\theta\in S^{n-1}$.
The volume radius of $A$ is the quantity ${\rm vrad}(A)=\left (|A|/|B_2^n|\right )^{1/n}$.
Integration in polar coordinates shows that if the origin is an interior point of $A$ then the volume radius of $A$ can be expressed as
\begin{equation}\label{eq:not-1}{\rm vrad}(A)=\left (\int_{S^{n-1}}\|\theta \|_A^{-n}\,d\sigma (\theta )\right)^{1/n},\end{equation}
where $\|\theta \|_A=\min\{ t>0:\theta \in tA\}$. The radial function of $A$ is defined by $\rho_A(\theta )=\max\{ t>0:t\theta\in A\}$,
$\theta\in S^{n-1}$. The support
function of $A$ is defined by $h_A(y):=\max \bigl\{\langle x,y\rangle :x\in A\bigr\}$, and
the mean width of $A$ is the average
\begin{equation}\label{eq:not-2}w(A):=\int_{S^{n-1}}h_A(\theta )\,d\sigma (\theta )\end{equation}
of $h_A$ on $S^{n-1}$. The radius $R(A)$ of $A$ is the smallest $R>0$ such that $A\subseteq RB_2^n$.
For notational convenience we write $\overline{A}$ for
the homothetic image of volume $1$ of a convex body $A\subseteq
\mathbb R^n$, i.e. $\overline{A}:= |A|^{-1/n}A$.
The polar body $A^{\circ }$ of a convex body $A$ in ${\mathbb R}^n$ with $0\in {\rm int}(A)$ is defined by
\begin{equation}\label{eq:not-3}
A^{\circ}:=\bigl\{y\in {\mathbb R}^n: \langle x,y\rangle \leqslant 1\;\hbox{for all}\; x\in A\bigr\}.
\end{equation}
The Blaschke-Santal\'{o} inequality states that if $A$ is centered then $|A||A^{\circ }|\leqslant |B_2^n|^2$,
with equality if and only if $A$ is an ellipsoid.
The reverse Santal\'{o} inequality of J.~Bourgain and V.~Milman \cite{Bourgain-VMilman-1987} states that there exists an absolute constant $c>0$ such
that
\begin{equation}\label{eq:not-4}\left (|A||A^{\circ }|\right )^{1/n}\geqslant c/n\end{equation}
whenever $0\in {\rm int}(A)$.
For every centered convex body $A$ of volume $1$ in ${\mathbb R}^n$ and for every $q\in (-n,\infty )\setminus\{0\}$ we define
\begin{equation}I_q(A)=\left (\int_A\| x\|_2^qdx\right )^{1/q}.\end{equation}
As a consequence of Borell's lemma (see \cite[Chapter 1]{BGVV-book}) one has
\begin{equation}I_q(A)\leqslant c_1 q I_2(A)\end{equation} for all $q\geqslant 2$.
For basic facts from the Brunn-Minkowski theory and the asymptotic theory of convex bodies we refer to the books \cite{Schneider-book} and \cite{AGA-book} respectively.
\smallskip
\noindent \textbf{Log-concave probability measures.}
Let $\mu $ be a log-concave probability measure on ${\mathbb R}^n$. The density of $\mu $ is denoted by $f_{\mu}$. We say that $\mu $
is centered and we write $\textrm{bar}(\mu )=0$ if, for all $\theta\in S^{n-1}$,
\begin{equation}\label{eq:not-5}
\int_{\mathbb R^n} \langle x, \theta \rangle d\mu(x) = \int_{\mathbb
R^n} \langle x, \theta \rangle f_{\mu}(x) dx = 0.
\end{equation}
The isotropic constant of $\mu $ is defined by
\begin{equation}\label{eq:definition-isotropic}
L_{\mu }:=\left (\frac{\sup_{x\in {\mathbb R}^n} f_{\mu} (x)}{\int_{{\mathbb
R}^n}f_{\mu}(x)dx}\right )^{\frac{1}{n}} [\det \textrm{Cov}(\mu)]^{\frac{1}{2n}},\end{equation} where
$\textrm{Cov}(\mu)$ is the covariance matrix of $\mu$ with entries
\begin{equation}\label{eq:not-6}\textrm{Cov}(\mu )_{ij}:=\frac{\int_{{\mathbb R}^n}x_ix_j f_{\mu}
(x)\,dx}{\int_{{\mathbb R}^n} f_{\mu} (x)\,dx}-\frac{\int_{{\mathbb
R}^n}x_i f_{\mu} (x)\,dx}{\int_{{\mathbb R}^n} f_{\mu}
(x)\,dx}\frac{\int_{{\mathbb R}^n}x_j f_{\mu}
(x)\,dx}{\int_{{\mathbb R}^n} f_{\mu} (x)\,dx}.\end{equation} We say
that a log-concave probability measure $\mu $ on ${\mathbb R}^n$
is isotropic if $\textrm{bar}(\mu )=0$ and $\textrm{Cov}(\mu )$ is the identity matrix.
Note that a centered convex body $K$ of volume $1$ in ${\mathbb R}^n$ is isotropic,
i.e.~it satisfies \eqref{eq:intro-1},
if and only if the log-concave probability measure $\mu_K$ with density
$x\mapsto L_K^n\mathbf{1}_{K/L_K}(x)$ is isotropic. Note that for every log-concave measure $\mu $
on ${\mathbb R}^n$ one has
\begin{equation}\label{eq:Lmu}L_{\mu }\leqslant \kappa L_n,\end{equation}
where $\kappa >0$ is an absolute constant (a proof can be found in \cite[Proposition 2.5.12]{BGVV-book}).
We will use the following sharp result on the growth of $I_q(K)$, where $K$ is an isotropic
convex body in ${\mathbb R}^n$, proved by Paouris in \cite{Paouris-GAFA} and \cite{Paouris-TAMS}.
\begin{theorem}[Paouris]\label{th:grigoris}
There exists an absolute constant $\delta >0$ with the following property: if $K$ is an isotropic convex body in $\mathbb
R^n$, then
\begin{equation}\label{eq:constant-Iq}\frac{1}{\delta }\sqrt{n}L_K=\frac{1}{\delta }I_2(K)\leqslant I_{-q}(K)\leqslant I_q(K)\leqslant \delta I_2(K)=\delta\sqrt{n}L_K\end{equation} for every
$1\leqslant q\leqslant \sqrt{n}$.
\end{theorem}
For every $q\geqslant 1$ and every $y \in {\mathbb R}^n$ we set
\begin{equation}\label{Zq-def}h_{Z_q(\mu )}(y)= \left(\int_{{\mathbb R}^n} |\langle x,y\rangle|^{q}d\mu (x) \right)^{1/q}.\end{equation}
The $L_q$-centroid body $Z_q(\mu )$ of $\mu $ is the symmetric convex body with support function
$h_{Z_{q}(\mu )}$. Note that $\mu $ is isotropic if and only if it is centered and $Z_{2}(\mu )=
B_2^n$. If $K$ is an isotropic convex body in ${\mathbb R}^n$ we define $Z_q(K)=L_KZ_q(\mu_K)$.
From H\"{o}lder's inequality it follows that $Z_1(K)\subseteq Z_p(K)\subseteq Z_q(K)\subseteq Z_{\infty }(K)$ for
all $1\leqslant p\leqslant q\leqslant \infty $, where $Z_{\infty }(K)={\rm conv}\{K,-K\}$.
Using Borell's lemma, one can check that
\begin{equation}\label{eq:Zq-inclusions} Z_q(K)\subseteq c_1\frac{q}{p}Z_p(K)\end{equation}
for all $1\leqslant p<q$. In particular, if $K$ is isotropic, then
$R(Z_q(K))\leqslant c_1qL_K$. One can also check that if $K$ is
centered, then $Z_q(K)\supseteq c_2Z_{\infty }(K)$ for all $q\geqslant n$.
It was shown by Paouris \cite{Paouris-GAFA} that if $1\leqslant q\leqslant\sqrt{n}$ then
\begin{equation}\label{eq:wZq-small}
w\bigl(Z_q(\mu)\bigr)\simeq \sqrt{q},
\end{equation}
and that for all $1\leqslant q\leqslant n$,
\begin{equation}
{\rm vrad}(Z_q(\mu)) \leqslant c_1\sqrt{q}.
\end{equation}
Conversely, it was shown by B.~Klartag and E.~Milman in \cite{Klartag-EMilman-2012} that if $1\leqslant q\leqslant\sqrt{n}$ then
\begin{equation}\label{eq:low-volume-Zq}
{\rm vrad}(Z_q(\mu))\geqslant c_2\sqrt{q}.
\end{equation}
This determines the volume radius of $Z_q(\mu )$ for all $1 \leqslant q\leqslant\sqrt{n}$. For larger values of $q$ one can still use the lower bound:
\begin{equation}\label{eq:3}
{\rm vrad}(Z_q(\mu)) \geqslant c_2\sqrt{q}\, L_{\mu}^{-1} ,
\end{equation}
obtained by Lutwak, Yang and Zhang in \cite{Lutwak-Yang-Zhang-2000} for convex bodies and extended by Paouris and Pivovarov
in \cite{Paouris-Pivovarov-2012} to the class of log-concave probability measures.
Let $\mu $ be a probability measure on ${\mathbb R}^n$ with density $f_{\mu }$ with respect
to the Lebesgue measure. For every $1\leqslant k\leqslant n-1$ and every
$E\in G_{n,k}$, the marginal of $\mu$ with respect to $E$ is the probability
measure $\pi_E(\mu )$ on $E$, with density
\begin{equation}\label{definitionmarginal}f_{\pi_E(\mu )}(x)= \int_{x+
E^{\perp}} f_{\mu }(y) dy.
\end{equation}
It is easily checked that if $\mu $ is centered, isotropic or log-concave, then $\pi_E(\mu )$ is also centered, isotropic or
log-concave, respectively. A very useful observation is that:
\begin{equation}
P_F\bigl(Z_q(\mu )\bigr) = Z_q\bigl(\pi_F(\mu )\bigr)
\end{equation}
for every $1\leqslant k\leqslant n-1$ and every $F\in G_{n,n-k}$.
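This identity follows from Fubini's theorem: since $\langle x,y\rangle =\langle P_Fx,y\rangle $ for every $y\in F$, and since $h_{P_F(A)}(y)=h_A(y)$ for every convex body $A$ and every $y\in F$, we may write
\begin{equation*}h_{P_F(Z_q(\mu ))}(y)=h_{Z_q(\mu )}(y)=\left (\int_{{\mathbb R}^n}|\langle x,y\rangle |^qf_{\mu }(x)\,dx\right )^{1/q}=\left (\int_F|\langle z,y\rangle |^qf_{\pi_F(\mu )}(z)\,dz\right )^{1/q}=h_{Z_q(\pi_F(\mu ))}(y)\end{equation*}
for every $y\in F$.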
If $\mu$ is a centered log-concave probability measure on $\mathbb R^n$ then for every $p>0$ we define
\begin{equation}\label{eq:not-7}
K_p(\mu):=K_p(f_\mu)=\left\{x : \int_0^\infty r^{p-1}f_\mu(rx)\,
dr\geqslant \frac{f_{\mu }(0)}{p} \right\}.
\end{equation}
From the definition it follows that $K_p(\mu )$ is a star body with radial function
\begin{equation}\label{eq:not-8}
\rho_{K_p(\mu )}(x)=\left (\frac{1}{f_{\mu }(0)}\int_0^{\infty
}pr^{p-1}f_{\mu }(rx)\,dr\right )^{1/p}\end{equation} for $x\neq 0$.
The bodies $K_p(\mu )$ were introduced in \cite{Ball-1988} by K. Ball who showed that if $\mu $ is log-concave
then, for every $p>0$, $K_p(\mu )$ is a convex body.
If $K$ is isotropic then for every $1\leqslant k\leqslant n-1$ and $F\in G_{n,n-k}$,
the body $\overline{K_{k+1}}(\pi_{F^{\perp }}(\mu_{K}))$ satisfies
\begin{equation}\label{eq:not-9}
|K\cap F|^{1/k} \simeq
\frac{L_{\overline{K_{k+1}}(\pi_{F^{\perp }}(\mu_{K}))}}{L_K}.
\end{equation}
For more information on isotropic convex bodies and log-concave measures see \cite{BGVV-book}.
\section{Random sections of isotropic convex bodies}
The proof of Theorem \ref{th:intro-2} is based on Lemma \ref{lem:sections-1} and Lemma \ref{lem:sections-2} below. They exploit
some ideas of Klartag from \cite{Klartag-2004}.
\begin{lemma}\label{lem:sections-1}Let $K$ be an isotropic convex body in ${\mathbb R}^n$. For every $1\leqslant k\leqslant n-1$
there exists a subset ${\cal A}:={\cal A}(n,k)$ of $G_{n,n-k}$ with $\nu_{n,n-k}({\cal A})\geqslant 1-e^{-\sqrt{n}}$ that has the following property:
for every $F\in {\cal A}$,
\begin{equation}\label{eq:lem-sections-1}|\{x\in K\cap F:\|x\|_2
\geqslant c_1\sqrt{n}L_K\}|\leqslant e^{-(k+\sqrt{n})}|K\cap F|,
\end{equation}
where $c_1>0$ is an absolute constant.
\end{lemma}
\noindent {\it Proof.} Integration in polar coordinates shows that for all $q>0$
\begin{equation}\label{eq:sections-1}\int_{G_{n,n-k}}\int_{K\cap F}\|x\|_2^{k+q}dx\,d\nu_{n,n-k}(F)
=\frac{(n-k)\omega_{n-k}}{n\omega_n}\int_K\|x\|_2^qdx =\frac{(n-k)\omega_{n-k}}{n\omega_n}I_q^q(K),\end{equation}
and an application of Markov's inequality shows that a random $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:sections-2}\int_{K\cap F}\|x\|_2^{k+q}dx\leqslant \frac{(n-k)\omega_{n-k}}{n\omega_n}(eI_q(K))^q\end{equation}
with probability greater than $1-e^{-q}$.
Fix a subspace $F\in G_{n,n-k}$ which satisfies \eqref{eq:sections-2}. From \eqref{eq:not-9} we have
\begin{equation}\label{eq:sections-3}|K\cap F|^{1/k} \geqslant c_2
\frac{L_{\overline{K_{k+1}}(\pi_{F^{\perp }}(\mu_{K}))}}{L_K}\geqslant \frac{c_3}{L_K}
\end{equation}
where $c_2,c_3>0$ are absolute constants. A simple computation shows that
\begin{equation}\frac{(n-k)\omega_{n-k}}{n\omega_n}\leqslant (c_4\sqrt{n})^k\end{equation}
for an absolute constant $c_4>0$. Using also \eqref{eq:constant-Iq} with $q=\sqrt{n}$ we get
\begin{align}\label{eq:sections-4}\frac{1}{|K\cap F|}\int_{K\cap F}\|x\|_2^{k+\sqrt{n}}dx &\leqslant \frac{1}{|K\cap F|}\,\frac{(n-k)\omega_{n-k}}{n\omega_n}(eI_{\sqrt{n}}(K))^{\sqrt{n}}\\
\nonumber &\leqslant (c_5L_K)^k(c_4\sqrt{n})^k(e\delta\sqrt{n}L_K)^{\sqrt{n}}\leqslant (c_6\sqrt{n}L_K)^{k+\sqrt{n}},
\end{align}
where $c_6>0$ is an absolute constant. It follows that
\begin{equation}\label{eq:sections-5}|\{x\in K\cap F:\|x\|_2\geqslant ec_6\sqrt{n}L_K\}|\leqslant e^{-(k+\sqrt{n})}|K\cap F|,
\end{equation}
and the lemma is proved with $c_1=ec_6$. $\quad \hfill \Box$
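For completeness, the estimate $\frac{(n-k)\omega_{n-k}}{n\omega_n}\leqslant (c_4\sqrt{n})^k$ that was used in the proof can be verified from the formula $\omega_m=\pi^{m/2}/\Gamma \left (\frac{m}{2}+1\right )$ together with the standard bound $\Gamma (x+s)\leqslant (x+s)^s\,\Gamma (x)$ for $x\geqslant 1$ and $s\geqslant 0$:
\begin{equation*}\frac{(n-k)\omega_{n-k}}{n\omega_n}\leqslant \frac{\omega_{n-k}}{\omega_n}=\frac{\Gamma \left (\frac{n}{2}+1\right )}{\pi^{k/2}\,\Gamma \left (\frac{n-k}{2}+1\right )}\leqslant \frac{1}{\pi^{k/2}}\left (\frac{n}{2}+1\right )^{k/2}\leqslant (c_4\sqrt{n})^k,\end{equation*}
where $c_4>0$ is an absolute constant.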
\medskip
The next lemma comes from \cite{Klartag-2004}.
\begin{lemma}[Klartag]\label{lem:sections-2}Let $A$ be a symmetric convex body in
$\mathbb R^m$. Then, for any $0<\varepsilon <1$ we have
\begin{equation}\label{eq:lem-sections-2}|\{x\in A
: \|x\|_2\geqslant \varepsilon R(A)\}|\geqslant \frac{1}{2}(1-\varepsilon)^m|A|.\end{equation}
\end{lemma}
\noindent {\it Proof.} Let $x_0\in A$ be such that $\|x_0\|_2=R(A)$ and define $v=x_0/\|x_0\|_2$. We consider the set $A^+$
defined as
\begin{equation}A^+:=\{x\in A : \langle x,v\rangle \geqslant 0\}.\end{equation}
Since $A$ is symmetric, we have $|A^+|= |A|/2$. Note that
\begin{equation}\{x\in A :\|x\|_2\geqslant \varepsilon R(A)\}\supseteq \varepsilon x_0+
(1-\varepsilon)A^+;\end{equation} indeed, if $y=\varepsilon x_0+(1-\varepsilon )a$ for some $a\in A^+$, then $y\in A$ by convexity and $\|y\|_2\geqslant \langle y,v\rangle \geqslant \varepsilon \langle x_0,v\rangle =\varepsilon R(A)$. Therefore,
\begin{equation}|\{x\in A: \|x\|_2\geqslant \varepsilon
R(A)\}|\geqslant |\varepsilon x_0+(1-\varepsilon) A^+|=(1-\varepsilon)^m|A^+|= \frac{1}{2}(1-\varepsilon)^m|A|,\end{equation}
as claimed. $\quad \hfill \Box$
\medskip
\noindent {\bf Proof of Theorem \ref{th:intro-2}.} Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$. Applying
Lemma \ref{lem:sections-1} we find a subset ${\cal A}$ of $G_{n,n-k}$ with $\nu_{n,n-k}({\cal A})\geqslant 1-e^{-\sqrt{n}}$
such that, for every $F\in {\cal A}$,
\begin{equation}\label{eq:final-1}|\{x\in K\cap F:\|x\|_2
\geqslant c_1\sqrt{n}L_K\}|\leqslant e^{-(k+\sqrt{n})}|K\cap F|.
\end{equation}
We distinguish two cases:
\smallskip
\noindent {\it Case 1.} If $k>n/3$ then choosing $\varepsilon_0 =1-e^{-\frac{1}{3}}$ we get
\begin{equation}\frac{1}{2}(1-\varepsilon_0)^{n-k}|K\cap F|=\frac{1}{2}e^{-\frac{n-k}{3}}|K\cap F|>e^{-\frac{n-k}{3}-1}|K\cap F|
>e^{-(k+\sqrt{n})}|K\cap F|,\end{equation}
because $k+\sqrt{n}>\frac{n-k}{3}+1$. By Lemma \ref{lem:sections-2} and \eqref{eq:final-1}
we get that
\begin{equation}|\{x\in K\cap F: \|x\|_2\geqslant \varepsilon_0 R(K\cap F)\}|>|\{x\in K\cap F:\|x\|_2
\geqslant c_1\sqrt{n}L_K\}|,\end{equation}
therefore
\begin{equation}R(K\cap F)<c_2\sqrt{n}L_K,\end{equation}
where $c_2=\varepsilon_0^{-1}c_1>0$ is an absolute constant.
\smallskip
\noindent {\it Case 2.} If $k\leqslant n/3$ then we choose $\varepsilon_1 =\frac{k+\sqrt{n}}{6(n-k)}$. Note that $\varepsilon_1<1/2$.
Using the inequality $1-t>e^{-2t}$ on $(0,1/2)$ we get
\begin{equation}\frac{1}{2}(1-\varepsilon_1)^{n-k}|K\cap F|=\frac{1}{2}\left (1-\frac{k+\sqrt{n}}{6(n-k)}\right )^{n-k}|K\cap F|
>e^{-\frac{k+\sqrt{n}}{3}-1}|K\cap F|>e^{-(k+\sqrt{n})}|K\cap F|,\end{equation}
because $\frac{2(k+\sqrt{n})}{3}>1$. By Lemma \ref{lem:sections-2} this implies that
\begin{equation}|\{x\in K\cap F: \|x\|_2\geqslant \varepsilon_1 R(K\cap F)\}|>|\{x\in K\cap F:\|x\|_2
\geqslant c_1\sqrt{n}L_K\}|,\end{equation}
therefore
\begin{equation}\varepsilon_1 R(K\cap F)< c_1\sqrt{n}L_K,\end{equation}
which, by the choice of $\varepsilon_1$ becomes
\begin{equation}R(K\cap F)< \frac{c_3n}{\max\{ k,\sqrt{n}\}}\,\sqrt{n}L_K\end{equation}
for some absolute constant $c_3>0$. This completes the proof of the theorem (with a probability estimate $1-e^{-\sqrt{n}}$ for all $1\leqslant k\leqslant n-1$).
$\quad \hfill \Box$
\begin{remark}\label{rem:3-2}\rm It is possible to improve the probability estimate $1-e^{-\sqrt{n}}$ in the range $k\geqslant \gamma n$, for any $\gamma\in (1/\sqrt{n},1)$.
This can be done with the help of known results showing that the existence of one $s$-dimensional section with radius $r$ implies that random $m$-dimensional sections, where $m<s$, have
radius of ``the same order''. This was first observed in \cite{Giannopoulos-Milman-Tsolomitis-2005}, \cite{Vershynin-2006} and, soon after,
in \cite{Litvak-Pajor-Tomczak-2006}. Let us recall this last statement.
\begin{quote}{\sl Let $A$ be a symmetric convex body in ${\mathbb R}^n$ and let
$1\leqslant s<m\leqslant n-1$. If $R(A\cap F)\leqslant r $ for some $F\in G_{n,m}$ then a random subspace $E\in G_{n,s}$ satisfies
\begin{equation}\label{eq:LPT}R(A\cap E)\leqslant r\,\Big ( \frac{c_2n}{n-m}\Big )^{\frac{n-s}{2(m-s)}}\end{equation}
with probability greater than $1-2e^{-(n-s)/2}$, where $c_2>0$ is an absolute constant.}
\end{quote}
We apply this result as follows. Let $k=\gamma n\geqslant\sqrt{n}$ and set $t=\delta n$, where $\delta \simeq\gamma /\log (1+1/\gamma )$. From the proof of Theorem \ref{th:intro-2} we know that there exists $E\in G_{n,n-t}$ such that
\begin{equation}R(K\cap E)\leqslant \frac{c_1n}{t}\sqrt{n}L_K,\end{equation}
where $c_1>0$ is an absolute constant. Applying \eqref{eq:LPT} with $s=n-k$ and $m=n-t$ we see that
a random subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}R(K\cap F)\leqslant \left (\frac{c_2}{\delta }\right )^{\frac{3}{2}}\,R(K\cap E)= c_3(\gamma )\sqrt{n}L_K\end{equation}
with probability greater than $1-2e^{-k/2}$, where $c_3(\gamma )=O((\gamma^{-1}\log (1+1/\gamma ))^{\frac{3}{2}})$.
\end{remark}
\begin{remark}\label{rem:3-3}\rm It is also possible to give lower bounds of the order of $\sqrt{n}L_K$
for the diameter of $(n-k)$-dimensional sections, provided that the codimension $k$ is small. Integration in polar coordinates shows that
\begin{equation}\int_K\|x\|_2^{-q}dx=\frac{n\omega_n}{(n-k)\omega_{n-k}}\int_{G_{n,n-k}}\int_{K\cap F}\|x\|_2^{k-q}dx\,d\nu_{n,n-k}(F)\end{equation}
for every $1\leqslant k\leqslant n-1$ and every $0<q<n$. It follows that
\begin{equation}\label{eq:sections-lower-1}\int_{G_{n,n-k}}\int_{K\cap F}\|x\|_2^{k-q}dx\,d\nu_{n,n-k}(F)=\frac{(n-k)\omega_{n-k}}{n\omega_n}I_{-q}^{-q}(K),\end{equation}
and an application of Markov's inequality shows that a random $F\in G_{n,n-k}$ satisfies
\begin{equation}\label{eq:sections-lower-2}\int_{K\cap F}\|x\|_2^{k-q}dx\leqslant \frac{(n-k)\omega_{n-k}}{n\omega_n}(e/I_{-q}(K))^q\end{equation}
with probability greater than $1-e^{-q}$. Assuming that $q>k$, for any $F\in G_{n,n-k}$ satisfying \eqref{eq:sections-lower-2} we have
\begin{equation}\label{eq:sections-lower-3}|K\cap F|\,R(K\cap F)^{k-q}\leqslant \int_{K\cap F}\|x\|_2^{k-q}dx\leqslant \frac{(n-k)\omega_{n-k}}{n\omega_n}(e/I_{-q}(K))^q,\end{equation}
which implies
\begin{equation}\label{eq:sections-lower-4}R(K\cap F)\geqslant \left (\frac{n\omega_n}{(n-k)\omega_{n-k}}\right )^{\frac{1}{q-k}}|K\cap F|^{\frac{1}{q-k}}\left (\frac{I_{-q}(K)}{e}\right )^{\frac{q}{q-k}}\geqslant \left (\frac{c_1}{\sqrt{n}L_K}\right )^{\frac{k}{q-k}}\left (c_2I_{-q}(K)\right )^{\frac{q}{q-k}}.\end{equation}
If $k\leqslant\sqrt{n}$ then we may choose $q=2\sqrt{n}$ and use the fact that $I_{-2\sqrt{n}}(K)\geqslant c_3\sqrt{n}L_K$ by Theorem \ref{th:grigoris}, to get:
\end{remark}
\begin{proposition}\label{prop:sections-3}Let $K$ be an isotropic convex body in ${\mathbb R}^n$. For every $1\leqslant k\leqslant\sqrt{n}$
there exists a subset ${\cal A}$ of $G_{n,n-k}$ with $\nu_{n,n-k}({\cal A})\geqslant 1-e^{-\sqrt{n}}$
such that, for every $F\in {\cal A}$,
\begin{equation}\label{eq:prop-lower-1}R(K\cap F)\geqslant c\sqrt{n}L_K,\end{equation}
where $c>0$ is an absolute constant.
\end{proposition}
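For completeness, let us spell out the bookkeeping behind \eqref{eq:prop-lower-1}; only \eqref{eq:sections-lower-4} and the bound of Theorem \ref{th:grigoris} are used. Set $a:=\frac{k}{q-k}$. With $q=2\sqrt{n}$ and $1\leqslant k\leqslant\sqrt{n}$ we have $q-k\geqslant\sqrt{n}$, hence $0<a\leqslant 1$ and $\frac{q}{q-k}=1+a$. Inserting $I_{-2\sqrt{n}}(K)\geqslant c_3\sqrt{n}L_K$ into \eqref{eq:sections-lower-4} we get
\begin{equation}R(K\cap F)\geqslant \Big (\frac{c_1}{\sqrt{n}L_K}\Big )^{a}\big (c_2c_3\sqrt{n}L_K\big )^{1+a}=c_1^{a}(c_2c_3)^{1+a}\,\sqrt{n}L_K\geqslant c\,\sqrt{n}L_K,\end{equation}
because the powers of $\sqrt{n}L_K$ combine to $(\sqrt{n}L_K)^{(1+a)-a}=\sqrt{n}L_K$, and $c_1^{a}(c_2c_3)^{1+a}\geqslant \min\{1,c_1\}\min\{1,c_2c_3\}^{2}=:c$ for every $a\in (0,1]$.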
\begin{remark}\label{rem:3-1}\rm Choosing $k=\lfloor n/2\rfloor $ in Theorem \ref{th:intro-2} we see that if $K$ is an isotropic symmetric convex body in ${\mathbb R}^n$
then a subspace $F\in G_{n,\lceil n/2\rceil }$ satisfies
\begin{equation}\label{eq:rotations-1}R(K\cap F)\leqslant c_1\sqrt{n}\,L_K\end{equation}
with probability greater than $1-2\exp (-c_2n)$, where $c_1, c_2>0$ are absolute
constants. A standard argument that goes back to Krivine (see \cite[Proposition 8.6.2]{AGA-book})
shows that there exists $U\in O(n)$ such that
\begin{equation}\label{eq:rotations-2}K\cap U(K)\subseteq (c_3\sqrt{n}L_K)\,B_2^n,\end{equation}
where $c_3>0$ is an absolute constant. In fact, one can prove an analogue of \eqref{eq:rotations-2} for a random $U\in O(n)$ using a result of Vershynin and Rudelson
(see \cite[Theorem 1.1]{Vershynin-2006}): There exist absolute constants $\gamma_0\in (0,1/2)$ and $c_1>0$ with the following property: if $A$ and $D$ are two symmetric convex bodies in ${\mathbb R}^n$ which have sections of dimensions at least $k$ and $n-2\gamma_0k$ respectively whose radius is bounded by $1$, then a random $U\in O(n)$
satisfies
\begin{equation}\label{eq:Rud-Ver}R(A\cap U(D))\leqslant c_1^{n/k}\end{equation} with probability greater than $1-e^{-n}$. As an application, setting $D=A$ and $k=n/2$ one has the following
(see \cite{Brazitikos-Stavrakakis-2014}). If
\begin{equation}r_A:=\min\{ R(A\cap F):{\rm dim}(F)=\lceil (1-\gamma_0)n\rceil \}\end{equation}
then $R(A\cap U(A))\leqslant c_2r_A$ with probability greater than $1-e^{-n}$ with respect to $U\in O(n)$.
Choosing $k=\lfloor \gamma_0n/2\rfloor $ in Theorem \ref{th:intro-2} we see that if $K$ is an isotropic symmetric convex body in ${\mathbb R}^n$
then
\begin{equation}r_K\leqslant c_4\sqrt{n}L_K\end{equation}
for some absolute constant $c_4>0$. This gives that a random $U\in O(n)$ satisfies
\begin{equation}K\cap U(K)\subseteq (c_5\sqrt{n}L_K)\,B_2^n,\end{equation} with probability greater than $1-e^{-n}$, where $c_5>0$ is an absolute constant.
\end{remark}
\section{Minimal and random sections of the centroid bodies of isotropic log-concave measures}
In this section we discuss the case of the $L_q$-centroid bodies $Z_q(\mu )$ of an isotropic log-concave probability
measure $\mu $ on ${\mathbb R}^n$. Our method will be different from the one in the previous section.
In view of \eqref{eq:LPT} we can give an upper bound for the radius of a
random $k$-codimensional section of a symmetric convex body $A$ in ${\mathbb R}^n$ if we are able to give an upper bound
for the radius of {\it some} $t$-codimensional section of $A$, where $t\ll k$. This leads us to the study of the
Gelfand numbers $c_t(A)$, which are defined by
\begin{equation}c_t(A)=\min\{R(A\cap F) : F \in G_{n,n-t}\}\end{equation}
for every $t = 0,\ldots,n-1$.
It was proved in \cite{Giannopoulos-EMilman-2014} that if $A$ is a symmetric convex body in ${\mathbb R}^n$
then, for any $t=1,\ldots,\lfloor n/2 \rfloor$ there exists $F \in G_{n,n-2t}$ such that
\begin{equation} \label{eq:G-EM-1}
A\cap F \subseteq c_1\frac{n}{t} \log\Big(e + \frac{n}{t}\Big) w_{t}(A) B_2^n \cap F,
\end{equation} where
\begin{equation}w_t(A) := \sup \{{\rm vrad}(A\cap E): E\in G_{n,t}\}.\end{equation}
In other words,
\begin{equation} \label{eq:G-EM-2}
c_{2t}(A)\leqslant c_1\frac{n}{t} \log\Big(e + \frac{n}{t}\Big) w_t(A).
\end{equation}
This is a refinement of a result of V.~Milman and G.~Pisier from \cite{VMilman-Pisier-1987}, where a similar estimate was
obtained, with the parameter $w_t(A)$ replaced by (the larger one)
\begin{equation}\label{eq:MP-parameter}v_t(A):= \sup \{{\rm vrad}(P_E(A)): E\in G_{n,t}\}.\end{equation}
We shall apply this method to the bodies $Z_q(\mu )$. The main additional
ingredient is the next fact, which combines results of Paouris and Klartag (see \cite{EMilman-2014} or \cite[Chapter 5]{BGVV-book}
for precise references):
\begin{theorem}\label{th:Emanuel}Let $\mu $ be a centered log-concave probability
measure on ${\mathbb R}^n$. Then, for all $1\leqslant t\leqslant n$ and $q\geqslant 1$ we have
\begin{equation}v_t(Z_q(\mu ))= \sup \{{\rm vrad}(P_E(Z_q(\mu ))): E\in G_{n,t}\}
\leqslant c_0\sqrt{\frac{q}{t}}\max\{ \sqrt{q},\sqrt{t}\}\max_{E\in G_{n,t}}\det\,{\rm Cov}(\pi_E(\mu ))^{\frac{1}{2t}},
\end{equation}
where $c_0>0$ is an absolute constant.
\end{theorem}
We apply Theorem \ref{th:Emanuel} as follows: for every $1\leqslant t\leqslant n/2$ and every $E\in G_{n,t}$ we have that $\pi_E(\mu )$ is isotropic, and
hence $\det\,{\rm Cov}(\pi_E(\mu ))^{\frac{1}{2t}}=1$. Then,
\begin{equation}w_t(Z_q(\mu ))\leqslant v_t(Z_q(\mu ))\leqslant c_0\sqrt{\frac{q}{t}}\max\{ \sqrt{q},\sqrt{t}\}.\end{equation}
From \eqref{eq:G-EM-2} we get
\begin{lemma}\label{lem:Zq-1}Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$
and let $1\leqslant t\leqslant \lfloor n/2 \rfloor$ and $1\leqslant q\leqslant n$. Then,
\begin{equation}\label{eq:ct-Zq}c_{2t}(Z_q(\mu ))\leqslant c_2\frac{n}{t}\log\left (e+\frac{n}{t}\right )\sqrt{\frac{q}{t}}\max\{ \sqrt{q},\sqrt{t}\},\end{equation}
where $c_2>0$ is an absolute constant. $\quad \hfill \Box$
\end{lemma}
Let $k\geqslant 4$ and let $t<k/2$. From Lemma \ref{lem:Zq-1} we know that there exists $E\in G_{n,n-2t}$ such that
\begin{equation}R(Z_q(\mu )\cap E)\leqslant c_2\frac{n}{t}\log\left (e+\frac{n}{t}\right )\sqrt{\frac{q}{t}}\max\{ \sqrt{q},\sqrt{t}\},\end{equation}
where $c_2>0$ is an absolute constant. Applying \eqref{eq:LPT} with $s=n-k$ and $m=n-2t$ we see that
a random subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}R(Z_q(\mu )\cap F)\leqslant \left (\frac{c_2n}{t}\right )^{\frac{k}{2(k-2t)}}\,R(Z_q(\mu )\cap E)\leqslant \left (\frac{c_3n}{t}\right )^{\frac{3}{2}+\frac{t}{k-2t}}\log\left (e+\frac{n}{t}\right )
\sqrt{\frac{q}{t}}\max\{ \sqrt{q},\sqrt{k}\}\end{equation}
with probability greater than $1-2e^{-k/2}$, where $c_3>0$ is an absolute constant. In particular, if $k=\gamma n$ we can choose
$t=\gamma n /\log (c/\gamma )$, for $c>e^2$, to get the following.
\begin{theorem}\label{th:Zq-sections}Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$
and let $\gamma \in (0,1)$ and $1\leqslant q\leqslant n$. If $k\geqslant \gamma n$ then a random subspace $F\in G_{n,n-k}$ satisfies
\begin{equation}R(Z_q(\mu )\cap F)\leqslant c(\gamma )\sqrt{q}\end{equation}
with probability greater than $1-2e^{-\gamma n/2}$, where $c(\gamma )=O(\gamma^{-2}\log^{5/2}(c/\gamma ))$ is a positive constant depending only on $\gamma $.
\end{theorem}
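For the exponent bookkeeping in the display preceding Theorem \ref{th:Zq-sections}, only two elementary observations are needed: since $t<k/2$ we have $\max\{\sqrt{q},\sqrt{t}\}\leqslant\max\{\sqrt{q},\sqrt{k}\}$, and
\begin{equation}\frac{k}{2(k-2t)}=\frac{(k-2t)+2t}{2(k-2t)}=\frac{1}{2}+\frac{t}{k-2t},\end{equation}
so that
\begin{equation}\Big (\frac{c_2n}{t}\Big )^{\frac{k}{2(k-2t)}}\cdot c_2\frac{n}{t}=\Big (\frac{c_2n}{t}\Big )^{\frac{1}{2}+\frac{t}{k-2t}}\cdot c_2\frac{n}{t}=\Big (\frac{c_2n}{t}\Big )^{\frac{3}{2}+\frac{t}{k-2t}},\end{equation}
and the constant $c_3$ absorbs $c_2$.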
Next, we apply \eqref{eq:Rud-Ver}: choosing $t=\gamma_0n/2$ in \eqref{eq:ct-Zq} we see that
\begin{equation}r_{Z_q(\mu )}=c_{\gamma_0n}(Z_q(\mu ))\leqslant c_4\sqrt{q} \end{equation}
for every $1\leqslant q\leqslant n$, where $c_4=c_4(\gamma_0)>0$ is an absolute constant. Therefore, we have:
\begin{theorem}\label{th:Zq-rotations}Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$ and let $1\leqslant q\leqslant n$.
Then, a random $U\in O(n)$ satisfies
\begin{equation}Z_q(\mu )\cap U(Z_q(\mu ))\subseteq (c\sqrt{q})\,B_2^n,\end{equation}
with probability greater than $1-e^{-n}$, where $c>0$ is an absolute constant.
\end{theorem}
Note that Theorem \ref{th:intro-Zq} summarizes the contents of Theorem \ref{th:Zq-sections} and Theorem \ref{th:Zq-rotations}.
\begin{remark}\label{rem:Zq-polars}\rm We can study the same question for the polar body $Z_q^{\circ }(\mu )$ of $Z_q(\mu )$. Note that
\begin{equation}w_t(Z_q^{\circ }(\mu )) := \sup \{{\rm vrad}(Z_q^{\circ }(\mu )\cap E): E\in G_{n,t}\}
\simeq [\inf \{{\rm vrad}(P_E(Z_q(\mu ))): E\in G_{n,t}\}]^{-1}\end{equation}
by duality and by the Bourgain-Milman inequality. For any $1\leqslant t\leqslant n-1$ and any symmetric convex body $A$ in ${\mathbb R}^n$ define
\begin{equation}v_t^-(A)=\inf \{{\rm vrad}(P_E(A)): E\in G_{n,t}\}.\end{equation}
In the case $A=Z_q(\mu )$ this parameter has been studied in \cite{Giannopoulos-EMilman-2014}:
\begin{lemma}\label{lem:vk-Zq}
Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$. For any $q\geqslant 1$ and $1\leqslant k\leqslant n-1$ we have:
\begin{equation}\label{eq:vk-Zq-1}v_k^{-}(Z_q(\mu)) \geqslant c_1 \sqrt{\min(q, \sqrt{k})}.\end{equation}
If we assume that $\sup_nL_n\leqslant\alpha $ then we have
\begin{equation}\label{eq:vk-Zq-2}v_k^{-}(Z_q(\mu)) \geqslant \frac{c_2}{\alpha }\sqrt{\min(q, k)},\end{equation}
where $c_1,c_2>0$ are absolute constants.
\end{lemma}
These estimates lead to the following bounds on the minimal radius of a $k$-codimensional section of $Z_q^{\circ }(\mu )$. The next theorem is also from \cite{Giannopoulos-EMilman-2014}.
\begin{theorem}\label{thm:Rqk-variant}
Let $\mu $ be an isotropic log-concave probability measure on ${\mathbb R}^n$. For any $q\geqslant 1$ and $1\leqslant k\leqslant n-1$ we have:
\smallskip
{\rm (i)} There exists $F \in G_{n,n-k}$ such that:
\begin{equation}\label{eq:Rqk-variant-1}P_F(Z_q(\mu)) \supseteq \frac{1}{R_{k,q}}B_2^n\cap F\quad\textrm{and hence}\quad R(Z_q^{\circ }(\mu )\cap F)\leqslant R_{k,q},\end{equation}
where
\begin{equation}\label{eq:Rqk-variant-2}R_{k,q} = \min\left\{ 1, c_3\frac{1}{\min(q^{1/2} , k^{1/4})} \frac{n}{k} \log\left( e+ \frac{n}{k}\right) \right\} .
\end{equation}
{\rm (ii)} If we assume that $\sup_nL_n\leqslant\alpha $ then there exists $F \in G_{n,n-k}$ such that:
\begin{equation}\label{eq:Rqk-variant-3}P_F(Z_q(\mu)) \supseteq \frac{1}{R_{k,q,\alpha }}B_2^n\cap F\quad\textrm{and hence}\quad R(Z_q^{\circ }(\mu )\cap F)\leqslant R_{k,q,\alpha },\end{equation}
where
\begin{equation}\label{eq:Rqk-variant-4}R_{k,q,\alpha } = \min \left\{ 1, c_4\alpha\frac{1}{\sqrt{\min(q,k)}} \frac{n}{k} \log\left( e+ \frac{n}{k}\right) \right\}.
\end{equation}
\end{theorem}
Assuming that $q\leqslant\sqrt{n}$ and choosing $k=\gamma_0n$ we see from \eqref{eq:Rqk-variant-1} and \eqref{eq:Rqk-variant-2} that
\begin{equation}c_{\gamma_0n}(Z_q^{\circ }(\mu ))\leqslant c_1(\gamma_0)\frac{1}{\sqrt{q}}\end{equation}
where $c_1(\gamma_0)>0$ is an absolute constant. Then, we apply \eqref{eq:LPT} with $s=n/2$ and $m=(1-\gamma_0)n$
to get that a random subspace $E\in G_{n,n/2}$ satisfies
\begin{equation}R(Z_q^{\circ }(\mu )\cap E)\leqslant c_3\cdot c_{\gamma_0n}(Z_q^{\circ }(\mu ))\leqslant c_2(\gamma_0)\frac{1}{\sqrt{q}}\end{equation}
with probability greater than $1-2e^{-n/4}$, where $c_2(\gamma_0)>0$ is an absolute constant. As usual, this implies that a
random $U\in O(n)$ satisfies
\begin{equation}Z_q^{\circ }(\mu )\cap U(Z_q^{\circ }(\mu ))\subseteq \frac{c}{\sqrt{q}}\,B_2^n,\end{equation}
with probability greater than $1-e^{-n}$, where $c>0$ is an absolute constant. This estimate appears in
\cite{Klartag-EMilman-2012b} (and a second proof is given in \cite{Brazitikos-Stavrakakis-2014}).
Assuming that $\sup_nL_n\leqslant\alpha $ we may apply the same reasoning for every $1\leqslant q\leqslant n$:
choosing $k=\gamma_0n$ we see from \eqref{eq:Rqk-variant-3} and \eqref{eq:Rqk-variant-4} that
\begin{equation}c_{\gamma_0n}(Z_q^{\circ }(\mu ))\leqslant c_1(\gamma_0)\frac{\alpha }{\sqrt{q}},\end{equation}
where $c_1(\gamma_0)>0$ is an absolute constant. Then, we apply \eqref{eq:LPT} with $s=n/2$ and $m=(1-\gamma_0)n$
to get that a random subspace $E\in G_{n,n/2}$ satisfies
\begin{equation}R(Z_q^{\circ }(\mu )\cap E)\leqslant c_3\cdot c_{\gamma_0n}(Z_q^{\circ }(\mu ))\leqslant c_2(\gamma_0)\frac{\alpha }{\sqrt{q}}\end{equation}
with probability greater than $1-2e^{-n/4}$, where $c_2(\gamma_0)>0$ is an absolute constant. Finally, this implies that a
random $U\in O(n)$ satisfies
\begin{equation}Z_q^{\circ }(\mu )\cap U(Z_q^{\circ }(\mu ))\subseteq \frac{c\alpha }{\sqrt{q}}\,B_2^n,\end{equation}
with probability greater than $1-e^{-n}$, where $c>0$ is an absolute constant.
\end{remark}
\subsection{Random sections of bodies with maximal isotropic constant}
Starting with an isotropic symmetric convex body $K$ in ${\mathbb R}^n$ we can use the method of this section in order to estimate the quantities
\begin{equation}c_t(K)=\min\{R(K\cap F) : F \in G_{n,n-t}\}\end{equation}
for every $t = 0,\ldots,n-1$. From \eqref{eq:not-9} we have
\begin{equation}|K\cap E|^{\frac{1}{n-t}} \leqslant
c_2\frac{L_{\overline{K_{n-t+1}}(\pi_{E^{\perp }}(\mu_{K}))}}{L_K}\leqslant \frac{c_3L_{n-t}}{L_K}
\end{equation}
for every $E\in G_{n,t}$, therefore
\begin{equation}w_t(K)\leqslant c_4\sqrt{t}\left (\frac{c_3L_{n-t}}{L_K}\right )^{\frac{n-t}{t}}.\end{equation}
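The last estimate follows from the definition of the volume radius together with the standard asymptotics $\omega_t^{1/t}\simeq 1/\sqrt{t}$: for every $E\in G_{n,t}$,
\begin{equation}{\rm vrad}(K\cap E)=\Big (\frac{|K\cap E|}{\omega_t}\Big )^{\frac{1}{t}}\leqslant \omega_t^{-\frac{1}{t}}\Big (\frac{c_3L_{n-t}}{L_K}\Big )^{\frac{n-t}{t}}\leqslant c_4\sqrt{t}\,\Big (\frac{c_3L_{n-t}}{L_K}\Big )^{\frac{n-t}{t}}.\end{equation}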
Assume that $K$ has maximal isotropic constant, i.e. $L_K=L_n^{\prime }$ (the same argument works if we assume that $L_K$ is
almost maximal, i.e. $L_K\geqslant \beta L_n^{\prime }$ for some absolute constant $\beta\in (0,1)$). It is known that
$L_{n-t}\leqslant c_1L_n\leqslant c_2L_n^{\prime }$ for all $1\leqslant t\leqslant n-1$, where $c_1,c_2>0$ are absolute constants. Therefore, we get:
\begin{lemma}\label{lem:max-Ln-1}Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$ such that $L_K=L_n^{\prime }$,
and let $1\leqslant t\leqslant \lfloor n/2 \rfloor$. Then,
\begin{equation}c_{2t}(K)\leqslant c_1^{\frac{n-t}{t}}\frac{n}{\sqrt{t}} \log\Big(e + \frac{n}{t}\Big),\end{equation}
where $c_1>0$ is an absolute constant.
\end{lemma}
Then, we apply \eqref{eq:LPT} with $s=n/2$ and $m=(1-\gamma_0)n$
to get that a random subspace $E\in G_{n,n/2}$ satisfies
\begin{equation}R(K\cap E)\leqslant c_3\cdot c_{\gamma_0n}(K)\leqslant c_1(\gamma_0)\sqrt{n}\end{equation}
with probability greater than $1-2e^{-n/4}$, where $c_1(\gamma_0)>0$ is an absolute constant.
Also, since $c_{\gamma_0n}(K)\leqslant c(\gamma_0)\sqrt{n}$, we may apply \eqref{eq:Rud-Ver} to get:
\begin{theorem}\label{th:max-Ln-2}Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$ with $L_K=L_n^{\prime }$. A random $U\in O(n)$ satisfies
\begin{equation}K\cap U(K)\subseteq (c_3\sqrt{n})\,B_2^n,\end{equation} with probability greater than $1-e^{-n}$, where $c_3>0$ is an absolute constant.
\end{theorem}
We can also prove the local analogue of this fact: random proportional sections of a body with maximal isotropic constant
have bounded isotropic constant.
\begin{theorem}\label{th:max-Ln-3}Let $K$ be an isotropic symmetric convex body in ${\mathbb R}^n$ with $L_K=L_n^{\prime }$. A random $F\in G_{n,n/2}$ satisfies
\begin{equation}L_{K\cap F}\leqslant c_4\end{equation} with probability greater than $1-e^{-c_5n}$, where $c_4,c_5>0$ are absolute constants.
\end{theorem}
\noindent {\it Proof.} It was proved in \cite{Dafnis-Paouris-2010} (see also \cite[Lemma 6.3.5]{BGVV-book}) that if $L_K=L_n^{\prime }$ then
\begin{equation}|K\cap F|^{\frac{1}{n}}\geqslant c_6\end{equation}
for every $F\in G_{n,n/2}$, where $c_6>0$ is an absolute constant. Since $R(K\cap F)\leqslant c_3\sqrt{n}$ for a random $F\in G_{n,n/2}$,
for all these $F$ we get
\begin{equation}\frac{n}{2}L_{K\cap F}^2\leqslant \frac{1}{|K\cap F|^{1+\frac{4}{n}}}\int_{K\cap F}\|x\|_2^2dx
\leqslant \frac{1}{|K\cap F|^{\frac{4}{n}}}R^2(K\cap F)\leqslant c_6^{-4}c_3^2n,\end{equation}
which implies that
\begin{equation}L_{K\cap F}\leqslant c_4,\end{equation}
where $c_4=\sqrt{2}c_6^{-2}c_3$; here we used that ${\rm dim}(K\cap F)=n/2$, so the exponent in the definition of the isotropic constant is $1+\frac{2}{n/2}=1+\frac{4}{n}$. $\quad \hfill \Box$
\section{Sub-Gaussian subspaces}
In this section we prove Theorem \ref{th:1.3}. We will use E.~Milman's estimates \cite{EMilman-2014} on the mean width $w(Z_q(K))$ of
the $L_q$-centroid bodies $Z_q(K)$ of an isotropic convex body $K$ in ${\mathbb R}^n$.
\begin{theorem}[E.~Milman]\label{th:Emanuel2}Let $K$ be an isotropic convex body in ${\mathbb R}^n$. Then, for all $q\geqslant 1$ one has
\begin{equation}w(Z_q(K)) \leqslant c_1\log (1+q)\max\left\{\frac{q\log (1+q)}{\sqrt{n}},\sqrt{q}\right\}L_K\end{equation}
where $c_1>0$ is an absolute constant.
\end{theorem}
We also use the next fact on the diameter of $k$-dimensional projections of symmetric convex bodies (see \cite[Proposition 5.7.1]{AGA-book}).
\begin{proposition}\label{prop:diam-rdm-proj}
Let $D$ be a symmetric convex body in $\mathbb R^n$ and let $1\leqslant k<n$ and $\alpha >1$.
Then there exists a subset $\Gamma_{n,k}\subset G_{n,k}$ with measure $\nu_{n,k}(\Gamma_{n,k})\geqslant 1-e^{-c_2\alpha^2k}$ such that
the orthogonal projection of $D$ onto any subspace $F\in \Gamma_{n,k}$ satisfies
\begin{equation}
R(P_F(D))\leqslant c_3\alpha \max\{w(D),R(D)\sqrt{k/n}\},
\end{equation} where $c_2>0,c_3>1$ are absolute constants.
\end{proposition}
Combining Proposition \ref{prop:diam-rdm-proj} with Theorem \ref{th:Emanuel2} and the fact that $R(Z_q(K))\leqslant cqL_K$, we get:
\begin{lemma}\label{lem:main-1}Let $K$ be an isotropic convex body in ${\mathbb R}^n$. Given $1\leqslant q\leqslant n$ define $k_0(q)$ by the equation
\begin{equation}k_0(q)=\log^2(1+q)\max\{ \log^2(1+q),n/q\}.\end{equation}
Then, for every $1\leqslant k\leqslant k_0(q)$, a random $F\in G_{n,k}$ satisfies
\begin{equation}R(P_F(Z_q(K))) \leqslant c_1\alpha\log (1+q)\max\left\{\frac{q\log (1+q)}{\sqrt{n}},\sqrt{q}\right\}L_K\end{equation}
with probability greater than $1-e^{-c_2\alpha^2k_0(q)}$, where $c_1,c_2>0$ are absolute constants.
\end{lemma}
\noindent {\it Proof.} Since $R(Z_q(K))\leqslant cqL_K$ we see that
\begin{align}\frac{R(Z_q(K))\sqrt{k_0(q)}}{\sqrt{n}} &\leqslant \frac{cq}{\sqrt{n}}\log (1+q)\max\left\{ \log (1+q),\frac{\sqrt{n}}{\sqrt{q}}\right\}L_K\\
\nonumber &= c\log (1+q)\max\left\{\frac{q\log (1+q)}{\sqrt{n}},\sqrt{q}\right\}L_K.\end{align}
From Theorem \ref{th:Emanuel2} we have an upper bound of the same order for $w(Z_q(K))$. Then, we apply Proposition \ref{prop:diam-rdm-proj}
for $Z_q(K)$. $\quad \hfill \Box$
\begin{remark}\label{rem:heredit}\rm Note that if $1\leqslant s\leqslant k$ then the conclusion of Proposition \ref{prop:diam-rdm-proj} continues to hold
for a random $F\in G_{n,s}$ with the same probability on $G_{n,s}$; this is an immediate consequence of Fubini's
theorem and of the fact that $R(P_H(D))\leqslant R(P_F(D))$ for every $s$-dimensional subspace $H$ of a $k$-dimensional
subspace $F$ of ${\mathbb R}^n$.
\end{remark}
\noindent {\bf Proof of Theorem \ref{th:1.3}.} We define $q_0$ by the equation
\begin{equation}q_0\log^2(1+q_0)=n.\end{equation}
Note that $q_0\simeq n/(\log n)^2$ and $\log (1+q_0)\simeq \log n$. For every $2\leqslant q\leqslant q_0$ we have $q\log^2(1+q)\leqslant n$, therefore
\begin{equation}k_0(q)= \frac{n\log^2(1+q)}{q}\geqslant \frac{c_1n\log^2(1+q_0)}{q_0}\end{equation}
for some absolute constant $c_1>0$, because $q\mapsto \log^2(1+q)/q$ is decreasing for $q\geqslant 4$. It follows that
\begin{equation}k_0(q)\geqslant c_1\log^4(1+q_0)\geqslant c_2(\log n)^4\end{equation}
for all $2\leqslant q\leqslant q_0$.
Now, we fix $\alpha >1$ and define
\begin{equation}k_0=c_1\log^4(1+q_0).\end{equation}
Using Lemma \ref{lem:main-1} and Remark \ref{rem:heredit}, for every $q\leqslant q_0$ we can find a set $\Gamma_q\subseteq G_{n,k_0}$
with $\nu_{n,k_0}(\Gamma_q)\geqslant 1-e^{-c\alpha^2k_0}$ such that
\begin{equation}R(P_F(Z_q(K))) \leqslant c_3\alpha\log (1+q)\max\left\{\frac{q\log (1+q)}{\sqrt{n}},\sqrt{q}\right\}L_K\leqslant c_3\alpha\sqrt{q}\log (1+q)L_K\end{equation}
for all $F\in \Gamma_q$. If $\Gamma :=\bigcap_{s=1}^{\lfloor\log_2q_0\rfloor }\Gamma_{2^s}$, then
\begin{equation}\nu_{n,k_0} \big(G_{n,k_0}\setminus \Gamma \big)\leqslant \sum_{s=1}^{\lfloor\log_2q_0\rfloor }\nu_{n,k_0}\big(G_{n,k_0}\setminus \Gamma_{2^s}\big)\leqslant c(\log n)e^{-c\alpha^2k_0}\leqslant \frac{1}{n^{\log^3n}}\end{equation}
if $\alpha\simeq 1$ is chosen large enough. Then for every $F\in \Gamma $, for all $\theta\in S_F$ and for every $1\leqslant s\leqslant \lfloor\log_2q_0\rfloor $ we have
\begin{equation}\label{eq:small-q}\frac{h_{Z_{2^s}(K)}(\theta )}{\sqrt{2^s}}
=\frac{h_{P_F(Z_{2^s}(K))}(\theta )}{\sqrt{2^s}}\leqslant c_3\alpha\log (1+2^s)L_K\leqslant c_4\alpha (\log n)L_K.\end{equation}
Taking into account the fact that if $2^s\leqslant q<2^{s+1}$ then
\begin{equation}\frac{h_{Z_{q}(K)}(\theta )}{\sqrt{q}}\leqslant \frac{h_{Z_{2^{s+1}}(K)}(\theta )}{2^{s/2}}= \sqrt{2}\frac{h_{Z_{2^{s+1}}(K)}(\theta )}{2^{(s+1)/2}},\end{equation}
we see that
\begin{equation}\label{eq:final-11}\frac{h_{Z_{q}(K)}(\theta )}{\sqrt{q}}\leqslant c_5\alpha (\log n)L_K\end{equation}
for every $F\in \Gamma $, for all $\theta\in S_F$ and for every $2\leqslant q\leqslant q_0$.
Next, observe that if $q_0\leqslant q\leqslant n$ then we may write
\begin{align}\frac{h_{Z_{q}(K)}(\theta )}{\sqrt{q}} &\leqslant \frac{c_6q}{q_0}\frac{h_{Z_{q_0}(K)}(\theta )}{\sqrt{q}}= \frac{c_6\sqrt{q}}{\sqrt{q_0}}\frac{h_{Z_{q_0}(K)}(\theta )}{\sqrt{q_0}}
\leqslant \frac{c_6\sqrt{n}}{\sqrt{q_0}}\frac{h_{Z_{q_0}(K)}(\theta )}{\sqrt{q_0}}\\
\nonumber &=c_6\log (1+q_0)\frac{h_{Z_{q_0}(K)}(\theta )}{\sqrt{q_0}}\leqslant c_7(\log n)\frac{h_{Z_{q_0}(K)}(\theta )}{\sqrt{q_0}},\end{align}
and hence
\begin{equation}\label{eq:final-2}\frac{h_{Z_{q}(K)}(\theta )}{\sqrt{q}}\leqslant c_7\alpha (\log n)^2L_K\end{equation}
for every $F\in \Gamma $, for all $\theta\in S_F$ and for every $q_0\leqslant q\leqslant n$.
Recall that $\Psi_2(K)$ is the convex body with support function $h_{\Psi_2(K)}(y)=\|\langle \cdot ,y\rangle\|_{L_{\psi_2}(K)}$. One also has
\begin{equation}h_{\Psi_2(K)}(y)\simeq \sup_{q\geqslant 2}\frac{h_{Z_q(K)}(y)}{\sqrt{q}}\simeq \sup_{2\leqslant q\leqslant n}\frac{h_{Z_q(K)}(y)}{\sqrt{q}}\end{equation}
because $h_{Z_q(K)}(y)\simeq h_{Z_n(K)}(y)$ for all $q\geqslant n$. Then, \eqref{eq:final-11} and \eqref{eq:final-2} and the fact that $\alpha\simeq 1$ show that
\begin{equation}\|\langle \cdot ,\theta\rangle\|_{L_{\psi_2}(K)}\leqslant C(\log n)^2L_K\end{equation}
for every $F\in \Gamma $ and for all $\theta\in S_F$, where $C>0$ is an absolute constant. $\hfill\Box $
\bigskip
\bigskip
\footnotesize
\bibliographystyle{amsplain}
\section{Introduction}\label{intro}
The last decades have witnessed important advancements in policy evaluation methods for assessing the causal effect of a treatment on an outcome of interest, which are particularly relevant in the context of data with many observations and/or observed covariates. Such advancements include the development or refinement of quasi-experimental evaluation techniques, estimators for flexible (i.e.\ semi- or nonparametric) treatment effect models, and machine learning algorithms for a data-driven control for covariates in order to tackle confounding, learn effect heterogeneities across subgroups and target groups for which the treatment is most effective. Policy evaluation methods aim at assessing causal effects despite the problem that for any subject in the data, outcomes cannot be observed at the same time in the presence and absence of the treatment. As an illustration of this fundamental problem for causality, consider the treatment effect of a job application training for jobseekers on employment. Identifying this effect on the individual level requires comparing the employment state for a specific subject at a particular point in time with and without training participation. However, at a specific point in time, an individual can be observed to have either participated or not participated in the training, but not both. Therefore, treatment effects remain unidentified on the individual level without strong assumptions.
Formally, denote by $D$ a binary treatment, such that $D=1$ if for instance someone participates in a training and $D=0$ otherwise. Furthermore, denote by $Y$ the observed outcome, e.g.\ employment. Following \cite{Rubin74}, let $Y(1)$ and $Y(0)$ denote the potential outcomes a subject would realize if $D$ were set to 1 and 0, respectively, e.g.\ the potential employment state with and without training. It is assumed throughout that $Y(1)$ and $Y(0)$ only depend on the subject's own treatment and not on the treatment values of other subjects, which is known as the `Stable Unit Treatment Value Assumption', see \cite{Rubin1990}. Observed employment $Y$ corresponds to either $Y(1)$ if the individual receives the training ($D=1$) or to $Y(0)$ otherwise. The fact that not both potential outcomes are observed at the same time is formally expressed in the following equation:
\begin{eqnarray}\label{eq1}
Y= Y(1)\cdot D + Y(0)\cdot (1-D).
\end{eqnarray}
It is easy to see that \eqref{eq1} is equivalent to $Y=Y(0)+D\cdot[Y(1)-Y(0)]$, where the observed outcome is the sum of the potential outcome without intervention and $D$ times $Y(1)-Y(0)$, i.e.\ the causal effect of $D$ on $Y$. As either $Y(1)$ or $Y(0)$ is unknown depending on the value of $D$, the treatment effect can in general not be identified for any subject.
Under specific assumptions, however, aggregate treatment effects are identified based on groups of individuals receiving and not receiving the treatment. Two parameters that have received substantial attention are the average treatment effect (ATE, denoted by $\Delta$) in the population, e.g.\ among all jobseekers, and the treatment effect on the treated population (ATET, denoted by $\Delta_{D=1}$), e.g.\ among training participants:
\begin{eqnarray}\label{ate}
\Delta=E[Y(1)-Y(0)],\quad \Delta_{D=1}=E[Y(1)-Y(0)|D=1].
\end{eqnarray}
One assumption yielding identification is statistical independence of treatment assignment and potential outcomes. Formally,
\begin{eqnarray}\label{random}
\{Y(1),Y(0)\}\bot D,
\end{eqnarray}
where `$\bot$' denotes statistical independence. \eqref{random} implies that there exist no variables jointly affecting the treatment and the potential outcomes. It is satisfied by design in experiments where the treatment is randomized, i.e.\ not a function of any observed or unobserved characteristics like education, gender, or income. The ATE is then identified by the mean difference in observed outcomes across treated and nontreated groups. This follows from the fact that by \eqref{eq1}, $E[Y|D=1]=E[Y(1)|D=1]$ and $E[Y|D=0]=E[Y(0)|D=0]$, while it follows from \eqref{random} that $E[Y(1)|D=1]=E[Y(1)]$ and $E[Y(0)|D=0]=E[Y(0)]$. As the average outcomes among treated and nontreated are representative of the respective mean potential outcomes under treatment and nontreatment in the population, $E[Y|D=1]-E[Y|D=0]=\Delta$.
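The identification argument under randomization can be illustrated with a small simulation (an illustrative sketch with a made-up data-generating process, not part of this chapter's empirical content): potential outcomes are generated with an ATE of one, $D$ is randomized, and the simple mean difference recovers the ATE up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Potential outcomes with heterogeneous effects; true ATE = E[Y(1) - Y(0)] = 1
y0 = rng.normal(size=n)
y1 = y0 + 1.0 + 0.5 * rng.normal(size=n)

# Randomized treatment: D is independent of (Y(1), Y(0))
d = rng.binomial(1, 0.5, size=n)

# Observation rule: Y = Y(1)*D + Y(0)*(1 - D)
y = d * y1 + (1 - d) * y0

ate_true = (y1 - y0).mean()
mean_diff = y[d == 1].mean() - y[d == 0].mean()
print(round(ate_true, 2), round(mean_diff, 2))  # both close to 1.0
```

With a confounded treatment (e.g.\ $D$ depending on a characteristic that also shifts $Y(0)$), the same mean difference would no longer estimate the ATE; this is the selection bias discussed next.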
When the treatment is not randomized, however, a mean comparison of treated and nontreated outcomes is generally biased due to selective treatment take-up, implying that subjects in the treated and nontreated groups differ in characteristics that also affect the outcome. Jobseekers attending a job application training could, for instance, on average have a different level of labor market experience or education than those not participating. Differences in the observed outcomes of treated and nontreated subjects therefore do not exclusively reflect the treatment effect, but also the effects of such characteristics, which are thus confounders of the treatment-outcome relation. Formally, the selection biases for the ATE and ATET are given by
\begin{eqnarray}
E[Y|D=1]-E[Y|D=0]-\Delta&=&E[Y|D=1]-E[Y(1)]+E[Y(0)]-E[Y|D=0],\notag \\
E[Y|D=1]-E[Y|D=0]-\Delta_{D=1}&=&E[Y(0)|D=1]-E[Y|D=0].
\end{eqnarray}
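The ATET decomposition can be checked numerically: in any sample, the naive mean difference minus the (infeasible) sample ATET equals the sample analogue of $E[Y(0)|D=1]-E[Y(0)|D=0]$. The sketch below (again with a made-up data-generating process, for illustration only) makes treatment take-up depend on a confounder so that this bias term is large.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder (e.g. labor market experience) shifts both take-up and Y(0)
x = rng.normal(size=n)
y0 = x + rng.normal(size=n)
y1 = y0 + 1.0                                  # constant effect: ATE = ATET = 1
d = (x + rng.normal(size=n) > 0).astype(int)   # selective take-up via x
y = d * y1 + (1 - d) * y0

naive = y[d == 1].mean() - y[d == 0].mean()
atet = (y1 - y0)[d == 1].mean()                # infeasible: uses both potential outcomes
bias = y0[d == 1].mean() - y0[d == 0].mean()   # E[Y(0)|D=1] - E[Y(0)|D=0]

# Exact in-sample identity: naive difference = ATET + selection bias
print(abs(naive - (atet + bias)) < 1e-8)       # → True
```

In this design the bias term is positive (treated subjects have higher values of the confounder and hence higher $Y(0)$), so the naive contrast overstates the ATET.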
Different strategies have been developed for avoiding or tackling selection into treatment in order to identify causal effects. This chapter reviews the most prominent approaches, focusing on methods for flexible model selection and estimation particularly appropriate in big data contexts with many observations and/or variables. Section \ref{selobs} covers methods relying on selection-on-observables assumptions, implying that observed preselected covariates are sufficient to control for characteristics jointly affecting the treatment and the potential outcomes. Section \ref{prac} discusses practical issues to be verified in the data when invoking the selection-on-observables assumption, e.g.\ the similarity of treated and nontreated subjects used for estimation in terms of observed characteristics,
as well as extensions e.g.\ to multivalued treatments and different treatment parameters. Section \ref{ML} covers causal machine learning, where observed covariates are not preselected, but it is assumed that important confounders can be controlled for in a data-driven way by machine learning algorithms. Section \ref{MLhet} outlines the application of machine learning for the data-driven detection of effect heterogeneities across subgroups defined upon observed covariates as well as for learning optimal policy rules to target subgroups in a way that maximizes the treatment effect.
Section \ref{IV} considers treatment evaluation based on instrumental variables. Here, treatment selection may be related to unobserved characteristics if a quasi-random instrument exists that affects the treatment, but not directly the outcome. Section \ref{did} discusses difference-in-differences methods, where identification hinges on common trends in mean potential outcomes under nontreatment over time across actually treated and nontreated groups. It also presents the changes-in-changes approach, which assumes that within treatment groups, the distribution of unobserved characteristics that affect the potential outcome under nontreatment remains constant over time. Section \ref{rdd} introduces the regression discontinuity design, which assumes the treatment probability to discontinuously change and be quasi-randomly assigned at a specific threshold value of an observed index variable. It also discusses the regression kink design, which assumes a kink in the (continuous) association of the treatment and the index variable at a specific threshold. Section \ref{conclusion} concludes.
\section{Selection on observables with preselected covariates}\label{selobs}
The selection-on-observables assumption, also called conditional independence or exogeneity, postulates that the
covariate information in the data is rich enough to control for characteristics jointly affecting the treatment and the outcome. This implies that one either directly observes those characteristics confounding the treatment-outcome relationship or that conditional on the observed information, the effects of unobserved confounders on either the treatment or the outcome (or both) are controlled for. As a further assumption known as common support, it is required that for any empirically feasible combination of observed covariates, both treated and nontreated subjects can be observed, which rules out that the covariates deterministically predict participation. Finally, the covariates must in general not be affected by the treatment, but measured at or prior to treatment assignment.
Denote by $X$ the vector of observed covariates and $X(1),X(0)$ the potential covariate values with and without treatment. Formally, the assumptions can be stated as
\begin{eqnarray}\label{assumpselobs}
\{Y(1),Y(0)\}\bot D|X, \quad 0<p(X)<1, \quad X(1)=X(0)=X,
\end{eqnarray}
where $p(X)=\Pr(D=1|X)$ is the conditional treatment probability, also known as propensity score. The first part of \eqref{assumpselobs} means that the distributions of the potential outcomes are conditionally independent of the treatment. This implies that $D$ is as good as randomly assigned among subjects with the same values in $X$. The second part says that the propensity score is larger than zero and smaller than one such that $D$ is not deterministic in $X$ and common support holds. The third part states that $X$ is not a function of $D$ and therefore must not contain (post-treatment) characteristics that are affected by the treatment, in order to not condition away part of the treatment effect of interest. This identification approach mimics the experimental context with the help of observed information. After creating groups with and without treatment that are comparable in the covariates, differences in the outcomes are assumed to be exclusively caused by the treatment.
The first part of \eqref{assumpselobs} is somewhat stronger than actually required for ATE identification and could be relaxed to conditional independence in the means (rather than all moments) of potential outcomes, $E[Y(d)|D=1,X]=E[Y(d)|D=0,X]$ for $d$ $\in$ $\{1,0\}$. In empirical applications it might, however, be hard to argue that conditional independence holds in means but not in other distributional features, which would for instance rule out mean independence for nonlinear (e.g.\ log) transformations of $Y$. Furthermore, the stronger conditional independence assumption in \eqref{assumpselobs} is required for the identification of distributional parameters like the quantile treatment effect, which corresponds to the effect at a particular rank of the potential outcome distribution. Also note that for the identification of treatment parameters among the treated (rather than the total) population like the ATET, \eqref{assumpselobs} can be relaxed to $Y(1)\bot D|X$, $p(X)<1$.
Let $\mu_d(x)=E[Y|D=d,X=x]$ denote the conditional mean outcome given $D$ corresponding to $d$ $\in \{1,0\}$ and $X$ equaling some value $x$ in its support. Analogous to identification under a random treatment discussed in Section \ref{intro}, $\mu_1(x)-\mu_0(x)$ under \eqref{assumpselobs} identifies the conditional average treatment effect (CATE) given $X$, denoted by $\Delta_x$:
\begin{eqnarray}\label{cate}
\Delta_x=E[Y(1)-Y(0)|X=x]=\mu_1(x)-\mu_0(x).
\end{eqnarray}
Averaging CATEs over $X$ in the population or among treated yields the ATE or ATET, respectively:
\begin{eqnarray}\label{condmeanselobs}
\Delta&=&E[\mu_1(X)-\mu_0(X)],\\
\Delta_{D=1}&=&E[\mu_1(X)-\mu_0(X)|D=1]=E[Y|D=1]-E[\mu_0(X)|D=1].\notag
\end{eqnarray}
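As an illustration of estimation based on the sample analogs of \eqref{condmeanselobs}, the following minimal simulation sketch computes regression-adjustment estimates of the ATE and ATET. The data-generating process is hypothetical, and OLS within each treatment arm stands in for an arbitrary estimator of $\mu_1(X)$ and $\mu_0(X)$:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 20_000
X = rng.normal(size=n)                           # single confounder
p = 1 / (1 + np.exp(-X))                         # treatment probability depends on X
D = (rng.uniform(size=n) < p).astype(float)
Y = 2.0 * D + X + rng.normal(size=n)             # true ATE = ATET = 2

# Estimate mu_d(x) = E[Y|D=d, X=x] by OLS within each treatment arm,
# then average the predicted differences as in the identification results.
def ols_fit_predict(x_fit, y_fit, x_pred):
    Z = np.column_stack([np.ones_like(x_fit), x_fit])
    b = np.linalg.lstsq(Z, y_fit, rcond=None)[0]
    return b[0] + b[1] * x_pred

mu1_hat = ols_fit_predict(X[D == 1], Y[D == 1], X)
mu0_hat = ols_fit_predict(X[D == 0], Y[D == 0], X)
ate_reg = np.mean(mu1_hat - mu0_hat)                       # sample analog for the ATE
atet_reg = Y[D == 1].mean() - mu0_hat[D == 1].mean()       # sample analog for the ATET
```

In this linear example the OLS specification is correct; with more complex outcome models, a flexible (e.g.\ nonparametric) first step would be needed instead.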
Noting that the propensity score possesses the so-called balancing property, see \cite{rosenbaum1983}, such that conditioning on $p(X)$ equalizes or balances the distribution of $X$ across treatment groups (i.e.\ $X \bot D | p(X)$), the effects are also identified when substituting control variables $X$ by $p(X)$:
\begin{eqnarray}\label{condmeanselobspscore}
\Delta&=&E[\mu_1(p(X))-\mu_0(p(X))],\\
\Delta_{D=1}&=&E[\mu_1(p(X))-\mu_0(p(X))|D=1]=E[Y|D=1]-E[\mu_0(p(X))|D=1].\notag
\end{eqnarray}
By basic probability theory, implying e.g.\ $\mu_1(X)=E[Y\cdot D|X]/p(X)$, and the law of iterated expectations, the ATE and ATET are also identified by inverse probability weighting (IPW), see \cite{Horvitz52}, using the propensity score:
\begin{eqnarray}\label{ipwselobs}
\Delta&=&E\left[\frac{Y\cdot D}{p(X)}-\frac{Y\cdot (1-D)}{1-p(X)}\right],\\
\Delta_{D=1}&=&E\left[\frac{Y\cdot D}{\Pr(D=1)}-\frac{Y\cdot (1-D)\cdot p(X)}{(1-p(X))\cdot\Pr(D=1)}\right].\notag
\end{eqnarray}
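A minimal sketch of the IPW sample analog of \eqref{ipwselobs} for the ATE follows, using simulated data. For simplicity, the true propensity score is plugged in, whereas in applications it would be estimated; the weights are normalized to sum to one within treatment groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=n)                       # single confounder
p = 1 / (1 + np.exp(-X))                     # true propensity score p(X)
D = (rng.uniform(size=n) < p).astype(float)  # confounded treatment assignment
Y = 2.0 * D + X + rng.normal(size=n)         # true ATE = 2

# Normalized IPW: rescale the weights within each treatment group to sum to one.
w1 = D / p
w0 = (1 - D) / (1 - p)
ate_ipw = (w1 @ Y) / w1.sum() - (w0 @ Y) / w0.sum()

# For comparison: the raw mean difference is biased by confounding through X.
naive = Y[D == 1].mean() - Y[D == 0].mean()
```

The normalized estimator recovers the true effect up to sampling noise, while the naive contrast overstates it because treated observations have systematically higher $X$.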
Finally, the effects follow from a combination of conditional mean outcomes and propensity scores based on so-called doubly robust identification using the efficient score function, see \cite{Robins+94}, \cite{RoRo95}, and \cite{Ha98}:
\begin{eqnarray}\label{drselobs}
\Delta&=&E\left[\phi(X) \right],\textrm{ with }\phi(X)=\mu_1(X)-\mu_0(X)+\frac{(Y-\mu_1(X))\cdot D}{p(X)}-\frac{(Y-\mu_0(X))\cdot (1-D)}{1-p(X)},\notag\\
\Delta_{D=1}&=&E\left[\frac{(Y-\mu_0(X))\cdot D}{\Pr(D=1)}-\frac{(Y-\mu_0(X))\cdot (1-D)\cdot p(X)}{(1-p(X))\cdot\Pr(D=1)}\right].
\end{eqnarray}
Note that the identification results in \eqref{drselobs} coincide with those in \eqref{ipwselobs} and \eqref{condmeanselobs} because
\begin{eqnarray*}
&&E\left[\frac{(Y-\mu_1(X))\cdot D}{p(X)}-\frac{(Y-\mu_0(X))\cdot (1-D)}{1-p(X)}\right]=0\quad\textrm{ and}\\
&&E\left[\frac{-\mu_0(X)\cdot D}{\Pr(D=1)}-\frac{-\mu_0(X)\cdot (1-D)\cdot p(X)}{(1-p(X))\cdot\Pr(D=1)}\right]=E\left[\mu_0(X)\cdot\left(\frac{p(X)}{\Pr(D=1)}-\frac{p(X)}{\Pr(D=1)}\right)\right]=0.
\end{eqnarray*}
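The doubly robust sample analog of \eqref{drselobs} for the ATE can be sketched as follows on simulated data. The plug-in conditional means are estimated by arm-specific OLS regressions standing in for arbitrary first-step estimators, and the true propensity score is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=n)
p = 1 / (1 + np.exp(-X))                     # true propensity score
D = (rng.uniform(size=n) < p).astype(float)
Y = 2.0 * D + X + rng.normal(size=n)         # true ATE = 2

# Plug-in conditional means mu_d(X) from separate OLS fits per treatment arm
def ols_fit_predict(x_fit, y_fit, x_pred):
    Z = np.column_stack([np.ones_like(x_fit), x_fit])
    b = np.linalg.lstsq(Z, y_fit, rcond=None)[0]
    return b[0] + b[1] * x_pred

mu1 = ols_fit_predict(X[D == 1], Y[D == 1], X)
mu0 = ols_fit_predict(X[D == 0], Y[D == 0], X)

# Efficient score function phi(X): outcome-regression contrast plus
# inverse-probability-weighted residual corrections.
phi = (mu1 - mu0
       + D * (Y - mu1) / p
       - (1 - D) * (Y - mu0) / (1 - p))
ate_dr = phi.mean()
```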
Assuming the availability of a randomly drawn sample, treatment effect estimation proceeds using the sample analogs of the identification results and plug-in estimates for $p(X), \mu_1(X), \mu_0(X)$ whenever required. When for instance considering the estimation of $\Delta_{D=1}$ based on \eqref{condmeanselobs}, an estimate of $\mu_0(X)$ for each treated observation is obtained as a weighted average of nontreated outcomes, where the weights depend on the similarity of the treated and nontreated observations in terms of $X$. One class of methods in this context are matching estimators, see for instance \cite{rosenbaum1983}, \cite{RosenbaumRubin1985}, \cite{Heck+98}, \cite{HeIcSmTo98}, \cite{DehejiaWahba99}, and \cite{LeMiWu11}. Pair matching, for instance, assigns a weight of 1 (or 100\%) to the most similar nontreated observation and of 0 to all others. $1:M$ matching estimates $\mu_0(X)$ based on the mean outcome of the $M$ most similar nontreated observations, where $M$ is an integer larger than 1. Radius or caliper matching defines a maximum tolerance of dissimilarity in $X$ and relies on the mean outcome of all nontreated observations within the tolerance. Compared to $1:M$ estimation, this may reduce the variance when many similar nontreated observations are available. Due to the multidimensionality of $X$, similarity is to be defined by a distance metric. Examples include the square root of the sum of squared differences in elements of $X$ across some treated and nontreated observation, either normalized by the inverse of the sample covariance matrix of $X$ (then called Mahalanobis distance) or by the diagonal thereof (i.e.\ the variance). See \cite{Zh04} for a discussion of alternative distance metrics.
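A minimal sketch of $1:M$ matching on $X$ for the ATET, with the distance metric normalized by the covariate variances (the diagonal version of the Mahalanobis distance mentioned above), may look as follows on simulated data with two confounders:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4_000
X = rng.normal(size=(n, 2))                       # two continuous covariates
p = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))
D = (rng.uniform(size=n) < p).astype(int)
Y = 2.0 * D + X.sum(axis=1) + rng.normal(size=n)  # true ATET = 2

M = 3                                             # number of matches per treated unit
Xt, Xc = X[D == 1], X[D == 0]
Yt, Yc = Y[D == 1], Y[D == 0]

# Squared distance with elements normalized by the inverse covariate variances
inv_var = 1.0 / X.var(axis=0)
mu0_hat = np.empty(len(Xt))
for i, x in enumerate(Xt):
    d2 = ((Xc - x) ** 2 * inv_var).sum(axis=1)
    nearest = np.argpartition(d2, M)[:M]          # indices of the M closest controls
    mu0_hat[i] = Yc[nearest].mean()               # estimate of mu_0(X) for this unit

atet_match = Yt.mean() - mu0_hat.mean()
```

As discussed below, with two continuous covariates such a matching estimator carries a small-sample bias from imperfect matches, visible here as a slight deviation from the true effect.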
Pair and $1:M$ matching have several drawbacks. First, \cite{AbadieImbens01} show that in contrast to other treatment effect estimators, pair or $1:M$ matching does not necessarily converge at a rate of $n^{-1/2}$ to the true effect (i.e.\ is not $n^{-1/2}$-consistent) if $X$ contains more than one continuous element, with $n$ being the sample size. Second, even under $n^{-1/2}$-consistency, it does not attain the semiparametric efficiency bounds derived in \cite{Ha98}. Therefore, pair or $1:M$ matching has a higher large sample variance than the most efficient (or least noisy) treatment effect estimators that rely on the same assumptions. Third, \cite{AbadieImbens06} demonstrate that bootstrapping, a popular inference method that estimates the standard error by repeatedly resampling from the data, is inconsistent due to the discontinuous weights in pair and $1:M$ matching. The authors, however, provide a consistent asymptotic approximation of the estimator's variance based on matching within treatment groups.
To improve upon its properties, matching can be combined with a regression-based correction of the bias that stems from not fully comparable treated and nontreated matches, see \cite{Ru79} and \cite{AbIm11}. This matching-weighted regression is $n^{-1/2}$-consistent and its weights are smooth such that bootstrap inference is consistent. Another smooth method is kernel matching, which estimates $\mu_0(X)$ by a kernel function giving more weight to nontreated observations that are more similar to the treated reference observation and can attain the semiparametric efficiency bound. This requires no distance metric, as kernel functions are applied to each element in $X$ and then multiplied. Finally, genetic matching of \cite{diamond2013} matches treated and nontreated observations in a way that maximizes the balance of covariate distributions across treatment groups according to predefined balance metrics, based on an appropriately weighted distance metric.
In empirical applications, matching on the estimated propensity score is much more common than matching directly on $X$. The propensity score is typically specified parametrically by logit or probit functions. Collapsing the covariate information into a single parametric function avoids the curse of dimensionality, which implies that in finite samples, the probability of similar matches in all elements of $X$ quickly decreases in the dimension of $X$. At the same time, it allows for effect heterogeneity across $X$. On the negative side, a misspecification of the propensity score model may entail an inconsistent treatment effect estimator, which is avoided by directly matching on $X$ or using a nonparametric propensity score estimate. Matching on the estimated propensity score has a different variance than matching directly on $X$, which for the ATET can be either higher or lower, see \cite{Heck+98}. \cite{AbadieImbens2016} provide an asymptotic variance approximation for propensity score matching that appropriately accounts for uncertainty due to propensity score estimation.
Matching estimators typically require the choice of tuning parameters, be it the number of matches $M$, the bandwidth in kernel or radius matching, or the distance metric. However, theoretical guidance is frequently not available, see \cite{Froe2005} for an exception. Practitioners commonly pick tuning parameters ad hoc or based on data-driven methods that are not necessarily optimal for treatment effect estimation, as e.g.\ cross-validation for estimating $\mu_0(X)$. It appears thus advisable to investigate the sensitivity of the effect estimates w.r.t.\ varying these parameters.
As an alternative to matching, \cite{Hirano+00} discuss treatment effect estimation based on the IPW sample analog of \eqref{ipwselobs}, using series regression to obtain nonparametric plug-in estimates of the propensity score, which attains the semiparametric efficiency bounds. \cite{IchimuraLinton01} and \cite{LiRaWo04} consider IPW with kernel-based propensity score estimation. Practitioners mostly rely on logit or probit specifications, which generally is not semiparametrically efficient, see \cite{ChenHongTarozzi2008}. In any case, it is common and recommended to use normalized sample analogs of the expressions in \eqref{ipwselobs}, which ensures that the weights of observations within treatment groups sum up to one, see \cite{BuDNMC09}. Compared to matching, IPW has the advantages that it is computationally inexpensive and does not require choosing tuning parameters (other than for nonparametric propensity score estimation, if applied). On the negative side, IPW is likely sensitive to propensity scores that are very close to one or zero, see the simulations in \cite{Froe00a} and \cite{BuDNMC09} and the theoretical discussion in \cite{KhTa07}. Furthermore, IPW may be less robust to propensity score misspecification than matching, which merely uses the score to match treated and non-treated observations, rather than plugging it directly into the estimator, see \cite{Waernbaum2012}.
Variations of IPW are the empirical likelihood methods of \cite{GrahamPintoEgel2012} and \cite{ImaiRatkovic2014}. In spirit comparable to genetic matching, these methods iterate an initial propensity score estimate (e.g.\ by changing the coefficients of a logit specification) until prespecified moments of $X$ are maximally balanced across treatment groups. A related approach is entropy balancing, see \cite{hainmueller2012}, which iterates initially provided (e.g.\ uniform) weights until balance in the moments of $X$ is maximized, under the constraint that weights sum up to one in either treatment group. In contrast to methods aiming for perfect covariate balance in prespecified moments, \cite{Zubizarreta2015} trades off balance and variance in estimation. The algorithm finds the weights of minimum variance that balance the empirical covariate distribution up to prespecified levels, i.e.\ approximately rather than exactly.
Estimation based on the sample analog of \eqref{drselobs} with plug-in estimates for $p(X), \mu_1(X), \mu_0(X)$ is called doubly robust (DR) estimation, as it is consistent if either the conditional mean outcome or the propensity score is correctly specified, see \cite{RobinsMarkNewey1992} and \cite{RoRoZa95}. If both are correctly specified, DR is semiparametrically efficient. This is also the case if the plug-in estimates are nonparametrically estimated, see \cite{Cattaneo2010}. Furthermore, \cite{RotheFirpo2013} show that nonparametric DR has a lower first order bias and second order variance than either IPW using a nonparametric propensity score or nonparametric outcome regression. This latter property is relevant in finite samples and implies that the accuracy of the DR estimator is less dependent on the accuracy of the plug-in estimates, e.g.\ the choice of the bandwidth in the kernel-based estimation of propensity scores and conditional mean outcomes. A further method satisfying the DR property is targeted maximum likelihood (TMLE), see \cite{VanderLaanRubin2006}, in which an initial regression estimate is updated (or robustified) based on an IPW parameter.
\section{Practical issues and extensions}\label{prac}
This section discusses practical issues related to propensity score methods as well as extensions of treatment evaluation to non-binary treatments and different effect parameters. One important question is whether the estimated propensity score successfully balances $X$ across treatment groups, e.g.\ in matched samples or after reweighting covariates (rather than outcomes) by IPW. Practitioners frequently consider hypothesis tests, e.g.\ two-sample t-tests applied to each element in $X$ or F-tests for jointly testing imbalances in $X$, see also the joint tests of \cite{Sianesi04} and \cite{SmTo05}. As an alternative to hypothesis tests, \cite{RosenbaumRubin1985} consider a covariate's absolute mean difference across treated and nontreated matches, divided or standardized by the square root of half the sum of the covariate's variances in either treatment group prior to matching. In contrast to a t-test, which rejects balance under the slightest difference if the sample grows to infinity, this standardized difference is insensitive to the sample size. Rather than judging balance based on a p-value as in hypothesis tests, a standardized difference larger than a specific threshold, say 0.2, may be considered as indication for imbalance. On the negative side, the choice of the threshold appears rather arbitrary and data-driven methods for its determination are currently lacking. Taking the average of standardized differences for each covariate permits constructing a joint statistic for all covariates.
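The standardized difference of \cite{RosenbaumRubin1985} mentioned above can be sketched in a few lines; the two covariate samples below are simulated for illustration, with a deliberate mean imbalance:

```python
import numpy as np

def standardized_difference(x_treated, x_control):
    """Absolute mean difference, divided by the square root of half the
    sum of the covariate's variances in the two treatment groups."""
    num = abs(x_treated.mean() - x_control.mean())
    denom = np.sqrt((x_treated.var(ddof=1) + x_control.var(ddof=1)) / 2)
    return num / denom

rng = np.random.default_rng(3)
x_t = rng.normal(loc=0.5, scale=1.0, size=500)   # imbalanced covariate among treated
x_c = rng.normal(loc=0.0, scale=1.0, size=500)
sd_before = standardized_difference(x_t, x_c)    # clearly above a threshold of 0.2
```

Unlike a t-statistic, this measure does not mechanically grow with the sample size, which is why it is judged against a fixed threshold rather than a p-value.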
A second practical issue is whether common support in the propensity score distributions across treatment groups is sufficiently decent in the data. For the ATET, this implies that for each treated observation, nontreated matches with similar propensity scores exist, while for the ATE, this also needs to hold vice versa. Strictly speaking, common support is violated whenever for any reference observation, no observation in the other treatment group with exactly the same propensity score is available. In practice, propensity scores should be sufficiently similar, which requires defining a criterion based on which dissimilar observations may be discarded from the data to enforce common support. However, discarding observations implies that effect estimation might not be (fully) representative for the initial target population and thus sacrifices (some) external validity. On the other hand, it likely reduces estimation bias within the subpopulation satisfying common support, thus enhancing internal validity. For possible common support criteria, see for instance \cite{HeIcSmTo98}, who suggest discarding observations whose propensity scores have a density of or close to zero in (at least) one treatment group. For ATET estimation, \cite{DehejiaWahba99} propose discarding all treated observations with an estimated propensity score higher than the highest value among the nontreated. For the ATE, one additionally discards nontreated observations with a propensity score lower than the lowest value among the treated. \cite{CrumpImbensMitnik09} discuss dropping observations with propensity scores close to zero or one in a way that minimizes the variance of ATE estimation in the remaining sample. \cite{HuLeWu10} discard observations that receive a too large relative weight within their treatment group when estimating the treatment effect. 
See \cite{LechnerStrittmatter2019} for an overview of alternative common support criteria and an investigation of their performance in a simulation study.
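A minimal sketch of the min-max common support rule in the spirit of \cite{DehejiaWahba99} (extended to the ATE by trimming on both sides) follows, applied to simulated propensity scores:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5_000
X = rng.normal(size=n)
pscore = 1 / (1 + np.exp(-1.5 * X))          # (here: true) propensity scores
D = (rng.uniform(size=n) < pscore).astype(int)

# Min-max rule: keep observations whose propensity score lies inside the
# overlap of the treated and nontreated score ranges.
lo = max(pscore[D == 1].min(), pscore[D == 0].min())
hi = min(pscore[D == 1].max(), pscore[D == 0].max())
on_support = (pscore >= lo) & (pscore <= hi)

X_trim, D_trim, p_trim = X[on_support], D[on_support], pscore[on_support]
share_kept = on_support.mean()               # share of the sample retained
```

Effect estimation would then proceed on the trimmed sample only, with the caveat on external validity discussed above.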
The discussion so far focussed on a binary treatment, however, the framework straightforwardly extends to multivalued discrete treatments. The latter may either reflect distinct treatments (like different types of labor market programs as a job search training, a computer course, etc.) or discrete doses of a single treatment (like one, two, or three weeks of a training). Under appropriate selection-on-observable assumptions, treatment effects are identified by pairwise comparisons of each treatment value with nontreatment, or of two nonzero treatment values, if the effect of one treatment relative to the other is of interest. More formally, let $d'$ and $d''$ denote the treatment levels to be compared and $I\{A\}$ the indicator function, which is one if event $A$ holds and zero otherwise. Assume that conditions analogous to \eqref{assumpselobs} are satisfied for $D=d'$ and $D=d''$, such that conditional independence assumptions $Y(d')\bot I\{ D=d'\}|X$ and $Y(d'')\bot I\{ D=d''\}|X$ hold and the so-called generalized propensity scores satisfy the common support restrictions $\Pr(D=d'|X)>0$ and $\Pr(D=d''|X)>0$, see \cite{Im00}. Then, replacing $D$ by $I\{D=d'\}$ and $1-D$ by $I\{D=d''\}$ as well as $p(X)=\Pr(D=1|X)$ by $\Pr(D=d'|X)$ and $1-p(X)$ by $\Pr(D=d''|X)$ in the identification results \eqref{condmeanselobs}, \eqref{condmeanselobspscore}, \eqref{ipwselobs}, and \eqref{drselobs} yields the ATE when comparing $D=d'$ vs.\ $D=d''$ as well as the ATET when considering those with $D=d'$ as the treated. As shown in \cite{Cattaneo2010}, a range of treatment effect estimators for multivalued discrete treatments are $n^{-1/2}$-consistent and semiparametrically efficient under nonparametric estimation of the plug-in parameters. See also \cite{Le01} for a discussion of matching-based estimation with multivalued discrete treatments.
When $D$ does not have discrete probability masses but is continuously distributed, the generalized propensity score corresponds to a conditional density, denoted by $f(D=d'|X)$ to distinguish it from the previously used probability $\Pr(D=d'|X)$. In the spirit of \eqref{condmeanselobs} for binary treatments, \cite{Flores07} proposes kernel regression of $Y$ on $D$ and $X$ for estimating the mean potential outcomes of the continuous treatment. In analogy to \eqref{condmeanselobspscore}, \cite{HiranoImbens2005} regress $Y$ on polynomials of $D$ and estimates of $f(D|X)$ along with interactions, while \cite{ImaivanDyk2004} consider subclassification by the generalized propensity score. IPW-based methods as considered in \cite{Floresetal2012} require replacing indicator functions, e.g.\ $I\{D=d'\}$, by continuous weighting functions in the identification results. Consider, for instance, the kernel weight $K\left((D-d')/h\right)/h$, where $K$ is a symmetric second order kernel function (e.g.\ the standard normal density function) that assigns more weight to values of $D$ the closer they are to $d'$. $h$ is a bandwidth gauging by how quickly the weight decays as values in $D$ become more different to $d'$ and must go to zero as the sample size increases (albeit not too fast) for consistent estimation. Then, IPW-based identification of the ATE, for instance, corresponds to
\begin{eqnarray}
\Delta = \lim_{h \rightarrow 0} E\left[ \frac{Y\cdot K\left((D-d')/h\right)/h}{f(D=d'|X)} - \frac{Y\cdot K\left((D-d'')/h\right)/h}{f(D=d''|X)} \right],
\end{eqnarray}
where $\lim_{h \rightarrow 0}$ means `as $h$ goes to zero'. See \cite{GalvaoWang} for a further IPW approach and \cite{Kennedyetal2017} for kernel-based DR estimation under continuous treatments, including data-driven bandwidth selection.
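A sketch of this kernel-weighted IPW estimand on simulated data follows, comparing doses $d'=1$ and $d''=0$ of a continuous treatment. For simplicity the true conditional density $f(D=d|X)$ is plugged in, the bandwidth is chosen ad hoc, and the finite bandwidth induces a smoothing bias of order $h^2$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
X = rng.normal(size=n)
D = X + rng.normal(size=n)                       # continuous treatment, confounded by X
Y = D + X + rng.normal(size=n)                   # E[Y(d)] = d, so the effect of 1 vs 0 is 1

def gauss(u):                                    # standard normal density (kernel K)
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def f_cond(d):                                   # true conditional density f(D=d|X)
    return gauss(d - X)

h = 0.3                                          # bandwidth (ad hoc choice)
d1, d0 = 1.0, 0.0
ey1 = np.mean(Y * gauss((D - d1) / h) / h / f_cond(d1))   # estimate of E[Y(1)]
ey0 = np.mean(Y * gauss((D - d0) / h) / h / f_cond(d0))   # estimate of E[Y(0)]
ate_cont = ey1 - ey0                             # close to 1, up to O(h^2) bias
```

Shrinking $h$ with the sample size, as required for consistency, would drive the smoothing bias to zero at the cost of a larger variance.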
A further conceptual extension is the dynamic treatment framework, see for instance \cite{Ro86}, \cite{RoHeBr00}, and \cite{Lech09}. It is concerned with the evaluation of sequences of treatments (like consecutive labor market programs) based on sequential selection-on-observable assumptions w.r.t.\ each treatment. Related assumptions are also commonly imposed in causal mediation analysis aiming at disentangling a total treatment effect into various causal mechanisms, see for instance \cite{RoGr92}, \cite{Pearl01}, \cite{ImKeYa10}, \cite{TchetgenTchetgenShpitser2011}, and \cite{Huber2012}, or the survey by \cite{Huber2019}. Finally, several contributions consider effect parameters related to distributions rather than means. \cite{fir07} proposes an efficient IPW estimator of quantile treatment effects (QTE) at specific ranks (like the median) of the potential outcome distribution and derives the semiparametric efficiency bounds. \cite{DoHsu2014} suggest IPW-based estimation of the distribution functions of potential outcomes under treatment and nontreatment, see also \cite{di96} and \cite{ChFeMe13} for estimators of counterfactual distributions. \cite{Imbens03} and \cite{ImWo08} provide comprehensive reviews on treatment evaluation under selection on observables.
\section{Causal machine learning}\label{ML}
The treatment evaluation methods discussed so far consider covariates $X$ as being preselected or fixed. This assumes away uncertainty related to model selection w.r.t.\ $X$ and requires substantial or strictly speaking exact contextual knowledge about the confounders that need to be controlled for and in which functional form. In reality, however, practitioners frequently select covariates based on their predictive power for the treatment, typically without appropriately accounting for this model selection step in the causal inference to follow. Fortunately, this issue can be tackled by more recent treatment evaluation methods that incorporate machine learning to control for important confounders in a data-driven way and honestly account for model selection in the estimation process. This is particularly useful in big, and more specifically in wide (or high dimensional) data with a vast number of covariates that could potentially serve as control variables, which can render researcher-based covariate selection complicated if not infeasible.
It is important to see that when combining evaluation methods for the ATE or ATET with machine learning, henceforth called causal machine learning (CML), the data must contain sufficiently rich covariate information to satisfy the selection-on-observables assumption, just as discussed in Section \ref{selobs}. Therefore, CML is not a magic bullet that can do away with fundamental assumptions required for effect identification. However, it may be fruitfully applied if there exists a subset of covariate information that suffices to by and large tackle confounding, but is unknown to the researcher. Under the assumption that a subset of the covariate information that is limited relative to the sample size permits controlling for the most important confounders, CML can be shown to be approximately unbiased, even when confounding is not perfectly controlled for.
\cite{Chetal2018} consider for instance a CML approach called double machine learning that relies on so-called orthogonalized statistics. The latter imply that treatment effect estimation is rather insensitive to approximation errors in the estimation of $p(X), \mu_1(X), \mu_0(X)$. As discussed in Section \ref{selobs}, the sample analog of \eqref{drselobs} satisfies this (doubly) robustness property along with its desirable finite sample behaviour. In contrast, estimation based on \eqref{condmeanselobs} is rather sensitive to approximation errors of $\mu_1(X), \mu_0(X)$, while estimation based on \eqref{ipwselobs} is sensitive to errors in $p(X)$. Because DR, however, incorporates both propensity score and conditional mean outcome estimation, the approximation errors enter multiplicatively into the estimation problem, which is key for the robustness property, see for instance \cite{Farrell2015}.
A further element of many CML approaches including double machine learning is the use of independent samples for estimating the specifications of plug-in parameters like $p(X)$, $\mu_1(X)$, and $\mu_0(X)$ on the one hand and of the treatment effects $\Delta, \Delta_{D=1}$ on the other hand. This is similar in spirit to the idea of training and testing data in conventional machine learning or cross-validation for tuning parameter selection and obtained by randomly splitting the sample. After estimating models for $p(X), \mu_1(X), \mu_0(X)$ in one part of the data, the model parameters (e.g.\ coefficients) are used in the other part to predict $p(X), \mu_1(X), \mu_0(X)$ and ultimately estimate the treatment effect. Sample-splitting prevents overfitting the models for the plug-in parameters, but comes at the cost that only part of the data are used for effect estimation, thus increasing the variance. So-called cross-fitting tackles this issue by swapping the roles of the data parts for estimating the plug-in models and the treatment effect. The treatment effect estimate is obtained as the average of the estimated treatment effects in each part and in fact, more than just two data splits may be used for this procedure. When combining DR with sample splitting, it suffices for $n^{-1/2}$-convergence of treatment effect estimation that the estimates of $p(X), \mu_1(X), \mu_0(X)$ converge to their respective true values at a rate of $n^{-1/4}$ (or faster), see \cite{Chetal2018}. Under specific regularity conditions, this convergence rate is attained by many machine learning algorithms and even by deep learning (which is popular in computer science e.g.\ for pattern recognition), see \cite{FarrellLiangMisra2018}.
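The cross-fitted DR procedure can be sketched as follows on simulated data. For brevity, simple OLS (and a clipped linear probability model for the treatment) stand in for arbitrary machine learners; note that the propensity score model is deliberately misspecified here, so the example also illustrates the robustness of the orthogonalized score to first-step errors:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
X = rng.normal(size=(n, 3))
idx = X[:, 0]
p = 1 / (1 + np.exp(-idx))                       # true (logistic) propensity score
D = (rng.uniform(size=n) < p).astype(float)
Y = 2.0 * D + idx + rng.normal(size=n)           # true ATE = 2

def fit_predict(Z_fit, t_fit, Z_pred):
    """OLS stand-in for an arbitrary first-step learner."""
    A = np.column_stack([np.ones(len(Z_fit)), Z_fit])
    beta = np.linalg.lstsq(A, t_fit, rcond=None)[0]
    return np.column_stack([np.ones(len(Z_pred)), Z_pred]) @ beta

folds = rng.permutation(n) % 2                   # random two-fold split
scores = np.empty(n)
for k in (0, 1):
    tr, te = folds != k, folds == k              # fit plug-ins on tr, evaluate on te
    ph = np.clip(fit_predict(X[tr], D[tr], X[te]), 0.01, 0.99)  # misspecified p(X)
    m1 = fit_predict(X[tr & (D == 1)], Y[tr & (D == 1)], X[te])
    m0 = fit_predict(X[tr & (D == 0)], Y[tr & (D == 0)], X[te])
    scores[te] = (m1 - m0 + D[te] * (Y[te] - m1) / ph
                  - (1 - D[te]) * (Y[te] - m0) / (1 - ph))
ate_dml = scores.mean()                          # cross-fitted DR estimate of the ATE
se_dml = scores.std(ddof=1) / np.sqrt(n)         # conventional standard error
```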
However, it needs to be stressed that CML is conceptually different to standard machine learning, which aims at accurately predicting an outcome by observed predictors based on minimizing the prediction error (e.g.\ the mean squared error) through optimally trading off prediction bias and variance. This mere forecasting approach generally does not allow learning the causal effects of any of the predictors. One reason is that a specific predictor might obtain a smaller weight (e.g.\ regression coefficient) than implied by its true causal effect if the predictor is sufficiently correlated with other predictors, such that constraining its weight hardly affects the prediction bias, while reducing the variance. Therefore, predictive machine learning with $Y$ as outcome and $D$ and $X$ as predictors generally gives a biased estimate of the causal effect of $D$, due to correlations between the treatment and the covariates. In CML, however, machine learning is not directly applied to ATE or ATET estimation, but merely for predicting the plug-in parameters, e.g.\ those of the DR expression (i.e.\ the sample analog of \eqref{drselobs}) in the case of double machine learning. To this end, three separate machine learning predictions of $D$, $Y$ among the treated, and $Y$ among the nontreated are conducted with $X$ being the predictors in each step. This is motivated by the fact that covariates $X$ merely serve the purpose of tackling confounding, while their causal effects are (contrarily to the effect of $D$) not of interest, which makes the estimation of $p(X)$, $\mu_1(X)$, and $\mu_0(X)$ a prediction problem to which machine learning can be applied.
Assume for instance that $\mu_1(X)$ and $\mu_0(X)$ are estimated by a linear lasso regression, see \cite{Tibshirani96}, where $X$ as well as higher order and interaction terms thereof may be included as predictors to allow for flexible model specifications. Including too many terms with low predictive power (as it would be the case in an overfitted polynomial regression) likely increases the variance of prediction, with little gain in terms of bias reduction. On the other hand, omitting important predictors implies a large increase in prediction bias relative to the gain in variance reduction due to a parsimonious specification. For this reason, lasso regression aims to optimally balance bias and variance through regularization, i.e.\ by shrinking the absolute coefficients obtained in a standard OLS regression towards or exactly to zero for less important predictors, e.g.\ based on cross-validation for determining the optimal amount of shrinkage. Analogously, lasso logit regression may be applied for the prediction of $p(X)$, which is a regularized version of a standard logit regression. Alternatively, lasso-based estimation of $\mu_1(X)$ and $\mu_0(X)$ can be combined with approximate covariate balancing of \cite{Zubizarreta2015} instead of estimating a propensity score model for $p(X)$, see the CML algorithm suggested by \cite{AtheyImbensWager2018}.
As discussed in \cite{Chetal2018}, lasso regression attains the required convergence rate of $n^{-1/4}$ under so-called approximate sparsity. The latter implies that the number of important covariates or interaction and higher order terms required for obtaining a sufficiently decent (albeit not perfect) approximation of the plug-in parameters is small relative to the sample size $n$.
To see the merits of cross-fitting, note that when disregarding the latter and instead conducting the lasso and treatment estimation steps in the same (total) data, the number of important predictors is required to be small relative to $\sqrt{n}$ rather than $n$, see \cite{Bellonietal2014}. Importantly, neither cross-fitting, nor the estimation of the plug-in parameters by some $n^{-1/4}$-consistent machine learning algorithm affects the asymptotic variance of treatment effect estimation (albeit it may matter in small samples). Therefore, CML is $n^{-1/2}$-consistent and attains the semiparametric efficiency bound as if the covariates to be controlled for in DR estimation had been correctly preselected. In large enough samples, standard errors may thus be estimated by conventional asymptotic approximations without adjustment for the machine learning steps. For a more in depth review of various machine learning algorithms and CML, see for instance \cite{AtheyImbens2019}.
\section{Effect heterogeneity, conditional effects, and policy learning}\label{MLhet}
Machine learning can also be fruitfully applied to investigate treatment effect heterogeneity across $X$, while possibly mitigating inferential multiple testing issues related to snooping for subgroups with significant(ly different) effects that might be spurious. For randomized experiments where \eqref{random} holds or under the selection-on-observables assumption \eqref{assumpselobs} with preselected $X$, \cite{AtheyImbens2016} suggest a method that builds on a modification of so-called regression trees, see \cite{Breimanetal1984}. In standard machine learning for outcome prediction, the tree structure emerges by recursively partitioning the sample with respect to the predictor space such that the sum of squared deviations of outcomes and their respective partition means is minimized. This increases outcome homogeneity within and heterogeneity between partitions. Prediction of $E[Y|X=x]$ proceeds by taking the average of $Y$ in the partition that includes the value $X=x$. This is equivalent to an OLS regression with predictors and interaction terms that are discretized according to specific threshold values in the covariate space as implied by the partitions. Cross-validation may be applied to find the optimal depth of partitions e.g.\ w.r.t.\ the mean squared error.
The causal tree approach of \cite{AtheyImbens2016} contains two key modifications when compared to standard regression trees. First, instead of $Y$, the mean difference in $Y$ across treatment groups within partitions serves as outcome in the experimental context, while under selection on observables with preselected $X$, outcomes are reweighted by the inverse of the propensity score (in analogy to \eqref{ipwselobs}) prior to taking mean differences. In either case, recursive partitioning increases the homogeneity in estimated treatment effects within and the heterogeneity between partitions, in order to find the largest effect heterogeneities across subgroups defined in terms of $X$. Second, applying sample splitting in order to use different data parts for estimating (a) the tree's model structure and (b) the treatment effects within partitions prevents spuriously large effect heterogeneities due to overfitting.
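The two modifications, effect-based splitting and honest sample splitting, can be sketched in a stylized single-split causal tree for a randomized treatment (so that simple mean differences identify leaf effects). The size-weighted squared-effect criterion below is a simplified stand-in for the actual splitting rule of the causal tree literature:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
X = rng.uniform(-1, 1, n)
D = rng.binomial(1, 0.5, n)           # randomized treatment
tau = np.where(X > 0, 1.0, 0.0)       # heterogeneous effect: 1 for X>0, else 0
Y = tau * D + rng.normal(0, 1, n)

def leaf_effect(mask):
    # difference of treated vs nontreated means within a leaf
    return Y[mask & (D == 1)].mean() - Y[mask & (D == 0)].mean()

# honesty: one half builds the split, the other half estimates leaf effects
build, est = np.arange(n) < n // 2, np.arange(n) >= n // 2

best_c, best_gain = None, -np.inf
for c in np.linspace(-0.8, 0.8, 33):        # candidate split points
    left, right = build & (X <= c), build & (X > c)
    tl, tr = leaf_effect(left), leaf_effect(right)
    # simplified criterion in the spirit of causal trees: size-weighted squared effects
    gain = left.sum() * tl**2 + right.sum() * tr**2
    if gain > best_gain:
        best_gain, best_c = gain, c

tau_left = leaf_effect(est & (X <= best_c))   # honest leaf effects
tau_right = leaf_effect(est & (X > best_c))
```

One half of the data selects the split point, the other half estimates the leaf effects, so spurious heterogeneity found in the building step does not contaminate the reported effects.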
\cite{WagerAthey2018} and \cite{AtheyTibshiraniWager2019} provide a further approach for investigating effect heterogeneity that is based on the related concept of random forests, see \cite{Breiman2001}, and also applies under selection on observables when control variables are not preselected but to be learnt from the data, see Section \ref{ML}. Random forests consist of randomly drawing many subsamples from the original data and estimating trees in each subsample. In contrast to standard trees, only a random subset of predictors (rather than all) is considered at each partitioning step, which safeguards against heavily correlated trees across subsamples. Predictions are obtained by averaging over the predictions of individual trees, which makes the random forest a smooth estimator and also reduces the variance when compared to discrete partitioning of a single tree. Forest-based predictions can therefore be represented by smooth weighting functions that bear some resemblance with kernel regression.
More specifically, the so-called generalized random forest of \cite{AtheyTibshiraniWager2019} proceeds as follows. First, both $Y$ and $D$ are predicted as a function of $X$ using random forests and leave-one-out cross-fitting. The latter implies that the outcome or treatment of each observation is predicted based on all observations in the data but its own, in order to prevent overfitting when conditioning on $X$. Second, the predictions are used for computing residuals of the outcomes and treatments, which is in the spirit of orthogonalized statistics as discussed in the context of DR in Section \ref{ML}. Third, the effect of the residuals of $D$ on the residuals of $Y$ is predicted as a function of $X$ by another random forest that averages over a large number of causal trees with residualized outcomes and treatments that use different parts of the respective subsamples for tree-modelling and treatment effect estimation. Bluntly speaking, this method combines the idea of sample splitting and orthogonalization to control for important confounders as discussed in Section \ref{ML} with the approach of \cite{AtheyImbens2016} for finding effect heterogeneity.
When comparing a single causal tree and a generalized random forest, an advantage of the former is that it directly yields an easy-to-interpret partitioning based on the most predictive covariates in terms of effect heterogeneity. On the negative side, tree structures frequently have a rather high variance such that a small change in the data may entail quite different partitions. The generalized random forest is more attractive in terms of variance, but does not provide a single covariate partitioning due to averaging over many trees. It, however, yields an estimate of the CATE $\Delta_x=E[Y(1)-Y(0)|X=x]$, see \eqref{cate}, such that its heterogeneity as a function of $X$ can be investigated. Also note that averaging over the estimates of $\Delta_x$ in the total sample or among the treated provides consistent estimates of the ATE and ATET, respectively. For surveys on further machine learning methods for investigating treatment effect heterogeneity, see for instance \cite{Powersetal2018} and \cite{Knausetal2018}.
A concept related to the CATE is optimal policy learning, see e.g.\ \cite{Manski2004}, \cite{HiranoPorter2008}, \cite{Stoye2009}, \cite{QianMurphy2011}, \cite{BhattacharyaDupas2012}, and \cite{KitagawaTetenov2018}, which typically aims at optimally allocating a costly treatment in some population under budget constraints. This for instance requires analyzing which observations in terms of covariate values $X$ should be assigned the constrained treatment to maximize the average outcome. Examples include the optimal selection of jobseekers to be trained to maximize the overall employment probability or the optimal choice of customers to be offered a discount in order to maximize average sales. Formally, let $\pi'(X)$ denote a specific treatment policy defined as a function of $X$. To give just one example, $\pi'(X)$ could require $D=1$ for all observations whose first covariate in $X$ is larger than a particular threshold and $D=0$ otherwise. The average effect of policy $\pi'(X)$, denoted by $Q(\pi'(X))$, corresponds to the difference in mean potential outcomes under $\pi'(X)$ vs.\ nontreatment of everyone:
\begin{eqnarray}\label{pollearn}
Q(\pi'(X))=E[Y(\pi'(X))-Y(0)]=E[\pi'(X)\cdot \Delta_X].
\end{eqnarray}
The second equality highlights the close relationship of policy learning and CATE identification. The optimal policy, denoted by $\pi^*(X)$, maximizes the average effect among the set of all feasible policies contained in the set $\Pi$:
\begin{eqnarray}\label{pollearnopt}
\pi^*(X)=\arg\max_{\pi\in \Pi} Q(\pi(X)).
\end{eqnarray}
\eqref{pollearn} and \eqref{pollearnopt} permit defining the so-called regret function associated with treatment policy $\pi'(X)$, which is denoted by $R(\pi'(X))$ and equals the (undesirable) reduction in the average policy effect due to implementing $\pi'(X)$ rather than the optimal policy $\pi^*(X)$:
\begin{eqnarray}
R(\pi'(X))=Q(\pi^*(X))-Q(\pi'(X)).
\end{eqnarray}
Finding the optimal policy among the set of feasible policies $\Pi$, which implies that the average policy effect $Q$ is maximized and regret $R$ is equal to zero, amounts to solving the following maximization problem:
\begin{eqnarray}\label{pollearnmax}
\pi^*(X)=\arg\max_{\pi\in \Pi} E[(2\pi(X)-1)\cdot \phi(X)].
\end{eqnarray}
Note that $\phi(X)$ is the DR statistic of \eqref{drselobs}, see for instance \cite{Dudiketal2011}, \cite{Zhangetal2012}, and \cite{Zhouetal2017} for DR-based policy learning. The term $(2\pi(X)-1)$ implies that the CATEs of treated and nontreated subjects enter positively and negatively into the expectation, respectively. Maximizing the expectation therefore requires optimally trading off treated and nontreated subjects in terms of their CATEs when choosing the treatment policy among all feasible policies. Estimation of the optimal policy may be based on the sample analog of \eqref{pollearnmax}, where $\phi(X)$ is estimated by cross-fitting and machine learning-based prediction of the plug-in parameters as outlined in Section \ref{ML}. \cite{AtheyWager2018} demonstrate that similar to ATE estimation, basing policy learning on DR machine learning has desirable properties under specific conditions, even if the important elements in $X$ driving confounding and/or effect heterogeneity are a priori unknown. The regret of the estimated optimal policy in the data when compared to the true optimal policy $\pi^*(X)$ decays at rate $n^{-1/2}$ under selection on observables if all plug-in parameters are estimated at rate $n^{-1/4}$. \cite{ZhouAtheyWager2018} show how this result extends to policy learning for multivalued discrete treatments as also considered in \cite{Kallus2017}.
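As a stylized illustration of \eqref{pollearnmax}, the following sketch searches over threshold policies in a single covariate, using simulated DR scores (the true CATE plus estimation noise) in place of estimated $\phi(X)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
X = rng.uniform(-1, 1, n)
# stand-in for estimated DR scores phi(X): true CATE tau(X)=X plus estimation noise
phi = X + rng.normal(0, 1, n)

def value(c):
    pi = (X > c).astype(float)            # threshold policy pi(X) = 1{X > c}
    return np.mean((2 * pi - 1) * phi)    # empirical analog of E[(2*pi - 1) * phi]

grid = np.linspace(-0.9, 0.9, 37)         # feasible policy class: thresholds on X
values = np.array([value(c) for c in grid])
c_star = grid[values.argmax()]            # estimated optimal threshold (truth: 0)
```

Since the true CATE equals $X$ in this design, the optimal policy treats exactly those with positive covariate values, and the empirical maximizer of the DR-score criterion approximates that threshold.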
\section{Instrumental variables}\label{IV}
The selection-on-observables assumption imposed in the previous sections fails if selection into treatment is driven by unobserved factors that affect potential outcomes conditional on $X$. As an example, consider an experiment with imperfect compliance in which access to a training program is randomly assigned, but a subset of jobseekers that are offered the training does not comply and decides not to participate. If compliance behaviour is driven by unobserved factors (e.g.\ ability or motivation) that also affect the outcome (e.g.\ employment), endogeneity jeopardizes a causal analysis based on a naive comparison of treated and nontreated outcomes even when controlling for observed characteristics. However, if mere treatment assignment satisfies a so-called exclusion restriction such that it does not directly affect the outcome other than through actual treatment participation, it may serve as instrumental variable (IV), denoted by $Z$, to identify the treatment effect among those complying with the assignment. The intuition of IV-based identification is that the effect of $Z$ on $Y$, which is identified by the randomization of the instrument, only operates through the effect of $Z$ on $D$ among compliers due to the exclusion restriction. Therefore, scaling (or dividing) the average effect of $Z$ on $Y$ by the average effect of $Z$ on $D$ yields the average effect of $D$ on $Y$ among compliers, see \cite{Imbens+94} and \cite{Angrist+96}.
However, in many applications it may not appear credible that IV assumptions like random assignment hold unconditionally, i.e.\ without controlling for observed covariates. This is commonly the case in observational data in which the instrument is typically not explicitly randomized like in an experiment. For instance, \cite{Card95} considers geographic proximity to college as IV for the likely endogenous treatment education when assessing its effect on earnings. While proximity might induce some individuals to go to college who would otherwise not, e.g.\ due to housing costs associated with not living at home, it likely reflects selection into neighborhoods with a specific socio-economic status that affects labor market performance, implying that the IV is not random. If all confounders of the instrument-outcome relationship are plausibly observed in the data, IV-based estimation can be conducted conditional on observed covariates. For this reason, \cite{Card95} includes a range of control variables like parents' education, ethnicity, urbanity, and geographic region.
To formally state the IV assumptions that permit identifying causal effects conditional on covariates $X$ in the binary instrument and treatment case, denote by $D(1)$ and $D(0)$ the potential treatment decision if instrument $Z$ is set to 1 or 0, respectively. This permits defining four compliance types: Individuals satisfying $(D(1)=1,D(0)=0)$ are compliers as they only take the treatment when receiving the instrument. Non-compliers may consist of never takers who never take the treatment irrespective of the instrument $(D(1)=D(0)=0)$, always takers $(D(1)=D(0)=1)$, and defiers, who counteract instrument assignment $(D(1)=0,D(0)=1)$. Furthermore, denote (for the moment) the potential outcome as $Y(z,d)$, i.e.\ as function of both the instrument and the treatment. Then, the local average treatment effect (LATE) among compliers, denoted by $\Delta_{D(1)=1,D(0)=0}=E[Y(1)-Y(0)|D(1)=1,D(0)=0]$, is nonparametrically identified under the following assumptions, see \cite{Abadie00}.
\begin{eqnarray}\label{assumpiv}
Z \bot (D(z), Y(z',d))|X\textrm{ for }z,z',d\in\{1,0\},\quad X(1)=X(0)=X,\quad 0<P(Z=1|X)<1,\\
\Pr(D(1)\ge D(0)|X)=1,\quad E[D|Z=1,X]-E[D|Z=0,X]\neq 0,\notag\\
\Pr(Y(1,d)=Y(0,d)=Y(d)|X)=1\textrm{ for }d\in\{1,0\}.\notag
\end{eqnarray}
The first line of \eqref{assumpiv} says that $Z$ is not deterministic in $X$ (common support) and that conditional on $X$ (which must not be affected by $D$), the IV is as good as random and thus not influenced by unobserved factors affecting the treatment and/or outcome. This is a selection-on-observables assumption similar to \eqref{assumpselobs}, however now imposed w.r.t.\ the instrument rather than the treatment. Therefore, the effects of $Z$ on $Y$ and on $D$ are identified conditional on $X$, just in analogy to the identification of the effect of $D$ on $Y$ given $X$ in Section \ref{selobs}. For this reason, replacing $D$ by $Z$ and the treatment propensity score $p(X)=\Pr(D=1|X)$ by the instrument propensity score $\Pr(Z=1|X)$ in the identification results for the ATE in \eqref{condmeanselobs}, \eqref{condmeanselobspscore}, \eqref{ipwselobs}, \eqref{drselobs} yields the average effect of the instrument on the outcome. The latter is known as intention-to-treat effect (ITT) and henceforth denoted by $\theta$. Additionally replacing $Y$ by $D$ yields the average effect of the instrument on the treatment (i.e.\ $E[D(1)-D(0)]$), the so-called first stage effect, denoted by $\gamma$.
The second line of \eqref{assumpiv} rules out the existence of defiers, but requires the existence of compliers conditional on $X$, due to the non-zero conditional first stage, while never and always takers might exist, too. By the law of total probability, this implies that $\gamma$ corresponds to the share of compliers, as $D(1)-D(0)$ equals one for compliers and zero for never and always takers. The third line invokes the exclusion restriction such that $Z$ must not have a direct effect on $Y$ other than through $D$. By the law of total probability, the ITT in this case corresponds to the first stage effect $\gamma$ times the LATE $\Delta_{D(1)=1,D(0)=0}$. This follows from the nonexistence of defiers and the fact that the effect of $Z$ on $Y$ is necessarily zero for always and never takers, whose $D$ is not affected by $Z$. Therefore, the LATE is identified by scaling the ITT by the first stage effect. Formally,
\begin{eqnarray}\label{LATEident}
\theta=\Delta_{D(1)=1,D(0)=0}\cdot \gamma\quad \Leftrightarrow\quad \Delta_{D(1)=1,D(0)=0}=\frac{\theta}{\gamma}.
\end{eqnarray}
If $X$ is preselected, estimation of $\Delta_{D(1)=1,D(0)=0}$ proceeds by estimating both $\theta$ and $\gamma$ based on any of the treatment effect estimators outlined in Section \ref{selobs} and by dividing one by the other, which is $n^{-1/2}$-consistent under specific regularity conditions. \cite{Froe02a}, for instance, considers nonparametric matching- and (local polynomial and series) regression-based estimation. \cite{HongNekipelov2010} derive semiparametric efficiency bounds for LATE estimation and propose efficient estimators. \cite{DoHsLi2014} and \cite{DoHsLi2014b} propose IPW estimation using series logit and local polynomial regression-based estimation of the instrument propensity score. \cite{Tan2006} and \cite{Uysal2011} discuss DR estimation with parametric plug-in parameters. If IV confounders are not preselected but in analogy to Section \ref{ML} are to be learnt from possibly high dimensional data, then causal machine learning may be applied to the DR representation of both $\theta$ and $\gamma$ in order to estimate the LATE, see for instance \cite{Bellonietal2017}. Finally, the analysis of effect heterogeneity and optimal policies discussed in Section \ref{MLhet} also extends to the IV context by using doubly robust statistics appropriate for LATE estimation, see \cite{AtheyWager2018} and \cite{AtheyTibshiraniWager2019}.
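In its simplest unconditional form with a randomized instrument, LATE estimation by scaling the ITT with the first stage as in \eqref{LATEident} can be sketched as follows (a simulation for illustration; compliance types and their effects are of course unobserved in practice):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
Z = rng.binomial(1, 0.5, n)                  # randomized binary instrument
u = rng.uniform(0, 1, n)
complier = u < 0.5                           # 50% compliers
always = (u >= 0.5) & (u < 0.75)             # 25% always takers (rest: never takers)
D = np.where(complier, Z, np.where(always, 1, 0))
# unobserved confounding: always takers have higher baseline outcomes
Y = 2.0 * D * complier + 1.0 * D * always + 3.0 * always + rng.normal(0, 1, n)

itt = Y[Z == 1].mean() - Y[Z == 0].mean()          # effect of Z on Y (theta)
first_stage = D[Z == 1].mean() - D[Z == 0].mean()  # complier share (gamma)
late = itt / first_stage                           # Wald estimator of the LATE

naive = Y[D == 1].mean() - Y[D == 0].mean()        # biased by selection
```

The naive treated-nontreated contrast is biased by the unobserved type, while the Wald ratio recovers the complier effect of 2.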
\cite{FrMe13} discuss the identification of the local quantile treatment effect on compliers (LQTE) and propose an IPW estimator based on local polynomial regression for IV propensity score estimation.
\cite{Bellonietal2017} consider LQTE estimation based on causal machine learning when $X$ are not preselected and important instrument confounders are to be learned from the data. In contrast to the previously mentioned studies, \cite{Abadie+99} consider estimation of the conditional LQTE given particular values in $X$ by applying the so-called $\kappa$-weighting approach of \cite{Abadie00}. The latter permits identifying a broad class of complier-related statistics, based on the following weighting function $\kappa$:
\begin{eqnarray}
\kappa=1-\frac{D\cdot (1-Z)}{1-\Pr(Z=1|X)}-\frac{(1-D)\cdot Z}{\Pr(Z=1|X)}.
\end{eqnarray}
For instance, $\frac{E (\kappa \cdot X)}{E (\kappa)}=E[X|D(1)=1,D(0)=0]$ yields the mean of $X$ among compliers, which permits judging the similarity of this subgroup and the total population in terms of observed characteristics.
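A minimal sketch of $\kappa$-weighting with a randomized instrument (so that $\Pr(Z=1|X)=0.5$) illustrates how the complier share and the complier mean of $X$ are recovered, here with compliance status constructed to depend on $X$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40000
X = rng.normal(0, 1, n)
Z = rng.binomial(1, 0.5, n)                 # randomized instrument: Pr(Z=1|X) = 0.5
complier = X > 0                            # compliance depends on X
v = rng.uniform(0, 1, n)
always = (~complier) & (v < 0.5)            # noncompliers split into always/never takers
D = np.where(complier, Z, np.where(always, 1, 0))

pz = 0.5                                    # instrument propensity score Pr(Z=1|X)
kappa = 1 - D * (1 - Z) / (1 - pz) - (1 - D) * Z / pz

complier_share = kappa.mean()               # estimates Pr(complier) = 0.5
complier_mean_X = (kappa * X).mean() / kappa.mean()
# true complier mean: E[X | X > 0] = sqrt(2/pi), roughly 0.80
```

The weights equal one for observations compatible only with compliance and average to zero among always and never takers, which is why the weighted moments isolate the complier subpopulation.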
The LATE assumptions are partly testable by investigating specific moment inequalities w.r.t.\ outcomes across complier types that need to hold for valid instruments, see the tests proposed by \cite{Kitagawa2008}, \cite{HuMe11}, \cite{MoWa2014}, \cite{Sharma2016}, and \cite{Guber2018}. The latter uses a modified version of the causal tree of \cite{AtheyImbens2016} to increase asymptotic power by searching for the largest violations in IV validity across values $X$ in a data-driven way.
It is also worth noting that even if monotonicity $\Pr(D(1)\ge D(0)|X)=1$ is violated and defiers exist, the LATE on a fraction of compliers can still be identified if a subset of compliers is equal to the defiers in terms of the average effect and population size, see \cite{deChaisemartin2016}.
When extending the binary instrument and treatment case to a multivalued instrument $Z$ and a binary $D$, LATEs are identified w.r.t.\ any pair of values $(z'',z')$ satisfying the IV assumptions. Each of them may have a different first stage and thus, complier population. The LATE for the largest possible complier population appears particularly interesting. The latter is obtained by defining the treatment propensity score $p(z,x)=\Pr(D=1|Z=z,X=x)$ as instrument and considering the pair of propensity score values that maximizes compliance given $X=x$, see \cite{Froe02a}.
A continuously distributed instrument even permits identifying a continuum of complier effects under appropriately adapted IV assumptions. Specifically, a marginal change in the instrument yields the so-called marginal treatment effect (MTE), see \cite{HeckVytlacil00} and \cite{HeVy05}, which can be interpreted as the average effect among individuals who are indifferent between treatment or nontreatment given their values of $Z$ and $X$. Technically speaking, the MTE is the limit of the LATE when the change in the instrument goes to zero.
In contrast to multivalued instruments, generalizing identification from binary to nonbinary treatments is not straightforward. Assume a binary instrument and an ordered treatment $D\in \{0, 1,...,J\}$, with $J+1$ being the number of possible (discrete) treatment doses. \cite{AngristImbens95} show that effects for single compliance types at specific treatment values, e.g.\ for those increasing the treatment from 1 to 2 when increasing the instrument from $0$ to $1$, are not identified. It is, however, possible to obtain a non-trivially weighted average of effects of unit-level increases in the treatment on heterogeneous complier groups defined by different margins of the potential treatments. Although this is a proper causal parameter, its interpretability is compromised by the fact that the various complier groups generally enter with non-uniform weights. Similar issues occur if both instruments and treatments are multivalued.
There has been a controversial debate about the practical relevance of the LATE, as it only refers to the subgroup of compliers, see e.g.\ \cite{De10}, \cite{Im10}, \cite{HeUr10}. It is therefore interesting to see under which conditions this effect can be extrapolated to other populations. As discussed in \cite{Angrist2004}, the LATE is directly externally valid, i.e.,\ corresponds to the ATE when either all mean potential outcomes are homogeneous across compliance types, or at least the average effects. For testing the equality of mean potential outcomes across treated compliers and always takers as well as across nontreated compliers and never takers, see \cite{Angrist2004}, \cite{deLunaJohansson2012}, \cite{Huber2013}, and \cite{BlackJooLaLondeSmithTaylor2015}. See also \cite{DoHsLi2014} for a related, but yet different testing approach. If equality in all mean potential outcomes holds at least conditional on $X$, instruments are in fact not required for identification as selection into $D$ is on observables only, see Section \ref{selobs}. \cite{AnFe2010} and \cite{AronowCarnegie2013} do not consider homogeneity in mean potential outcomes but discuss extrapolation of the LATE when assuming homogeneous effects across compliance types. This assumption, which rules out selection into treatment by unobserved gains as assumed in standard \cite{Roy51} models, is testable if several instruments are available. For a comprehensive survey on methodological advancements in LATE evaluation, see \cite{HuberWuthrich2019}.
\section{Difference-in-Differences}\label{did}
The difference-in-differences (DiD) approach bases identification on the so-called common trend assumption. The latter says that the mean potential outcomes under nontreatment of the actually treated and nontreated groups experience a common change over time when comparing periods before and after the treatment. The assumption that both groups would, in the absence of the treatment, have experienced the same time trend in potential outcomes nevertheless allows for differences in the levels of potential outcomes due to selection bias. As an example, assume that of interest is the employment effect of a minimum wage ($D$), which is introduced in one geographic region, but not in another one, see for instance \cite{CardKrueger1994}. While the employment level ($Y$) may differ in both regions due to differences in the industry structure, DiD-based evaluation requires that employment changes e.g.\ due to business cycles would be the same in the absence of a minimum wage. In this setup, a comparison of average employment in the post-treatment period across regions does not give the effect of the minimum wage due to selection bias related to the industry structure. A before-after comparison of employment (i.e.\ before and after treatment introduction) within the treated region is biased, too, as it picks up both the treatment effect and the business cycle-related time trend. Under the common trend assumption, however, the time trend for either region is identified by the before-after comparison in the nontreated region. Subtracting the before-after difference in employment in the nontreated region (time trend) from the before-after difference in the treated region (treatment effect plus time trend) therefore gives the treatment effect on the treated. That is, taking the difference in (before-after) differences across regions yields identification under the common trend assumption.
In many empirical problems, common trends may only appear plausible after controlling for observed covariates $X$. For instance, it could be argued that the assumption is more likely satisfied for treated and nontreated subjects within the same occupation or industry. Formally, let $T$ denote a time index which is equal to zero in the pre-treatment period, when neither group received the treatment, and one in the post-treatment period, after one out of the two groups received the treatment. To distinguish the potential outcomes in terms of pre- and post-treatment periods, the subindex $t$ $\in$ $\{1,0\}$ is added, such that $Y_0(1),Y_0(0)$ and $Y_1(1),Y_1(0)$ correspond to the pre- and post-treatment potential outcomes, respectively. The following conditions permit identifying the ATET in the post-treatment period, denoted by $\Delta_{D=1,T=1}=E[Y_1(1)-Y_1(0)|D=1,T=1]$, see the review of the DiD framework in \cite{Lechner2010}:
\begin{eqnarray}\label{didass}
E[Y_1(0)-Y_0(0)|D=1,X]&=&E[Y_1(0)-Y_0(0)|D=0,X],\quad X(1)=X(0)=X, \\
E[Y_0(1)-Y_0(0)|D=1,X]&=&0, \notag \\
\Pr(D=1,T=1|X, (D,T)&\in&\{ (d,t),(1,1)\})<1\textrm{ for all }(d,t)\in \{(1,0),(0,1),(0,0)\}. \notag
\end{eqnarray}
The first line of \eqref{didass} imposes that $X$ is not affected by $D$ and formalizes the conditional common trend assumption stating that conditional on $X$, no unobservables jointly affect the treatment and the trend of mean potential outcomes under nontreatment. This is a selection-on-observables assumption on $D$, however, w.r.t.\ the changes in mean potential outcomes over time, rather than their levels as in \eqref{assumpselobs} of Section \ref{selobs}. The two types of assumptions are not nested, such that neither implies the other, and cannot be combined for the sake of a more general model, see the discussion in \cite{ChabeFerret2017}. The second line in \eqref{didass} rules out (average) anticipation effects among the treated, implying that $D$ must not causally influence pre-treatment outcomes in expectation of the treatment to come. The third line imposes common support: For any value of $X$ appearing in the group with $(D=1,T=1)$, subjects with such values of $X$ must also exist in the remaining three groups with $(D=1,T=0)$, $(D=0,T=1)$, and $(D=0,T=0)$.
Given that the identifying assumptions hold, the DiD strategy applies both to panel data with the same subjects in pre- and post-treatment periods and to repeated cross sections
with different subjects in either period. Under \eqref{didass}, $E[Y|D=0,T=1,X]-E[Y|D=0,T=0,X]=E[Y_1(0)-Y_0(0)|D=0,X]=E[Y_1(0)-Y_0(0)|D=1,X]$. This may be subtracted from $E[Y|D=1,T=1,X]-E[Y|D=1,T=0,X]=E[Y_1(1)-Y_0(1)|D=1,X]=E[Y_1(1)-Y_1(0)|D=1,X]+E[Y_1(0)-Y_0(1)|D=1,X]=E[Y_1(1)-Y_1(0)|D=1,X]+E[Y_1(0)-Y_0(0)|D=1,X]$, where the second equality follows from subtracting and adding $Y_1(0)$ and the third from ruling out anticipation effects, in order to obtain the conditional ATET $E[Y_1(1)-Y_1(0)|D=1,X]$. Therefore, averaging over the distribution of $X$ among the treated in the post-treatment period yields the ATET in that period:
\begin{eqnarray}\label{DiDidentpost}
&&\Delta_{D=1,T=1}=E[ \mu_1(1,X)-\mu_1(0,X) - (\mu_0(1,X)-\mu_0(0,X))|D=1, T=1 ] \\
&=& E\left[ \left\{ \frac{D\cdot T}{ \Pi} - \frac{D\cdot (1-T)\cdot \rho_{1,1}(X)}{ \rho_{1,0}(X)\cdot \Pi}
- \left( \frac{(1-D)\cdot T\cdot \rho_{1,1}(X)}{ \rho_{0,1}(X) \cdot \Pi} - \frac{(1-D)\cdot (1-T)\cdot \rho_{1,1}(X)}{ \rho_{0,0}(X)\cdot \Pi}\right)\right\}\cdot Y \right],\notag
\end{eqnarray}
where $\Pi=\Pr(D=1,T=1)$, $\rho_{d,t}(X)=\Pr(D=d,T=t|X)$, and $\mu_d(t,x)=E[Y|D=d,T=t,X=x]$.
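The first line of \eqref{DiDidentpost}, conditional DiD within covariate cells averaged over the treated post-treatment distribution of $X$, can be sketched for a binary covariate as follows (simulated repeated cross sections; cell means stand in for the conditional mean outcomes $\mu_d(t,x)$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40000
X = rng.integers(0, 2, n)                        # binary covariate
T = rng.binomial(1, 0.5, n)                      # pre/post period (repeated cross sections)
D = rng.binomial(1, np.where(X == 1, 0.7, 0.3))  # treatment group depends on X
# levels differ by group (selection bias), trends differ by X (conditional common trend)
trend = np.where(X == 1, 1.0, 0.5)
Y = 2.0 * D + trend * T + 1.0 * D * T + rng.normal(0, 1, n)  # ATET = 1

def mu(d, t, x):
    # cell-mean stand-in for mu_d(t, x) = E[Y | D=d, T=t, X=x]
    m = (D == d) & (T == t) & (X == x)
    return Y[m].mean()

# conditional DiD within X-cells, averaged over X among treated post-treatment obs
post_treated = (D == 1) & (T == 1)
w = np.array([(X[post_treated] == x).mean() for x in (0, 1)])
cell_did = np.array([mu(1, 1, x) - mu(1, 0, x) - (mu(0, 1, x) - mu(0, 0, x))
                     for x in (0, 1)])
atet_hat = (w * cell_did).sum()
```

Note that an unconditional two-by-two DiD would be biased here because the time trend differs across values of $X$ and the treated group oversamples $X=1$; conditioning on the covariate restores common trends within cells.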
As pointed out in \cite{Hong2013}, many DiD studies at least implicitly make the additional assumption that the joint distributions of treatment $D$ and covariates $X$ remain constant over time $T$, formalized by $(X,D)\bot T$. This for instance rules out that the composition of $X$ changes between periods in either treatment group. Under this additional assumption, $\Delta_{D=1,T=1}$ coincides with the `standard' ATET $\Delta_{D=1}$, which is then identified by the following expressions:
\begin{eqnarray}\label{DiDident}
\Delta_{D=1}&=&E[ \mu_1(1,X)-\mu_1(0,X) - (\mu_0(1,X)-\mu_0(0,X))|D=1 ] \\
&=& E\left[ \left\{ \frac{D\cdot T}{ P\cdot \Lambda} - \frac{D\cdot (1-T)}{ P\cdot (1-\Lambda)}
- \left( \frac{(1-D)\cdot T\cdot p(X)}{ (1-p(X))\cdot P\cdot \Lambda} - \frac{(1-D)\cdot (1-T)\cdot p(X)}{ (1-p(X))\cdot P\cdot (1-\Lambda)}\right)\right\}\cdot Y \right]\notag \\
&=& E\left[ \left\{ \frac{D\cdot T}{ P\cdot \Lambda} - \frac{D\cdot (1-T)}{ P\cdot (1- \Lambda)}
- \left(\frac{(1-D)\cdot T\cdot p(X)}{ (1-p(X))\cdot P\cdot \Lambda} - \frac{(1-D)\cdot (1-T)\cdot p(X)}{ (1-p(X))\cdot P\cdot (1-\Lambda)}\right) \right\}\cdot (Y-\mu_0(T,X)) \right],\notag
\end{eqnarray}
where $p(X)=\Pr(D=1|X)$, $P=\Pr(D=1)$, and $\Lambda=\Pr(T=1)$. Exploiting the identification results after the first, second, and third equalities in \eqref{DiDident}, $n^{-1/2}$-consistent estimation may be based on regression or matching, on IPW as considered in \cite{Abadie2005}, or on DR estimation as in \cite{SantAnnaZhao2018}, respectively. \cite{Zimmert2018} shows that in the presence of high dimensional covariate information, causal machine learning based on the DR representation in \eqref{DiDident} can be semiparametrically efficient in analogy to the results in Section \ref{ML}.
A general practical issue concerning DiD inference is clustering, due to a correlation in uncertainty over time (e.g.\ in panel data due to having the same subjects in either period) or within regions (e.g.\ due to being exposed to the same institutional context). In this case, observations are not independently sampled from each other, implying that inference methods not accounting for clustering might perform poorly. See e.g.\ \cite{BertrandDuflo04}, \cite{DonaldLang07}, \cite{Cameronetal2008}, \cite{ConleyTaber2011}, \cite{FermanPinto2019} for a discussion of this issue as well as of (corrections of) asymptotic or bootstrap-based inference methods under a large or small number of clusters in the treatment groups. The findings of this literature suggest that cluster- and heteroskedasticity-robust variance estimators might only work satisfactorily if the number of treated and nontreated clusters is large enough, while a small number of clusters requires more sophisticated inference methods.
The subsequent discussion reviews some methodological extensions.
\cite{cha15} discuss identification when the introduction of the treatment does not induce everyone in the treatment group to be treated, but (only) increases the treatment rate by more than in the nontreated group, in the spirit of an instrument, see Section \ref{IV}. \cite{AbrahamSun2018}, \cite{AtheyImbens2018DiD}, \cite{BorusyakJaravel2018}, \cite{CallawaySantAnna2018}, \cite{GoodmanBacon2018}, \cite{Hull2018}, \cite{Strezhnev2018}, \cite{cha19}, and \cite{ImaiKim2019} discuss DiD identification with multiple time periods and treatment groups that might experience treatment introduction at different points in time. \cite{Arkhangelskyetal2019} consider unit- and time-weighted DiD estimation.
\cite{AtheyImbens06} suggest the so-called Changes-in-Changes (CiC) approach, which is related to DiD in that it exploits differences in pre- and post-treatment outcomes, however, based on different (and non-nested) identifying assumptions. While CiC does not invoke any common trend assumption, it imposes that potential outcomes under nontreatment are strictly monotonic in unobserved heterogeneity and that the distribution of the latter remains constant over time within treatment groups. Such a conditional independence between unobserved heterogeneity and time is satisfied if the subjects' ranks in the outcome distributions within treatment groups do not systematically change from pre- to post-treatment periods. In contrast to DiD, CiC allows identifying both the ATET and QTET, but generally requires a continuously distributed outcome for point identification.
Finally, another approach related to, but in terms of identification yet different from DiD is the synthetic control method of \cite{AbadieGardeazabal2003} and \cite{Abadieetal2010}, which was originally developed for case study setups with only one treated, but many nontreated units. It is based on appropriately weighting nontreated units to synthetically impute the treated unit's potential outcome under nontreatment. See e.g.\ the review article of \cite{AbadieCattaneo2018} which contains a section on the synthetic control method that provides references to methodological advancements.
\section{Regression discontinuity and kink designs}\label{rdd}
The regression discontinuity design (RDD), see \cite{Thistlethwaite60}, is based on the assumption that at a particular threshold of some observed running variable, the treatment status either changes from zero to one for everyone (sharp design) or for a subpopulation (fuzzy design). As an example, assume that the treatment of interest is extended eligibility to unemployment benefits, to which only individuals aged 50 or older are entitled, see for instance \cite{Lalive2008785}. The idea is to compare the outcomes (like unemployment duration) of treated and nontreated subjects close to the (age) threshold, e.g.\ of individuals aged 50 and 49, who are arguably similar in characteristics potentially affecting the outcome, due to their minor difference in age. The RDD therefore aims at imitating the experimental context at the threshold to evaluate the treatment effect locally for the subpopulation at the threshold.
Formally, let $R$ denote the running variable and $r_0$ the threshold value. If the treatment is deterministic in $R$ such that it is one whenever the threshold is reached or exceeded, i.e.\ $D=I\{R\geq r_{0}\}$, the RDD is sharp: All individuals change their treatment status exactly at $r_{0}$. Identification in the sharp RDD relies on the assumption that mean potential outcomes $E[Y(1)|R]$ and $E[Y(0)|R]$ are continuous and sufficiently smooth around $R=r_0$, see e.g.\ \cite{HahnToodKlaauw01}, \cite{Porter03}, and \cite{Lee08}, meaning that any factors other than $D$ that affect the outcome are continuous at the threshold. Continuity implies that if treated and nontreated populations with values of $R$ exactly equal to $r_0$ existed, the treatment would be as good as randomly assigned w.r.t.\ mean potential outcomes. This corresponds to a local selection-on-observables assumption conditional on $R=r_0$. Furthermore, the density of the running variable $R$ must be continuous and bounded away from zero around the threshold, such that treated and nontreated observations are observed close to $R=r_0$.
Under these assumptions, the ATE at the threshold, denoted by $\Delta_{R=r_0}$, is identified based on treated and nontreated outcomes in a neighbourhood $\varepsilon>0$ around the threshold when letting $\varepsilon$ go to zero:
\begin{eqnarray}\label{sharp}
&&\underset{\varepsilon \rightarrow 0}{\lim }E[ Y| R\in [r_{0}, r_{0}+\varepsilon) ]-\underset{\varepsilon \rightarrow 0}{\lim }E[ Y| R\in [r_{0}-\varepsilon, r_{0}) ]\\
&=&\underset{\varepsilon \rightarrow 0}{\lim }E[ Y(1)|R\in [r_{0}, r_{0}+\varepsilon) ]-\underset{\varepsilon \rightarrow 0}{\lim }E[ Y(0)|R \in [r_{0}-\varepsilon, r_{0}) ]=E[Y(1)-Y(0)|R=r_{0}]=\Delta_{R=r_0}.\notag
\end{eqnarray}
In the fuzzy RDD, $D$ is not deterministic in $R$ but may also depend on other factors. It is, however, assumed that the treatment share changes discontinuously at the threshold. Assume e.g.\ that admittance to a college ($D$) depends on passing a particular threshold of the score in a college entrance exam ($R$). While some students might decide not to attend college even if succeeding in the exam, a discontinuous change in the treatment share occurs if compliers exist that are induced to go to college when passing the threshold. Denote by $D(z)$ the potential treatment state as a function of the binary indicator $Z=I\{R\geq r_{0}\}$, which serves as instrument in an analogous way as discussed in Section \ref{IV}. Similar to \cite{Dong2014}, assume that around the threshold, defiers do not exist and that the shares of compliers, always takers, and never takers as well as their mean potential outcomes under treatment and nontreatment are continuous. This implies that IV-type assumptions similar to those postulated in \eqref{assumpiv} conditional on $X$ hold conditional on $R=r_0$.
Under these conditions, the first stage effect of $Z$ on $D$, denoted by $\gamma_{R=r_0}$, is identified by
\begin{eqnarray}
&&\underset{\varepsilon \rightarrow 0}{\lim }E[ D| R\in [r_{0}, r_{0}+\varepsilon) ]-\underset{\varepsilon \rightarrow 0}{\lim }E[ D| R\in [r_{0}-\varepsilon, r_{0}) ]\\
&=&\underset{\varepsilon \rightarrow 0}{\lim }E[ D(1)|R\in [r_{0}, r_{0}+\varepsilon) ]-\underset{\varepsilon \rightarrow 0}{\lim }E[ D(0)|R \in [r_{0}-\varepsilon, r_{0}) ]=E[D(1)-D(0)|R=r_{0}]=\gamma_{R=r_0}.\notag
\end{eqnarray}
Furthermore, the first line of \eqref{sharp} identifies the ITT effect of $Z$ on $Y$ at the threshold, denoted by $\theta_{R=r_0}$ in the fuzzy RDD (rather than $\Delta_{R=r_0}$ as in the sharp RDD). In analogy to \eqref{LATEident} in Section \ref{IV}, the LATE on compliers at the threshold, denoted by $\Delta_{D(1)=1,D(0)=0,R=r_0}=E[Y(1)-Y(0)|D(1)=1,D(0)=0, R=r_{0}]$, is identified by dividing the ITT by the first stage effect at the threshold:
\begin{eqnarray}
\Delta_{D(1)=1,D(0)=0,R=r_0}=\frac{\theta_{R=r_0}}{\gamma_{R=r_0}}.
\end{eqnarray}
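To fix ideas, this ratio identification can be illustrated in a minimal simulation. The following sketch is my own illustration (variable names and parameter values are purely hypothetical, not taken from the chapter): with 60\% compliers and a homogeneous treatment effect of 2, dividing the jump in mean outcomes at the threshold by the jump in treatment shares recovers the LATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
r0 = 0.0
R = rng.uniform(-1, 1, n)          # running variable (e.g. exam score, centered)
Z = (R >= r0).astype(float)        # instrument: indicator for passing the threshold

# Compliance types: 60% compliers, 25% always takers, 15% never takers
u = rng.uniform(0, 1, n)
complier = u < 0.60
always = (u >= 0.60) & (u < 0.85)
D = np.where(always, 1.0, np.where(complier, Z, 0.0))

# Outcome: treatment effect of 2.0 for everyone (so the LATE is 2.0),
# plus a smooth trend in R and noise
Y = 2.0 * D + 0.5 * R + rng.normal(0, 1, n)

# Estimate the jumps at the threshold within a small window (local constant)
eps = 0.05
right = (R >= r0) & (R < r0 + eps)
left = (R >= r0 - eps) & (R < r0)
itt = Y[right].mean() - Y[left].mean()          # ITT effect of Z on Y
first_stage = D[right].mean() - D[left].mean()  # first stage, close to 0.60
late = itt / first_stage                        # Wald ratio, close to 2.0
```

In larger samples and for shrinking windows, the ratio converges to the complier effect at the threshold, mirroring the population identification result above.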
In empirical applications of the RDD, the treatment effect is predominantly estimated by a local regression around the threshold. Practitioners for instance frequently use a linear regression for estimating $E[Y|D=0,R<r_0]$ and $E[Y|D=1,R\geq r_0]$ within some bandwidth around $r_0$ in order to estimate $\Delta_{R=r_0}$ by the difference of the regression functions at $r_0$ in the case of the sharp RDD. A smaller bandwidth decreases estimation bias, because observations closer to the threshold are more comparable and effect estimation is more robust to model misspecification, see \cite{Imbens18}, but increases the variance due to relying on a lower number of observations. \cite{Imbens01072012} propose a method for bandwidth selection that minimizes the squared error of the estimator. However, the optimal bandwidth for point estimation is generally suboptimal (and too large) for conducting inference, e.g.\ for computing confidence intervals. For this reason, \cite{ECTA1465} propose inference methods that are more robust to bandwidth choice and yield confidence intervals more closely matching nominal coverage, along with optimal bandwidth selection for inference. Their results imply that when $\Delta_{R=r_0}$ is estimated by linear regression within some bandwidth, then quadratic regression (i.e.\ one order higher) with the same bandwidth should be used for the computation of the standard error and confidence intervals. \cite{ArmstrongKolesar2018} suggest an alternative approach to inference that takes into account the worst case bias that could arise given a particular bandwidth choice. \cite{CattaneoFrandsenTitiunik2015} develop randomization methods for exact finite sample inference in the RDD under somewhat stronger identifying assumptions.
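A bare-bones version of the local linear approach for the sharp design might look as follows. This is an illustrative sketch of my own with an ad hoc bandwidth (rather than a data-driven choice as discussed above); the true effect in the simulated data is 1.5.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
r0 = 0.0
R = rng.uniform(-1, 1, n)
D = (R >= r0).astype(float)                     # sharp design: D jumps at r0
# Outcome with true effect 1.5 and a mildly nonlinear trend in R
Y = 1.5 * D + 0.8 * R + 0.4 * R**2 + rng.normal(0, 1, n)

h = 0.2                                         # bandwidth (ad hoc, for illustration)

def local_linear_at_r0(mask):
    """Fit Y = a + b*(R - r0) within the window; return the intercept a,
    i.e. the regression function evaluated at the threshold."""
    X = np.column_stack([np.ones(mask.sum()), R[mask] - r0])
    coef, *_ = np.linalg.lstsq(X, Y[mask], rcond=None)
    return coef[0]

above = (R >= r0) & (R < r0 + h)
below = (R >= r0 - h) & (R < r0)
effect = local_linear_at_r0(above) - local_linear_at_r0(below)  # close to 1.5
```

Shrinking `h` reduces the bias from the nonlinear trend at the cost of higher variance, which is precisely the trade-off the bandwidth selection literature addresses.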
The identifying assumptions of the RDD are partly testable in the data. \cite{Mccrary08} proposes a test for the continuity of the density of the running variable at the threshold, as a discontinuity points to a manipulation of $R$ and selective bunching at one side of the threshold. In the previous example based on \cite{Lalive2008785}, certain employees and companies might for instance manipulate age at entry into unemployment by postponing layoffs such that the age requirement for extended unemployment benefits is just satisfied. As a further test, \cite{Lee08} suggests investigating whether observed pre-treatment covariates $X$ are locally balanced at either side of the threshold. Covariates also permit weakening the RDD assumptions to only hold conditional on $X$, implying that all variables jointly affecting manipulation at the threshold and the outcome are observed, see \cite{FrHu2018} who propose a nonparametric kernel estimator in this context. In contrast, \cite{CalonicoCattaneoTitiunik2016} do not exploit covariates for identification, but investigate variance reductions when linearly controlling for $X$ and provide methods for optimal bandwidth selection and robust inference for this case.
Several studies investigate conditions under which the rather local RDD effect can be extrapolated to other populations. \cite{DongLewbel2011} show the identification of the derivative of the RDD treatment effect in both sharp and fuzzy designs, which permits identifying the change in the treatment effect resulting from a marginal change in the threshold. \cite{AngristRokkanen2015} test whether the running variable's association with the outcome vanishes on either side of the threshold conditional on covariates $X$. For the case of the sharp RDD, this implies that $X$ is sufficient to control for confounding just as under the selection-on-observables framework of Section \ref{selobs}, such that effects are also identified away from the threshold. In context of the fuzzy RDD, \cite{BertanhaImbens2019} propose a test for the equality in mean outcomes of treated compliers and always takers, as well as of untreated compliers and never takers. This permits investigating whether the effect on compliers at the threshold may be extrapolated to all compliance types at and away from the threshold. \cite{Cattaneoetal2019} demonstrate extrapolation under multiple thresholds, i.e.\ when the threshold may vary for various subjects instead of being equal for everyone, as considered in \cite{Cattaneoetal2016}.
\cite{LeeCard08}, \cite{Dong2015}, and \cite{KolesarRothe2018} discuss identification and inference when the forcing variable is discrete rather than continuous, which is highly relevant for empirical applications. \cite{Papayetal2011} and \cite{KeeleTitiunik2015} extend the regression-discontinuity approach to multiple running variables. \cite{ImbensWager2019} propose an optimization-based inference method for deriving the minimax linear RDD estimator which can be applied to continuous, discrete, and multiple running variables. \cite{FrFrMe2012} discuss the identification of quantile treatment effects in the RDD. See also \cite{ImbensLemieux08} and \cite{LeLe09} for surveys on the applied and theoretical RDD literature.
Related to the fuzzy RDD is the regression kink design (RKD), see \cite{Cardetal2015}, which is technically speaking a first derivative version of the former. The treatment is assumed to be a continuous function of the running variable $R$ (rather than discontinuous as in the RDD), with a kink at $r_0$. This implies that the first derivative of $D$ w.r.t.\ $R$ (rather than the level of $D$ as in the RDD) is discontinuous at the threshold. In \cite{Landais2015}, for instance, unemployment benefits ($D$) are a kinked function of the previous wage ($R$): $D$ corresponds to $R$ times a constant percentage up to a maximum previous wage $r_0$ beyond which $D$ does not increase any further but remains constant. For this piecewise linear function, the derivative of $D$ w.r.t.\ $R$ corresponds to the percentage for $R<r_0$ and to zero for $R\geq r_0$. As the treatment is deterministic in the running variable, this is known as sharp RKD.
Given appropriate continuity and smoothness conditions w.r.t.\ mean potential outcomes and the density of $R$ around $r_0$, scaling the change in the first derivatives of mean outcomes w.r.t.\ $R$ at the threshold by the corresponding change in first derivatives of $D$ identifies a causal effect. The latter corresponds to the average derivative of the potential outcome with respect to $D$ when $D$ takes its value at the threshold, denoted by $d_0$, within the local population at $R=r_0$:
\begin{eqnarray}\label{rkd}
\Delta_{R=r_0}(d_0)=\frac{\partial E[Y(d_0)|R=r_0]}{\partial D}= \frac{\underset{\varepsilon \rightarrow 0}{\lim } \frac{\partial E[ Y| R\in [r_{0}, r_{0}+\varepsilon) ]}{\partial R} -
\underset{\varepsilon \rightarrow 0}{\lim } \frac{\partial E[ Y| R\in [r_{0}-\varepsilon, r_{0}) ]}{\partial R}
}{ \underset{\varepsilon \rightarrow 0}{\lim } \frac{\partial D| R\in [r_{0}, r_{0}+\varepsilon) }{\partial R} - \underset{\varepsilon \rightarrow 0}{\lim } \frac{ \partial D| R\in [r_{0}-\varepsilon, r_{0}) }{\partial R} }
\end{eqnarray}
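The sharp RKD estimand can likewise be illustrated by simulation. The sketch below is my own (the kinked benefit schedule loosely mimics the unemployment insurance example, with purely illustrative numbers): the jump in the slope of the outcome at the kink, divided by the jump in the slope of the treatment, recovers the true marginal effect of $D$, here set to 3.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
r0 = 1.0
R = rng.uniform(0, 2, n)                        # running variable (previous wage)
D = 0.2 * np.minimum(R, r0)                     # kinked benefit schedule (sharp RKD)
Y = 3.0 * D + 1.0 * R + rng.normal(0, 0.2, n)   # true marginal effect of D: 3.0

h = 0.25

def slope(mask, V):
    """OLS slope of V on (R - r0) within a one-sided window around the kink."""
    X = np.column_stack([np.ones(mask.sum()), R[mask] - r0])
    coef, *_ = np.linalg.lstsq(X, V[mask], rcond=None)
    return coef[1]

above = (R >= r0) & (R < r0 + h)
below = (R >= r0 - h) & (R < r0)
dY = slope(above, Y) - slope(below, Y)          # change in outcome slope, ~ -0.6
dD = slope(above, D) - slope(below, D)          # change in treatment slope, = -0.2
rkd_effect = dY / dD                            # ratio, close to 3.0
```

Because derivatives are estimated rather than levels, RKD estimates are typically noisier than RDD estimates for comparable samples, which the simulation also reflects.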
The fuzzy RKD permits deviations from the kinked function characterizing how the running variable affects the treatment, such that $D$ is not deterministic in $R$, see for instance \cite{SimonsenSkipperSkipper2016} for a study investigating the price sensitivity of product demand. Under specific continuity conditions and the monotonicity-type assumption that the kink of any individual either goes in the same direction or is zero, a causal effect at the threshold is identified among individuals with nonzero kinks. To this end, the derivatives of the treatment in \eqref{rkd}, namely $\frac{\partial D| R\in [r_{0}, r_{0}+\varepsilon) }{\partial R}$ and $\frac{ \partial D| R\in [r_{0}-\varepsilon, r_{0}) }{\partial R}$, are to be replaced by the derivatives of expectations
$\frac{\partial E[ D| R\in [r_{0}, r_{0}+\varepsilon) ]}{\partial R}$ and $\frac{\partial E[ D| R\in [r_{0}-\varepsilon, r_{0}) ]}{\partial R}$. As the expectation of a treatment may be continuous even if the treatment itself is not, the fuzzy RKD may also be applied to a binary $D$, see \cite{Dong2014}. \cite{ECTA1465} provide robust inference methods for the RKD, while \cite{GaJa2018} propose a permutation method for exact finite sample inference.
\section{Conclusion}\label{conclusion}
This chapter provided an overview of different approaches to policy evaluation for assessing the causal effect of a treatment on an outcome. Starting with an introduction to causality and the experimental evaluation of a randomized treatment, it subsequently discussed identification and flexible estimation under selection on observables, instrumental variables, difference-in-differences, changes-in-changes, and regression discontinuities and kinks. Particular attention was devoted to approaches combining policy evaluation with machine learning to provide data-driven procedures for tackling confounding related to observed covariates, investigating effect heterogeneities across subgroups, and learning optimal treatment policies. In a world with ever increasing data availability, such causal machine learning methods aimed at optimally exploiting large amounts of information for causal inference will likely leverage the scope of policy evaluation to unprecedented levels. Besides the classic domain of public policies, this concerns not least the private sector, with ever more firms investing in data analytics to assess and optimize the causal impact of their actions like price policies or advertising campaigns.
\setlength\baselineskip{14.0pt}
\bibliographystyle{agsm}
\section{Overview of Galaxy Scaling Relations}
The dynamics of disk galaxies are well-organized, following a variety of well-established scaling relations.
These relations are remarkably tight,
and can be summarized with a few simple rules: \\
\begin{enumerate}[label=\arabic*.\ ]
\item \textbf{Flat Rotation Curves}
\begin{itemize}
\item[] The rotation curves of galaxies tend towards an approximately constant rotation speed that persists to indefinitely
large radii \cite[(Rubin et al.\ 1978, 1980, Bosma 1981a,b)]{Rubin1978,Rubin1980,Bosma1981a,Bosma1981b}.
\end{itemize}
\item \textbf{Renzo's Rule}
\begin{itemize}
\item[] For any feature in the luminosity profile there is a corresponding feature in the rotation curve, and vice versa \cite[(Sancisi 2004)]{renzorule}.
\end{itemize}
\item \textbf{The Baryonic Tully-Fisher Relation (BTFR)}
\begin{itemize}
\item[] The amplitude of the flat rotation speed of a galaxy correlates with its baryonic mass (the sum of stars and gas:
\cite[McGaugh et al.\ 2000, Lelli et al.\ 2016a, 2019]{btforig,sparcbtfr,Lelli2019}).
\end{itemize}
\item \textbf{The Central Density Relation (CDR)}
\begin{itemize}
\item[] The dynamically measured central mass surface density of a galaxy correlates with its
photometrically measured central surface brightness \cite[(Lelli et al.\ 2013, 2016c)]{Lelli2013,sparc_cdr}.
\end{itemize}
\item \textbf{The Radial Acceleration Relation (RAR)}
\begin{itemize}
\item[] The observed centripetal acceleration correlates with that predicted by the distribution of
baryonic mass \cite[(McGaugh et al.\ 2016, Lelli et al.\ 2017, Li et al.\ 2018)]{RAR,OneLaw,LiRARfit}. \\
\end{itemize}
\end{enumerate}
These rules have been established over the years by the efforts of many astronomers working across optical, infrared, and radio wavelengths.
For brevity, we illustrate these rules utilizing recent work for which the mass distributions of both stars and gas are well constrained:
the SPARC (Spitzer Photometry and Accurate Rotation Curves) database \cite[(Lelli et al.\ 2016b)]{SPARC} supplemented by the
gas-rich galaxies discussed by \cite[McGaugh (2012)]{M12}.
The SPARC database contains galaxies for which both HI data cubes and Spitzer [3.6] surface photometry are available, providing
a comprehensive picture of the distribution of both stars and gas. The supplementary gas rich galaxies lack Spitzer photometry,
but are so gas dominated that optical data suffice to trace the minority stars.
The ideal galaxy sample includes all galaxies within a suitably large volume of the universe \cite[(e.g., Eckert et al.\ 2015)]{RESOLVE}.
In practice, this can never be achieved: there is always a minimum luminosity and surface brightness below which galaxies cannot be detected.
To make matters worse, the vast majority of galaxy catalogs are magnitude limited rather than volume limited. This strongly biases samples
to the brightest objects that exist in the intrinsic distribution \cite[(McGaugh et al.\ 1995)]{MBS1995}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.4in]{mcgaugh_fig1-eps-converted-to.pdf}
\caption{The stellar masses of rotating galaxies compared to their gas masses (left) and disk scale lengths (right).
Blue points are galaxies in the SPARC database \cite[(Lelli et al.\ 2016b)]{SPARC} and the gas rich galaxies discussed by \cite[McGaugh (2012)]{M12}.
The location of the Milky Way is noted in red \cite[(McGaugh 2016)]{M16}: it is a typical bright spiral.
Grey points at left are the SDSS sample of \cite[Bradford et al.\ (2015)]{Bradford2015}; at right that of \cite[Courteau et al.\ (2007)]{C2007}.
The line at left is the line of equality ($M_* = M_g$); that at right is a line of constant surface brightness ($M_* \sim R_d^2$).
The inset at lower right shows the raw number of galaxies in SDSS DR7 as a function of stellar mass.
\label{MstMg}}
\end{center}
\end{figure}
It being impossible to obtain a perfect galaxy sample, the next best thing is to sample randomly across all
decades of the mass function. The mass function rises to lower masses \cite[(Moffett et al.\ 2016)]{GAMA},
so a representative sample will have more low mass than high mass galaxies.
This is the opposite of what happens in magnitude-limited samples, where the numbers of low mass galaxies decline
sharply below a peak around $M_* = 5 \times 10^{10}\;\mathrm{M}_{\odot}$. While an obvious statement, Fig.\ \ref{MstMg} makes viscerally apparent
how stark the difference is between declining apparent numbers and increasing intrinsic numbers of low mass galaxies.
The sample discussed here provides a much broader perspective in terms of mass, surface brightness, and gas content
than is available from the magnitude limited samples that pervade the literature.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.4in]{mcgaugh_fig2-eps-converted-to.pdf}
\caption{Rotation curves (points) and mass models (lines) for the spiral galaxy
NGC 6946 \cite[(left: Blais-Ouellette et al.\ 2004, Daigle et al.\ 2006, Boomsma 2007)]{BO2004,D2006,Boomsma2007}
and the gas rich dwarf DDO 154 \cite[(right: de Blok et al.\ 2008)]{THINGS}.
In neither case is an exponential disk (dotted line) an adequate approximation for the mass model:
proper treatment requires numerical solution of the Poisson equation.
\label{renzorule}}
\end{center}
\end{figure}
\noindent\textbf{1.\ Flat Rotation Curves} are a sufficiently famous result that they require little review. Examples can be seen in
Figures \ref{renzorule} and \ref{RCs}. That rotation curves become flat is, at this juncture, a \textit{de facto} Law of Nature akin to Kepler's Laws.
Of course, no rotation curve is \textit{perfectly} flat in the sense that $dV/dR = 0.000$. Rather, there is inevitably a range of radii over which
the rotation speed is nearly constant, within 5\% \cite[(Lelli et al.\ 2016a)]{sparcbtfr}. More generally, there is a tendency for the rotation
curves of bright galaxies to rise steeply then fall a little before flattening out \cite[(Noordermeer et al.\ 2007)]{Noord2007},
while those of faint galaxies tend to rise gradually, rolling over
toward flatness only slowly \cite[(de Blok \& McGaugh 1996, Swaters et al.\ 2009)]{dBM96}. Oftentimes, data do not reach far enough out to clearly see this:
the data taper off while $V(R)$ is still rising \cite[(Stark et al.\ 2009, Trachternach et al.\ 2009)]{Stark,Trach}.
However, in those cases where more extended data have been obtained, the flattening has always been seen \cite[(de Blok et al.\ 2008)]{THINGS}.
\noindent\textbf{2.\ Renzo's Rule} highlights the correspondence between detailed features measured independently in the kinematics
and photometry of galaxies. This is natural when stars dominate the mass:
features in the dominant mass distribution must also be reflected in the gravitational potential.
Renzo's rule also applies in low surface brightness (LSB)
galaxies, where the correspondence persists despite the mass discrepancy being large.
This is \textit{not} natural, as a dynamically hot, quasi-spherical dark matter halo cannot support the same features as a dynamically cold,
thin baryonic disk \cite[(Binney \& Tremaine 1987)]{BT87}.
Fig.\ \ref{renzorule} illustrates Renzo's rule in both high and low surface brightness galaxies.
The inner shape of the mass model and rotation curve in NGC 6946 is driven by a compact bulge component
that contains a mere 6\% of the total light. Similarly, the mass model of DDO 154, which is dominated by gas
($\mathrm{M}_{\mathrm{HI}}/\mathrm{L}_{[3.6]} = 5.2$, implying $f_g \approx 0.93$),
has kinks around $R = 0.5$, 2, and 5 kpc that are also apparent in the rotation curve.
This occurs despite the baryons being sub-dominant at essentially all radii.
The general correspondence between surface brightness and kinematics is also apparent in Fig.\ \ref{RCs}.
\noindent\textbf{3.\ The Baryonic Tully-Fisher Relation} is a generalization of the original \cite[Tully \& Fisher (1977)]{TF77} relation to include gas mass
as well as stellar luminosity. Bright galaxies are star dominated (Fig.\ \ref{MstMg}), so a tight correlation between luminosity and
rotation speed gives a Tully-Fisher relation for galaxies with $M_* > 10^9\;\mathrm{M}_{\odot}$. Below this mass scale,
the relation deteriorates. This is simply because an important mass component --- the gas --- has been neglected. A continuous
relation is restored when both stars and gas are included. The physics underpinning the relation is concerned with the total amount of
baryonic mass, but not whether it is stars or gas.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{mcgaugh_fig3-eps-converted-to.pdf}
\caption{Rotation curves of galaxies in the SPARC database \cite[(Lelli et al.\ 2016b)]{SPARC}
color coded by 3.6 micron effective surface brightness. Note the
near-perfect rainbow sequence from the slowly rising
rotation curves of LSB galaxies (blue) to the steeply rising ones of HSB galaxies (red).
The dynamics of galaxies is closely connected to the distribution of light.
\label{RCs}}
\end{center}
\end{figure}
The BTFR is a simple power law of the form $M_b \propto V^x$.
Various measures for the rotation speed give similar but subtly different Tully-Fisher relations, differing in details like the slope $x$ and scatter.
To the best we are able to determine, the scatter is minimized when the flat rotation velocity can be measured.
This cannot be said of other measures like the linewidth \cite[(Lelli et al.\ 2019)]{Lelli2019}. Indeed, the intrinsic scatter
of the $V_f$ BTFR --- that left over after accounting for scatter caused by observational uncertainties ---
is remarkably small for an extragalactic correlation, $\sim 0.1$ dex \cite[(Lelli et al.\ 2016a)]{sparcbtfr}.
An irreducible component of the intrinsic scatter is that in stellar mass-to-light ratios ($M_*/L$).
At near-IR wavelengths, stellar population models anticipate a scatter of 0.1 to 0.15 dex
from variations in the star formation history \cite[(Bell \& de Jong 2001, Portinari et al.\ 2004, Meidt et al.\ 2014)]{BdJ01,P04,Meidt}.
This consumes the entire budget for intrinsic scatter in the BTFR, leaving essentially no room for variation in the galaxy-averaged IMF or
scatter in the halo mass-concentration relation.
The scatter in the BTFR has declined steadily as the data have improved.
Every time a new type of galaxy is identified and measured, it falls
close to the BTFR but not necessarily right on. As improved observations are made, the discrepant cases become more consistent with the BTFR.
The most common problems are a failure to measure $V_f$ and systematic uncertainties in distances and inclinations
above and beyond those indicated by the random errors in these quantities.
Experience leads us to anticipate a similar evolution from outliers
to adherents as new galaxies are measured on the fringes of current knowledge.
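The BTFR normalization can be condensed into the characteristic acceleration $\mathrm{g}_{\mathrm{TF}} = \chi V_f^4/(G M_b)$ defined in the caption of Fig.\ \ref{BTFR}. As a back-of-the-envelope check (my own illustration; the input values are representative of a bright spiral, not measurements from SPARC), this lands near $10^{-10}\;\mathrm{m}\,\mathrm{s}^{-2}$:

```python
# Characteristic acceleration implied by the BTFR normalization,
# g_TF = chi * V_f^4 / (G * M_b). All input numbers are illustrative only.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
chi = 0.8              # geometric factor for the cylindrical geometry of disks

V_f = 200e3            # flat rotation speed, m/s (bright-spiral scale)
M_b = 1e11 * M_sun     # baryonic mass, kg

g_TF = chi * V_f**4 / (G * M_b)   # ~1e-10 m s^-2, comparable to g_dagger
```

That the BTFR normalization yields an acceleration of the same order as the scale appearing in the RAR is one manifestation of the common acceleration scale discussed below.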
\noindent\textbf{4.\ The Central Density Relation} is a relation between the central surface brightness of galaxies and their dynamical mass surface density
(Fig.\ \ref{CDR}). This would be a trivial statement if there were no mass discrepancy.
Indeed, at high surface brightness (HSB), there is a 1:1 relation between
surface brightness and mass surface density, as expected when stars dominate. However, as the surface brightness declines, the stars no longer suffice
to account for the mass budget, and the data depart from the 1:1 line. Despite the increasing need for dark matter towards the centers of LSB galaxies,
the correlation persists. The dynamical surface density can be predicted from the surface brightness of the sub-dominant stars.
Similar results can be seen in \cite[de Blok \& McGaugh (1996)]{dBM96}, \cite[McGaugh \& de Blok (1998)]{MdB98a},
\cite[Swaters et al.\ (2012, 2014)]{S2012,S2014} and \cite[Lelli et al.\ (2013)]{Lelli2013}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.4in]{mcgaugh_fig4-eps-converted-to.pdf}
\caption{The stellar mass (left) and baryonic Tully-Fisher relation (right). Baryonic mass (stars plus gas) correlates strongly with the flat rotation speed.
Data from \cite[Lelli et al.\ (2016b)]{SPARC} and \cite[McGaugh (2012)]{M12} are shown as
blue points if both axes are measured with at least 20\% accuracy; less accurate data are shown in grey.
The latter include cases for which the rotation curve does not extend far enough to measure $V_f$, in which case
the last measure point is used. These cases are systematically offset to lower velocity.
The quantity $\mathrm{g}_{\mathrm{TF}} = \chi V_f^4/(G M_b)$ defines a line of constant acceleration,
illustrated here for $\mathrm{g}_{\mathrm{TF}} = 1.2 \times 10^{-10}\;\mathrm{m}\,\mathrm{s}^{-2}$
with $\chi = 0.8$ to account for the cylindrical geometry of disks.
The location of the Milky Way is noted in red.
\label{BTFR}}
\end{center}
\end{figure}
The CDR can be seen by inspection in Fig.\ \ref{RCs}, where the color coding by surface brightness results in a near-perfect rainbow.
Low surface brightness galaxies have slowly rising rotation curves, high surface brightness galaxies have rapidly rising ones.
The dynamics responds to the distribution of luminous mass.
The CDR contrasts with the BTFR in that the CDR depends on the distribution of luminous mass as quantified by the surface brightness,
while the BTFR depends only on the total baryonic mass and not its distribution. Indeed, one can find examples of galaxies of the same mass
but different surface brightness \cite[(de Blok \& McGaugh 1996, Tully \& Verheijen 1997)]{dBM96,TV97}.
Such pairs of galaxies are indistinguishable in the BTFR, but reside in different locations on the CDR.
This has more recently come to be called `diversity' \cite[(Oman et al.\ 2015)]{Oman}, in which the rotation velocity measured at
small radius ($R = 2$ kpc) differs for galaxies of similar mass. This variation is entirely accounted for by the CDR: LSB galaxies have more slowly rising
rotation curves than HSB galaxies, even those of the same mass, so $V(R=2\;\mathrm{kpc})$ differs even when $V_f$ is the same.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.4in]{mcgaugh_fig5-eps-converted-to.pdf}
\caption{The central density relation \cite[(Lelli et al.\ 2016c)]{sparc_cdr}.
The dynamical mass surface density implied by the rate of rise of the rotation curve correlates with the stellar surface brightness
at the centers of galaxies. Care is taken to measure both quantities in the same region as $R \rightarrow 0$ subject to the limitations of resolution.
The data follow the 1:1 line at high surface densities, where stars dominate the mass budget.
The data peel away from the 1:1 line at low surface brightness, evincing a mass discrepancy even at the centers of LSB galaxies.
\label{CDR}}
\end{center}
\end{figure}
\textbf{5.\ The Radial Acceleration Relation} is a correlation between the observed centripetal acceleration ($\mathrm{g}_{\mathrm{obs}} = V^2/R$)
and that predicted by the observed distribution of baryons ($\mathrm{g}_{\mathrm{bar}} = - \partial \Phi_{\mathrm{bar}}/\partial R$).
These two quantities are measured independently, and should correspond in a 1:1 fashion in a universe without dark matter.
This is true at high acceleration (Fig.\ \ref{RARfig}), which corresponds to high surface brightness through the Poisson equation.
As the acceleration declines, the data peel away from the 1:1 line in the same fashion as seen in the CDR.
Low surface brightness means low acceleration means a large mass discrepancy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5.4in]{mcgaugh_fig6-eps-converted-to.pdf}
\caption{The radial acceleration relation. The centripetal acceleration observed in the rotation curve,
$\mathrm{g}_{\mathrm{obs}} = V^2/R$, correlates with that predicted by the observed distribution of baryons,
$\mathrm{g}_{\mathrm{bar}} = -\partial \Phi_{\mathrm{bar}}/\partial R$,
obtained from numerical solution of the Poisson equation applied to the observed distribution of stars and gas.
The data available to \cite[McGaugh (2004)]{M04} are shown in the left panel; those from \cite[McGaugh et al.\ (2016)]{RAR} in
the right panel. All available data are shown with a population synthesis $M_*/L$ and
without selection for quality control. A few individual galaxies stand out in the left-hand panel, presumably because the
$B$-band $M_*/L$ is not always correctly predicted by population synthesis models. Nevertheless,
the main relation was already clear twenty years ago \cite[(McGaugh 1999)]{M99}.
Individual galaxies do not distinguish themselves in the right panel when Spitzer [3.6] data are used to trace the stellar mass distribution
\cite[(Lelli et al.\ 2017)]{OneLaw}. Open squares show the data binned;
the trend is well fit by a simple function ${\cal F}(\mathrm{g}_{\mathrm{bar}})$ (red line --- see text).
\label{RARfig}}
\end{center}
\end{figure}
\cite[Sanders (1990)]{S90} identified an empirical correlation between the amplitude of the mass discrepancy and acceleration at
the last measured point along then-available rotation curves. This was generalized to include every resolved point by
\cite[McGaugh (1999)]{M99}. The relation has steadily improved as more data have accumulated.
Fig.\ \ref{RARfig} shows the progress from the data available to \cite[McGaugh (2004)]{M04} to now \cite[(McGaugh et al.\ 2016)]{RAR}.
In both cases, the axes are measured independently using a stellar population estimator for converting starlight to stellar mass.
A tremendous step forward has been provided by deep, near-IR data from \textit{Spitzer} that
provide an excellent tracer of stellar mass for computing the baryonic gravitational potential $\Phi_{\mathrm{bar}}$.
The simplest possible assumption that all galaxies have the same [3.6] mass-to-light ratio
\cite[(Schombert et al.\ 2014, 2019)]{SM14b,SML19} suffices to construct a relation in which individual galaxies
do not stand out. This contrasts with the situation for optical data, which always have a few outliers due to misestimates of $M_*/L$.
The trend of the data in Fig.\ \ref{RARfig} can be described by a simple function ${\cal F}(\mathrm{g}_{\mathrm{bar}})$
with a single characteristic scale, $\mathrm{g}_{\mathrm{\dagger}}$:
\begin{equation}
\mathrm{g}_{\mathrm{obs}} = {\cal F}(\mathrm{g}_{\mathrm{bar}})
= \frac{\mathrm{g}_{\mathrm{bar}}}{1-e^{-\sqrt{\mathrm{g}_{\mathrm{bar}}/\mathrm{g}_{\mathrm{\dagger}}}}}.
\label{eq:RFRfit}
\end{equation}
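This functional form interpolates between two simple regimes. At high accelerations the exponential vanishes and the relation reduces to the Newtonian expectation, while at low accelerations a square-root behavior emerges:
\begin{displaymath}
\mathrm{g}_{\mathrm{obs}} \rightarrow \mathrm{g}_{\mathrm{bar}} \quad (\mathrm{g}_{\mathrm{bar}} \gg \mathrm{g}_{\mathrm{\dagger}}), \qquad
\mathrm{g}_{\mathrm{obs}} \rightarrow \sqrt{\mathrm{g}_{\mathrm{bar}}\,\mathrm{g}_{\mathrm{\dagger}}} \quad (\mathrm{g}_{\mathrm{bar}} \ll \mathrm{g}_{\mathrm{\dagger}}),
\end{displaymath}
the latter following from $1-e^{-\sqrt{x}} \approx \sqrt{x}$ for $x = \mathrm{g}_{\mathrm{bar}}/\mathrm{g}_{\mathrm{\dagger}} \ll 1$.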
The detailed shape of the baryonic mass model of each galaxy maps to its rotation curve through this equation.
Detailed fits \cite[(Li et al.\ 2018)]{LiRARfit} can be made with a single
physical\footnote{One can also marginalize over the nuisance parameters of distance and inclination.}
fit parameter: the mass-to-light ratio. There is remarkably little variation in $M_*/L_{[3.6]}$ from galaxy to galaxy,
with the occasional, inevitable oddball, as happens in any large astronomical sample.
The simplicity of the organization apparent in the data has the consequence that they can only constrain a single
fit parameter per galaxy ($M_*/L$).
Fits with traditional dark matter halo models necessarily introduce a minimum of two additional parameters per galaxy, typically
a size and mass scale. Considerable degeneracy between these parameters ensues, as is inevitable whenever three parameters
are fit to data that require only one to describe them. It is therefore impossible to uniquely constrain the properties of dark matter
halos with rotation curve fits unless strong priors from independent information are
imposed \cite[(Katz et al.\ 2017, Li et al.\ 2019)]{Katz17,Lihalos}.
A more effective approach is to utilize the RAR: the acceleration attributable to dark matter is simply
$\mathrm{g}_{\mathrm{DM}} = \mathrm{g}_{\mathrm{obs}} - \mathrm{g}_{\mathrm{bar}} = {\cal F}(\mathrm{g}_{\mathrm{bar}}) - \mathrm{g}_{\mathrm{bar}}$.
This is not subject to multi-parameter degeneracy, being limited only by the accuracy of the stellar mass-to-light ratio
used to determine $\mathrm{g}_{\mathrm{bar}}$. In practice, a crude approximation is provided by
a nearly constant $\mathrm{g}_{\mathrm{DM}} \approx 0.3 \times 10^{-10}\;\mathrm{m}\,\mathrm{s}^{-2}$
\cite[(Walker et al.\ 2010)]{W2010}.
\noindent \textbf{A Common Acceleration Scale}: The three scaling relations, the BTFR, the CDR, and the RAR, are
connected by a common acceleration scale. The scale $\mathrm{g}_{\mathrm{\dagger}}$ is most obvious in the RAR,
as it marks the transition where the
data bend away from the 1:1 line apparent at high acceleration. The same structure is also present in the CDR, which departs
from 1:1 at the surface density scale $\Sigma_{\mathrm{CDR}} \approx 860\;\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}$.
Surface density is related to acceleration by Newton's constant, so these are effectively the same thing:
$\mathrm{g}_{\mathrm{CDR}} = G \Sigma_{\mathrm{CDR}}$.
Similarly, the data in the BTFR follow a line of constant acceleration $\mathrm{g}_{\mathrm{TF}} = \chi V_f^4/(G M_b)$.
Within the uncertainties, these are the same
scale: $\mathrm{g}_{\mathrm{\dagger}} \approx \mathrm{g}_{\mathrm{CDR}} \approx \mathrm{g}_{\mathrm{TF}}$.
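As a consistency check (adopting the commonly quoted value $\mathrm{g}_{\mathrm{\dagger}} \approx 1.2 \times 10^{-10}\;\mathrm{m}\,\mathrm{s}^{-2}$, which is assumed here rather than derived above), the surface density scale converts as
\begin{displaymath}
\mathrm{g}_{\mathrm{CDR}} = G \Sigma_{\mathrm{CDR}} \approx
\big(6.67\times 10^{-11}\;\mathrm{m}^3\,\mathrm{kg}^{-1}\,\mathrm{s}^{-2}\big)
\times \frac{860 \times 1.99\times10^{30}\;\mathrm{kg}}{(3.09\times10^{16}\;\mathrm{m})^2}
\approx 1.2\times 10^{-10}\;\mathrm{m}\,\mathrm{s}^{-2},
\end{displaymath}
in agreement with $\mathrm{g}_{\mathrm{\dagger}}$ to within the uncertainties.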
The physical cause of the acceleration scale in galaxy dynamics is of fundamental importance.
There need be no such scale in galaxy dynamics at all, but it is clearly present.
Whether this scale is unique and universal, or has some finite intrinsic scatter,
is critical to its interpretation \cite[(Di Cintio \& Lelli 2016, Desmond 2017)]{dCL16,Desomd17}.
The \textit{observed} scatter in each relation is small ($< 0.15$ dex).
The \textit{intrinsic} scatter must be smaller as
some of the observed scatter is due to measurement errors.
An important and irreducible source of scatter is that due to variations in $M_*/L$.
Population models suggest this should be $\ge 0.1$ dex simply from differences
in star formation histories \cite[(Bell \& de Jong 2001)]{BdJ01}.
This implies that disk galaxies share essentially the same galaxy-averaged IMF.
It also means that any intrinsic scatter
is small --- perhaps imperceptibly small given the inevitable scatter in $M_*/L$.
To the extent that we are able to discern, $\mathrm{g}_{\mathrm{\dagger}}$ is a fundamental scale shared by all rotating galaxies.
\section{Introduction}
It is an important theme of current research in
analysis to decompose
more complicated operators, such as the Cauchy integral on Lipschitz curves
\cite{calderon}, as a sum of simpler operators. This theme has taken special
prominence in multilinear Harmonic Analysis, beginning with the work
of Lacey and Thiele \cite{LT1}, which expressed the bilinear
Hilbert
transforms as a sum of modulated paraproducts. The theme has since found
much broader application as well.
The bilinear Hilbert transforms have a bilinear symbol
given by restriction to a half-plane, with slope that
depends upon the transform in question.
In considering more complicated symbols,
one is led to paraproducts which have a complicated underlying
description.
One then seeks certain estimates
of these paraproducts that are \emph{uniform} in the
parametrizations. This line of investigation was started in
\cite{T}, the results of which give a new, multilinear proof
of the boundedness of the Calder\'on commutator,
fulfilling a program of study of Calder\'on \cite{calderon}.
It was further extended in work of the author and
Grafakos \cite{LL1,LL2,Li}, in the study of the disc as
a bilinear multiplier. Muscalu, Tao and Thiele \cite{MTT1,MTT2,MTT3}
gave alternative, more general proofs of these results in the multilinear
operator setting.
In this paper, we continue this line of study,
considering certain uniform estimates that are motivated
by an analysis of
a bilinear Hilbert transform along polynomial curves. Namely,
consider the operators
\begin{equation} \label{e.PPPPP}
(f,g) \longrightarrow \textup{p.v.} \int _{-\infty } ^{\infty }
f (x-y) g (x- p (y))\; \frac {dy} y \,,
\end{equation}
for some polynomial $ p(y)$.
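For instance, when $p(y)=ay$ is linear, (\ref{e.PPPPP}) reduces to the classical family of bilinear Hilbert transforms
\begin{displaymath}
(f,g) \longmapsto \textup{p.v.} \int _{-\infty } ^{\infty }
f (x-y)\, g (x- ay)\; \frac {dy} y \,,
\end{displaymath}
while the simplest genuinely curved case, $p(y)=y^2$, gives the bilinear Hilbert transform along the parabola discussed below.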
The study of these operators leads to
subtle questions in multilinear analysis, stationary phase
methods, and paraproducts. An initial investigation
into operators of this type is given in \cite{FL}, where
the polynomial is taken to be a square, and the singular
kernel is mollified to $e^{i|t|^{-\beta}}/|t|$ for some $\beta >0$.
Without this modification, a significant difficulty might be encountered.
There is a natural analogue of the bilinear Hilbert transform along parabolas
in the ergodic theory setting, that is, the non-conventional ergodic
average $\frac{1}{N}\sum_{n=0}^{N-1}f(T^nx)g(T^{n^2}x)$. In \cite{Fur}, Furstenberg
proved that the characteristic factor of the trilinear ergodic averages
$\frac{1}{N}\sum_{n=0}^{N-1}f(T^{an}x)g(T^{bn}x)h(T^{cn}x)$, $a, b, c\in \mathbb Z$,
is also characteristic for this non-conventional ergodic average.
We are indebted to M. Lacey for bringing these theorems of Furstenberg
to our attention.
Thus a possible approach to
the bilinear Hilbert transform along a parabola is to understand the trilinear Hilbert
transform first. Unfortunately, it turns out that the trilinear Hilbert transform
is very difficult to handle.
It is therefore very interesting to find a proof for the bilinear Hilbert
transform along curves without using any information about the trilinear Hilbert transform.
It might be possible to obtain such a proof by combining time-frequency analysis with
the known results for trilinear oscillatory integrals. This investigation
will appear in another paper.
The paraproducts that arise have a richer parametrization
than what has been considered before.
The question of uniform estimates is
the main focus of this article.
In the next section, a class of paraproducts is
introduced. They are parametrized by
\begin{itemize}
\item The \emph{width} of the frequency window associated to the
paraproducts, denoted by $ L_1$ and $ L_2 $ below.
\item The \emph{overlap} of the frequency window associated
to the paraproducts, denoted by $ M_1$ and $ M_2$ below.
\item A \emph{modulation} of the frequency window,
denoted by the (lower case) parameters $ n_1,n_2, 2^m$ below.
\end{itemize}
Prior results have concentrated on the uniformity of estimates with respect to
$M_1, M_2$ from $L^p\times L^q$ to $L^r$ for $r\geq 1$ and $L_1=L_2$ \cite{MTT1}.
The principal point of this article is to get the estimates for $1/2<r<1$ and arbitrary $L_1, L_2$.
Another new point of this article
is the (weak) uniformity that we establish in $L_1, L_2$ and the modulation
parameters $2^m$ (see Theorem \ref{para2est} below).
This novelty is forced upon us by the
stationary phase methods that one must use in the analysis of
(\ref{e.PPPPP}). One of the anticipated applications of our theorems
is the bilinear multiplier problem associated to the symbol defined by
the characteristic function of a suitable domain with a smooth boundary. \\
\noindent
{\bf Acknowledgement} The author would like to thank
his wife, Helen, and his son, Justin, for being together through
the hard times in the past two years. And he is also very thankful to
Michael Lacey for his constant support and encouragement.
\section{Main Results}
\setcounter{equation}0
Let $j\in \mathbb Z$, let $L_1, L_2$ be positive integers, and let $M_1,
M_2$ be integers. Define
$$ \omega_{1,j}=[2^{L_1j+M_1}/2, 2\cdot2^{L_1 j+M_1}]$$
and
$$\omega_{2,j}=[-2^{L_2 j+M_2}, 2^{L_2 j+M_2}]\,.$$
Let $\Phi_1$ be a Schwartz function whose Fourier transform is a
standard bump function supported on $[1/2, 2]$, and $\Phi_2$ be a
Schwartz function such that $\widehat\Phi_2$ is a standard bump function
supported on $[-1, 1]$ and $\widehat\Phi_2(0)=1 $. For $\ell\in\{1,2\}$
and $n_1, n_2\in \mathbb Z$, define $\Phi_{\ell, j, n_\ell}$ by
$$
\widehat\Phi_{\ell, j, n_\ell}(\xi)= \big(e^{2\pi i n_\ell (\cdot)} \widehat\Phi_\ell (\cdot)\big) \bigg(\frac{\xi}{2^{L_\ell j +M_\ell}}\bigg)\,.
$$
It is clear that ${\widehat\Phi_{\ell,j,n_\ell}}$ is supported on
$\omega_{\ell,j}$. For locally integrable functions $f_\ell$, we
define $f_{\ell,j,n_\ell}$ by
$$
f_{\ell,j, n_\ell}(x)=f_{\ell}*\Phi_{\ell, j, n_\ell}(x)\,.
$$
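To make the support claim explicit, note that the modulation factor has modulus one and does not move the Fourier support; for $\ell=1$,
$$
\widehat\Phi_{1, j, n_1}(\xi) \neq 0 \implies \frac{\xi}{2^{L_1 j + M_1}} \in [1/2, 2]
\implies \xi \in [2^{L_1 j+M_1}/2,\, 2\cdot 2^{L_1 j+M_1}] = \omega_{1,j}\,,
$$
and similarly for $\ell = 2$.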
We define a paraproduct to be
\begin{equation}\label{defofpara0}
\Pi_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2)(x) = \sum_{j\in\mathbb Z}
\prod_{\ell =1}^2 f_{\ell,j, n_\ell}(x) \,.
\end{equation}
Another paraproduct we need to introduce is the following.
For $\ell\in\{1,2\}$, let $\omega'_{\ell,j}$ denote the set
$\{\xi: 2^{L_\ell j+ M_\ell}/2\leq |\xi|\leq 2\cdot 2^{L_\ell j+M_\ell}\}$.
Let $m$ be a nonnegative integer and define $\Phi_{\ell,j,m}$ by
$$
\widehat\Phi_{\ell,j,m}(\xi) = \big( e^{2\pi i 2^m(\cdot)} \widehat\Phi_1(\cdot)\big)\bigg(\frac{\xi}{2^{L_\ell j+ M_\ell}}\bigg)\,.
$$
Let $f_{\ell,j, m}$ be the function defined by
$$
f_{\ell,j, m}(x) =f_\ell*\Phi_{\ell,j, m}(x)\,.
$$
We define a paraproduct to be
\begin{equation}\label{type2para}
\Pi_{L_1, L_2, M_1, M_2, m}(f_1, f_2)(x)=\sum_{j\in\mathbb Z}\prod_{\ell=1}
^2 f_{\ell, j, m}(x)\,.
\end{equation}
One reason we study these paraproducts is that one will encounter
such paraproducts in the study of the bilinear Hilbert transforms
along polynomial curves.
We have the following uniform estimates for these paraproducts.
\begin{theorem}\label{para0uniest}
For any $p_1>1$, $p_2>1$ with $1/p_1+1/p_2=1/r$, there exists a
constant $C$ independent of $M_1, M_2, n_1, n_2$ such that
\begin{equation}\label{uniest1}
\big\|\Pi_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2)\big\|_r \leq
C\big(1+|n_1|\big)^{10}\big(1+|n_2|\big)^{10} \|f_1\|_{p_1}\|f_2\|_{p_2}\,,
\end{equation}
for all $f_1\in L^{p_1}$ and $f_2\in L^{p_2}$.
\end{theorem}
\begin{theorem}\label{para2est}
Let $\Pi_{L_1, L_2, M_1, M_2, m}(f_1, f_2)$ be the paraproduct defined
by (\ref{type2para}). Suppose that for all $j$,
\begin{equation}\label{2large1}
2^{L_2 j +M_2} \geq 2^{L_1 j +M_1+m}\,.
\end{equation}
For any $\varepsilon>0$, $p_1>1$, $p_2>1$ with $1/p_1+1/p_2=1/r$, there exists a
constant $C$ independent of $m, M_1, M_2, L_1, L_2$ such that
\begin{equation}\label{uniestpara2}
\big\|\Pi_{L_1, L_2, M_1, M_2, m}(f_1, f_2)\big\|_r \leq
C 2^{\varepsilon m}\|f_1\|_{p_1}\|f_2\|_{p_2}\,,
\end{equation}
for all $f_1\in L^{p_1}$ and $f_2\in L^{p_2}$.
\end{theorem}
The case when $L_1=L_2$ and $r>1$ was proved in \cite{MTT1}.
The constant $C$ in Theorem \ref{para0uniest} may depend on $L_1, L_2$. It is easy to see from the
argument below that $C$ is ${O}(\max\{2^{L_1}, 2^{L_2}\})$. It is
possible to get a much better upper bound, such as $O\big(\log(1+
\max\{L_2/L_1, L_1/L_2\} )\big)$, by tracking the constants carefully in the proof we provide, but we do not pursue the sharp constant in this article. The independence of $M_1, M_2$ is the most important issue.
In Sections \ref{para1}, \ref{para2}, we give a proof for Theorem
{\ref{para0uniest}}. The proof of Theorem \ref{para2est} will be
given in Section \ref{proofpara2}. By using Theorem \ref{para0uniest},
we get the $L^r$ bound for $\Pi_{L_1, L_2, M_1, M_2, m}$ with an operator norm
$O(2^{10m})$. Unfortunately, this is sometimes not enough for our applications.
The desired norm is $O(2^{\varepsilon m})$ for a very small positive number $\varepsilon$. It might be
possible to remove the condition (\ref{2large1}) or
get the uniform estimate for $\Pi_{L_1, L_2, M_1, M_2, m}$ in which the operator norm is independent of $m$.
The uniform estimate from $L^2\times L^2$ to $L^1$ is trivial and
(\ref{2large1}) is redundant for this case.
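For the reader's convenience, here is the short argument for this case. By the Cauchy-Schwarz inequality, first in $j$ and then in $x$,
$$
\big\|\Pi_{L_1, L_2, M_1, M_2, m}(f_1, f_2)\big\|_1 \leq
\prod_{\ell=1}^2 \bigg\|\Big(\sum_{j\in\mathbb Z}\big|f_{\ell, j, m}\big|^2\Big)^{1/2}\bigg\|_2\,,
$$
and by Plancherel's theorem,
$$
\sum_{j\in\mathbb Z}\big\|f_{\ell,j,m}\big\|_2^2 =
\sum_{j\in\mathbb Z} \int \Big|\widehat\Phi_1\Big(\frac{\xi}{2^{L_\ell j+M_\ell}}\Big)\Big|^2 |\widehat f_\ell(\xi)|^2\, d\xi
\leq C\|f_\ell\|_2^2\,,
$$
since the modulation has modulus one and the supports $[2^{L_\ell j+M_\ell}/2,\, 2\cdot 2^{L_\ell j+M_\ell}]$ have bounded overlap in $j$. The resulting bound is manifestly uniform in $m$, $M_1$, $M_2$, $L_1$, $L_2$.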
In Section \ref{proofpara2}, we see that the uniform estimates
for $\Pi_{L_1, L_2, M_1, M_2, m}$ can
be achieved for $p_1, p_2>2$ and $1< r<2$ (see Proposition {\ref{uniestp2good}}) and (\ref{2large1}) is
superfluous for Theorem \ref{para2est} when $p_1, p_2>2$ and $1<r<2$
(see Corollary \ref{cor91}).\\
\section{A Telescoping Argument}\label{para1}
\setcounter{equation}0
We now start to prove Theorem {\ref{para0uniest}}.
We first introduce the definition of an admissible trilinear form,
and then show that, by a telescoping argument
used in \cite{LL1, T}, the problem can be reduced to estimates for admissible trilinear forms. The $L^r$ estimates for $r>1$
can then be obtained from the Littlewood-Paley theorem. The $r<1$ case is more complicated:
we have to use time-frequency analysis to deal with it in
Section \ref{para2}.
\begin{definition}\label{goodtri}
An admissible trilinear form is a trilinear form
\begin{equation}\label{adtridef}
\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2, f_3) =
\int \sum_{j\in\mathbb Z}
\prod_{\ell =1}^3 \tilde f_{\ell,j, n_\ell}(x)dx\,,
\end{equation}
where $n_3=0$,
$\tilde f_{\ell, j, n_\ell} = f_{\ell}*\tilde\Phi_{\ell, j, n_\ell}$ and
$\tilde\Phi_{\ell,j, n_\ell}$
is a function whose Fourier
transform is supported on $\tilde\omega_{\ell,j}$
such that
\begin{itemize}
\item[(1)] Each $\tilde\omega_{\ell,j}$ is an interval in $\mathbb R$ such that
the distance from the origin to the interval is not more than $3|\tilde\omega_{\ell,j}|$. And $\{\tilde\omega_{\ell, j}\}_j$ forms a sequence of lacunary
intervals, that is, $|\tilde\omega_{\ell, j}|/|\tilde\omega_{\ell, j+1}|\leq 1/2$ for
all $j\in \mathbb Z$.
Moreover, $|\tilde\omega_{3,j}|\geq C\max\{|\tilde\omega_{1,j}|, |\tilde\omega_{2,j}|\}$ for some constant $C$ independent of $M_1, M_2, n_1, n_2$.
\item[(2)] There are at least two indices $\ell \in \{1,2,3\}$ such that
$\tilde\Phi_{\ell, j, n_\ell}$ satisfies
\begin{equation}\label{vanish}
\widehat{\tilde\Phi_{\ell,j,n_\ell}}(0)=0\,,
\end{equation}
and
\begin{equation}\label{deriest1}
\bigg | D^{\alpha}\bigg(\widehat{\tilde\Phi_{\ell, j, n_\ell}}\big(|\tilde\omega_{\ell,j}|\xi\big)\bigg)\bigg|
\leq \frac{ C_N(1+ |n_\ell|)^{\alpha}}{ (1 + |\xi|)^N}\,,
\end{equation}
for all $\xi\in\mathbb R$ and all nonnegative integers $\alpha, N$.
If an index in $\{1,2,3\}$ satisfies (\ref{vanish}) and (\ref{deriest1}),
we call the index a good index in the trilinear form $\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}$. For the index which is not
a good index, we call it a bad index in the trilinear form
$\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}$.
\item[(3)] If $\ell\in\{2,3\}$ is a bad index, then
$\tilde\Phi_{\ell, j, n_\ell}$ satisfies (\ref{deriest1}).
Moreover, among the other two good indices $\ell'\neq \ell$, at least
one of them satisfies $|\tilde\omega_{\ell',j}|\leq C\min\{|\tilde\omega_{1,j}|,
|\tilde\omega_{2,j}|, |\tilde\omega_{3,j}|\}$ for some constant $C$ independent of $f_1$, $f_2$, $f_3$, $M_1$, $M_2$, $n_1$, $n_2$.
\item[(4)]If $1$ is a bad index, then $\tilde\Phi_{1, j, n_1}$ satisfies
\begin{equation}\label{bad1}
\tilde\Phi_{1, j, n_1}(x) = \sum_{k=0}^{m'(j)}\Phi_{1, j+k,
n_1}(x)\,,
\end{equation}
where $m'(j)$ is some nonnegative integer.
\end{itemize}
\end{definition}
\begin{lemma}\label{telelem1}
Let $f_3$ be a locally integrable function. Then
$$\int \Pi_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2)(x)f_3(x)dx
$$
is a sum of finitely many admissible trilinear forms such that
the number of admissible trilinear forms in the sum is no more
than a constant $C$ independent of $M_1, M_2, n_1, n_2$.
\end{lemma}
\begin{proof}
For $\ell\in \{1,2\}$, write $\omega_{\ell,j}$ as $[a_{\ell,j}, b_{\ell, j}]$.
If $b_{2,j}< b_{1,j}/16$, then $|\omega_{2,j}|< |\omega_{1,j}|/6$ and
the distance from $\omega_{1,j}+\omega_{2,j}$ to the origin is not less than
$|\omega_{1,j}|/4$. In this case, simply let $\tilde\omega_{3,j}$ be a small
neighborhood of $-(\omega_{1,j}+\omega_{2,j})$ and let the Fourier transform of
$\tilde\Phi_{3,j}$ be a suitable bump function adapted to $\tilde\omega_{3,j}$;
then we have the desired lemma.
Thus we now only consider the case $b_{2,j}\geq b_{1,j}/16$. Let
$\omega_{3,j}$ be $[-18b_{2,j}, 18b_{2,j}]$ and let $\Phi_{3,j}$ be a
Schwartz function such that its Fourier transform is a bump
function adapted to $\omega_{3,j}$ and
$\widehat\Phi_{3,j}(\xi)=1$ for all $\xi\in [-17b_{2,j}, 17b_{2,j}]$. Then
$$
\int\Pi(f_1,f_2)(x)f_3(x) dx = \int \sum_{j\in \mathbb Z} \prod_{\ell=1}^3
f_{\ell,j, n_\ell}(x) dx\,,
$$
where $f_{3,j,n_3}(x)=f_3*\Phi_{3,j}(x)$ and $n_3=0$. Let $\tilde\Phi_2$
be a Schwartz function such that $\widehat{\tilde\Phi_2}$ is a bump function on
$[-1,1] $ and $\widehat{\tilde\Phi_2}(\xi)=1$ for all $\xi \in[-3/4, 3/4]$. And
define $\Phi_{2,j}$ by $\widehat\Phi_{2,j}(\xi)=\widehat{\tilde\Phi_2}(\xi/b_{2,j})$.
Let $f_{2,j}=f_2*\Phi_{2,j}$. We also denote $f_{3,j, n_3}$ by
$f_{3,j}$. We can replace $f_{2,j, n_2}$ by $f_{2,j}$ because
$$
\int \sum_{j\in \mathbb Z}
f_{1,j, n_1}(x) \big( f_{2,j,n_2}-f_{2,j}\big)(x)f_{3,j}(x) dx\,
$$
is an admissible trilinear form. Hence the only thing we need to
show is that
$$
\Lambda'(f_1, f_2, f_3)=\int \sum_{j\in \mathbb Z}
f_{1,j, n_1}(x) f_{2,j}(x)f_{3,j}(x)dx\,
$$
is admissible. For any real number $x$, let $[x]$ denote the largest
integer not exceeding $x$. Let $m(j)$ be the integer defined by
$$
m(j) = \big[ \frac{(L_2j+M_2)-(L_1j+M_1)+6}{L_2}\big]\,.
$$
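Explicitly, the nonnegativity of $m(j)$ can be checked as follows: since $b_{1,j}=2\cdot 2^{L_1j+M_1}$ and $b_{2,j}=2^{L_2j+M_2}$, the assumption $b_{2,j}\geq b_{1,j}/16$ gives
$$
2^{L_2j+M_2} \geq 2^{L_1j+M_1-3}\,, \qquad \text{i.e.}\qquad (L_2j+M_2)-(L_1j+M_1)+6 \geq 3\,,
$$
so that $m(j) \geq \big[3/L_2\big]\geq 0$.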
By $b_{2,j}\geq b_{1,j}/16$, we see that $m(j)\geq 0$. By a
telescoping argument, $\Lambda'(f_1, f_2, f_3)$ is equal to
$$
\int \sum_{j\in \mathbb Z}f_{1,j, n_1}(x) \sum_{k=0}^{m(j)}
\bigg( f_{2, j-k}(x)f_{3, j-k}(x) -
f_{2, j-k-1}(x)f_{3, j-k-1}(x)\bigg)dx\,,
$$
since $\int f_{1,j,n_1}(x)f_{2,j-m(j)-1}(x)f_{3,j-m(j)-1}(x)dx=0$
due to the following simple fact on the support of Fourier transform
of each function in the integrand, i.e.,
$$
\bigg({\rm supp}\widehat f_{1,j, n_1} + {\rm supp }\widehat
f_{2,j-m(j)-1}\bigg) \cap \bigg(- \big({\rm supp}\widehat
f_{3,j-m(j)-1}\big)\bigg) =\emptyset\,.
$$
By a change of variables $j\rightarrow j+k$, we have that $\Lambda'(f_1,
f_2, f_3) $ is equal to
$$
\int \sum_{j\in \mathbb Z} \sum_{k=0}^{m'(j)}f_{1,j+k, n_1}(x)
\bigg( f_{2, j}(x)f_{3, j}(x) -
f_{2, j-1}(x) f_{3, j-1}(x)\bigg)dx\,,
$$
where $m'(j)$ is the integer defined by
$$
m'(j) = \big[ \frac{(L_2j+M_2)-(L_1j+M_1)+6}{L_1}\big]\,.
$$
We write this integral as a sum of three parts $\Lambda_1, \Lambda_2, \Lambda_3$, where
$$
\Lambda_1=\int \sum_{j\in \mathbb Z} \bigg(\sum_{k=0}^{m'(j)}f_{1,j+k, n_1}(x)\bigg)
f_{2, j}(x) \big( f_{3, j}(x) - f_{3, j-1}(x)\big)dx\,,
$$
$$
\Lambda_2=\int \sum_{j\in \mathbb Z} \bigg(\sum_{k=0}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
\big( f_{2, j}(x)-f_{2,j-1}(x) \big)
\big(f_{3, j-1}(x) - f_{3, j-8}(x)\big)dx\,,
$$
$$
\Lambda_3=\int \sum_{j\in \mathbb Z} \bigg(\sum_{k=0}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
\big( f_{2, j}(x)-f_{2,j-1}(x)\big)
f_{3, j-8}(x) dx\,.
$$
It is clear that $\Lambda_2$ is an admissible trilinear form. Write
$\Lambda_1$ as $\Lambda_{11}+\Lambda_{12}$, where
$$
\Lambda_{11} = \int \sum_{j\in \mathbb Z} \bigg(\sum_{k=0}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
\big(f_{2, j}(x)-f_{2,j-1}(x)\big) \big(f_{3, j}(x) - f_{3, j-1}(x)\big)dx\,,
$$
$$
\Lambda_{12} = \int \sum_{j\in \mathbb Z} \bigg(\sum_{k=0}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
f_{2,j-1}(x)\big(f_{3, j}(x) - f_{3, j-1}(x)\big)dx\,.
$$
Clearly, $\Lambda_{11}$ is an admissible trilinear form. Notice that
$$
{\rm supp}\bigg( \sum_{k=0}^{m'(j)-10-[L_2/L_1]}\widehat f_{1,j+k,n_1}\bigg)
\subseteq [0, 2^{-2}2^{L_2j+M_2} ] = [0, 2^{-2}2^{-L_2}b_{2,j}]\,,
$$
and
$$
{\rm supp}\big(\widehat f_{3, j} - \widehat f_{3, j-1}\big) \subseteq
[-18b_{2,j}, 18b_{2,j}]\backslash [-16\cdot 2^{-L_2}b_{2,j},
16\cdot 2^{-L_2}b_{2,j}]\,.
$$
Thus $\Lambda_{12}$ is equal to
$$
\int \sum_{j\in \mathbb Z} \bigg(\sum_{k=m'(j)-10-[L_2/L_1]}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
f_{2,j-1}(x)\big(f_{3, j}(x) - f_{3, j-1}(x)\big)dx\,,
$$
which is obviously a finite sum of admissible trilinear forms. As
for $\Lambda_3$, observe that
$$
{\rm supp}\bigg( \sum_{k=0}^{m'(j)-100-[L_2/L_1]}\widehat
f_{1,j+k,n_1}\bigg) \subseteq [0, 2^{-80}2^{L_2j+M_2} ] = [0,
2^{-80}2^{-L_2}b_{2,j}]\,,
$$
and
$$
{\rm supp}\big(\widehat f_{2, j} - \widehat f_{2, j-1}\big) \subseteq
[-b_{2,j}, b_{2,j}]\backslash [-2^{-L_2-1}b_{2,j},
2^{-L_2-1}b_{2,j}]\,.
$$
Thus $\Lambda_{3}$ is equal to
$$
\int \sum_{j\in \mathbb Z}
\bigg(\sum_{k=m'(j)-100-[L_2/L_1]}^{m'(j)}f_{1,j+k,n_1}(x)\bigg)
\big(f_{2,j}(x)-f_{2,j-1}(x)\big)f_{3, j-8}(x)dx\,,
$$
which is a finite sum of admissible trilinear forms.
\end{proof}
\begin{lemma}\label{largerest}
Let $\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}$ be an admissible trilinear form.
Then for any real numbers $p_1, p_2, p_3>1$ with $1/p_1+1/p_2+1/p_3=1$, there
exists $C$ independent of $M_1$, $M_2$, $n_1$, $n_2$ such that
\begin{equation}\label{largep}
\big| \Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2, f_3)\big|
\leq C(1+|n_1|)^{10}(1+|n_2|)^{10}\|f_1\|_{p_1}\|f_2\|_{p_2}\|f_3\|_{p_3}\,,
\end{equation}
for all $f_1\in L^{p_1}$, $f_2\in L^{p_2}$ and $f_3\in L^{p_3}$.
\end{lemma}
\begin{proof}
If there is no bad index in the trilinear form, take $\ell_0$
to be any integer in $\{1,2,3\}$. Otherwise,
let $\ell_0$ be a bad index. Applying the Cauchy-Schwarz inequality,
$\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}$ is dominated by
$$
\int \sup_{j\in\mathbb Z}\big|\tilde f_{\ell_0, j, n_{\ell_0}}\big|
\prod_{\ell\neq \ell_0} \bigg(\sum_j\big|\tilde f_{\ell, j, n_\ell}\big|^2
\bigg)^{1/2}dx\,.
$$
Using H\"older inequality, we dominate the trilinear form by
$$
\bigg\|\sup_{j\in\mathbb Z}\big|\tilde f_{\ell_0, j, n_{\ell_0}} \big|
\bigg\|_{p_{\ell_0}}
\prod_{\ell\neq \ell_0} \bigg\|\bigg(\sum_j\big|\tilde f_{\ell, j, n_\ell}\big|^2
\bigg)^{1/2}\bigg\|_{p_\ell}\,.
$$
The Littlewood-Paley theorem yields that for $\ell\neq \ell_0$
$$
\bigg\|\bigg(\sum_j\big|\tilde f_{\ell, j, n_\ell}\big|^2
\bigg)^{1/2}\bigg\|_{p_\ell}\leq C(1+|n_\ell|)^{10}\|f_\ell\|_{p_\ell}\,.
$$
If $\ell_0\in\{2, 3\}$, then by (\ref{deriest1}), we have
$$
\sup_{j\in\mathbb Z}\big|\tilde f_{\ell_0, j, n_{\ell_0}} \big|\leq
(1+|n_{\ell_0}|^{10})M(f_{\ell_0})\,,
$$
which clearly yields the lemma. We now only need to consider the case $\ell_0=1$. It suffices to prove that
\begin{equation}\label{1bad}
\bigg\|\sup_j \big|\sum_{k=0}^{m'(j)} f_1*\Phi_{1, j+k, n_1}\big|\bigg\|_{p_1} \leq C(1 + |n_1|^{10})\|f_1\|_{p_1}\,.
\end{equation}
Notice that the $\omega_{1,j}$'s are essentially disjoint intervals and the
Fourier transform of $\sum_{k=0}^{m'(j)}\Phi_{1, j+k, n_1}$ is
supported on a bounded interval depending on $j$. The left hand side
of (\ref{1bad}) is less than
$$
C \big\|M\big(\sum_{j}f_1*\Phi_{1, j, n_1}\big)\big\|_{p_1}\,.
$$
It is easy to verify that $f\mapsto\sum_{j} f*\Phi_{1,j, n_1}$ is a
bounded operator on $L^2$ associated to a standard Calder\'on-Zygmund
kernel, at a cost of at most $(1+|n_1|^{10})$ in the corresponding
estimates. Thus by a standard Calder\'on-Zygmund argument, we have
for any real number $p>1$,
there is a constant $C$ independent of $M_1, M_2, n_1, n_2$ such that
$$
\big\|\sum_{j} f*\Phi_{1,j, n_1}\big\|_p \leq C(1+|n_1|^{10})\|f\|_p\,
$$
holds for all $f\in L^p$, which yields (\ref{1bad}).
This completes the proof of the lemma.
\end{proof}
Combining Lemma {\ref{telelem1}} and Lemma {\ref{largerest}}, we
obtain (\ref{uniest1}) for $p_1, p_2, r>1$. To finish the proof of
Theorem {\ref{para0uniest}}, we need to provide a proof of the $L^r$
estimate with $1/2<r\leq 1$ in (\ref{uniest1}), which will be
given in Section {\ref{para2}}.
\section{ Time Frequency Analysis}\label{para2}
\setcounter{equation}0
In this section we prove (\ref{uniest1}) with $1/2<r\leq 1$
for the paraproducts by time-frequency analysis, which was used for establishing
$L^p$ (uniform) estimates for the bilinear Hilbert transforms in
\cite{LL2, LT1, LT2, Li, MTT2, MTT1, MTT3, T}.
Let $F$ be a measurable set in $\mathbb R$. $X(F)$ denotes the set
of all measurable functions supported on $F$ whose
$L^\infty$ norms are at most $1$. A function
in $X(F)$ can be considered essentially as the characteristic
function ${\bf 1}_F$.
To obtain Theorem {\ref{para0uniest}}, by Lemma {\ref{largerest}},
an interpolation argument in \cite{MTT2}, and the scaling
invariance, it is sufficient to prove that for any $p_1, p_2>1$ such
that $1/p_1+1/p_2 \geq 1$
and any measurable set $F_3\subseteq\mathbb R$ with $|F_3|=1$,
there exists a subset $F'_3\subset F_3$ such that $|F'_3|\geq 1/2$
and
\begin{equation}\label{triest000}
\bigg|\int\Pi_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2)(x)f_3(x)
dx\bigg|\leq
C (1+|n_1|)^{10}(1+|n_2|)^{10}|F_1|^{1/p_1}|F_2|^{1/p_2}\,
\end{equation}
holds for all $f_1\in X(F_1), f_2\in X(F_2), f_3\in X(F'_3)$, where
$C$ is a constant independent of $f_1, f_2, f_3$, $M_1, M_2, n_1,
n_2$.
If $2^{L_2j+M_2}< 2^{L_1j+M_1}/8$, let $\omega_{3,j}=[-19\cdot
2^{L_1j+M_1}/8, -2^{L_1j+M_1}/8]$ and $\Phi_{3, j}$ be a Schwartz
function whose Fourier transform is a bump function adapted to
$\omega_{3,j}$ such that $\widehat\Phi_{3,j}(\xi)=1$ for all $\xi\in [-9\cdot
2^{L_1j+M_1}/4, -2^{L_1j+M_1}/4]$. If $2^{L_2j+M_2}\geq
2^{L_1j+M_1}/8$, let $\omega_{3,j}=[-18\cdot 2^{L_2j+M_2}, 18\cdot
2^{L_2j+M_2}]$ and $\Phi_{3, j}$ be a Schwartz function whose
Fourier transform is a bump function adapted to $\omega_{3,j}$ such
that $\widehat\Phi_{3,j}(\xi)=1$ for all $\xi\in [-17\cdot 2^{L_2j+M_2},
17\cdot 2^{L_2j+M_2}]$. Let $n_3=0$, $\Phi_{3,j,n_3}=\Phi_{3,j} $,
$f_{3, j, n_3}(x)=f_3*\Phi_{3,j, n_3}(x)$. Define a trilinear form
$\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}$ by
\begin{equation}\label{defoftriform}
\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2, f_3)
=\int \sum_{j\in\mathbb Z}\prod_{\ell=1}^3 f_{\ell, j, n_\ell}(x) dx\,.
\end{equation}
Clearly $ \Lambda_{L_1, L_2, M_1, M_2, n_1, n_2} = \int \Pi_{L_1, L_2,
M_1, M_2, n_1, n_2}(f_1, f_2)(x)f_3(x)dx$.
Thus to prove
(\ref{triest000}), it suffices to prove the following lemma.
\begin{lemma}\label{triest0}
Let $p_1, p_2>1$ such that $1/p_1+1/p_2\geq 1 $ and $\Lambda_{L_1, L_2,
M_1, M_2, n_1, n_2}$ be the trilinear form defined by
(\ref{defoftriform}). Let $F_1, F_2, F_3$ be measurable sets in
$\mathbb R$ with $|F_3|=1$.
Then there exists a subset $F_3'\subseteq F_3$ such that
$|F_3'|>1/2$ and there exists a constant $C$ independent of $F_1$,
$F_2$, $F_3$, $f_1$, $f_2, f_3$, $M_1$, $M_2$, $n_1, n_2$ such that
\begin{equation}\label{triest002}
\big|\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2, f_3)\big|\leq
C (1+|n_1|)^{10}(1+|n_2|)^{10}|F_1|^{1/p_1}|F_2|^{1/p_2}\,
\end{equation}
holds for all $f_1\in X(F_1), f_2\in X(F_2), f_3\in X(F'_3)$.
\end{lemma}
Lemma {\ref{triest0}} and Lemma {\ref{largerest}} imply the
estimates (\ref{uniest1}) by an interpolation argument in
\cite{MTT2}. Therefore we obtain Theorem \ref{para0uniest} once we
finish the proof of Lemma \ref{triest0}. The following subsections are
devoted to the proof of Lemma \ref{triest0}.
\subsection{Definitions}
To prove Lemma \ref{triest0}, we introduce some definitions first.
Let $\psi$ be a nonnegative Schwartz function such that $\widehat \psi$
is supported in $[-1/100,1/100]$ and satisfies $\widehat\psi(0)=1$. Let
$\psi_k(x)=2^k\psi(2^k x)$ for any $k\in \mathbb Z$. For
$j\in\mathbb Z$ and $\ell\in\{1,2,3\}$, define $k_{j\ell}$ to be an
integer such that $|\omega_{\ell, j}|\sim 2^{k_{j\ell}}$. Denote
$\min_{\ell\in\{1,2,3\}}k_{j\ell}$ by $k_j$.
And define
$$
I_{k_j,n} = [2^{-k_j}n,
2^{-k_j}(n+1)]\,.
$$
Define
$$
{\bf 1}^*_{j,n}(x) = {\bf 1}_{I_{k_j, n}}*\psi_{k_{j}}(x)\,.
$$
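A useful observation is that, for each fixed $j$, these smoothed cutoffs sum to one: since the intervals $I_{k_j,n}$, $n\in\mathbb Z$, tile $\mathbb R$ and $\int\psi=\widehat\psi(0)=1$,
$$
\sum_{n\in\mathbb Z}{\bf 1}^*_{j,n}(x) = \Big(\sum_{n\in\mathbb Z}{\bf 1}_{I_{k_j, n}}\Big)*\psi_{k_j}(x)
= \int_{\mathbb R} \psi_{k_j}(x-y)\,dy = 1
$$
for almost every $x$, so the factors ${\bf 1}^*_{j,n}$ can be inserted into the trilinear form without changing its value.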
It is easy to see that
$$
\Lambda_{L_1, L_2, M_1, M_2, n_1, n_2}(f_1, f_2, f_3) =
\int\sum_{j\in\mathbb Z}\sum_{n\in\mathbb Z}{\bf 1}^*_{j,n}(x)
\prod_{\ell=1}^3 f_{\ell,j, n_\ell}(x)dx\,.
$$
For an integer $ \gamma$ with $0\leq \gamma < 2^{100}$, let $\mathbb
Z(\gamma)$ be the set of all integers congruent to $\gamma$ modulo
$2^{100}$. For
${\bf S}\subset \mathbb Z(\gamma)\times \mathbb Z$
we define
\begin{equation}\label{ladef3}
\Lambda_{\bf S}(f_1, f_2, f_3) = \int_{\mathbb R}
\sum_{(j,n)\in {\bf S}}{\bf 1}^*_{j,n}(x)
\prod_{\ell =1}^3 f_{\ell, j, n_\ell}(x)dx\,.
\end{equation}
$\Lambda_{\bf S}$ depends on $L_1, L_2, M_1, M_2, n_1, n_2$. We suppress
this dependence for notational convenience. Note that there are only
finitely many congruence classes modulo $2^{100}$. We will therefore
concentrate on proving Lemma {\ref{triest0}} for the trilinear form
$\Lambda_{{\bf S}}$.
In time-frequency space, each function $f_{\ell, j, n_\ell}$ for
$\ell\in\{1,2, 3\}$ corresponds to a box $I_{k_j,n}\times
\omega_{\ell, j}$. The most difficult situation is when only one of the
boxes is a Heisenberg box, i.e., $|I_{k_j,n}||\omega_{\ell,j}|\sim 1$.
In this situation, we can use a John-Nirenberg type argument to
get the equivalence of $L^p$ estimates of Littlewood-Paley type
square functions for only one of the
functions. For the other two functions, there is no such equivalence, and
an extra cost has to be paid if one estimates the $BMO$
norm. It turns out that the $L^p$ equivalence for at least one of the
three functions is the crucial key to solving the problem. Our
proof will rely heavily on this equivalence for one of the functions.
Let $p$ be a positive number close to $1$. To obtain Lemma
\ref{triest0}, it suffices to prove (\ref{triest002}) for $p_1\geq
p$, $p_2\geq p$ and $1/p_1+1/p_2\geq 1$. For simplicity, we only
deal with the case $n_1=n_2=n_3=0$. The general case can be handled
in the same way by paying at most a cost of
$(1+|n_1|)^{10}(1+|n_2|)^{10}$ in the constants.
We now start to prove that for $n_1=n_2=0$, any $1<p<2$ and any
measurable set $F_3$ with $|F_3|=1$ in $\mathbb R$, there exists a
subset $F'_3$ of $F_3$ with $|F_3'|\geq 1/2$ such that
\begin{equation}\label{triestp}
\big|\Lambda_{{\bf S}}(f_1, f_2, f_3)\big|\leq
C |F_1|^{1/p_1}|F_2|^{1/p_2}\,
\end{equation}
holds for all $p_1\geq p, p_2\geq p$ with $1/p_1+1/p_2\geq 1$,
$f_1\in X(F_1), f_2\in X(F_2), f_3\in X(F'_3)$, where the constant
$C$ is independent of ${\bf S}$, $F_1$, $F_2$, $F_3$, $f_1$, $f_2, f_3$,
$M_1$, $M_2$. Let us introduce some definitions first.
\begin{definition}\label{defofeset}
Let $p>1$. Define the exceptional set $\Omega$ by
\begin{equation}\label{defOmega}
\Omega = \bigcup_{\ell=1}^3\big\{ x\in\mathbb R:
M_p\big(M{\bf 1}_{F_\ell}\big)(x)> C_0 |F_\ell|^{1/p} \big\}
\end{equation}
where $Mf$ is the Hardy-Littlewood maximal function of $f$ and
$M_pf$ equals $\big(M(|f|^p)\big)^{1/p}$.
\end{definition}
By this definition, for the measurable set $F_3$ with $|F_3|=1$, we
take $F'_3=F_3\backslash \Omega$. If $C_0$ is chosen sufficiently
large we see that $|F'_3|\geq |F_3|/2$.
\begin{definition}\label{def4}
Given ${{\bf S}}\subset \mathbb Z(\gamma)\times \mathbb Z$ and
$s=(j,n)\in {\bf S}$. Let
$k_s=\min_{\ell\in\{1,2,3\}}\{k_{j\ell}\}$. The dyadic interval $
[2^{-k_s}n, 2^{-k_s}(n+1)]$ is called the time interval of $s$. We
denote it by $I_{s}$.
\end{definition}
\begin{definition}
Let ${\bf S} $ be a subset of $\mathbb Z(\gamma)\times\mathbb Z$. We
say that ${\bf S}$ is a convex set in $\mathbb Z(\gamma)\times\mathbb
Z$ if for any $s\in \mathbb Z(\gamma)\times\mathbb Z$ with
$I_{s_1}\subseteq I_s \subseteq I_{s_2}$ for some $s_1, s_2\in{\bf S}$,
we have $s\in {\bf S}$.
\end{definition}
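To make the time intervals and the convexity condition concrete, here is a small sketch. It assumes, purely for illustration, that $k_s=j$, so that the time interval of $s=(j,n)$ is the plain dyadic interval $[2^{-j}n, 2^{-j}(n+1)]$; in general $k_s$ depends on all three scales $k_{j\ell}$.

```python
def is_convex(S):
    """Check the convexity of a set of tiles s = (j, n), assuming for
    illustration that the time interval of s is [2^-j n, 2^-j (n+1)].
    S is convex iff every tile whose interval is nested between the
    intervals of two tiles of S belongs to S."""
    S = set(S)
    for (j1, n1) in S:              # candidate inner (finer) tile
        for (j2, n2) in S:          # candidate outer (coarser) tile
            if j2 > j1 or (n1 >> (j1 - j2)) != n2:
                continue            # I_{s1} is not contained in I_{s2}
            # every intermediate dyadic ancestor of s1 must lie in S
            for j in range(j2, j1 + 1):
                if (j, n1 >> (j1 - j)) not in S:
                    return False
    return True
```

For instance, $\{(0,0),(1,1),(2,2)\}$ is convex, while $\{(0,0),(2,2)\}$ is not, since the intermediate tile $(1,1)$ is missing.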
\begin{definition}
Let ${\bf T}\subset {\bf S}$.
If there is $t\in{\bf T}$ such that $I_{s}\subset I_{t}$ holds
for all $s\in {\bf T}$, then ${\bf T}$ is called a tree with top $t$. ${\bf T}$ is called a
maximal tree with top $t$ in ${\bf S}$ if there does not
exist a larger tree in ${\bf S}$ with the same top strictly containing
${\bf T}$.
\end{definition}
\begin{definition}\label{shadow}
Let ${\bf T} $ be a tree in ${\bf S}$. Define
${\rm scl}({\bf T})$, the set of scale indices
of ${\bf T}$, by
$$
{\rm scl}({\bf T})=\{j\in\mathbb Z: \exists n\in\mathbb Z, {\rm s.}\,\,{\rm t.}\,\,
(j,n)\in {\bf T}\}\,.
$$
For $j\in{\rm scl}({\bf T})$, the $j$-th shadow of ${\bf T}$ is defined by
$$
{\bf Sh}_j({\bf T})=\bigcup\big \{ I_s: s=(j,n)\in{\bf T}\big\} \,.
$$
Define an approximation of ${\bf 1}_{{\bf Sh}_j({\bf T})}$ by
$$
{\bf 1}^*_{{\bf Sh}_j({\bf T})}(x) = {\bf 1}_{{\bf Sh}_j({\bf T})}*\psi_{k_j}(x)\,.
$$
\end{definition}
\begin{definition}\label{defzetanorm}
Let $s=(j,n)\in{\bf S}$ and $\ell\in\{1,2,3\}$, and let
$$
{\bf 1}^{**}_{ j,n}(x)=\int_{I_{k_j,n}}\frac{2^{ k_{j}}}
{(1+ 2^{2 k_{j}}|x-y|^2)^{200}} dy
\,.
$$
Define a semi-norm $\|f_\ell\|_{j,n}$ by
\begin{equation}\label{defseminorm}
\big\|f_\ell\big\|_{j,n}\, =\big\|f_\ell\big\|_s\,=
\frac{1}{|I_s|^{1/p}}\big\|{\bf 1}^{**}_{j,n}f_{\ell, j,
n_\ell}\big\|_p+
\frac{1}{|I_s|^{1/p}}\big\|2^{-k_{j\ell}}{\bf 1}^{**}_{j,n}Df_{\ell, j,
n_\ell}\big\|_p\,
\end{equation}
where $Df_{\ell, j, n_\ell}$ is the derivative of $f_{\ell, j,
n_\ell} $.
Define $\zeta(j,M,K)$ by
\begin{equation}\label{defofzeta}
\zeta(j,M,K)=\big[\frac{L_1j+M_1-M_2-6}{L_2}\big]+
\big[\frac{L_1}{L_2}\big]M + K\,, \end{equation}
where $L=2^{100}$,
$K$ is an integer between $-10L$ and $10L$ and $M$ is an integer
between $0$ and $6L$. For $\ell\in\{2,3\}$, we define a $\zeta$
semi-norm $\big\|f_\ell\big\|_{j,n,\zeta}$ by
\begin{equation}\label{defseminormzeta}
\big\|f_\ell\big\|_{j,n,\zeta}= \|f_\ell\|_{j,n}+\sup_{M,K}
\frac{1}{|I_s|^{1/p}}\big(\big\|{\bf 1}^{**}_{j,n}f_{\ell, \zeta(j,M,K),
0}\big\|_p+\big\||I_s|{\bf 1}^{**}_{j,n}Df_{\ell, \zeta(j,M, K),
0}\big\|_p\big)\,.
\end{equation}
For $\ell=1$, let the $\zeta$ semi-norm $\big\|f_1\big\|_{j,n,\zeta}
=\big\|f_1\big\|_{j,n}$.
\end{definition}
\begin{definition}
Let ${\bf T}\subset {\bf S}$ be a tree and $t=(j_{\bf T}, n_{{\bf T}})\in{\bf T}$ be
the top of ${\bf T}$. Denote by $I_{{\bf T}}$ the time interval of the top of
tree ${\bf T}$.
\begin{itemize}
\item[(a)]
In the case $|\omega_{2,j}| \leq |\omega_{1,j}|/6$ for all $j\in{\rm scl}({\bf T})$, define
$\Delta^*_\ell({\bf T})$ for $\ell\in\{1,3\}$ by
\begin{equation}\label{defdelta1}
\Delta^*_\ell({\bf T})(x) = \bigg(\sum_{(j,n)\in {\bf T}}
\big|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}(x)\big|^2 \bigg)^{1/2}\,.
\end{equation}
For $\ell=2$, define
\begin{equation}\label{defofdelta12}
\Delta^*_2({\bf T})(x) = \big |{\bf 1}^{**}_{j_{\bf T},n_{\bf T}}
f_{2,j_{\bf T},n_2}(x)\big|
\,.
\end{equation}
And in this case, for $\ell\in\{1,2,3\}$, define the $\ell$-size of
${\bf T}$ by
\begin{equation}\label{defofsizec1}
{\rm size}_\ell({\bf T})= \frac{1}{|I_{{\bf T}}|^{1/p}}
\big\|\Delta^*_\ell({\bf T})\big\|_{p} +
\big\|f_\ell\big\|_{j_{\bf T},n_{\bf T}}\,.
\end{equation}
\item[(b)] In the case $|\omega_{2,j}| > |\omega_{1,j}|/6$ for
all $j\in{\rm scl}({\bf T})$, for $\ell=2,3$,
let $f_{\ell,j, {\bf T}}=f_{\ell,j,0}$ if $j\in{\rm scl}({\bf T})$ and
$f_{\ell,j,{\bf T}}\equiv 0$ if $j\notin {\rm scl}({\bf T})$.
Define $\Delta^*_\ell({\bf T})$ to be
\begin{equation}\label{defdelta2}
\bigg(\sum_{(j,n)\in {\bf T}} \big|{\bf 1}^{**}_{j,n} \big(f_{\ell,j,{\bf T}}
- f_{\ell, j-L ,{\bf T}} \big)(x)\big|^2 \bigg)^{1/2} +
\bigg(\sum_{(j,n)\in {\bf T}} \big|{\bf 1}^{**}_{j,n} \big(f_{\ell,j,n_\ell}
- f_{\ell, j ,0} \big)(x)\big|^2 \bigg)^{1/2} \,.
\end{equation}
And define $\Delta^*_1({\bf T})$ by
\begin{equation}\label{defdelta21}
\Delta^*_1({\bf T})(x)= \bigg(\sum_{(j,n)\in {\bf T}}
\big|{\bf 1}^{**}_{j,n}f_{1,j,n_1}(x)\big|^2 \bigg)^{1/2}\,.
\end{equation}
In this case, for $\ell\in\{1,2,3\}$, define the $\ell$-size of
${\bf T}$ by
\begin{equation}\label{defofsize2b}
{\rm size}_\ell({\bf T})= \frac{1}{|I_{{\bf T}}|^{1/p}}
\big\|\Delta^*_\ell({\bf T})\big\|_{p} +
\big\|f_\ell\big\|_{j_{\bf T},n_{\bf T},\zeta}\,.
\end{equation}
\end{itemize}
Let ${\bf P}$ be a subset of ${\bf S}$. Define the $\ell$-${\rm size}^*$
of ${\bf P}$
by
\begin{equation}\label{defofsize1}
{\rm size}^*_\ell({\bf P}) = \sup_{ {\bf T}: {\bf T}\subset {\bf P}}{\rm size}_\ell({\bf T})\,,
\end{equation}
where ${\bf T}$ ranges over all trees in ${\bf P}$.
In the definition of ${\bf 1}^{**}_{j,n}$, we can replace the exponent
$200$ by a larger number $2^{100}$ to define a new function. We denote
this function by $\tilde{\bf 1}^*_{j,n}$.
If ${\bf 1}^{**}_{j,n}$ is replaced by $\tilde{\bf 1}^*_{j,n}$ in the definition
of $\Delta^*_\ell({\bf T})$, we denote the corresponding function by
$\Delta_\ell({\bf T})$.
\end{definition}
\begin{definition}\label{defcount}
Let ${{\bf S}}$ be a subset of $\mathbb Z(\gamma)\times\mathbb Z$.
Suppose that ${\bf S}$ is a union of trees ${\bf T}\in {\mathcal F}$.
Define ${\rm count}({\bf S})$ by
\begin{equation}\label{defofct}
{\rm count}({\bf S}) =\sum_{{\bf T}\in {\mathcal F}}|I_{{\bf T}}|\,.
\end{equation}
\end{definition}
\subsection{Reduction}
Let ${\bf S}$ be a subset of $\mathbb Z(\gamma)\times \mathbb Z$. For
$\Omega$ defined in (\ref{defOmega}), we define
\begin{equation}\label{defofbsomega}
{\bf S}(\Omega) =\{s\in{\bf S}: I_s\nsubseteq
\Omega\}\,.
\end{equation}
The following lemma indicates that we only need to seek the upper
bound for the trilinear form $\Lambda_{{\bf S}(\Omega)}$.
\begin{lemma}\label{lemomega}
Let $n_1=n_2=0$ and $f_3\in X({F_3'})$. For all functions $f_1\in
X(F_1)$, $f_2\in X(F_2)$, the following inequality holds.
\begin{equation}\label{triest112}
\big| \Lambda_{{\bf S}}(f_1, f_2, f_3)-\Lambda_{{\bf S}(\Omega)}(f_1, f_2, f_3)\big|
\leq C\min\big\{1, |F_1|^{1/p}\big\}\min\big\{1,
|F_2|^{1/p}\big\}\,,
\end{equation}
where $C$ is a constant independent of ${\bf S}$, $F_1$, $F_2$, $F_3$, $
f_1, f_2$, $f_3, M_1, M_2$.
\end{lemma}
\begin{proof}
Notice that if $s=(j, n)\in {\bf S}(\Omega)^c$, then $I_s\subseteq \Omega$.
Let ${\bf S}_{L}(\Omega)$ be defined by
$$
{\bf S}_L(\Omega) = \{s\in {\bf S}(\Omega)^c: 2^LI_s\subseteq\Omega,
\,\, {\rm but}\,\, 2^{L+1}I_s\nsubseteq \Omega \}\,.
$$
We see that ${\bf S}(\Omega)^c =\bigcup_{L=0}^\infty {\bf S}_L(\Omega)$.
Let ${\mathcal J}_L$ be the set of all time intervals $I_s$'s for
$s\in {\bf S}_L(\Omega)$. It is easy to see that ${\mathcal J}_L$ is
a collection of disjoint intervals and
$\sum_{J\in {\mathcal J}_L}|J|\leq |\Omega|< 1$. Hence,
it suffices to show that for any $J\in {\mathcal J}_L$ and
any $(j,n)=s\in {\bf S}_L(\Omega)$ such that $I_s=J$, we have
\begin{equation}\label{diffest2}
\bigg| \int{\bf 1}^*_{j,n}(x)\prod_{\ell=1}^3
f_{\ell, j, n_\ell} (x) dx\bigg|
\leq {C 2^{-L}\min\big\{1, |F_1|^{1/p}\big\}\min\big\{1,
|F_2|^{1/p}\big\}} |J| \,,
\end{equation}
where $C$ is a constant independent of $f_1, f_2, f_3, M_1, M_2$,
since (\ref{triest112}) follows by summing all $L$'s and $J$'s
together.
We now prove (\ref{diffest2}).
Since $F'_3=F_3\backslash \Omega$ and $f_3\in X({F'_3})$, we get
for any $(j,n)\in {\bf S}$ and any positive integer $N$,
\begin{equation}\label{estf3}
\big| {\bf 1}^*_{j,n}(x)f_{3,j,n_3}(x)\big|\leq
\frac{C_N}
{ \big(1+ 2^{k_{j}}{\rm dist}(x, I_s)\big)^{3N} \big(1+ 2^{k_{j3}}{\rm dist}(x, \Omega^c)\big)^{3N}}\,.
\end{equation}
Clearly we have for $\ell\in\{1,2\}$ and $(j,n)\in{\bf S}$,
\begin{equation}\label{estfl1}
\big| f_{\ell,j, n_\ell}(x)\big|\leq
\int\!\frac{C_N|f_\ell(y)| 2^{k_{j\ell}}\, dy}
{ \big(1+ 2^{k_{j\ell}} |x-y| \big)^N} \,.
\end{equation}
By the definition of $\Omega$, we have
for $\ell\in\{1,2\}$ and $(j,n)\in{\bf S}$,
\begin{equation}\label{estfl2}
\big| f_{\ell, j, n_\ell}(x)\big|\leq {C_N \min\big\{1,
|F_\ell|^{1/p}\big\}\big(1+ 2^{k_{j\ell}}{\rm dist}(x, \Omega^c)
\big)^2} \,.
\end{equation}
Thus (\ref{estf3}), (\ref{estfl2}) and the fact
$2^{k_{j3}}\sim 2^{\max\{k_{j\ell}\}}$ yield that
the left hand side of (\ref{diffest2}) is no more than
$$C_N 2^{-LN} \prod_{\ell=1}^2\min\big\{1, |F_\ell|^{1/p}\big\} |J| $$
for any positive integer $N\geq 2$, which is the desired estimate.
\end{proof}
Hence, to prove (\ref{triestp}), we only need to prove the following
lemma for $\Lambda_{{\bf S}(\Omega)}$. The details of the proof of Lemma
\ref{ftriest} will be given in the next few subsections.
\begin{lemma}\label{ftriest}
Let $n_1=n_2=0$, $1<p <2$, $F_3\subset \mathbb R $, and
${\bf S}(\Omega)$ be the set defined in (\ref{defofbsomega}) and
$F'_3=F_3\backslash\Omega$. For all $p_1, p_2\geq p$ with
$1/p_1+1/p_2\geq 1 $, and all functions $f_1\in X(F_1)$, $f_2\in
X(F_2)$, $f_3\in X(F'_3)$, the following inequality holds.
\begin{equation}\label{triest1121}
\big|\Lambda_{{\bf S}(\Omega)}(f_1, f_2, f_3)\big| \leq
C|F_1|^{1/{p_1}}|F_2|^{1/{p_2}}\,,
\end{equation}
where $C$ is a constant independent of ${\bf S}$, $F_1$, $F_2$, $F_3$,
$f_1, f_2$, $f_3, M_1, M_2$.
\end{lemma}
\subsection{Principal Lemmas}
We now state some lemmata which will be used in the proof of Lemma
\ref{ftriest}.
\begin{lemma}\label{Lpest}
Let $1<q<\infty$, $\ell\in\{1,2,3\}$ and ${\bf T}$ be a tree in ${\bf S}$.
Then
\begin{equation}\label{Lpest1}
\big\|\Delta^*_\ell({\bf T})\big\|_{q}\leq C \inf_{x\in
I_{\bf T}}M_q(Mf_\ell)(x)|I_{\bf T}|^{1/q}\,,
\end{equation}
\begin{equation}\label{Lpest2}
{\rm size}_\ell({\bf T})\leq C \inf_{x\in I_{\bf T}}M_p(Mf_\ell)(x)\,,
\end{equation}
where $C$ is a constant independent of $f_\ell, {\bf T}$, ${\bf S}$, $M_1,
M_2$.
\end{lemma}
\begin{proof}
(\ref{Lpest1}) is a consequence of the following $L^q$ estimate for
$\Delta^*_\ell({\bf T})$:
\begin{equation}\label{Lpest12}
\big\|\Delta^*_\ell({\bf T})\big\|_q\leq C\|f_\ell\|_q\,.
\end{equation}
In fact, one can decompose $f_\ell$ into $f_\ell{\bf 1}_{2I_{\bf T}}$ and
$f_\ell{\bf 1}_{(2I_{\bf T})^c}$. For the first function, apply
(\ref{Lpest12}) to get the desired estimates. For the second
function, the desired estimates follow from the fast decay, since
$\Delta^*_\ell({\bf T})$ is essentially supported on $I_{\bf T}$.
Note that we consider only the case $n_\ell=0$. For $n_\ell\neq 0$,
the following argument still works if one changes the constant $C$
to $C(1+|n_\ell|)^{5}$. We only give the details for the case
$|\omega_{2,j}|\leq |\omega_{1,j}|/6$ and $\ell\in\{1,3\}$ since the other cases
can be handled in the same way. In this case, we have
$$
\Delta^*_\ell({\bf T})(x)=\bigg(\sum_{(j,n)\in{\bf T}}\big|{\bf 1}_{j,n}^{**}f_{\ell,
j, 0}(x)\big|^2\bigg)^{1/2}\,.
$$
Notice that $\Delta^*_\ell({\bf T})(x)$ is dominated by
$$
\bigg(\sum_{j\in\mathbb Z}\big|f_{\ell, j,0}(x)\big|^2
\bigg)^{1/2}\,,
$$
where $f_{\ell, j,0}$ is defined by $\widehat f_{\ell, j,0} = \widehat f_\ell
\widehat\Phi_{\ell, j,0}$. Note that $\widehat\Phi_{\ell,j,0}$ is supported on
$\omega_{\ell,j}$ and the $\omega_{\ell,j}$'s are disjoint. The Littlewood-Paley theorem then yields the $L^q$ estimate (\ref{Lpest12}). To get
(\ref{Lpest1}), it suffices to show that
$$
\big\|\Delta^*_{\ell, {\rm out}}({\bf T})\big\|_{q}\leq C \inf_{x\in
I_{\bf T}}M_q(Mf_\ell)(x)|I_{\bf T}|^{1/q}\,,
$$
where $\Delta^*_{\ell,{\rm out}}({\bf T})$ is defined by
$$ \Delta^*_{\ell, {\rm
out}}({\bf T})(x)=\bigg(\sum_{(j,n)\in{\bf T}}\big|{\bf 1}_{j,n}^{**}(x)
\big((f_\ell{\bf 1}_{(2I_{\bf T})^c})*\Phi_{\ell,
j, 0}\big)(x)\big|^2\bigg)^{1/2}\,.$$ By the definition of
${\bf 1}^{**}_{j, n}$ and $\Phi_{\ell, j,0}$, we have that for any
positive integer $N$,
$$
\big|{\bf 1}_{j,n}^{**}(x)
\big((f_\ell{\bf 1}_{(2I_{\bf T})^c})*\Phi_{\ell,
j, 0}\big)(x)\big| \leq \frac{C_N}{\big(1+ 2^{k_j}{\rm dist}(x,
I_s)\big)^{100}} \int_{(2I_{{\bf T}})^c}
\frac{|f_\ell(y)|2^{k_{j\ell}}}{\big(1+2^{k_{j\ell}}|x-y|\big)^N}dy\,,
$$
which is clearly dominated by
$$
\frac{CMf_\ell(x)}{\big(1+ 2^{k_j}{\rm dist}(x, I_s)\big)^{50}
\big(1+ 2^{k_j}{\rm dist}(I_s, (2I_{\bf T})^c)\big)^{50} }\,.
$$
Thus for $s\in {\bf T}$,
$$
\big\|{\bf 1}_{j,n}^{**} \big((f_\ell{\bf 1}_{(2I_{\bf T})^c})*\Phi_{\ell, j, 0}\big)
\big\|_q^q\leq \frac{C|I_s|}{\big(1+ 2^{k_j}{\rm dist}(I_s,
(2I_{\bf T})^c)\big)^{25q}} \big(\inf_{x\in
I_{\bf T}}M_q(Mf_\ell)(x)\big)^q\,.
$$
By the triangle inequality, we obtain that
$$
\big\|\Delta^*_{\ell, {\rm out}}({\bf T})\big\|_q\leq \sum_{s\in {\bf T}}
\frac{C|I_s|^{1/q}}{\big(1+ |I_s|^{-1}{\rm dist}(I_s,
(2I_{\bf T})^c)\big)^{25}} \inf_{x\in
I_{\bf T}}M_q(Mf_\ell)(x)\,,
$$
which yields the desired estimate (\ref{Lpest1}). Notice that
$$
\big\| {\bf 1}^{**}_{j_{\bf T},n_{\bf T}}f_{\ell,j_{\bf T},n_\ell} \big\|_p
+
\big\| {2^{-k_{j_{\bf T}\ell}}}{\bf 1}^{**}_{j_{\bf T},n_{\bf T}}
Df_{\ell,j_{\bf T},n_\ell} \big\|_p
\leq \bigg\| \frac{CMf_\ell(\cdot)}{\big(1+ |I_{\bf T}|^{-1} {\rm dist}(\cdot, I_{\bf T})\big)^N}
\bigg\|_p\,,
$$
which is clearly dominated by $\inf_{x\in
I_{\bf T}}M_p(Mf_\ell)(x)|I_{\bf T}|^{1/p} $. Therefore we obtain
(\ref{Lpest2}).
\end{proof}
\begin{lemma}\label{goodbmo1}
Suppose that $s=(j,n)\in{\bf S}$.
If $2^{k_{j\ell}}\sim 2^{k_j}$, then
\begin{equation}\label{smallbound}
\big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}\big\|_\infty \leq
C\big\|f_\ell\big\|_{j,n}
\end{equation}
holds for $\ell\in\{1,2,3\}$, where $C$ is a constant independent of
$s, f_\ell, n_\ell$.
If $2^{k_{j1}}\sim 2^{k_j}$, then
\begin{equation}\label{smallboundzeta}
\big\|{\bf 1}^{**}_{j,n} f_{\ell,\zeta(j,M, K),n_\ell}\big\|_\infty \leq
C\big\|f_\ell\big\|_{j,n,\zeta}
\end{equation}
holds for $\ell\in\{2,3\}$, where $\zeta(j,M, K) $ is defined in
Definition \ref{defzetanorm} and $C$ is a constant independent of
$s, f_\ell, n_\ell, \zeta, M, K$.
\end{lemma}
\begin{proof}
We only prove (\ref{smallbound}) since (\ref{smallboundzeta})
is essentially a consequence of (\ref{smallbound}).
Let $\mu=\big\|f_\ell\big\|_{j,n}$.
By the definition of the semi-norm, we have
\begin{equation}\label{smalldelta2}
\big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}\big\|_p +
\big\|{|I_{s}|}{\bf 1}^{**}_{j,n}
Df_{\ell,j,n_\ell} \big\|_p \leq \mu |I_s|^{1/p}\,.
\end{equation}
First we prove the BMO estimate for the function, that is
\begin{equation}\label{smallbmo}
\big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}\big\|_{BMO} \leq C\mu\,.
\end{equation}
If $|I_s|\leq |J|$, by (\ref{smalldelta2}) we have
$$\inf_c\int_J\big| {\bf 1}^{**}_{j,n}(x)f_{\ell,j,n_\ell}(x)-c\big| dx
\, \leq \, \big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell} \big\|_p|J|^{1-\frac{1}{p}}
\,\leq \, \mu |I_s|^{\frac{1}{p}}|J|^{1-\frac{1}{p}}
\, \leq \, \mu |J|\,.
$$
If $|I_s|\geq |J|$, by (\ref{smalldelta2}) we obtain that
\begin{eqnarray*}
& & \inf_c\int_J\big| {\bf 1}^{**}_{j,n}(x)f_{\ell,j,n_\ell}(x)-c\big| dx\\
& \leq & |J|\int_J\bigg| \big({\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}\big)' (x)\bigg| dx\\
& \leq & |J|\int_J \big|\big({\bf 1}^{**}_{j,n}\big)'(x)f_{\ell,j,n_\ell}(x) \big|dx + |J|\int_J\big| {\bf 1}^{**}_{j,n}(x)Df_{\ell,j,n_\ell}(x) \big|dx\\
& \leq & C|J||I_s|^{-1}\big\|{\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}\big\|_p|J|^{1-\frac{1}{p}} + |J|\big\| {\bf 1}^{**}_{j,n}Df_{\ell,j,n_\ell}\big\|_p
|J|^{1-\frac{1}{p}}\\
&\leq & C\mu |J|^{2-\frac{1}{p}} |I_s|^{\frac{1}{p}-1}\,\leq\, C\mu|J|\,.
\end{eqnarray*}
Thus we get the BMO estimate (\ref{smallbmo}). Interpolating (\ref{smallbmo})
and (\ref{smalldelta2}), we have for any $p\leq q <\infty$,
$$
\big\|{\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}\big\|_q\leq C\mu
|I_s|^{1/q}\,.
$$
Notice that an integration by parts and the H\"older inequality yield that
$$
\big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}\big\|_\infty \leq
\big\| {\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell} \big\|_{p'}^{1/2}
\big\| \big( {\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell} \big)' \big\|_p^{1/2}\,,
$$
where $1/p+1/p'=1$. Hence the desired estimate (\ref{smallbound})
follows by (\ref{smalldelta2}) and $L^{p'}$ estimates for the functions.
\end{proof}
\begin{lemma}\label{goodbmo2}
Suppose that $2^{k_{j\ell}}\sim 2^{k_j}$ holds for
all $(j,n)\in {\bf S}$.
Then for any tree ${\bf T} $ in ${\bf S}$, we have
\begin{equation}\label{smallBMOp}
\big\| \Delta_\ell({\bf T}) \big\|_{BMO} \leq C {\rm size}^*_\ell({\bf T}) \,,
\end{equation}
where $C$ is a constant independent of $ {\bf T}, {\bf S}, L_1, L_2, M_1, M_2,
f_\ell, n_\ell$.
\end{lemma}
\begin{proof}
We only give the proof for $\ell=1$; the other cases can be handled in the same way. Let $\mu={\rm size}^*_\ell({\bf T})$.
Let $J$ be a dyadic interval and ${\bf T}_J=\{s\in{\bf T}: I_s\subseteq J\}$.
We then dominate $\inf_c\int_J\big|\Delta_\ell({\bf T})(x) - c\big|dx$ by a sum of
the following three parts.
$$
\int_J \bigg(\sum_{s\in {\bf T}_J} \big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2\bigg)^{1/2} dx\,,
$$
$$
\int_J \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2\bigg)^{1/2} dx\,,
$$
and
$$
\inf_c\int_J \bigg|\bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2\bigg)^{1/2}-c\bigg| dx\,.
$$
The first part is clearly dominated by $\mu |J|$ because of the H\"older inequality and the definition of $\mu$.
Since $p\leq 2$, we estimate the second part by
\begin{eqnarray*}
& & \bigg\| \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big|\tilde{\bf 1}^{*}_{j,n} f_{\ell,j,n_\ell}\big|^2\bigg)^{1/2} \bigg\|_{L^p(J)}|J|^{1-\frac{1}{p}}\\
& \leq & \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big\|\tilde{\bf 1}^{*}_{j,n} f_{\ell,j,n_\ell}\big\|^p_{L^p(J)}\bigg)^{1/p}
|J|^{1-\frac{1}{p}}\\
& \leq & \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}}
\frac{ C
\big\|{\bf 1}^{**}_{j,n} f_{\ell,j,n_\ell}\big\|^p_p }
{\big( 1+ |I_s|^{-1}{\rm dist}(J, I_s)\big)^{100}}\bigg)^{1/p}|J|^{1-\frac{1}{p}}\\
&\leq & \mu \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}}
\frac{ C |I_s| }
{\big( 1+ |I_s|^{-1}{\rm dist}(J, I_s)\big)^{100}} \bigg)^{1/p} |J|^{1-\frac{1}{p}}\,\, \leq \,\, C\mu|J|\,.
\end{eqnarray*}
The third part is estimated by
\begin{eqnarray*}
& & \bigg(\inf_c\int_J \bigg|\bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2\bigg)^{1/2}-c\bigg|^2dx\bigg)^{1/2}|J|^{1/2}\\
& \leq &\bigg(\inf_c\int_J \bigg|\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2-c\bigg|dx\bigg)^{1/2}|J|^{1/2}\\
& \leq & C\bigg(\int_J \sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \bigg|
\bigg(\big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2\bigg)'\bigg|dx \bigg)^{1/2}|J|\,,
\end{eqnarray*}
which is dominated by a sum of the following two terms,
$$
R_1= C\bigg(\int_J \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}
|I_s|^{-1}\big|\tilde{\bf 1}^{*}_{j,n}(x) f_{\ell,j,n_\ell}
(x) \big|^2 dx \bigg)^{1/2}|J|\,,
$$
and
$$
R_2= C\bigg(\int_J \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\big|\tilde{\bf 1}^{*}_{j,n}(x)f_{\ell,j,n_\ell}
(x) \big| \big|\tilde{\bf 1}^{*}_{j,n}(x) Df_{\ell,j,n_\ell} (x) \big| dx
\bigg)^{1/2}|J|\,.
$$
By Lemma \ref{goodbmo1}, we see that for any
$q\geq p$,
$$
\big\|{\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}\big\|_q\leq
C\mu|I_{\bf T}|^{1/q}\,.
$$
Thus, by H\"older inequality, the first term $R_1$ is estimated by
\begin{eqnarray*}
& & C\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-1}}{\big(1+|I_s|^{-1}{\rm
dist}(J, I_s)\big)^{100}}
\big\|{\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}\big\|_4^2 |J|^{1/2}
\bigg)^{1/2}|J| \\
& \leq & C\mu\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-1/2}|J|^{1/2}}{\big(1+|I_s|^{-1}{\rm
dist}(J, I_s)\big)^{100}}\bigg)^{1/2}|J| \,\,\leq \, C\mu |J|\,,
\end{eqnarray*}
and the second term $R_2$ is estimated by
\begin{eqnarray*}
& &C\bigg(\sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\big\|\tilde{\bf 1}^{*}_{j,n}f_{\ell,j,n_\ell}
\big\|_{L^{p'}(J)} \big\|{\bf 1}^{**}_{j,n}Df_{\ell,j,n_\ell}
\big\|_p\bigg)^{1/2}|J|\\
& \leq &C\bigg( \mu \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{\frac{1}{p}-1}
\big\|{\bf 1}^{**}_{j,n}f_{\ell,j,n_\ell}
\big\|_{p'+1}|J|^{\frac{1}{p'(p'+1)}} }
{\big(1+|I_s|^{-1}{\rm dist}(J, I_s)\big)^{100}}
\bigg)^{1/2}|J|\\
& \leq & C\mu\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-\frac{1}{p'(p'+1)}}
|J|^{\frac{1}{p'(p'+1)}} } {\big(1+|I_s|^{-1}{\rm dist}(J,
I_s)\big)^{100}} \bigg)^{1/2}|J|\,\,\leq \,\, C\mu|J|\,.
\end{eqnarray*}
This completes the proof of (\ref{smallBMOp}).
\end{proof}
The principal lemma is the following organization lemma.
\begin{lemma}\label{prilem}
Let $\ell\in\{1,2, 3\}$ and ${\bf S}$ be a subset of $\mathbb
Z(\gamma)\times \mathbb Z$. Then ${\bf S}$ can be partitioned into two
parts ${\bf S}_1$ and ${\bf S}_2$ such that ${\bf S}_1$ is a union
of maximal trees with
\begin{equation}\label{s1est}
{\rm count}({\bf S}_1) \leq C\big({\rm size}^*_\ell({\bf S})\big)^{-p}|F_\ell|\,,
\end{equation}
and
\begin{equation}\label{s2est}
{\rm size}^*_\ell({\bf S}_2)\leq \frac{1}{2}{\rm size}^*_\ell({\bf S})\,,
\end{equation}
where $C$ is a constant independent of ${\bf S}, M_1, M_2, f_\ell,
F_\ell$.
\end{lemma}
\begin{proof}
Let ${\mathcal F}_0$ be the set of all trees ${\bf T}\subset {\bf S}$ such
that ${\rm size}_\ell({\bf T}) > {\rm size}^*_\ell({\bf S})/2$. Recall that $I_{\bf T}$ is
the time interval for the top of ${\bf T}$. Let $\mathcal I$ denote the
collection of all possible $I_{\bf T}$'s for trees ${\bf T}\in\mathcal F_0$.
Initially, set ${\bf S}_1:=\emptyset$, $\mathcal I_{\rm stock}:=\mathcal
I$, and ${\bf S}_{\rm stock}:={\bf S}$. Take a longest interval $J$ in
$\mathcal I_{\rm stock}$. By the definition of $\mathcal I$, there
must be a tree ${\bf T}\in \mathcal F_0$ whose top is $J$. Let $\tilde{\bf T}$
be the maximal tree in ${\bf S}_{\rm stock}$ with the top $J$. Obviously
${\rm size}_\ell(\tilde{\bf T})\geq {\rm size}^*_\ell({\bf S})/2$.
We remove
this maximal tree from ${\bf S}_{\rm stock}$. Update ${\bf S}_{\rm stock}:=
{\bf S}_{\rm stock}\backslash \tilde{\bf T}$, ${\bf S}_1 :={\bf S}_1\cup\tilde{\bf T}$,
and
$$\mathcal I_{\rm stock}:= \mathcal I_{\rm
stock}\backslash\{I\in\mathcal I_{\rm stock}: I\subseteq J\} \,.$$
Repeat this procedure until $\mathcal I_{\rm stock}=\emptyset$.
Clearly, when this process terminates, ${\bf S}_1$ is a union of trees
$\tilde{\bf T}$ and the $I_{\tilde{\bf T}}$'s are disjoint due to the maximality of
trees. By (\ref{Lpest2}) and the size condition on $\tilde{\bf T}$, we have
$$
\inf_{x\in I_{\tilde{\bf T}}}M_p(Mf_\ell)(x)\geq {\rm size}^*_\ell({\bf S})/2\,,
$$
which implies that
$$
\bigcup_{\tilde{\bf T}} I_{\tilde{\bf T}} \subseteq \big\{x\in \mathbb R:
M_p(Mf_\ell)(x)\geq {\rm size}^*_\ell({\bf S})/2\big\}\,.
$$
Thus the disjointness of the $I_{\tilde{\bf T}}$'s and the (weak-type)
$L^q$ estimates, $1\leq q\leq \infty$, for the Hardy-Littlewood
maximal function yield (\ref{s1est}).
Let ${\bf S}_2={\bf S}\backslash {\bf S}_1$. Clearly ${\bf S}_2$ satisfies
(\ref{s2est}). Therefore we complete the proof of Lemma
{\ref{prilem}}.
\end{proof}
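The selection procedure in the proof above is a greedy algorithm on top intervals. The sketch below keeps only the interval bookkeeping, encoding a dyadic top as $(j,n)$ with $I=[2^{-j}n,2^{-j}(n+1)]$; this drops the tile-removal step that produces the maximal trees $\tilde{\bf T}$, so it is an illustration of the combinatorics, not the full construction.

```python
def select_tops(tops):
    """Greedy selection from the proof of the organization lemma:
    repeatedly pick a longest interval J in the stock, record it, and
    discard every stock interval contained in J.  Since dyadic intervals
    are either nested or disjoint, the selected intervals are pairwise
    disjoint and every discarded interval lies in one of them."""
    stock = set(tops)
    selected = []
    while stock:
        # a longest interval has the smallest scale index j;
        # min over tuples breaks ties deterministically by n
        J = min(stock)
        jJ, nJ = J
        selected.append(J)
        stock = {(j, n) for (j, n) in stock
                 if not (j >= jJ and (n >> (j - jJ)) == nJ)}
    return selected

def count(selected):
    """count(S_1) = sum of the lengths |I_T| over the selected tops."""
    return sum(2.0 ** -j for (j, _) in selected)
```

The disjointness of the selected intervals mirrors the disjointness of the $I_{\tilde{\bf T}}$'s used to derive (\ref{s1est}), and the second function mirrors ${\rm count}({\bf S}_1)$ in Definition \ref{defcount}.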
\subsection{The size estimate for a tree}
Let ${\bf S}$ be a convex subset
of $\mathbb Z(\gamma)\times \mathbb Z$. By the definition of
${\bf S}(\Omega)$ in (\ref{defofbsomega}), it is clear that ${\bf S}(\Omega)$
is convex. Partition ${\bf S}(\Omega)$ into two subsets ${\bf S}^{(1)}(\Omega)$ and ${\bf S}^{(2)}(\Omega)$, where
\begin{equation}\label{defofSgood}
{\bf S}^{(1)}(\Omega) = \big\{(j,n)\in{\bf S}(\Omega): |\omega_{2,j}|\leq
|\omega_{1,j}|/6\big\}
\end{equation}
\begin{equation}\label{defofSbad}
{\bf S}^{(2)}(\Omega) = \big\{(j,n)\in{\bf S}(\Omega): |\omega_{2,j}|>
|\omega_{1,j}|/6\big\}\,.
\end{equation}
For any $(j,n)\in{\bf S}^{(1)}(\Omega)$, $k_{j2}=k_{j}$ by the definition
of $k_j$. And for any $(j,n)\in{\bf S}^{(2)}(\Omega)$,
$ 2^{k_{j1}}\sim 2^{k_j}$.
\begin{lemma}\label{convexitylem}
For $\kappa\in\{1, 2\}$, ${\bf S}^{(\kappa)}(\Omega)$ is convex.
\end{lemma}
\begin{proof}
We only prove the lemma for $\kappa=2$.
One can prove the lemma for $\kappa=1$ similarly.
Let $s_1=(j_1, n_1), s_2=(j_2, n_2)\in {\bf S}^{(2)}(\Omega)$, and let
$s=(j,n)\in \mathbb Z(\gamma)\times\mathbb Z$ be such that
$ I_{s_2}\subseteq I_s\subseteq I_{s_1}$. By the convexity of ${\bf S}(\Omega)$
we get $s\in{\bf S}(\Omega)$. In order to get $s\in{\bf S}^{(2)}(\Omega)$, we
need to show that $|\omega_{2,j}| > |\omega_{1,j}|/6$.
The simpler case is $2^{k_j}=|\omega_{1,j}|$. In this case,
$|\omega_{1,j_2}|/10 \leq |\omega_{1,j}|\leq 10|\omega_{1, j_1}|$, which implies $j_2\leq j\leq j_1$. Since $|\omega_{2, j_1}|>|\omega_{1, j_1}|/6$
and $|\omega_{2,j_2}|>|\omega_{1, j_2}|/6$, the linearity of
the function $f(j)=(L_1j+M_1)-(L_2j+M_2)$
yields that $|\omega_{2,j}|>|\omega_{1, j}|/6$.
We now turn to the other case, $2^{k_j}=|\omega_{2,j}|$. Since $I_s$
is nested between $I_{s_1}$ and $I_{s_2}$, we get
$|\omega_{1, j_2}|/10 \leq |\omega_{2,j}|\leq 10|\omega_{1, j_1}|$.
The first half of this inequality and the definition of $k_{j}$ imply
$j_2\leq j$, and the second half together with the fact that
$(j_1, n_1)\in{\bf S}^{(2)}(\Omega)$ yields $j\leq j_1$. Thus we
get $|\omega_{2,j}|>|\omega_{1,j}|/6$ by the linearity of the function $f(j)$.
Hence $s$ must be in ${\bf S}^{(2)}(\Omega)$ in
either case. This proves the lemma.
\end{proof}
\begin{lemma}\label{shadowest}
Let $\kappa\in\{1,2\}$, ${\bf T}$ be a convex tree in ${\bf S}^{(\kappa)}(\Omega)$ with the top $t=(j_{\bf T}, n_{\bf T})$ and
$\partial{\bf Sh}_j({\bf T})$ be the boundary of the $j$-th shadow of ${\bf T}$.
Let ${\rm Card} (\partial{\bf Sh}_j({\bf T}))$ denote the cardinality of
the boundary of the $j$-th shadow. Then
\begin{equation}\label{shadowest1}
\sum_{j\geq j_{\bf T}}2^{-k_j}{\rm Card}(\partial{\bf Sh}_j({\bf T})) \leq C|I_{\bf T}|\,,
\end{equation}
where $C$ is a constant independent of ${\bf T}$.
\end{lemma}
\begin{proof}
This lemma is similar to a technical lemma (Lemma 4.8) in \cite{MTT3},
and we give a similar proof.
Note that the $j$-th shadow consists of finitely many disjoint intervals,
so its boundary contains all endpoints of these intervals. It is
sufficient to consider only the left endpoints since the right endpoints
can be handled in the same way. Let $\partial_{\rm left}({\bf Sh}_j({\bf T}))$ denote
the collection of all left endpoints of the intervals in the $j$-th shadow.
Let $z\in \partial_{\rm left}({\bf Sh}_j({\bf T}))$ and
$I_j(z)=(z-2^{-k_j}, z-2^{-k_j}/2)$.
To prove (\ref{shadowest1}), it suffices to show that the
intervals $I_j(z)$'s are disjoint for all possible $j, z$.
Assume that there are $j, j'\in{\rm scl}({\bf T})$, $z\in
\partial_{\rm left}({\bf Sh}_j({\bf T}))$ and $z'\in \partial_{\rm left}({\bf Sh}_{j'}({\bf T}))$ such that $(j, z)\neq (j', z')$ and $I_j(z)\cap I_{j'}(z')\neq \emptyset$.
By the nesting property of dyadic intervals and the fact that
$z-2^{-k_j}$ is an endpoint of some dyadic intervals, we see that
$j\neq j'$. Without loss of generality, suppose that $j<j'$. The
fact that $I_j(z)$ and $I_{j'}(z')$ have nonempty intersection then
implies $z'\in (z-2^{-k_j}, z)$. Since $z$ is a left endpoint of
an interval in the $j$-th shadow, $z'$ cannot be in
${\bf Sh}_j({\bf T})$. However, the convexity of ${\bf T}$ yields that
${\bf Sh}_{j'}({\bf T})\subseteq {\bf Sh}_j({\bf T})$. This is a contradiction.
Therefore
we obtain the lemma.
\end{proof}
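The disjointness argument at the heart of this proof can be checked on a concrete example. The sketch below assumes, for illustration only, that $k_j=j$ and represents a tree by its tiles $(j,n)$; it computes the left endpoints of the components of each shadow, forms the intervals $I_j(z)=(z-2^{-k_j}, z-2^{-k_j}/2)$, and verifies that they are pairwise disjoint.

```python
from fractions import Fraction

def shadow_components(tiles, j):
    """Left endpoints of the connected components of the j-th shadow,
    i.e. of the union of the dyadic intervals [2^-j n, 2^-j (n+1)]
    over the tiles (j, n) of the tree at scale j."""
    ns = sorted(n for (jj, n) in tiles if jj == j)
    lefts, prev = [], None
    for n in ns:
        if prev is None or n != prev + 1:   # a new component starts here
            lefts.append(Fraction(n, 2 ** j))
        prev = n
    return lefts

def endpoint_intervals(tiles):
    """The open intervals I_j(z) = (z - 2^-j, z - 2^-j / 2) attached to
    the left endpoints z of the j-th shadows, as in the proof."""
    out = []
    for j in sorted({jj for (jj, _) in tiles}):
        for z in shadow_components(tiles, j):
            out.append((z - Fraction(1, 2 ** j), z - Fraction(1, 2 ** (j + 1))))
    return out

def pairwise_disjoint(intervals):
    """Open intervals sorted by left endpoint are pairwise disjoint
    iff each one starts no earlier than the previous one ends."""
    ivs = sorted(intervals)
    return all(b1 <= a2 for (a1, b1), (a2, b2) in zip(ivs, ivs[1:]))
```

Their disjointness is exactly what bounds $\sum_j 2^{-k_j}{\rm Card}(\partial{\bf Sh}_j({\bf T}))$ in (\ref{shadowest1}).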
\begin{lemma}\label{difftree}
Let $\kappa\in\{1,2\}$,
${\bf T}$ be a convex tree in ${\bf S}^{(\kappa)}(\Omega)$ and
$\tilde\Lambda_{\bf T}(f_1, f_2, f_3)$ be defined by
\begin{equation}\label{deftila}
\tilde\Lambda_{\bf T}(f_1, f_2, f_3)=\sum_j\int\prod_{\ell=1}^3
\sum_{n\in{\bf T}_j} F_{\ell, j, n}(x)dx \,,
\end{equation}
where ${\bf T}_j=\{n\in \mathbb Z: (j,n)\in {\bf T}\}$ and
$F_{\ell, j, n}$ is defined by
\begin{equation}
F_{\ell, j, n}(x)={\bf 1}^*_{j,n}(x)f_{\ell, j, n_\ell}(x)\,.
\end{equation}
Then we have
\begin{equation}\label{tri1diffest}
\big|\Lambda_{\bf T}(f_1, f_2, f_3)-\tilde\Lambda_{\bf T}(f_1, f_2, f_3)\big|
\leq C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})|I_{{\bf T}}|\,,
\end{equation}
where $C$ is a constant independent of ${\bf T}, {{\bf S}}, f_1, f_2, f_3$.
\end{lemma}
\begin{proof}
We dominate the difference $|\Lambda_{\bf T}-\tilde\Lambda_{\bf T}|$ by
$$
\sum_{j\in{\rm scl}({\bf T})}\int \bigg|{\bf 1}^*_{{\bf Sh}_j({\bf T})}(x) -
\big({\bf 1}^*_{{\bf Sh}_j({\bf T})}\big)^3 (x) \bigg|\prod_{\ell=1}^3 \big|f_{\ell,j, n_\ell}(x)\big| dx\,,
$$
which is dominated by
$$
\sum_{j\in{\rm scl}({\bf T})}\sum_{I: |I|=2^{-k_j}}
\int_I \bigg|\bigg({\bf 1}^*_{{\bf Sh}_j({\bf T})}(x) -
\big({\bf 1}^*_{{\bf Sh}_j({\bf T})}\big)^3 (x)\bigg)
\big(\tilde{\bf 1}^{*}_{{\bf Sh}_j({\bf T})}(x)\big)^{-\frac{1}{10}} \bigg|
\Pi_{j,{\bf T}}(f_1, f_2, f_3)(x)dx\,,
$$
where
\begin{equation}\label{defoftish}
\tilde{\bf 1}^{*}_{{\bf Sh}_j({\bf T})}(x) = \int_{{\bf Sh}_j({\bf T})} \frac{2^{k_j}}{\big(1+2^{2k_j}|x-y|^2\big)^{2^{1000}}} dy \,
\end{equation}
and
$$
\Pi_{j,{\bf T}}(f_1, f_2, f_3)(x)=
\prod_{\ell=1}^3
\big|\big(\tilde{\bf 1}^{*}_{{\bf Sh}_j({\bf T})}\big)^{1/30} f_{\ell,j, n_\ell}(x)\big| \,.
$$
H\"older inequality, Lemma \ref{goodbmo1} and (\ref{Lpest1}) then yield
that
\begin{equation}\label{sigle}
\big\|\Pi_{j, {\bf T}}(f_1, f_2, f_3)\big\|_{L^1(I)}
\leq C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})
2^{-k_j} \,.
\end{equation}
Thus we estimate the difference $|\Lambda_{\bf T}-\tilde\Lambda_{\bf T}|$ by
$$
C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})\sum_{j\in{\rm scl}({\bf T})}\sum_{I: |I|=2^{-k_j}}
|I| \bigg\|\bigg({\bf 1}^*_{{\bf Sh}_j({\bf T})} -
\big({\bf 1}^*_{{\bf Sh}_j({\bf T})}\big)^3\bigg)
\big(\tilde{\bf 1}^{*}_{{\bf Sh}_j({\bf T})}\big)^{-1/10} \bigg\|_{L^\infty(I)}\,.
$$
By the definition of ${\bf 1}^{*}_{{\bf Sh}_j({\bf T})}$, it is easy to see that
it is a smooth approximation of ${\bf 1}_{{\bf Sh}_j({\bf T})}$ and for any
positive integer $N$ the following inequality holds.
$$
|I|\bigg\|\bigg({\bf 1}^*_{{\bf Sh}_j({\bf T})} -
\big({\bf 1}^*_{{\bf Sh}_j({\bf T})}\big)^3\bigg)
\big(\tilde{\bf 1}^{*}_{{\bf Sh}_j({\bf T})}\big)^{-1/10} \bigg\|_{L^\infty(I)}
\leq \frac{C_N|I|}{\big( 1+|I|^{-1}{\rm dist}(I, \partial{\bf Sh}_j({\bf T}))\big)^N}\,.
$$
Summing up all $I$'s with $|I|=2^{-k_j}$, we estimate the difference by
$$
C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})\sum_{j\in {\rm scl}({\bf T})} 2^{-k_j}{\rm Card}(\partial{\bf Sh}_j({\bf T})) \,.
$$
Hence the lemma follows by Lemma \ref{shadowest}.
\end{proof}
\begin{lemma}\label{diffest23}
Let ${\bf T}$ be a convex tree in ${\bf S}^{(2)}(\Omega)$.
For $\ell\in\{2,3\}$,
let $F_{\ell, j}$ be defined by
\begin{equation}\label{defofFlj}
F_{\ell,j}(x)={\bf 1}^*_{{\bf Sh}_j({\bf T})}(x)f_{\ell,j, 0}(x)\,,
\end{equation}
if ${\bf T}_j\neq \emptyset$, and $F_{\ell, j}\equiv 0$ if
${\bf T}_j=\emptyset$.
Then we have
\begin{equation}\label{diffestfinal}
\sup_M\bigg\|\bigg(\sum_j\big|
F_{\ell,j-M}-F_{\ell,j-M-L}\big|^2\bigg)^{1/2}\bigg\|_p\leq
C{\rm size}^*_\ell({\bf T})|I_{\bf T}|^{1/p}\,,
\end{equation}
where $L=2^{100}$, $M$ ranges over all integers between $0$ and $6L$
and $C$ is a constant independent of $f_\ell, {\bf T}$.
\end{lemma}
\begin{proof}
For simplicity, we only prove the lemma for $M=0$.
It is easy to see that
$|F_{\ell, j}(x)-F_{\ell,j-L}(x)|$ is dominated by
$$
\big|{\bf 1}^*_{{\bf Sh}_j({\bf T})}(x)\big(f_{\ell, j, 0}(x)- f_{\ell,j-L,0}(x)\big) \big| +
\big|\big({\bf 1}^*_{{\bf Sh}_j({\bf T})}(x) - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}(x)\big)f_{\ell,j-L}(x)\big|\,.
$$
Clearly, by the definition of $\Delta^*_\ell({\bf T})$ and ${\rm size}_\ell^*({\bf T})$,
we get
$$
\bigg\|\bigg(\sum_j\big|{\bf 1}^*_{{\bf Sh}_j({\bf T})}\big(f_{\ell, j, 0}- f_{\ell,j-L,0}\big) \big|^2\bigg)^{1/2}\bigg\|_p \leq
C\big\|\Delta^*_\ell({\bf T})\big\|_p\leq C{\rm size}^*_\ell({\bf T})|I_{\bf T}|^{1/p}\,.
$$
Thus to obtain (\ref{diffestfinal}), it suffices to show that
\begin{equation}\label{diffestfinal1}
\bigg\|\bigg(\sum_j\big|\big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big)f_{\ell,j-L,0} \big|^2\bigg)^{1/2}\bigg\|_p\leq C{\rm size}^*_\ell({\bf T})|I_{\bf T}|^{1/p} \,.
\end{equation}
Heuristically, one may think of ${\bf 1}^*_{{\bf Sh}_j({\bf T})}$ as ${\bf 1}_{{\bf Sh}_j({\bf T})}$.
Then, by the nesting property of the $j$-th shadows due to the convexity of
the tree, the sets ${\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_j({\bf T})$ are essentially disjoint,
which is the reason we have such an estimate.
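To make the heuristic concrete, here is a model computation with sharp indicators in place of the smooth cutoffs, assuming (as the nesting suggests) that the shadows decrease in $j$ and are all contained in $I_{\bf T}$. Each point then lies in at most $L$ of the difference sets, so
$$
\sum_{j\in{\rm scl}({\bf T})}
\big\|\big({\bf 1}_{{\bf Sh}_{j-L}({\bf T})}-{\bf 1}_{{\bf Sh}_{j}({\bf T})}\big)
f_{\ell,j-L,0}\big\|_{p}^{p}
=\sum_{j\in{\rm scl}({\bf T})}
\int_{{\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_{j}({\bf T})}
\big|f_{\ell,j-L,0}\big|^{p}\,dx
\leq L\,\sup_{j}\big\|f_{\ell,j-L,0}\big\|_{\infty}^{p}\,|I_{\bf T}|\,.
$$
The smooth cutoffs destroy this exact disjointness, and the argument compensates with the decay of $\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}$ away from the shadows.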
Now we go to the technical details.
Since $p\leq 2$, we estimate
the left hand side of (\ref{diffestfinal1}) by
$$
\bigg(\sum_{j\in{\rm scl}({\bf T})}\big\|\big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big) f_{\ell, j-L,0}\big\|_p^p \bigg)^{1/p}\,.
$$
This is dominated by
$$
\bigg(\sum_{j\in{\rm scl}({\bf T})}\sum_{I:|I|=2^{-k_j}}
\int_I \bigg| \big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big)(x)
(\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}(x))^{-\frac{1}{10}} \Pi_j^*(f_\ell)(x) \bigg|^p dx \bigg)^{1/p}\,,
$$
where $\tilde{\bf 1}^*_{{\bf Sh}_j({\bf T})}$ is the function defined in (\ref{defoftish})
and $\Pi^*_j(f_\ell)=(\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})})^{1/10}f_{\ell,j-L,0}$.
H\"older inequality, Lemma \ref{goodbmo1} and (\ref{Lpest1}) then yield
that
\begin{equation}\label{sigle2}
\big\|\Pi^*_{j}(f_\ell)\big\|_{L^p(I)}
\leq C {\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})|I|^{1/p} \,.
\end{equation}
Thus we dominate the left hand side of (\ref{diffestfinal1}) by
$$
C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})
\bigg( \sum_{j\in{\rm scl}({\bf T})}\sum_{I:|I|=2^{-k_j}}
\bigg\| \big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big)
(\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})})^{-\frac{1}{10}} \bigg\|_{L^\infty(I)} |I|
\bigg)^{1/p}\,.
$$
Since ${\bf Sh}_{j}({\bf T})\subset{\bf Sh}_{j-L}({\bf T})$, it is easy to see that
$$
\big| {\bf 1}^*_{{\bf Sh}_j({\bf T})}(x) - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}(x) \big|
\leq C \tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}(x)\,.
$$
On the other hand,
observe that
$|{\bf 1}^*_{{\bf Sh}_j({\bf T})}-{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}| $ is dominated by
$$
{\bf dSh}^*_j(x) = {\bf 1}_{{\bf Sh}_{j-L}({\bf T})\backslash {\bf Sh}_{j}({\bf T})}*\psi_{k_{j-L}}(x)
+ \frac{C_N}{\big(1+ 2^{k_j}{\rm dist}(x, \partial({\bf Sh}_j({\bf T})))\big)^N}\,,
$$
for any positive integer $N$. Hence the $L^{\infty}(I)$ norm
of
$ \big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big)
(\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})})^{-\frac{1}{10}} $ is estimated
by
$$
\frac{C_N}{\big(1+|I|^{-1}{\rm dist}(I, {\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_{j}({\bf T})) \big)^N } + \frac{C_N}{\big(1+|I|^{-1}{\rm dist}(I, \partial({\bf Sh}_j({\bf T})) ) \big)^N }\,.
$$
For those $I$'s contained in ${\bf Sh}_j({\bf T})$, we have
$$
\frac{1}{\big(1+|I|^{-1}{\rm dist}(I, {\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_{j}({\bf T})) \big)^N }\leq \frac{1}{\big(1+|I|^{-1}{\rm dist}(I, \partial({\bf Sh}_j({\bf T})) ) \big)^N }\,.
$$
For those $I$'s contained in $({\bf Sh}_{j-L}({\bf T}))^c$, we get
$$
\frac{1}{\big(1+|I|^{-1}{\rm dist}(I, {\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_{j}({\bf T})) \big)^N }\leq \frac{1}{\big(1+|I|^{-1}{\rm dist}(I, \partial({\bf Sh}_{j-L}({\bf T})) ) \big)^N }\,.
$$
Thus we have
\begin{eqnarray*}
& & \sum_{I: |I|=2^{-k_j}} \frac{1}{\big(1+|I|^{-1}{\rm dist}(I, {\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_{j}({\bf T}))\big)^N }\\
& \leq &
|I|^{-1} \big|{\bf Sh}_{j-L}({\bf T})\backslash{\bf Sh}_j({\bf T})\big|
+ {\rm Card}\big(\partial{\bf Sh}_j({\bf T})\big) + {\rm Card}\big(\partial{\bf Sh}_{j-L}({\bf T})\big)\,.
\end{eqnarray*}
By the nesting property of $j$-th shadows, the fact $2^{k_j}\sim 2^{k_{j-L}}$,
and Lemma \ref{shadowest}, we obtain that
$$
\sum_{j\in{\rm scl}({\bf T})}\sum_{I:|I|=2^{-k_j}}
\bigg\| \big({\bf 1}^*_{{\bf Sh}_j({\bf T})} - {\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})}\big)
(\tilde{\bf 1}^*_{{\bf Sh}_{j-L}({\bf T})})^{-\frac{1}{10}} \bigg\|_{L^\infty(I)} |I|
\leq C|I_{\bf T}|\,,
$$
which yields the desired estimate (\ref{diffestfinal1}).
This completes the proof.
\end{proof}
\begin{lemma}\label{sizelem}
Let $\kappa\in\{1, 2\}$
and ${\bf T}$ be a convex tree in ${\bf S}^{(\kappa)}(\Omega)$. Then we have
\begin{equation}\label{lemest1115}
\big|\Lambda_{{\bf T}}(f_1, f_2, f_3)\big|
\leq C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})|I_{{\bf T}}|\,,
\end{equation}
where $C$ is a constant independent of ${\bf T}, {{\bf S}}, f_1, f_2, f_3$.
\end{lemma}
\begin{proof}
By Lemma \ref{difftree}, it is sufficient to show that
\begin{equation}\label{lemest1116}
\big|\tilde\Lambda_{{\bf T}}(f_1, f_2, f_3)\big|
\leq C{\rm size}^*_1({{\bf T}}){\rm size}^*_2({{\bf T}})|I_{{\bf T}}|\,,
\end{equation}
where $C$ is a constant independent of ${\bf T}, {{\bf S}}, f_1, f_2, f_3$.\\
We first prove the simple case $\kappa=1$. In this case,
$k_{j2}= k_j$ for all $(j, n)\in{\bf T}$.
We thus dominate $|\tilde\Lambda_{\bf T}|$ by
$$
\int_{\mathbb R} \sup_{j}\bigg|\sum_{n\in{\bf T}_j}F_{2,j, n}(x)\bigg|
\prod_{\ell\neq 2} \bigg( \sum_{(j,n)\in{\bf T}}\big| F_{\ell, j, n}(x)\big|^2 \bigg)^{1/2} dx\,.
$$
By the definition of $\Delta_\ell$ and H\"older inequality, we estimate
$|\tilde\Lambda_{\bf T}|$ by
$$
\big\|\sup_{(j,n)\in{\bf T}}\big|F^*_{2,j, n}\big| \big\|_\infty
\big\|\Delta_1({\bf T})\big\|_p \big\|\Delta_3({\bf T})\big\|_{p'}\,,
$$
where $1/p+1/p'=1$ and $F^*_{\ell,j, n}={\bf 1}^{**}_{j,n}f_{\ell, j, n_\ell}$.
Lemma \ref{goodbmo1} yields that
$$
\big\| F^*_{2,j, n}\big\|_\infty\leq {\rm size}^*_2({{\bf T}})\,.
$$
Clearly the definition of size yields
$$\|\Delta_1({\bf T})\|_p\leq {\rm size}^*_1({{\bf T}})|I_{\bf T}|^{1/p} \,.$$
And (\ref{Lpest1}) yields
$$
\big\|\Delta_3({\bf T})\big\|_{p'}\leq C|I_{\bf T}|^{1/p'}\,.
$$
Putting all of them together, we obtain (\ref{lemest1116}) for the case $\kappa=1$.
\\
We now prove the case $\kappa=2$. In this case, $ 2^{k_j}\sim
2^{k_{j1}}$ for all $(j,n)\in{\bf T} $.
For simplicity, we only consider the case $n_\ell=0$. The general case can be done in the same way
by paying a cost of $(1+|n_\ell|)^{10}$ in the constant.
Then we write the trilinear form $\tilde\Lambda_{\bf T}$ as
$$
\tilde\Lambda_{\bf T}(f_1, f_2, f_3)=\sum_{j\in\mathbb Z} \int\prod_{\ell=1}^{3}
F_{\ell, j}(x) dx\,,
$$
where $F_{\ell,j}$ is defined in (\ref{defofFlj}). Here we adopt the
convention that $F_{\ell, j}$ is identically zero if $j\notin
{\rm scl}({\bf T})$. Let $L=2^{100}$. By the telescoping argument used in
Lemma \ref{telelem1}, we can write $\tilde\Lambda_{\bf T}$ as a finite sum of
two types of trilinear forms. One type of them is defined by
\begin{equation}\label{goodtype}
\Lambda_{{\bf T}, 1}(f_1, f_2, f_3)=\int\sum_{j\in\mathbb Z } F_{1,
j+m'(j)-M}(x)
\Pi_{j,L}(F_{2,j}, F_{3,j})(x)dx\,,
\end{equation}
where $m'(j)=[(L_2j+M_2-L_1j-M_1+6)/L_1]$, $M$ is an integer between
$0$ and $6L$, and $\Pi_{j,L}(F_{2,j}, F_{3,j})$ equals either
$(F_{2,j}-F_{2, j-L})
F_{3, j-8L}$ or $F_{2, j-L}(F_{3,j}-F_{3,j-L})$.
The other type is defined by
\begin{equation}\label{badtype}
\int\sum_{j\in\mathbb Z } \bigg(\sum_{k=0}^{m'(j)}F_{1, j+k}(x)
\bigg)
\big(F_{2,j}(x)-F_{2,j-L}(x)\big)\big(F_{3,j-M}(x)-F_{3,j-M-L}(x)\big)dx\,,
\end{equation}
which is denoted by $\Lambda_{{\bf T}, 2}(f_1, f_2, f_3) $. \\
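The algebra behind this telescoping can be seen in a simplified model which ignores the index shift $m'(j)$ and the factor $F_{1,\cdot}$: for any two sequences $(a_j)$ and $(b_j)$ one has the elementary identity
$$
a_j b_j - a_{j-L}\, b_{j-L}
= (a_j - a_{j-L})\, b_{j-L}
+ a_{j-L}\,(b_j - b_{j-L})
+ (a_j - a_{j-L})(b_j - b_{j-L})\,.
$$
With $a_j=F_{2,j}$ and $b_j=F_{3,j}$, the first two terms on the right have the same shape as the two forms of $\Pi_{j,L}$ in (\ref{goodtype}), while the double-difference term has the shape appearing in (\ref{badtype}); summing over $j$ telescopes the left hand side.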
We now prove the estimate for the first type trilinear form $\Lambda_{{\bf T},1}$.
Let us first consider the case
$$
\Lambda_{{\bf T},1}(f_1, f_2, f_3)=\int\sum_{j\in\mathbb Z } F_{1,
j+m'(j)-M}(x)(F_{2,j}-F_{2, j-L})(x)
F_{3, j-8L}(x)dx\,.
$$
In this case, by Cauchy-Schwarz inequality,
$|\Lambda_{{\bf T},1}|$ is estimated by
$$
\int \bigg(\sum_j\big|F_{1,j+m'(j)-M}(x)F_{3,j-8L}(x)\big|^2\bigg)^{1/2}
\bigg(\sum_j\big| F_{2,j}(x)-F_{2,j-L}(x)\big|^2\bigg)^{1/2}
dx\,.
$$
Using H\"older inequality, we dominate it by
$$
\bigg\|\bigg(\sum_j\big|F_{1,j+m'(j)-M}F_{3,j-8L}\big|^2\bigg)^{1/2}
\bigg\|_{p'}\bigg\|\bigg(\sum_j\big|
F_{2,j}-F_{2,j-L}\big|^2\bigg)^{1/2} \bigg\|_p\,.
$$
The first factor in this expression is no more than
$$
\bigg\|\bigg(\sum_j\bigg|
\sum_{n\in{\bf T}_{j+m'(j)-M}}{\bf 1}^*_{j+m'(j)-M,n}f_{1,j+m'(j)-M,n_1}
f_{3,j-8L,0}\bigg|^2\bigg)^{1/2} \bigg\|_{p'}\,,
$$
which is dominated by
$$
\bigg\|\bigg(\sum_j
\sum_{n\in{\bf T}_{j+m'(j)-M}}\bigg|\big(\tilde{\bf 1}^{*}_{j+m'(j)-M,n}\big)^2f_{1,j+m'(j)-M,n_1}
f_{3,j-8L,0}\bigg|^2\bigg)^{1/2} \bigg\|_{p'}\,.
$$
We estimate it by
$$
\bigg\|\bigg(\sum_{(j,n)\in{\bf T}}
\big|\tilde{\bf 1}^{*}_{j,n}f_{1,j,n_1}\big|^2\bigg)^{1/2} \bigg\|_{p'}
\sup_{(j,n)\in{\bf T}}\big\|\tilde{\bf 1}^{*}_{j,n}f_{3,\zeta(j,M,K),0}\big\|_\infty\,,
$$
where $K$ is some integer between $-10L$ and $10L$ and
$\zeta(j,M,K)$ is defined as in (\ref{defofzeta}). Clearly,
$\tilde{\bf 1}^*_{j,n}f_{3,\zeta(j,M,K),0}$ is bounded. Also by Lemma
\ref{goodbmo2} and an interpolation, we have
\begin{equation}\label{est1term}
\bigg\|\bigg(\sum_j\big|\tilde{\bf 1}^{*}_{j,n}f_{1,j,n_1}
\big|^2\bigg)^{1/2} \bigg\|_{p'}\leq
C{\rm size}^*_1({\bf T})|I_{\bf T}|^{1/p'}\,.
\end{equation}
And Lemma \ref{diffest23} yields that
\begin{equation}\label{est2term}
\bigg\|\bigg(\sum_j\big| F_{2,j}-F_{2,j-L}\big|^2\bigg)^{1/2}
\bigg\|_p\leq C{\rm size}_2^*({\bf T})|I_{\bf T}|^{1/p}\,.
\end{equation}
(\ref{est1term}) and (\ref{est2term}) give us the desired estimate
for $\Lambda_{{\bf T},1}$ in the first case.
We now consider the case
$$
\Lambda_{{\bf T},1}(f_1, f_2, f_3)=\int\sum_{j\in\mathbb Z } F_{1,
j+m'(j)-M}(x)F_{2,j-L}(x) \big( F_{3, j}-F_{3, j-L}\big)(x)dx\,.
$$
In this case, using Cauchy-Schwarz inequality, we have that
$|\Lambda_{{\bf T},1}|$ is estimated by
$$
\int \bigg(\sum_j\big|F_{1,j+m'(j)-M}(x)F_{2,j-L}(x)\big|^2\bigg)^{1/2}
\bigg(\sum_j\big| F_{3,j}(x)-F_{3,j-L}(x)\big|^2\bigg)^{1/2}
dx\,.
$$
By H\"older inequality, we dominate it by
$$
\bigg\|\bigg(\sum_j\big|F_{1,j+m'(j)-M}F_{2,j-L}\big|^2\bigg)^{1/2}
\bigg\|_{p'}\bigg\|\bigg(\sum_j\big|
F_{3,j}-F_{3,j-L}\big|^2\bigg)^{1/2} \bigg\|_p\,.
$$
The first factor in this expression is no more than
$$
\bigg\|\bigg(\sum_j
\sum_{n\in{\bf T}_{j+m'(j)-M}}\bigg|\big(\tilde{\bf 1}^{*}_{j+m'(j)-M,n}\big)^2f_{1,j+m'(j)-M,n_1}
f_{2,j-L,0}\bigg|^2\bigg)^{1/2} \bigg\|_{p'}\,.
$$
We estimate it by
$$
\bigg\|\bigg(\sum_{(j,n)\in{\bf T}}
\big|\tilde{\bf 1}^{*}_{j,n}f_{1,j,n_1}\big|^2\bigg)^{1/2} \bigg\|_{p'}
\sup_{(j,n)\in{\bf T}}\big\|\tilde{\bf 1}^{*}_{j,n}f_{2,\zeta(j,M,K),0}\big\|_\infty\,,
$$
where $K$ is some integer between $-10L$ and $10L$ and
$\zeta(j,M,K)$ is defined as in (\ref{defofzeta}). By
(\ref{smallboundzeta}) and the definition of size, we see that
\begin{equation}\label{2case2est}
\sup_{(j,n)\in{\bf T}}\big\|\tilde{\bf 1}^{*}_{j,n}f_{2,\zeta(j,M,K),0}\big\|_\infty
\leq C{\rm size}^*_2({\bf T})\,.
\end{equation}
Lemma \ref{diffest23} and (\ref{Lpest2}) yield that
\begin{equation}\label{est3term}
\bigg\|\bigg(\sum_j\big| F_{3,j}-F_{3,j-L}\big|^2\bigg)^{1/2}
\bigg\|_p\leq C|I_{\bf T}|^{1/p}\,.
\end{equation}
Putting (\ref{est1term}), (\ref{2case2est}) and (\ref{est3term})
together, we thus get the desired estimate for $\Lambda_{{\bf T},1}$ in the
second case.\\
Finally let us estimate $\Lambda_{{\bf T},2}$. The integrand in
(\ref{badtype}) is dominated by
$$
\sup_j\bigg|\sum_{k=0}^{m'(j)}F_{1, j+k}(x) \bigg|
\bigg(\sum_{j\in\mathbb Z }
\big|\big(F_{2,j}-F_{2,j-L}\big)(x)\big|^2\bigg)^{\frac{1}{2}}
\bigg(\sum_{j\in\mathbb Z }
\big|\big(F_{3,j-M}-F_{3,j-M-L}\big)(x)\big|^2\bigg)^{\frac{1}{2}}.
$$
There exist $p_1, p_3\in\mathbb R$ such that $1/p_1+1/p+1/p_3=1$ and
$p_1>p', p_3>1$. By H\"older inequality we dominate $\Lambda_{{\bf T},2}$ by
$$
\bigg\|\sup_j\bigg|\sum_{k=0}^{m'(j)}F_{1, j+k}(x)
\bigg|\bigg\|_{p_1}\bigg\| \bigg(\sum_{j\in\mathbb Z }
\big|F_{2,j}-F_{2,j-L}\big|^2\bigg)^{\frac{1}{2}}\bigg\|_{p}
\bigg\|\bigg(\sum_{j\in\mathbb Z }
\big|F_{3,j-M}-F_{3,j-M-L}\big|^2\bigg)^{\frac{1}{2}}\bigg\|_{p_3}.
$$
Notice that one can define the size with respect to any
exponent $p_3$ by using $L^{p_3}$; then (\ref{Lpest2}) and Lemma
\ref{diffest23} still hold. Thus we have
\begin{equation}\label{anyqest}
\bigg\|\bigg(\sum_{j\in\mathbb Z }
\big|F_{3,j-M}-F_{3,j-M-L}\big|^2\bigg)^{\frac{1}{2}}\bigg\|_{p_3}
\leq C|I_{\bf T}|^{1/p_3}\,.
\end{equation}
Notice that the Fourier supports of the $F_{1,j+k}$'s are
essentially disjoint. We thus have
$$
\bigg\|\sup_j\bigg|\sum_{k=0}^{m'(j)}F_{1, j+k}(x)
\bigg|\bigg\|_{p_1}\leq C\bigg \|\sum_j F_{1,j}\bigg\|_{p_1}\,.
$$
Clearly,
$$
\bigg\|\sum_j F_{1,j}\bigg\|_2 \leq
\big\|\Delta_1({\bf T})\big\|_2\,.
$$
By Lemma \ref{goodbmo2} and an interpolation, we have that
$$
\big\|\Delta_1({\bf T})\big\|_2\leq C{\rm size}_1^*({\bf T})|I_{\bf T}|^{1/2}\,.
$$
Thus we get
$$
\bigg\|\sum_j F_{1,j}\bigg\|_2\leq C{\rm size}_1^*({\bf T})|I_{\bf T}|^{1/2}\,.
$$
A routine argument, as in Lemma \ref{goodbmo2}, yields
\begin{equation}\label{bmoest111}
\bigg\|\sum_j F_{1,j}\bigg\|_{BMO}\leq C{\rm size}_1^*({\bf T})\,.
\end{equation}
Now by an interpolation, we obtain that
\begin{equation}\label{p1normest}
\bigg\|\sum_j F_{1,j}\bigg\|_{p_1} \leq
C{\rm size}_1^*({\bf T})|I_{\bf T}|^{1/p_1}\,.
\end{equation}
Hence the desired estimate for $\Lambda_{{\bf T}, 2}$ now follows from
(\ref{p1normest}), (\ref{est2term}) and (\ref{anyqest}). Therefore
we obtain Lemma \ref{sizelem}.
\end{proof}
\subsection{Proof of Lemma \ref{ftriest}}
We now prove Lemma \ref{ftriest}. Without loss of generality, we
can assume that ${\bf S}$ is a convex set. Lemma \ref{convexitylem}
then yields that ${\bf S}^{(1)}(\Omega)$ and ${\bf S}^{(2)}(\Omega)$ are
convex. By the definition of convexity, we see that convexity is
preserved both for a maximal tree in a convex set and for the set
that remains after removing a maximal tree from a convex set. Thus,
applying the organization lemma (Lemma \ref{prilem}) to
${\bf S}^{(\kappa)}(\Omega)$ inductively, we decompose
\begin{equation}\label{goodde}
{\bf S}^{(\kappa)}(\Omega)=\bigcup_{\sigma} {\bf S}^{(\kappa)}_\sigma\,,
\end{equation}
where $\kappa \in\{1, 2\}$, $\sigma $ ranges over all possible dyadic numbers,
${\bf S}^{(\kappa)}_\sigma=\cup_{{\bf T}\in\mathcal F_\sigma^{(\kappa)}}{\bf T}$
such that $\mathcal F_\sigma^{(\kappa)} $ is a collection of convex trees
with
\begin{equation}\label{ctest1}
{\rm count}({\bf S}_\sigma^{(\kappa)})\leq C\sigma^{-p}\,,
\end{equation}
and for both $\ell=1$ and $\ell=2$,
\begin{equation}\label{sizekaest}
{\rm size}^*_\ell({\bf S}_\sigma^{(\kappa)})\leq \sigma|F_\ell|^{1/p}\,.
\end{equation}
By Lemma \ref{Lpest} and the definition of ${\bf S}(\Omega)$,
we know that $\sigma \leq 1$ is necessary for
${\bf S}_\sigma^{(\kappa)}$ to be nonempty, and
we can also sharpen the upper bound in
the size estimate for ${\bf S}_\sigma^{(\kappa)}$ to
\begin{equation}\label{sizekaest1}
{\rm size}^*_\ell({\bf S}_\sigma^{(\kappa)})\leq \min\{1, \sigma|F_\ell|^{1/p}\}\,.
\end{equation}
Hence we estimate $\Lambda_{{\bf S}(\Omega)}$ by
$$
\big|\Lambda_{{\bf S}(\Omega)}(f_1, f_2, f_3)\big|\leq \sum_{\kappa=1}^2
\sum_{\sigma \leq 1} \sum_{{\bf T}\in\mathcal F_\sigma^{(\kappa)}}
\big|\Lambda_{\bf T}(f_1, f_2, f_3) \big|\,.
$$
Lemma \ref{sizelem} yields that
$$
\big|\Lambda_{{\bf S}(\Omega)}(f_1, f_2, f_3)\big|\leq \sum_{\kappa=1}^2
\sum_{\sigma \leq 1} \sum_{{\bf T}\in\mathcal F_\sigma^{(\kappa)}}
C\,{\rm size}^*_1({\bf S}_\sigma^{(\kappa)}) {\rm size}^*_2({\bf S}_\sigma^{(\kappa)}) |I_{\bf T}| \,.
$$
Applying (\ref{sizekaest1}) and (\ref{ctest1}), we thus obtain
\begin{equation}\label{finalest}
\big|\Lambda_{{\bf S}(\Omega)}(f_1, f_2, f_3)\big|\leq
C\sum_{\sigma\leq 1}\min\{1, \sigma|F_1|^{1/p}\}
\min\{1, \sigma|F_2|^{1/p}\}\sigma^{-p}\,,
\end{equation}
which clearly implies (\ref{triest1121}).
Therefore we complete the proof of Lemma \ref{ftriest}.
\section{Proof of Theorem \ref{para2est}}\label{proofpara2}
\setcounter{equation}{0}
We now prove Theorem \ref{para2est}.
The uniform estimate from $L^2\times L^2$ to $L^1$
follows immediately from a change of variables and Littlewood-Paley theory,
and (\ref{2large1}) is superfluous there.
Using this simple idea, we obtain the uniform estimate for $p_1, p_2>2$
and $1<r<2$ in Proposition \ref{uniestp2good} in
the case $2^{L_2j+M_2}< 2^{L_1j+M_1}/8$ or $2^{L_1j+M_1}<2^{L_2j+M_2}/8$.
For the general case, we pay a cost of $m$ in the operator norm
in this range of $p_1, p_2, p$ to get Lemma \ref{uniestp2}.
For the case $r<1$, we use some ideas from Section \ref{para2}, and one can
see that technically it is much simpler than what we did in Section \ref{para2}.
We have to assume (\ref{2large1}) and pay a slightly larger cost in the operator norm, namely $2^{\varepsilon m}$ (see Lemma \ref{uniestp11}).
The uniform estimate might be true, but $2^{\varepsilon m}$ for a small $\varepsilon>0$ is good enough for our application.
As we did in Section \ref{para2}, we set up a trilinear form first.
Let us ignore the condition (\ref{2large1}) for a while.
If $2^{L_2j+M_2}< 2^{L_1j+M_1}/8$, let $\omega'_{3,j}=\{\xi:
2^{L_1j+M_1}/8\leq |\xi|\leq 19\cdot 2^{L_1j+M_1}/8\}$ and $\Phi_{3, j}$ be a Schwartz
function whose Fourier transform is a bump function adapted to
$\omega'_{3,j}$ such that $\widehat\Phi_{3,j}(\xi)=1$ for all $2^{L_1j+M_1}/4 \leq
|\xi|\leq 9\cdot 2^{L_1j+M_1}/4$.
If $2^{L_1j+M_1}< 2^{L_2j+M_2}/8$, let $\omega'_{3,j}=\{\xi:
2^{L_2j+M_2}/8\leq |\xi|\leq 19\cdot 2^{L_2j+M_2}/8\}$ and $\Phi_{3, j}$ be a Schwartz
function whose Fourier transform is a bump function adapted to
$\omega'_{3,j}$ such that $\widehat\Phi_{3,j}(\xi)=1$ for all $2^{L_2j+M_2}/4 \leq
|\xi|\leq 9\cdot 2^{L_2j+M_2}/4$.
If $ 2^{L_1j+M_1}/8 \leq 2^{L_2j+M_2}\leq
8\cdot 2^{L_1j+M_1}$, let
$\omega'_{3,j}=\{\xi: |\xi|\leq 18\cdot \max\{2^{L_1j+M_1}, 2^{L_2j+M_2}\}\}$
and $\Phi_{3, j}$ be a Schwartz function whose
Fourier transform is a bump function adapted to $\omega'_{3,j}$ such
that $\widehat\Phi_{3,j}(\xi)=1$ for all $ |\xi| \leq 17\cdot
\max\{2^{L_1j+M_1}, 2^{L_2j+M_2}\}$.
Let $\Phi_{3,j,m}=\Phi_{3,j} $ and
$f_{3, j, m}(x)=f_{3,j,0}(x)=f_3*\Phi_{3,j}(x)$. Define a trilinear form
$\Lambda_{L_1, L_2, M_1, M_2, m}$ by
\begin{equation}\label{defoftriformlam}
\Lambda_{L_1, L_2, M_1, M_2, m}(f_1, f_2, f_3)
=\int \sum_{j\in\mathbb Z}\prod_{\ell=1}^3 f_{\ell, j, m}(x) dx\,.
\end{equation}
Clearly $ \Lambda_{L_1, L_2, M_1, M_2, m} = \int \Pi_{L_1, L_2,
M_1, M_2, m}(f_1, f_2)(x)f_3(x)dx$.
We will prove the following two lemmata.
\begin{lemma}\label{uniestp20}
Let $p_1, p_2> 2$ and $1< r < 2$ be such that $1/p_1+1/p_2=1/r$. Let
$F_1, F_2, F_3$ be measurable sets in $\mathbb R$. There exists a
constant $C$ independent of $F_1, F_2, F_3, f_1, f_2, f_3$, $M_1$,
$M_2$, $m$ such that
\begin{equation}\label{largepest20}
\big|\Lambda_{L_1, L_2, M_1, M_2,m}(f_1, f_2, f_3)\big|\leq
Cm|F_1|^{1/p_1}|F_2|^{1/p_2}|F_3|^{1/r'}\,
\end{equation}
holds for all $f_1\in X(F_1)$, $f_2\in X(F_2)$ and $f_3\in X(F_3)$.
\end{lemma}
\begin{lemma}\label{uniestp110}
Let $\varepsilon$ be any positive number, $1<p<2$ and $F_1, F_2, F_3$ be
measurable sets in $\mathbb R$ such that $|F_3|=1$.
Suppose (\ref{2large1}) holds for all $j$'s. Then
there is a
subset $F_3'\subset F_3$ with $|F'_3|\geq |F_3|/2$ such that for all
$p_1, p_2\geq p$ with $1/p_1+1/p_2\geq 1$, and all functions $f_1\in
X(F_1)$, $f_2\in X(F_2)$, $f_3\in X(F'_3)$, the following inequality
holds.
\begin{equation}\label{psmallest110}
\big|\Lambda_{L_1, L_2, M_1, M_2, m}(f_1, f_2, f_3)\big|\leq C2^{\varepsilon
m}|F_1|^{1/p_1}|F_2|^{1/p_2}
\, ,
\end{equation}
where $C$ is a constant independent of ${\bf S}$, $F_1, F_2, F_3, f_1,
f_2, f_3, M_1, M_2, m$.
\end{lemma}
Theorem \ref{para2est} is a consequence of these two lemmata
by interpolation and duality. We also obtain a corollary of
Lemma \ref{uniestp20} by a simple interpolation.
\begin{corollary}\label{cor91}
Let $p_1, p_2> 2$ and $1< r < 2$ be such that $1/p_1+1/p_2=1/r$.
There exists a
constant $C$ independent of $F_1, F_2, F_3, f_1, f_2, f_3$, $M_1$,
$M_2$, $m$ such that
\begin{equation}\label{largepest200}
\big\|\Pi_{L_1, L_2, M_1, M_2,m}(f_1, f_2)\big\|_r\leq
Cm \|f_1\|_{p_1}\|f_2\|_{p_2}\,
\end{equation}
holds for all $f_1\in L^{p_1}$ and $f_2\in L^{p_2}$.
\end{corollary}
\subsection{Proof of Lemma \ref{uniestp20}}\label{subpl}
For $\ell\in\{1,2,3\}$, let ${\rm Tr}_{\ell,j, m}$ be a translation
function defined by
\begin{equation}\label{defoftr}
{\rm Tr}_{\ell,j,m}(x)=x+{m_{j\ell}}\,,
\end{equation}
where $m_{j\ell}=2^{m-jL_\ell-M_\ell}$ if $\ell\in\{1,2\}$ and
$m_{j3}=0$. Notice that
$f_{\ell,j,m}(x)=f_{\ell,j,0}({\rm Tr}_{\ell,j,m}(x))$. Write $\Lambda_{L_1,
L_2, M_1, M_2, m}$ as
$$
\Lambda_{L_1, L_2, M_1, M_2, m}(f_1, f_2, f_3)= \int_{\mathbb
R}\sum_{j\in\mathbb Z}\prod_{\ell=1}^{3}\sum_{n\in\mathbb Z}
{\bf 1}^*_{j,n}\big({\rm Tr}_{\ell,j,m}(x)\big)f_{\ell,j,
0}\big({\rm Tr}_{\ell,j,m}(x)\big)
dx\,.
$$
For ${\bf S}\subset \mathbb Z(\gamma)\times \mathbb Z$ we define
\begin{equation}\label{deflam}
\Lambda_{{\bf S}, m}(f_1, f_2, f_3)=\int_{\mathbb R}\sum_{j\in\mathbb Z}
\prod_{\ell=1}^{3}\sum_{n\in{\bf S}_j}
F_{\ell,j, n, m}(x) dx\,,
\end{equation}
where ${\bf S}_j=\{n: (j,n)\in {\bf S}\}$ and $F_{\ell,j,n,m}$ is defined by
\begin{equation}\label{defFljnm}
F_{\ell, j, n, m}(x)=
\big( ({\bf 1}^*_{j,n}f_{\ell,j, 0})\circ{\rm Tr}_{\ell,j,m}\big)(x)\,.
\end{equation}
Let $k_{ j\ell}$ be an integer such that $|\omega'_{\ell,j}|\sim 2^{k_{
j\ell}}$. For $s=(j,n)\in{\bf S}$, let $k_s=k_j=\min_\ell{k_{j\ell}}$.
The time interval of $s$ is defined by $I_s=[2^{-k_s}n,
2^{-k_s}(n+1)]$. We can then define a tree in ${\bf S}$ as in Section
\ref{para2}. To prove Lemma \ref{uniestp20}, it is sufficient to
prove the following lemma.
\begin{lemma}\label{uniestp2}
Let $p_1, p_2> 2$ and $1< r < 2$ be such that $1/p_1+1/p_2=1/r$.
Let $F_1, F_2, F_3$ be measurable sets in $\mathbb R$.
There exists a constant $C$ independent of $F_1, F_2, F_3, f_1, f_2, f_3$,
$M_1$, $M_2$, $m$
such that
\begin{equation}\label{largepest2}
\big|\Lambda_{{\bf S}, m}(f_1, f_2, f_3)\big|\leq Cm|F_1|^{1/p_1}|F_2|^{1/p_2}|F_3|^{1/r'}\,
\end{equation}
holds for all $f_1\in X(F_1)$, $f_2\in X(F_2)$ and $f_3\in X(F_3)$.
\end{lemma}
By scaling invariance, we can assume that $|F_3|=1$.
We partition ${\bf S}$ into two subsets ${\bf S}^{(1)}$ and ${\bf S}^{(2)}$, where
\begin{equation}\label{defS(1)}
{\bf S}^{(1)}= \{(j,n)\in {\bf S}: |\omega'_{2,j}|\leq |\omega'_{1,j}|/10\,\, {\rm or}\,\,
|\omega'_{1,j}|\leq |\omega'_{2,j}|/10\}\,,
\end{equation}
and
\begin{equation}\label{defS(2)}
{\bf S}^{(2)}= {\bf S}\backslash {\bf S}^{(1)}\,.
\end{equation}
We need to modify the definitions of the sizes of trees in ${\bf S}$.
\begin{definition}\label{defofnorm111}
Let $(j,n)\in{\bf S}$ and $\ell\in\{1, 2,3\}$. Define a semi-norm
$\big\|f_\ell\big\|_{j,n}$ by
\begin{equation}\label{defnormm}
\big\|f_{\ell}\big\|_{j,n} =\frac{1}{|I_s|^{1/2}}
\big\|{\bf 1}^{**}_{j,n}f_{\ell, j,0}\big\|_2
+ \frac{1}{|I_s|^{1/2}}\big\|2^{-k_{j\ell}}{\bf 1}^{**}_{j,n}
Df_{\ell, j, 0}\big\|_2\,,
\end{equation}
where $Df_{\ell,j, 0}$ is the derivative of $f_{\ell,j,0}$.
\end{definition}
\begin{definition}
For $\ell\in\{1,2, 3\}$ and a tree ${\bf T}$,
let $(j_{\bf T}, n_{\bf T})$ be the top of the tree ${\bf T}$. And define
\begin{equation}\label{defdelta1111}
\Delta^*_{\ell}({\bf T})(x)=\bigg(\sum_{(j,n)\in{\bf T}}
\big|{\bf 1}^{*}_{j,n}f_{\ell, j,0}(x)\big|^2 \bigg)^{1/2}\,.
\end{equation}
If ${\bf T} $ is a tree in ${\bf S}^{(1)}$, we
define
\begin{equation}\label{size12p2}
{\rm size}_{\ell}({\bf T}) = \frac{1}{|I_{\bf T}|^{1/2}}\big\|\Delta^*_{\ell}({\bf T})
\big\|_2 + \big\|f_\ell\big\|_{j_{\bf T},n_{\bf T}}\,,
\end{equation}
for all $\ell\in\{1,2,3\}$.
If ${\bf T} $ is a tree in ${\bf S}^{(2)}$, define ${\rm size}_\ell({\bf T})$
by (\ref{size12p2}) only for $\ell\in\{1,2\}$. For $\ell=3$,
we define the size by
\begin{equation}\label{size3p2}
{\rm size}_{3}({\bf T}) = \big\|f_3\big\|_{j_{\bf T},n_{\bf T}}\,.
\end{equation}
Let ${\bf P}$ be a subset of ${\bf S}$. Define the $\ell$-${\rm size}^*$
of ${\bf P}$ by
\begin{equation}
{\rm size}^*_\ell({\bf P})=\sup_{{\bf T}: {\bf T}\subset {\bf P}}{\rm size}_\ell({\bf T})\,,
\end{equation}
where ${\bf T}$ ranges over all trees in ${\bf P}$.
\end{definition}
One should notice that for $\Lambda_{{\bf S}^{(1)},m}$ we have a uniform estimate
for $p_1, p_2>2$ and $1<r<2$. We state it as follows.
\begin{proposition}\label{uniestp2good}
Let $p_1, p_2>2$ and $1<r<2$ with $1/p_1+1/p_2=1/r$.
Let $f_1\in L^{p_1}$, $f_2\in L^{p_2}$ and $f_3\in L^{r'}$. Then
\begin{equation}\label{goodp2est}
\big|\Lambda_{{\bf S}^{(1)},m}(f_1, f_2, f_3)\big|
\leq C\|f_1\|_{p_1}\|f_2\|_{p_2}\|f_3\|_{r'}\,,
\end{equation}
where $C$ is independent of $m$, $f_1, f_2, f_3$.
\end{proposition}
\begin{proof}
We do not need time-frequency analysis for this proposition.
The key point is that when $s\in{\bf S}^{(1)}$ the support of the Fourier
transform of $f_{3, j, 0}$ stays away from the origin, so that we can apply
the Littlewood-Paley theorem to the square function generated by the $f_{3, j, 0}$'s.
Clearly $|\Lambda_{{\bf S}^{(1)}, m}|$ is estimated by
$$
\int_{\mathbb R} \sum_j\prod_{\ell=1}^3 \big|f_{\ell, j, 0}({\rm Tr}_{\ell, j, m}(x))\big| dx\,.
$$
By
H\"older inequality, we dominate $|\Lambda_{{\bf S}^{(1)}, m}|$ by
$$
\bigg\| \bigg( \sum_j \big|f_{1,j,0}\circ{\rm Tr}_{1,j,m}\big|^{p_1}\bigg)^{1/p_1}
\bigg\|_{p_1}
\bigg\|\bigg( \sum_j \big|f_{2,j,0}\circ{\rm Tr}_{2,j,m}\big|^{p_2}\bigg)^{1/p_2}
\bigg\|_{p_2}
\bigg\|\bigg( \sum_j \big|f_{3,j,0}\big|^{r'}\bigg)^{1/r'}\bigg\|_{r'}\,.
$$
By a change of variables, it is clear that for $\ell=1, 2$,
$$
\bigg\| \bigg( \sum_j \big|f_{\ell,j,0}\circ{\rm Tr}_{\ell,j,m}\big|^{p_\ell}\bigg)^{1/p_\ell}
\bigg\|_{p_\ell}
= \bigg\| \bigg( \sum_j \big|f_{\ell,j,0}\big|^{p_\ell}\bigg)^{1/p_\ell}
\bigg\|_{p_\ell}\,.
$$
Notice that the elementary inequality
$$
\bigg(\sum_{j}|a_j|^q\bigg)^{1/q} \leq \bigg(\sum_{j}|a_j|^2\bigg)^{1/2}
$$
holds for $q \geq 2$.
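For completeness, this $\ell^2\hookrightarrow\ell^q$ embedding has a one-line verification:
$$
\sum_{j}|a_j|^{q}
=\sum_{j}|a_j|^{q-2}|a_j|^{2}
\leq \Big(\sup_{j}|a_j|\Big)^{q-2}\sum_{j}|a_j|^{2}
\leq \Big(\sum_{j}|a_j|^{2}\Big)^{\frac{q-2}{2}}\sum_{j}|a_j|^{2}
=\Big(\sum_{j}|a_j|^{2}\Big)^{\frac{q}{2}}\,.
$$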
We thus dominate $|\Lambda_{{\bf S}^{(1)}, m}|$ by
$$
\bigg\| \bigg( \sum_j \big|f_{1,j,0}\big|^{2}\bigg)^{1/2}
\bigg\|_{p_1}
\bigg\|\bigg( \sum_j \big|f_{2,j,0}\big|^{2}\bigg)^{1/2}
\bigg\|_{p_2}
\bigg\|\bigg( \sum_j \big|f_{3,j,0}\big|^{2}\bigg)^{1/2}\bigg\|_{r'}\,.
$$
Now the Littlewood-Paley theorem yields the desired estimate
(\ref{goodp2est}). This proves the proposition.
\end{proof}
We now use time-frequency analysis to prove Lemma \ref{uniestp2}.
Although we only need to estimate $\Lambda_{{\bf S}^{(2)}, m}$ due to Proposition
\ref{uniestp2good}, we still write a proof for both of
$\Lambda_{{\bf S}^{(1)}, m}$ and $\Lambda_{{\bf S}^{(2)}, m}$.
We first prove the size estimate for a single tree, that is,
\begin{equation}\label{atreesize}
\big|\Lambda_{{\bf T}, m}(f_1, f_2, f_3)\big|\leq C \prod_{\ell=1}^3
{\rm size}^*_\ell({\bf T})|I_{\bf T}|\,.
\end{equation}
We only prove (\ref{atreesize}) in the case when ${\bf T}$ is a tree in ${\bf S}^{(2)}$,
since the other case is similar.
In this case $2^{k_{j\ell}}\sim 2^{k_j}$ for all $\ell$ in $\{1,2,3\}$.
We thus dominate $|\Lambda_{{\bf T},m}|$ by
$$
\int_{\mathbb R} \sup_{(j,n)\in{\bf T}}\big|
({\bf 1}^{**}_{j,n}f_{3,j,0})\circ{\rm Tr}_{3,j,m}(x)\big|
\prod_{\ell\neq 3} \bigg( \sum_{(j,n)\in{\bf T}}\big|
({\bf 1}^{**}_{j,n}f_{\ell,j,0})\circ{\rm Tr}_{\ell,j,m}(x)\big|^2 \bigg)^{1/2} dx\,.
$$
By the definition of $\Delta^*_\ell$ and H\"older inequality, we estimate
$|\Lambda_{{\bf T},m}|$ by
$$
\sup_{(j,n)\in{\bf T}}\big\|F^*_{3,j, n}\big\|_\infty
\big\|\Delta^*_{1}({\bf T})\big\|_2 \big\|\Delta^*_{2}({\bf T})\big\|_2\,,
$$
where $F^*_{3,j,n}={\bf 1}^{**}_{j,n}f_{3,j,0}$.
Notice that Lemma \ref{goodbmo1} holds for the semi-norm. Thus we have
$$
\big\| F^*_{3,j,n}\big\|_\infty\leq {\rm size}^*_3({{\bf T}})\,.
$$
Clearly the definition of size yields
$$\|\Delta^*_\ell({\bf T})\|_2\leq {\rm size}^*_\ell({{\bf T}})|I_{\bf T}|^{1/2} \,$$
for $\ell\in\{1,2\}$.
Putting all of them together, we obtain (\ref{atreesize}).\\
\begin{lemma}\label{treeout}
Let $\kappa\in\{1,2\}$, ${\bf T}$ be a tree in ${\bf S}^{(\kappa)}$
and ${\bf P}$ be a subset of ${\bf S}^{(\kappa)}$.
Suppose that ${\bf P}\cap{\bf T}=\emptyset$ and ${\bf T}$ is a maximal tree in ${\bf P}\cup
{\bf T}$. Then we have
\begin{equation}\label{treeout1}
\big|\Lambda_{{\bf P}\cup{\bf T}, m}(f_1, f_2, f_3)-\Lambda_{{\bf P}, m}(f_1, f_2, f_3)\big|
\leq Cm\prod_{\ell=1}^3{\rm size}^*_\ell({\bf T}\cup {\bf P})|I_{\bf T}|\,,
\end{equation}
where $C$ is independent of $f_1, f_2, f_3, {\bf P}$, ${\bf T}$.
\end{lemma}
\begin{proof}
Clearly the difference $|\Lambda_{{\bf P}\cup{\bf T},m}-\Lambda_{{\bf P},m}|$ is dominated by
a sum of $C|\Lambda_{{\bf T}, m}|$ and
at most finitely many trilinear forms of the following type
$$
\bigg|\int
\sum_{j\in{\rm scl}({\bf T})}
\bigg(\sum_{n\in {\bf T}_j}F_{\ell_1,j, n, m}(x)\bigg)
\bigg( \sum_{n\in {\bf P}_j}F_{\ell_2,j, n, m}(x)\bigg)
\bigg(\sum_{n\in ({\bf P}\cup{\bf T})_j} F_{\ell_3,j, n, m}(x)\bigg)dx\bigg|\,,
$$
where $(\ell_1, \ell_2, \ell_3)$ is a permutation of $(1,2,3)$.
By (\ref{atreesize}), it is sufficient to show that this trilinear
form can be estimated by the right hand side of (\ref{treeout1}).
We only handle the most difficult case $\ell_1=1, \ell_2=2$.
Other cases are similar. We estimate the trilinear form by
\begin{equation}\label{localI}
\sum_{j\in{\rm scl}({\bf T})}\sum_{I:|I|=2^{-k_j}}
\bigg\|\bigg(\sum_{n\in{\bf T}_j}F_{1,j,n,m}\bigg)
\bigg(\sum_{n\in{\bf P}_j}F_{2,j,n,m}\bigg)
\bigg(\sum_{n\in ({\bf P}\cup{\bf T})_j}F_{3,j,n,m} \bigg)
\bigg\|_{L^{1}(I)} \,.
\end{equation}
There is at least one index $\ell\in\{1,2\}$ satisfying
$k_{j\ell}=k_j$. Without loss of generality, assume $k_{j1}=k_j$.
We have that for any positive integer $N$,
$$
\bigg\|\sum_{n\in{\bf T}_j}F_{1,j,n,m}\bigg\|_{L^{\infty}(I)}
\leq \frac{C_N}{\big(1+ 2^{k_j}{\rm dist}(I(m_{j1}), I_{\bf T})\big)^N}
\big\|{\bf 1}^{**}_{j,n'}f_{1,j,0}\big\|_{\infty}\,,
$$
where $I(m_{j1})=I+m_{j1}$ is the interval obtained by shifting $I$
to the right by $m_{j1}$, and
$n'\in({\bf P}\cup{\bf T})_{j}$ minimizes the distance between $I_{j,n'}$ and $I(m_{j1})$.
Since Lemma \ref{goodbmo1} holds for the semi-norm, we get
$$
\bigg\|\sum_{n\in{\bf T}_j}F_{1,j,n,m}\bigg\|_{L^{\infty}(I)}
\leq \frac{C_N{\rm size}^*_{1}({\bf P}\cup{\bf T})}{\big(1+ 2^{k_j}{\rm dist}(I(m_{j1}), I_{\bf T})\big)^N}\,.
$$
And since ${\bf P}\cap{\bf T}=\emptyset$ and ${\bf T}$ is a maximal tree in ${\bf P}\cup{\bf T}$,
we have
$$
\bigg\|\sum_{n\in{\bf P}_j}F_{2,j,n,m}
\bigg\|_{L^{2}(I)} \leq
\frac{C_N}{\big(1+ 2^{k_j}{\rm dist}(I(m_{j2}), (I_{\bf T})^c)\big)^N}
\big\|{\bf 1}^{**}_{j,n'}f_{2,j,0}\big\|_{2}\,,
$$
which is obviously bounded by
$$
\frac{C_N{\rm size}^*_2({\bf P}\cup{\bf T})|I|^{1/2}}{\big(1+ 2^{k_j}{\rm dist}(I(m_{j2}), (I_{\bf T})^c)\big)^N}
\,.
$$
Similarly, we also have
$$
\bigg\|\sum_{n\in({\bf P}\cup{\bf T})_j}F_{3,j,n,m}
\bigg\|_{L^{2}(I)} \leq C{\rm size}^*_3({\bf P}\cup{\bf T})|I|^{1/2}\,.
$$
Thus we estimate (\ref{localI}) by
$$
\sum_{j\in{\rm scl}({\bf T})}\sum_{I:|I|=2^{-k_j}}
\frac{C_N{\rm size}^*_{1}({\bf P}\cup{\bf T}){\rm size}^*_2({\bf P}\cup{\bf T})
{\rm size}^*_3({\bf P}\cup{\bf T})|I|}{\big(1+ 2^{k_j}{\rm dist}(I(m_{j1}), I_{\bf T})\big)^N
\big(1+ 2^{k_j}{\rm dist}(I(m_{j2}), (I_{\bf T})^c)\big)^N}\,.
$$
Let $j_{\bf T}$ be the index for the top of ${\bf T}$. If $j_{\bf T}\leq j\leq
j_{\bf T}+10m$, there are at most $10m$ different values of $j$. Notice that
if $I(m_{j1})\subset (I_{\bf T})^c$, then we can replace ${\rm dist}(I(m_{j1}),
I_{{\bf T}})$ by ${\rm dist}(I(m_{j1}), \partial I_{{\bf T}})$. Thus, summing
$j$ from $j_{\bf T}$ to $j_{\bf T}+10m$ only, we get that (\ref{localI}) is dominated by
$$
Cm\prod_{\ell=1}^3{\rm size}^*_\ell({\bf P}\cup{\bf T})|I_{\bf T}| \,.
$$
It remains to sum over all $j>j_{\bf T}+10m$.
The main difficulty is the case $I(m_{j1})\nsubseteq (I_{{\bf T}})^c$
and $I(m_{j2})\nsubseteq I_{\bf T}$, because in other cases we gain
$\big(1+2^{k_j}{\rm dist}(I(m_{j\ell}), \partial I_{\bf T})\big)^{-100}$
in the estimate for at least one of $\ell\in\{1,2\}$, which trivializes the estimate. We also know from the definition of $m_{j\ell}$ that
$
{\rm dist}(I(m_{j1}), I(m_{j2})) \leq 2^m|I|\,.
$
For the difficult case to occur, the interval $I$ must satisfy
${\rm dist}(I(m_{j\ell}), \partial I_{\bf T})
\leq 10\cdot 2^m |I|$ for both $\ell=1, 2$. Summing $|I(m_{j\ell})|$
over all such $I$'s gives an upper bound $C 2^m 2^{-k_j}$.
Then summing these upper bounds for all $j>j_{\bf T}+10m$ we get
a bound $C 2^{-8m}|I_{\bf T}|$. Therefore we estimate (\ref{localI})
by $Cm\prod_{\ell=1}^3{\rm size}^*_\ell({\bf P}\cup{\bf T})|I_{\bf T}|$. This proves the lemma.
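To spell out the last summation: assuming, as the nesting of scales within a tree suggests, that $2^{-k_j}\lesssim 2^{-(j-j_{\bf T})}|I_{\bf T}|$ for $j\in{\rm scl}({\bf T})$, we have
$$
\sum_{j>j_{\bf T}+10m} C2^m 2^{-k_j}
\leq C2^m|I_{\bf T}|\sum_{i>10m}2^{-i}
\leq C2^{-9m}|I_{\bf T}|\,,
$$
which is more than enough for the bound $C2^{-8m}|I_{\bf T}|$ used above.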
\end{proof}
Lemma \ref{prilem} still holds for the sizes of trees defined in
Subsection \ref{subpl}. Let $\kappa\in\{1,2\}$.
Applying this organization lemma inductively for ${\bf S}^{(\kappa)}$,
we decompose
\begin{equation}\label{gooddep2}
{\bf S}^{(\kappa)}=\bigcup_{\sigma} {\bf S}^{(\kappa)}_\sigma\,,
\end{equation}
where $\sigma $ ranges over all possible dyadic numbers,
${\bf S}^{(\kappa)}_\sigma=\cup_{{\bf T}\in\mathcal F_\sigma^{(\kappa)}}{\bf T}$
such that $\mathcal F_\sigma^{(\kappa)} $ is a collection of maximal trees
with
\begin{equation}\label{ctestp2}
{\rm count}({\bf S}_\sigma^{(\kappa)})\leq C\sigma^{-2}\,,
\end{equation}
and
\begin{equation}\label{sizekaestp2}
{\rm size}^*_\ell({\bf S}_\sigma^{(\kappa)})\leq \sigma|F_\ell|^{1/2}
\end{equation}
holds for all $\ell\in\{1,2,3\}$.
Notice that Lemma \ref{Lpest} holds for the new sizes of trees defined in Subsection
\ref{subpl}.
We can thus also sharpen the upper bound in
the size estimate for ${\bf S}_\sigma^{(\kappa)}$ by
\begin{equation}\label{sizekaest123p2}
{\rm size}^*_\ell({\bf S}_\sigma^{(\kappa)})\leq \min\{1, \sigma|F_\ell|^{1/2}\}\,.
\end{equation}
Hence by Lemma \ref{treeout} we estimate $\Lambda_{{\bf S}, m}$ by
$$
\sum_{\kappa=1}^2
\sum_{\sigma}\sum_{{\bf T}\in\mathcal F_\sigma^{(\kappa)}}
m\prod_{\ell=1}^3{\rm size}^*_{\ell}({\bf S}_\sigma^{(\kappa)})|I_{\bf T}|
\,.
$$
Applying (\ref{sizekaest123p2}) and (\ref{ctestp2}), we thus obtain
\begin{equation}\label{finalestpp2}
\big|\Lambda_{{\bf S},m}(f_1, f_2, f_3)\big|\leq
Cm\sum_{\sigma }\sigma^{-2} \min\{1, \sigma|F_1|^{1/2}\}
\min\{1, \sigma|F_2|^{1/2}\}\min\{1, \sigma\}\,,
\end{equation}
which clearly implies (\ref{largepest2}).
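A sketch of how the dyadic sum in $\sigma$ is evaluated: for $\sigma\leq\min\{1, |F_1|^{-1/2}, |F_2|^{-1/2}\}$ each minimum is attained by its second entry, so the summand equals
$$
\sigma^{-2}\cdot\sigma|F_1|^{1/2}\cdot\sigma|F_2|^{1/2}\cdot\sigma
=\sigma\,|F_1|^{1/2}|F_2|^{1/2}\,,
$$
which sums geometrically over dyadic $\sigma$; for $\sigma\geq\max\{1, |F_1|^{-1/2}, |F_2|^{-1/2}\}$ the summand is $\sigma^{-2}$, again geometrically summable. Crossing each of the three thresholds lowers the exponent of $\sigma$ by one, and the finitely many intermediate dyadic regimes are handled in the same way.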
Therefore we complete the proof of Lemma {\ref{uniestp2}}.\\
\subsection{A truncated trilinear form}
First by a change of variable, we write $\Lambda_{L_1, L_2, M_1, M_2, m}$
as
\begin{equation}\label{chanlam}
\Lambda_{L_1, L_2, M_1, M_2, m}(f_1, f_2, f_3) =
\int \sum_{j}\prod_{\ell=1}^{3}f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)dx\,,
\end{equation}
where $\tilde{\rm Tr}_{1,j,m}(x)={\rm Tr}_{1,j,m}(x)-m_{j2}$, $\tilde{\rm Tr}_{2,j,m}(x)=x$,
$\tilde{\rm Tr}_{3,j,m}(x)=x-m_{j2}$. \\
To prove Lemma \ref{uniestp110}, we have to set up our
time-frequency decomposition in a slightly different way for
technical reasons. Recall that $\psi$ is a nonnegative Schwartz
function such that $\widehat\psi$ is supported in $[-1/100, 1/100]$ and
$\widehat\psi(0)=1$. And $\psi_k(x)=2^k\psi(2^kx)$. Let $\Omega$ be the
set defined as in (\ref{defOmega}). As before, $k_{j\ell}$ is an
integer such that $2^{k_{j\ell}}\sim |\omega'_{\ell,j}|$ for
$\ell\in\{1,2,3\}$ and $k_j=\min\{k_{j\ell}\}$. For a very small
positive number $\varepsilon$, we define
\begin{equation}\label{defomegajl}
\Omega_{j} = \{x\in\Omega: {\rm dist}(
x, \Omega^c)\geq 2^{\varepsilon^2m}2^{-k_j}\}\,.
\end{equation}
We also define
\begin{equation}\label{defpsijl}
\psi_{j1}=\psi_{j2}=\psi_{j3}= {\bf 1}_{(\Omega_{j})^c}*\psi_{k_{j}}(x)\,.
\end{equation}
Both $\Omega_j$ and $\psi_{j\ell}$ depend on $m$ and $\varepsilon$, but this dependence
is suppressed for notational convenience.
A truncated trilinear form is defined by
\begin{equation}\label{deftruntri}
\Lambda_{\Omega, m}(f_1, f_2, f_3)= \int\sum_{j\in\mathbb Z}
\prod_{\ell=1}^3\psi_{j\ell}(x)
f_{\ell, j, 0}\big(\tilde{\rm Tr}_{\ell, j, m}(x)\big)dx\,.
\end{equation}
Heuristically, $\psi_{j\ell}$ can be regarded as ${\bf 1}_{(\Omega_{j})^c}$,
since it is a smooth approximation of ${\bf 1}_{(\Omega_j)^c}$. In time
space, $\Omega_j$ is an exceptional set which can be removed and handled
well; the technical details can be found in
Section \ref{para2}. In order to get $2^{\varepsilon m}$ instead of $2^m$ in the estimates,
we must remove only a smaller set. The following lemma allows us to do so.
\begin{lemma}\label{trunlem}
Let $F_1$, $F_2$, $F_3$ be measurable sets. Let $F'_3=F_3\backslash\Omega$.
Then
\begin{equation}\label{trundiff}
\big| \big(\Lambda_{L_1, L_2, M_1, M_2, m}-\Lambda_{\Omega, m}\big)(f_1, f_2, f_3)\big| \leq C2^{-100m}\min\big\{1, |F_1|^{1/p}\big\}\min\big\{1,
|F_2|^{1/p}\big\}\,
\end{equation}
holds for all functions $f_1\in X(F_1), f_2\in X(F_2), f_3\in X(F'_3)$,
where $C$ is a constant independent of $L_1, L_2$, $M_1, M_2, m$, $f_1, f_2, f_3$, $F_1, F_2, F_3$.
\end{lemma}
\begin{proof}
The difference $|\Lambda_{L_1, L_2, M_1, M_2, m}-\Lambda_{\Omega,m}|$ is
dominated by
$$
\int\sum_j\big|1-\prod_{\ell=1}^3\psi_{j\ell}(x)\big|\bigg|\prod_{\ell=1}^3f_{\ell,j,0} \big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)\bigg| dx\,.
$$
Clearly,
$$
\big|1-\prod_{\ell=1}^3\psi_{j\ell}(x)\big|
\leq 3\sum_{\ell=1}^3\big|1- \psi_{j\ell}(x)\big|\,.
$$
For $\ell\in\{1,2\}$, by the definition of $\Omega$, we have for any positive integer $N$,
\begin{eqnarray*}
& & \big|f_{\ell, j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big) \big| \\
& \leq & \int \frac{C_N|f_\ell(y)| 2^{k_{j\ell}}}{ \big(
1+2^{k_{j\ell}}|\tilde {\rm Tr}_{\ell,j,m}(x)-y| \big)^N} dy\\
& \leq & C 2^{2m}(1+ 2^{k_{j\ell}}{\rm dist}(\tilde{\rm Tr}_{\ell,j,m}(x), \Omega^c))^2
\min\{1, |F_\ell|^{1/p}\}\,.
\end{eqnarray*}
Since $f_3\in X(F'_3)$, we obtain that
\begin{equation}\label{estf3trun}
\big|f_{3,j,0}\big(\tilde{\rm Tr}_{3,j,m}(x)\big)\big|\leq \frac{C_N}{\big(1+ 2^{k_{j3}} {\rm dist}(
\tilde{\rm Tr}_{3,j,m}(x), \Omega^c)\big)^N}\,.
\end{equation}
Thus by the fact that $2^{k_{j3}}\sim \max \{2^{k_{j\ell}}\}$,
$k_{j2}> k_{j1}+m$ and the definition of $\Omega_j$,
the difference on the left-hand side of (\ref{trundiff}) is estimated by
\begin{eqnarray*}
& &
\sum_j\int\int_{\Omega_{j}}\frac{2^{k_{j}}}{\big(
1+2^{k_{j}}|x - y|\big)^N} dy
\frac{C_N2^{4m}\min\{1, |F_1|^{1/p}\}\min\{1, |F_2|^{1/p}\} } {\big(1+ 2^{k_{j3}} {\rm dist}(\tilde{\rm Tr}_{3,j,m}(x), \Omega^c)\big)^N} dx\\
& \leq & \sum_j\int_{\Omega_{j}}
\frac{C_N2^{4m}\min\{1, |F_1|^{1/p}\}\min\{1, |F_2|^{1/p}\}}
{\big(1+2^{k_{j}}{\rm dist}(y, \Omega^c)\big)^N} dy
\\
& \leq & C2^{-100m}\min\{1, |F_1|^{1/p}\}\min\{1, |F_2|^{1/p}\}\,.
\end{eqnarray*}
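The gain in the last step comes from the defining property (\ref{defomegajl}) of $\Omega_j$: for $y\in\Omega_{j}$ one has $1+2^{k_j}{\rm dist}(y,\Omega^c)\geq 2^{\varepsilon^2 m}$, so that, splitting off half of the decay,
$$
\frac{1}{\big(1+2^{k_j}{\rm dist}(y,\Omega^c)\big)^{N}}
\leq 2^{-\varepsilon^2 m N/2}\cdot
\frac{1}{\big(1+2^{k_j}{\rm dist}(y,\Omega^c)\big)^{N/2}}\,.
$$
Choosing $N\geq 210/\varepsilon^2$ makes $2^{4m}2^{-\varepsilon^2 mN/2}\leq 2^{-101m}$, while the remaining factor retains enough decay for the $y$-integration and the summation in $j$.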
Therefore we finish the proof.
\end{proof}
By this lemma, we only need to consider $\Lambda_{\Omega,m}$.
For ${\bf S}\subset \mathbb Z(\gamma)\times \mathbb Z$ we define
\begin{equation}\label{deftrunlam}
\Lambda_{{\bf S}, \Omega, m}(f_1, f_2, f_3)=\int_{\mathbb R}\sum_{j\in\mathbb Z}
\prod_{\ell=1}^{3}\sum_{n\in{\bf S}_j}\tilde F_{\ell,j, n, m}(x) dx\,,
\end{equation}
where $\tilde F_{\ell,j,n,m}$ is defined by
\begin{equation}\label{tiFljnm}
\tilde F_{\ell,j,n,m}(x)= \psi_{j\ell}(x){\bf 1}^{*}_{j,n}
\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big) \,.
\end{equation}
As before we only need to consider the trilinear form (\ref{deftrunlam}).
To prove Lemma \ref{uniestp110}, it is sufficient to show the
following lemma due to Lemma \ref{trunlem}.
\begin{lemma}\label{uniestp11}
Let $\varepsilon$ be any positive number, $1<p<2$ and $F_1, F_2, F_3$ be
measurable sets in $\mathbb R$ such that $|F_3|=1$. There is a
subset $F_3'\subset F_3$ with $|F'_3|\geq |F_3|/2$ such that for all
$p_1, p_2\geq p$ with $1/p_1+1/p_2\geq 1$, and all functions $f_1\in
X(F_1)$, $f_2\in X(F_2)$, $f_3\in X(F'_3)$, the following inequality
holds.
\begin{equation}\label{psmallest11}
\big|\Lambda_{{\bf S}, \Omega, m}(f_1, f_2, f_3)\big|\leq C2^{\varepsilon
m}|F_1|^{1/p_1}|F_2|^{1/p_2}
\, ,
\end{equation}
where $C$ is a constant independent of ${\bf S}$, $F_1, F_2, F_3, f_1,
f_2, f_3, L_1, L_2, M_1, M_2, m$.
\end{lemma}
\subsection{Preliminary Lemmata}
To prove Lemma \ref{uniestp11},
we first modify the definition of the size of a tree in ${\bf S}$
and establish some preliminary lemmata.
\begin{definition}\label{defofnormp11}
Let $(j,n)\in{\bf S}$ and $\ell\in\{1, 2,3\}$.
Let $\psi_{j\ell}^*$ be the function
\begin{equation}\label{defofpsistar}
\psi^*_{j\ell}(x)= \int_{(\Omega_j)^c}\frac{2^{k_j}}{\big(
1 +2^{2k_j}|x-y|^2 \big)^{200}}dy\,.
\end{equation}
Define a semi-norm
$\big\|f_\ell\big\|_{j,n,m}$ by
\begin{equation}\label{defnormmp11}
\big\|f_{\ell}\big\|_{j,n,m} =\frac{1}{|I_s|^{1/p}}
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ \tilde{\rm Tr}^{-1}_{\ell,j,m}\big)
f_{\ell, j,0}\big\|_p
+ \frac{1}{|I_s|^{1/p}}\big\|2^{-k_{j\ell}}{\bf 1}^{**}_{j,n}
\big(\psi^*_{j\ell}\circ \tilde{\rm Tr}^{-1}_{\ell,j,m}\big)Df_{\ell, j, 0}\big\|_p\,,
\end{equation}
where $\tilde{\rm Tr}^{-1}_{\ell,j,m}$ is the inverse of $\tilde{\rm Tr}_{\ell,j,m}$
and $Df_{\ell,j, 0}$ is the derivative of $f_{\ell,j,0}$.
\end{definition}
\begin{definition}
For $\ell\in\{1,2\}$ and a tree ${\bf T}$,
let $(j_{\bf T}, n_{\bf T})$ be the top of the tree ${\bf T}$. And let
$\Delta^*_{\ell,m}({\bf T})$ be defined by
\begin{equation}\label{defdeltap11m}
\Delta^*_{\ell,m}({\bf T})(x)=\bigg(\sum_{(j,n)\in{\bf T}}
\big|{\bf 1}^{*}_{j,n}(x)
\big(\psi^*_{j\ell}\circ \tilde{\rm Tr}^{-1}_{\ell,j,m}\big)(x)
f_{\ell, j,0}(x)\big|^2 \bigg)^{1/2}\,.
\end{equation}
If ${\bf T} $ is a tree in ${\bf S}$, we
define
\begin{equation}\label{size12p11}
{\rm size}_{\ell,m}({\bf T}) = \frac{1}{|I_{\bf T}|^{1/p}}\big\|\Delta^*_{\ell,m}({\bf T})
\big\|_p + \big\|f_\ell\big\|_{j_{\bf T},n_{\bf T},m}\,,
\end{equation}
for all $\ell\in\{1,2\}$.
Let ${\bf P}$ be a subset of ${\bf S}$. Define the $(\ell,m)$-${\rm size}^*$
of ${\bf T}$ by
\begin{equation}
{\rm size}^*_{\ell,m}({\bf P})=\sup_{{\bf T}: {\bf T}\subset {\bf P}}{\rm size}_{\ell,m}({\bf T})\,,
\end{equation}
where ${\bf T}$ ranges over all trees in ${\bf P}$.
In the definition of $\psi^*_{j\ell}$, we can replace the exponent $200$
by a larger number $2^{100}$ to define a new function. Denote this function
by $\tilde\psi^*_{j\ell}$. If ${\bf 1}^{*}_{j,n}$ and $\psi^*_{j\ell}$ are
replaced by $\tilde{\bf 1}^*_{j,n}$ and $\tilde\psi^*_{j\ell}$ respectively
in the definition $\Delta_{\ell,m}^*({\bf T})$, we denote the corresponding
function by $\Delta_{\ell,m}({\bf T})$.
\end{definition}
\begin{lemma}\label{Lpestm}
Let $1<q<\infty$, $\ell\in\{1,2,3\}$ and ${\bf T}$ be a tree in ${\bf S}$.
Then
\begin{equation}\label{Lpest1m}
\big\|\Delta^*_{\ell,m}({\bf T})\big\|_{q}\leq C \inf_{x\in
I_{\bf T}}M_q(Mf_\ell)(x)|I_{\bf T}|^{1/q}\,,
\end{equation}
\begin{equation}\label{Lpest2m}
{\rm size}_{\ell,m}({\bf T})\leq C \min\{2^{\beta_\ell m}|F_\ell|^{1/p},
\inf_{x\in I_{\bf T}}M_p(Mf_\ell)(x)\}\,,
\end{equation}
where $\beta_\ell=1$ if $\ell=1$, $\beta_\ell=\varepsilon^2$ if $\ell=2$, and
$C$ is a constant independent of $f_\ell, {\bf T}$, ${\bf S}$, $L_1$, $L_2$,
$M_1, M_2$.
\end{lemma}
\begin{proof}
Repeating arguments similar to those in the proofs of (\ref{Lpest1}) and (\ref{Lpest2}),
we easily obtain (\ref{Lpest1m}) and part of (\ref{Lpest2m}). The only thing
we need to prove is
\begin{equation}\label{Lpest3m}
{\rm size}_{\ell,m}({\bf T})\leq C 2^{\beta_\ell m}|F_\ell|^{1/p}\,.
\end{equation}
Assume $2^{\beta_\ell m+10}I_{\bf T}\subset \Omega$, otherwise (\ref{Lpest3m})
follows by the upper bound $\inf_{x\in I_{\bf T}}M_p(Mf_\ell)(x)$. Let
${\bf T}_{L}$ be a collection of all $s=(j,n)\in{\bf T}$ such that
$ 2^{L}I_s\subset \Omega$ but $2^{L+1}I_s\nsubseteq \Omega$.
Then
$$ {\bf T}= \bigcup_{L= [\beta_\ell m+10]}^\infty {\bf T}_{L}\,. $$
Let ${\mathbb J}_L$ be the set of all time intervals $I_s$'s for $s\in{\bf T}_L$.
Clearly, ${\mathbb J}_L$ is a set of disjoint intervals and
$\sum_{J\in{\mathbb J}_L}|J|\leq \min\{|I_{\bf T}|, 1\}$. Thus it is sufficient
to show that for any $J\in {\mathbb J}_L$ and any $(j,n)\in{\bf T}$ such that
$I_s=J$,
\begin{equation}\label{localLpest3m}
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)\tilde
f_{\ell,j,0}\big\|_p^p \leq C_N\big( \inf_{x\in J}M_p(Mf_\ell)(x)\big)^{p} L^{-N} |J|\,
\end{equation}
holds for a large integer $N$, where
$\tilde f_{\ell,j,0}$ is $f_{\ell,j,0}$ or $2^{-k_{j\ell}}Df_{\ell,j,0}$,
since the desired estimate follows by summing up all $L$'s and $J$'s.
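Indeed, granting (\ref{localLpest3m}), the double sum converges: since the intervals in each ${\mathbb J}_L$ are disjoint,
$$
\sum_{L\geq[\beta_\ell m+10]}\,\sum_{J\in{\mathbb J}_L} L^{-N}|J|
\;\leq\;\min\{|I_{\bf T}|,1\}\sum_{L\geq[\beta_\ell m+10]} L^{-N}
\;\leq\; C_N\min\{|I_{\bf T}|,1\}\,.
$$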
By the definition of $\psi^*_{j\ell}$, we have
$$
\big|{\bf 1}^{**}_{j,n}(x) \psi^*_{j\ell}\circ \tilde{\rm Tr}^{-1}_{\ell,j,m}(x)\big|\leq \frac{C}{ \big(1+2^{k_j}{\rm dist}(x, J)\big)^{200}
\big( 1+ 2^{k_j}{\rm dist}\big( \tilde{\rm Tr}^{-1}_{\ell,j,m}(x), (\Omega_j)^c\big) \big)^{200}}\,,
$$
which is clearly dominated by
$$
\frac{C}{ \big(1+2^{k_j}{\rm dist}(x, J)\big)^{100}
\big( 1+ 2^{k_j}{\rm dist}\big( J_{j,m}, (\Omega_j)^c\big) \big)^{100}}\,,
$$
where $J_{j,m}$ is the interval $\{\tilde{\rm Tr}_{\ell,j,m}(x): x\in J\}$. Since
$L\geq \beta_\ell m +9$, by the definition of $\tilde{\rm Tr}_{\ell,j,m}$ we thus dominate $\big|{\bf 1}^{**}_{j,n}( \psi^*_{j\ell}\circ \tilde{\rm Tr}^{-1}_{\ell,j,m})\big|$ by
$$
\frac{C}{ \big(1+2^{k_j}{\rm dist}(x, J)\big)^{100}
\big( 1+ 2^{k_j}{\rm dist}\big( J, (\Omega)^c\big) \big)^{100}}\,.
$$
Thus we have
$$
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)\tilde
f_{\ell,j,0}\big\|_p^p \leq C
\big( \inf_{x\in J}M_p(Mf_\ell)(x)\big)^{p} L^{-100p} |J|\,,
$$
which yields (\ref{localLpest3m}). Therefore we finish the proof.
\end{proof}
\begin{lemma}\label{goodbmo1p11}
Suppose that $s=(j,n)\in{\bf S}$.
If $2^{k_{j\ell}}\sim 2^{k_j}$, then
\begin{equation}\label{smallboundp11}
\big\|{\bf 1}^{**}_{j,n} \big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)f_{\ell,j,0}\big\|_\infty \leq
C\big\|f_\ell\big\|_{j,n,m}
\end{equation}
holds for $\ell\in\{1,2,3\}$, where $C$ is a constant independent of
$s, f_\ell, m$, $L_1, L_2, M_1, M_2$.
\end{lemma}
\begin{proof}
Let $\mu=\big\|f_\ell\big\|_{j,n,m}$.
By the definition of the semi-norm, we have
\begin{equation}\label{smalldelta2p11}
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j, 0}\big\|_p +
\big\|{|I_{s}|}{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)
Df_{\ell,j, 0} \big\|_p \leq \mu |I_s|^{1/p}\,.
\end{equation}
First we prove the BMO estimate for the function, that is
\begin{equation}\label{smallbmop11}
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j,0}\big\|_{BMO} \leq C\mu\,.
\end{equation}
If $|I_s|\leq |J|$, by (\ref{smalldelta2p11}) we
have
\begin{eqnarray*}
& & \inf_c\int_J\big| {\bf 1}^{**}_{j,n}(x)\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)(x)f_{\ell,j,0}(x)-c\big| dx \\
& \leq & \big\|{\bf 1}^{**}_{j,n}
\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)
f_{\ell,j,0} \big\|_p|J|^{1-\frac{1}{p}}
\,\leq \, \mu |I_s|^{\frac{1}{p}}|J|^{1-\frac{1}{p}}
\, \leq \, \mu |J|\,.
\end{eqnarray*}
If $|I_s|\geq |J|$, by (\ref{smalldelta2p11}) we obtain that
\begin{eqnarray*}
& & \inf_c\int_J\big| {\bf 1}^{**}_{j,n}(x)
\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)(x) f_{\ell,j,n_\ell}(x)-c\big| dx\\
& \leq & |J|\int_J\bigg| \bigg({\bf 1}^{**}_{j,n}
\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)
f_{\ell,j,n_\ell}\bigg)' (x)\bigg| dx\\
& \leq & C|J||I_s|^{-1}
\int_J \big|{\bf 1}^{**}_{j,n}(x) \big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)(x)f_{\ell,j,n_\ell}(x) \big|dx \\
& & + |J|\int_J\big| {\bf 1}^{**}_{j,n}(x)
\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)(x)
Df_{\ell,j,n_\ell}(x) \big|dx\\
& \leq & C|J||I_s|^{-1}\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)f_{\ell,j,n_\ell}\big\|_p|J|^{1-\frac{1}{p}} + |J|\big\| {\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)Df_{\ell,j,n_\ell}\big\|_p
|J|^{1-\frac{1}{p}}\\
&\leq & C\mu |J|^{2-\frac{1}{p}} |I_s|^{\frac{1}{p}-1}\,\leq\, C\mu|J|\,.
\end{eqnarray*}
Thus we get the BMO estimate (\ref{smallbmop11}). Interpolating (\ref{smallbmop11})
and (\ref{smalldelta2p11}), we have for any $p\leq q <\infty$,
$$
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big)f_{\ell,j,n_\ell}\big\|_q\leq C\mu |I_s|^{1/q}\,.
$$
Notice that integration by parts and H\"older's inequality yield that
$$
\big\|{\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j,n_\ell}\big\|_\infty \leq
\big\| {\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j,n_\ell} \big\|_{p'}^{1/2}
\big\| \big( {\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j,n_\ell} \big)' \big\|_p^{1/2},
$$
where $1/p+1/p'=1$. Hence the desired estimate (\ref{smallboundp11})
follows by (\ref{smalldelta2p11}) and the $L^{p'}$ estimate for the functions.
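To spell out this last step, write $g={\bf 1}^{**}_{j,n}\big(\psi^*_{j\ell}\circ\tilde{\rm Tr}^{-1}_{\ell,j,m}\big) f_{\ell,j,n_\ell}$ for short; since $g$ decays at infinity, for every $x$,
$$
|g(x)|^2 \leq 2\int_{-\infty}^x |g(t)|\,|g'(t)|\,dt
\leq 2\big\|g\big\|_{p'}\big\|g'\big\|_{p}
$$
by H\"older's inequality, and the two factors on the right are bounded by $C\mu|I_s|^{1/p'}$ and $C\mu|I_s|^{-1/p'}$ respectively, giving (\ref{smallboundp11}).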
\end{proof}
\begin{lemma}\label{goodbmo2p11}
For any tree ${\bf T} $ in ${\bf S}$,
let
\begin{equation}\label{tideltalm}
\tilde\Delta_{\ell,m}({\bf T})(x)=\bigg(\sum_{(j,n)\in {\bf T}}
\big| \tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)\big|^2\bigg)^{1/2}\,.
\end{equation}
Then for $\ell=1$ we have
\begin{equation}\label{smallBMOp110}
\big\| \Delta_{\ell,m}({\bf T}) \big\|_{BMO}\leq C {\rm size}^*_\ell({\bf T}) \,,
\end{equation}
\begin{equation}\label{smallBMOp11}
\big\| \tilde\Delta_{\ell,m}({\bf T}) \big\|_{BMO} \leq C m\,{\rm size}^*_\ell({\bf T}) \,,
\end{equation}
\begin{equation}\label{Lqestp11}
\big\| \tilde\Delta_{\ell,m}({\bf T}) \big\|_q\leq
C m^{1-2/q}{\rm size}^*_\ell({\bf T})|I_{\bf T}|^{1/q} \,,
\end{equation}
where $q\geq 2$ and $C$ is a constant independent of $ {\bf T}, {\bf S}, L_1, L_2, M_1, M_2,
f_\ell, n_\ell$.
\end{lemma}
\begin{proof}
(\ref{smallBMOp110}) can be obtained in a routine way, as in the proof of
Lemma \ref{goodbmo2}; we omit the details. We need only prove (\ref{smallBMOp11}), since (\ref{Lqestp11}) is a simple consequence of (\ref{smallBMOp110}), (\ref{smallBMOp11}) and an interpolation argument.
Clearly by a change of variables $\big\|\Delta_{\ell,m}({\bf T}) \big\|_2
= \big\|\tilde\Delta_{\ell,m}({\bf T}) \big\|_2$.
Thus (\ref{smallBMOp110}) and an interpolation yield
\begin{equation}\label{L2estmp11}
\big\|\tilde\Delta_{\ell,m}({\bf T}) \big\|_2\leq
C {\rm size}^*_\ell({\bf T})|I_{\bf T}|^{1/2} \,.
\end{equation}
Let $\mu={\rm size}^*_\ell({\bf T})$.
Let $J$ be a dyadic interval and ${\bf T}_J=\{s\in{\bf T}: I_s\subseteq 3J\}$.
We then dominate $\inf_c\int_J\big|\tilde\Delta_{\ell,m}({\bf T})(x) - c\big|dx$ by a sum of
the following three parts.
$$
\int_J \bigg(\sum_{s\in {\bf T}_J} \big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big) \tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\big|^2\bigg)^{1/2} dx\,,
$$
$$
\int_J \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big|
\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\big|^2\bigg)^{1/2} dx\,,
$$
and
$$
\inf_c\int_J \bigg|\bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\big|^2\bigg)^{1/2}-c\bigg| dx\,.
$$
${\bf T}_J$ can be decomposed into a union of trees ${\bf T}_{J,k}$
such that the time intervals $I_{{\bf T}_{J,k}}$'s are disjoint
and all of them are contained in $3J$. Using the Cauchy--Schwarz inequality, the first part is estimated by
$$
\big( \sum_{k} \big\|\tilde\Delta_{\ell,m}({\bf T}_{J,k}) \big\|_2^2 \big)^{1/2}
|J|^{1/2}\,.
$$
Applying (\ref{L2estmp11}), we dominate the first part by $C\mu |J|$.
Since $p\leq 2$ we estimate the second part by
\begin{eqnarray*}
& & \bigg\| \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big|(\tilde{\bf 1}^{*}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m})
\tilde\psi^*_{j\ell}
(f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m} )\big|^2\bigg)^{1/2} \bigg\|_{L^p(J)}|J|^{1-\frac{1}{p}}\\
& \leq & \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}} \big\| (\tilde{\bf 1}^{*}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m})
\tilde\psi^*_{j\ell} (f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m})
\big\|^p_{L^p(J)}\bigg)^{1/p}
|J|^{1-\frac{1}{p}}\\
& \leq & \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}}
\frac{ C
\big\| ({\bf 1}^{**}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m})
\psi^*_{j\ell} (f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m} ) \big\|^p_p }
{\big( 1+ |I_s|^{-1}{\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))\big)^{100}}\bigg)^{1/p}|J|^{1-\frac{1}{p}}\\
&\leq & \mu \bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|\leq |J|}}
\frac{ C |I_s| }
{\big( 1+ |I_s|^{-1}{\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))\big)^{100}} \bigg)^{1/p} |J|^{1-\frac{1}{p}}\,,
\end{eqnarray*}
where $\tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s)$ is the interval
$\{\tilde{\rm Tr}^{-1}_{\ell,j,m}(x): x\in I_s\}$. Observe that
if $|I_s|\leq 2^{-m-10}|J|$ and $s\in{\bf T}\backslash{\bf T}_J$, then
${\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))\sim {\rm dist}(J, I_s)$.
Thus summing for all $s$ in this case, we get the desired estimate $C\mu|J|$.
In the remaining case, there are only $10m$ different scales for
$|I_s|$'s since $s$'s satisfy $2^{-m-10}|J|< |I_s|\leq |J|$.
The worst situation is that when $\tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s)
\cap J\neq\emptyset$, because otherwise ${\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))$ can be replaced by
${\rm dist}(\partial J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))$ and thus
the desired estimate follows. But in this situation,
$\tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s)$ must be a subset of $3J$ since
$|I_s|\leq |J|$. For all $\tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s) \subset 3J$ with a fixed scale, the sum of $|I_s|$'s is no more than $3|J|$. Summing for at most
$10m$ different scales, we thus get the upper bound
$Cm\mu|J|$. Hence the second part is dominated by $Cm\mu|J|$.
The third part is estimated by
\begin{eqnarray*}
& & \bigg(\inf_c\int_J \bigg|\bigg(\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}
\big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\big|^2\bigg)^{1/2}-c\bigg|^2dx\bigg)^{1/2}|J|^{1/2}\\
& \leq &\bigg(\inf_c\int_J \bigg|\sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x) \big)\big|^2-c\bigg|dx\bigg)^{1/2}|J|^{1/2}\\
& \leq & C\bigg(\int_J \sum_{\substack{s\in {\bf T}\backslash{\bf T}_J\\|I_s|>|J|}} \bigg|
\bigg(\big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x) \big)\big|^2\bigg)'\bigg|dx \bigg)^{1/2}|J|\,,
\end{eqnarray*}
which is dominated by a sum of the following two terms,
$$
R_1= C\bigg(\int_J \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}
|I_s|^{-1}\big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x) \big)
\big|^2 dx \bigg)^{1/2}|J|\,,
$$
and
$$
R_2= C\bigg(\int_J \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\big|\tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x)f_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x) \big)\big|
\big| G_{\ell,j,m}(x)
\big| dx \bigg)^{1/2}|J|\,,
$$
where $G_{\ell,j,m}$ is the function defined by
$$
G_{\ell,j,m}(x)= \tilde{\bf 1}^{*}_{j,n}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)
\tilde\psi^{*}_{j\ell}(x) Df_{\ell,j,0}\big(\tilde{\rm Tr}_{\ell,j,m}(x)\big)\,.
$$
By Lemma \ref{goodbmo1p11}, we see that for any
$q\geq p$,
$$
\big\|({\bf 1}^{**}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m} )\psi^{*}_{j\ell}
(f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m} )\big\|_q\leq
C\mu|I_s|^{1/q}\,.
$$
Thus, by H\"older inequality, the first term $R_1$ is estimate by
\begin{eqnarray*}
& & C\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-1}
\big\|({\bf 1}^{**}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m} )\psi^{*}_{j\ell}
(f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m} ) \big\|_4^2 |J|^{1/2}
}{\big(1+|I_s|^{-1}{\rm
dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s))\big)^{100}}
\bigg)^{1/2}|J| \\
& \leq & C\mu\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-1/2}|J|^{1/2}}
{\big(1+|I_s|^{-1}{\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s) )\big)^{100}}\bigg)^{1/2}|J| \,\,\leq \, C\mu |J|\,.
\end{eqnarray*}
By the fact that $2^{k_{j\ell}}\sim 2^{k_j}$
when $\ell=1$ and the definition of the semi-norm, it is clear that
\begin{equation}
\|G_{\ell,j,m}\|_p\leq \|f_\ell\|_{j,n,m}|I_s|^{1/p-1}\,.
\end{equation}
Thus the second term $R_2$ is estimated by
\begin{eqnarray*}
& &C\bigg(\sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\big\|
\big(\tilde{\bf 1}^{*}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m}\big)
\tilde\psi^{*}_{j\ell} \big(f_{\ell,j,0}\circ\tilde{\rm Tr}_{\ell,j,m}\big)
\big\|_{L^{p'}(J)} \big\|G_{\ell,j,m}\big\|_p\bigg)^{1/2}|J|\\
& \leq &C\bigg( \mu \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{\frac{1}{p}-1}
\big\| ({\bf 1}^{**}_{j,n}\circ\tilde{\rm Tr}_{\ell,j,m} )\psi^{*}_{j\ell}
(f_{\ell,j,n_\ell}\circ\tilde{\rm Tr}_{\ell,j,m} )
\big\|_{p'+1}|J|^{\frac{1}{p'(p'+1)}} }
{\big(1+|I_s|^{-1}{\rm dist}(J, \tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s) )\big)^{100}}
\bigg)^{1/2}|J|\\
& \leq & C\mu\bigg( \sum_{\substack{s\in
{\bf T}\backslash{\bf T}_J\\|I_s|>|J|}}\frac{|I_s|^{-\frac{1}{p'(p'+1)}}
|J|^{\frac{1}{p'(p'+1)}} } {\big(1+|I_s|^{-1}{\rm dist}(J,
\tilde{\rm Tr}^{-1}_{\ell,j,m}(I_s) )\big)^{100}} \bigg)^{1/2}|J|\,\,\leq \,\, C\mu|J|\,.
\end{eqnarray*}
This completes the proof of (\ref{smallBMOp11}).
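For completeness, (\ref{Lqestp11}) now follows: for $q\geq 2$, the standard interpolation between $L^2$ and BMO together with (\ref{L2estmp11}) and (\ref{smallBMOp11}) gives
$$
\big\|\tilde\Delta_{\ell,m}({\bf T})\big\|_q
\leq C\big\|\tilde\Delta_{\ell,m}({\bf T})\big\|_2^{2/q}
\big\|\tilde\Delta_{\ell,m}({\bf T})\big\|_{BMO}^{1-2/q}
\leq C\big(\mu|I_{\bf T}|^{1/2}\big)^{2/q}\big(m\mu\big)^{1-2/q}
= Cm^{1-2/q}\mu|I_{\bf T}|^{1/q}\,.
$$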
\end{proof}
\begin{lemma}\label{treeoutp11}
Let ${\bf T}$ be a tree in ${\bf S}$ and ${\bf P}$ be a subset of ${\bf S}$.
Suppose that ${\bf P}\cap{\bf T}=\emptyset$ and ${\bf T}$ is a maximal tree in ${\bf P}\cup
{\bf T}$. Then we have
\begin{equation}\label{treeout1p11}
\big|\Lambda_{{\bf P}\cup{\bf T},\Omega, m}(f_1, f_2, f_3)-\Lambda_{{\bf P}, \Omega, m}(f_1, f_2, f_3)\big|
\leq \big|\Lambda_{{\bf T},\Omega, m}(f_1, f_2, f_3)\big|+Cm\prod_{\ell=1}^2{\rm size}^*_\ell({\bf T}\cup {\bf P})|I_{\bf T}|\,,
\end{equation}
where $C$ is independent of $f_1, f_2, f_3, L_1, L_2, M_1, M_2, {\bf P}$, ${\bf T}$.
\end{lemma}
The proof is similar to that of Lemma \ref{treeout}. We omit
the details and leave them as an exercise for the reader.
\subsection{Proof of Lemma \ref{uniestp11}}
It is easy to prove a size estimate for the trilinear form on a
single tree, that is, for any tree ${\bf T}$,
\begin{equation}\label{sizeestp11}
\big|\Lambda_{{\bf T}, \Omega, m}(f_1, f_2, f_3) \big|\leq Cm^{2/p-1}
\prod_{\ell=1}^2
{\rm size}_\ell^*({\bf T})|I_{\bf T}|\,,
\end{equation}
where $C$ is independent of $L_1, L_2, M_1, M_2, m, f_1, f_2, f_3,
{\bf T}$.
In fact, by H\"older inequality, we estimate $|\Lambda_{{\bf T}, \Omega,
m}|$ by
$$
\big\|\tilde\Delta_{1,m}({\bf T})\big\|_{p'}\big\|\Delta^*_{2,m}({\bf T})\big\|_{p}\,.
$$
By (\ref{Lqestp11}) and the definition of size, we obtain
(\ref{sizeestp11}) immediately.
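The exponent of $m$ can be checked directly: (\ref{Lqestp11}) is applied with $q=p'$, and
$$
1-\frac{2}{p'} \;=\; 1-2\Big(1-\frac{1}{p}\Big) \;=\; \frac{2}{p}-1\,,
$$
which is exactly the power of $m$ appearing in (\ref{sizeestp11}).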
Lemma \ref{prilem} still holds for our new sizes of trees and
${\bf S}$. Applying this organization
lemma inductively for ${\bf S}$, we decompose
\begin{equation}\label{gooddep11}
{\bf S}=\bigcup_{\sigma} {\bf S}_\sigma\,,
\end{equation}
where $\sigma $ ranges over all possible dyadic numbers,
${\bf S}_\sigma=\cup_{{\bf T}\in\mathcal F_\sigma}{\bf T}$ such that $\mathcal F_\sigma $
is a collection of maximal trees with
\begin{equation}\label{ctestp11}
{\rm count}({\bf S}_\sigma)\leq C\sigma^{-p}\,,
\end{equation}
and
\begin{equation}\label{sizekaestp11}
{\rm size}^*_\ell({\bf S}_\sigma)\leq \sigma|F_\ell|^{1/p}
\end{equation}
holds for all $\ell\in\{1,2\}$.
By (\ref{Lpest2m}), the upper bound in the size
estimate for ${\bf S}_\sigma$ can be sharpened to
\begin{equation}\label{sizekaest123p11}
{\rm size}^*_\ell({\bf S}_\sigma)\leq \min\{1, 2^{\beta_\ell m}|F_\ell|^{1/p},
\sigma|F_\ell|^{1/p} \}\,.
\end{equation}
Hence by Lemma \ref{treeoutp11} and (\ref{sizeestp11}) we estimate
$\Lambda_{{\bf S},\Omega, m}$ by
$$
\sum_{\sigma}\sum_{{\bf T}\in\mathcal F_\sigma}
m\prod_{\ell=1}^2{\rm size}^*_{\ell}({\bf S}_\sigma)|I_{\bf T}|
\,.
$$
Applying (\ref{sizekaest123p11}) and (\ref{ctestp11}), we thus
dominate $\big|\Lambda_{{\bf S},\Omega, m}(f_1, f_2, f_3)\big|$ by
\begin{equation}\label{finalestpp11}
Cm\sum_{\sigma }\sigma^{-p} \min\{1, 2^m|F_1|^{1/p}, \sigma|F_1|^{1/p}\}
\min\{1, 2^{\varepsilon^2 m}|F_2|^{1/p}, \sigma|F_2|^{1/p}\}\,,
\end{equation}
which clearly implies (\ref{psmallest11}).
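A sketch of the final summation: the factor $m$ is dominated by $C_\varepsilon 2^{\varepsilon m/2}$, and for dyadic $\sigma\leq\min\{|F_1|^{-1/p},|F_2|^{-1/p}\}$ the summand in (\ref{finalestpp11}) is at most
$$
\sigma^{-p}\cdot\sigma|F_1|^{1/p}\cdot\sigma|F_2|^{1/p}
=\sigma^{2-p}\,|F_1|^{1/p}|F_2|^{1/p}\,,
$$
which sums geometrically since $p<2$, while for $\sigma$ above all thresholds the summand is at most $\sigma^{-p}$, again summable; the finitely many intermediate regimes, together with the caps $2^m|F_1|^{1/p}$ and $2^{\varepsilon^2 m}|F_2|^{1/p}$, produce the exponents $1/p_1, 1/p_2$ and the remaining power $2^{\varepsilon m}$.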
Therefore we complete the proof of Lemma {\ref{uniestp11}}.\\
\section{Introduction}
\subsection{Stable matching in ${\mathbb R}^d$}
Let $S$ be a set of points in ${\mathbb R}^d$,
and let $M$ be a matching of $S$.
For each $x\in S$, write
$d_M(x)$ for the distance of $x$ from its partner
in the matching $M$, with $d_M(x)=\infty$ if $x$ is
unmatched by $M$.
A pair of distinct points $x,y\in S$
is an \textbf{unstable pair} for the matching $M$
if $|x-y|<\min(d_M(x), d_M(y))$.
If $M$ has no unstable pairs,
then it is said to be
a \textbf{stable matching}
(as introduced by Gale and Shapley \cite{GaleShapley}).
We can interpret this definition by
saying that each point would like to find a partner
as near as possible to itself (and would prefer any partner
rather than remaining unmatched), and that a matching $M$ is stable
if there is no pair of points that would both prefer to be matched
to each other over their situation in $M$.
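For a finite point set in general position (all pairwise distances distinct), the stable matching can be computed by repeatedly matching the globally closest remaining pair, which is necessarily mutually nearest. The following sketch (function names are ours, not from the paper) illustrates the definition:

```python
import itertools
import math

def stable_matching(points):
    """Greedy construction: repeatedly match the globally closest
    remaining pair. For points with distinct pairwise distances
    this yields the unique stable matching."""
    live = set(range(len(points)))
    match = {}
    while len(live) >= 2:
        # The closest remaining pair is mutually nearest, so any
        # stable matching must pair them together.
        i, j = min(itertools.combinations(live, 2),
                   key=lambda p: math.dist(points[p[0]], points[p[1]]))
        match[i], match[j] = j, i
        live -= {i, j}
    return match

def is_stable(points, match):
    """Check the defining condition: no unstable pair exists."""
    def d_M(i):
        return math.dist(points[i], points[match[i]]) if i in match else math.inf
    return all(math.dist(points[i], points[j]) >= min(d_M(i), d_M(j))
               for i, j in itertools.combinations(range(len(points)), 2))
```

With an odd number of points exactly one point is left unmatched; the matching is still stable because no matched point prefers the leftover point to its own partner.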
Holroyd, Pemantle, Peres and Schramm \cite{HPPS}
studied stable matching for the points of a
homogeneous
Poisson process. They showed
that with probability 1, there exists
a unique stable matching, under which every point is matched.
Let the random variable $X$ represent the distance of
a typical point of the process to its partner in this stable matching. Then Theorem 5 of \cite{HPPS} says that ${\mathbb E} X^d=\infty$,
but $\P(X>r)\leq Cr^{-d}$ for some constant $C=C(d)<\infty$.
Now suppose that the points of the process are of two types;
each point is independently colored blue with probability
$p \in(0,1)$ and red with probability $1-p$.
Restrict to matchings in which a red point and a blue
point may be matched, but two points of the same color may
not be matched.
Correspondingly
the definition of unstable pair is restricted to
pairs consisting of one red point and one blue point; the definition of a
stable matching is otherwise unchanged.
Again, it is shown
in \cite{HPPS} that with probability 1 there exists a unique stable matching.
If $p=1/2$ (so that the model is symmetric between red and blue)
then with probability 1, every point is matched;
it is shown that the distribution of the distance
from a typical point to its partner has a polynomial tail
(although for $d\geq 2$ there is a gap between the upper and lower
bounds on the exponent). On the other hand, suppose $p<1/2$. Then
with probability 1, all blue points are matched, and a positive
density of red points remain unmatched.
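In the two-color setting the same greedy construction applies, restricted to red-blue pairs; a minimal sketch (our own function names), in which leftover majority-color points simply stay unmatched:

```python
import math

def two_color_stable_matching(reds, blues):
    """Match the globally closest red-blue pair, remove it, repeat.
    With distinct red-blue distances this gives the unique stable
    matching of the two-color model."""
    R, B = set(range(len(reds))), set(range(len(blues)))
    match = {}
    while R and B:
        i, j = min(((i, j) for i in R for j in B),
                   key=lambda p: math.dist(reds[p[0]], blues[p[1]]))
        match[('r', i)] = ('b', j)
        match[('b', j)] = ('r', i)
        R.discard(i)
        B.discard(j)
    return match
```

For example, with red points at $0$ and $2$ and a single blue point at $0.9$ on the line, the blue point pairs with the nearer red point and the other red point remains unmatched.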
\begin{figure}
\begin{center}
\includegraphics[width=0.68\textwidth]
{match-RR-RB-2000R-1000B.pdf}
\\[1ex]
\includegraphics[width=0.68\textwidth]
{match-RR-RB-2500R-500B.pdf}
\caption{
\label{fig:asymm}
The two-color asymmetric model
(with red-red and red-blue matches allowed)
for uniformly distributed points in a two-dimensional torus.
Unmatched points are shown larger. \emph{Top:} 2000 red and 1000 blue points. \emph{Bottom:} 2500 red and 500 blue points.
}
\end{center}
\end{figure}
For the models above, the question of whether
the stable matching is perfect (i.e.\ whether every point is matched)
is easy to answer using arguments involving translation invariance,
ergodicity and mass transport (although many interesting questions
remain about the nature of the matching).
In this paper we study natural variants where, in contrast, the question of
whether the stable matching is perfect already
presents a challenge.
Again suppose the points of a Poisson process in ${\mathbb R}^d$ are colored
independently blue (with probability $p$) or red (with probability $1-p$).
We now consider an asymmetric rule, under which red-red and red-blue
matches are allowed, while blue-blue matches are forbidden.
Subject to this restriction, each point prefers to be matched
at as short a distance as possible.
From Proposition \ref{prop:uniquestable} below,
it will follow that with probability 1 a stable matching
exists and is unique. Let $M$ be this stable matching. By ergodicity of the Poisson process,
the intensity of
the set of red points matched by $M$ to blue points
is an almost sure constant, and the same is true for the
set of blue points matched by $M$ to red points. By a mass transport argument,
these intensities are equal.
If $p>1/2$, then there must be some blue points left unmatched.
In fact, it is easy to see that this is still the case for some $p<1/2$,
since some pairs of red points will be matched to each other
(for example, any pair which are each other's nearest neighbours).
We conjecture that this remains true for all $p>0$. As far as we are aware,
this is not known for any $d$. Here we make the following progress towards the conjecture:
for any fixed $p>0$, if $d$ is sufficiently large, then there are unmatched blue points.
\begin{thm}
\label{thm:asymmetricRd}
For a Poisson process of intensity $1$ in ${\mathbb R}^d$ in which each point independently is blue with probability $p\in(0,1)$ and red with probability
$1-p$, consider the asymmetric two-type stable matching, under which only
red-red and red-blue matches are allowed. For fixed $p$, the intensity of
unmatched blue points
converges to $p e^{-(1-p)/p}$ as $d\to\infty$.
For a non-zero density of unmatched blue points, it suffices to take $d>\frac{c}{p}e^{1/p}$,
where $c$ is some absolute constant.
\end{thm}
See Figure \ref{fig:asymm} for simulations of the asymmetric two-type model
in a two-dimensional torus.
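For finite configurations such as those in the figure, the stable matching can be computed greedily: since all pairwise distances are distinct with probability 1, repeatedly matching the closest compatible unmatched pair produces the unique stable matching. The following sketch (for illustration only; the identifiers are ours, and we use the unit square rather than the torus) implements this for the asymmetric two-type rule.

```python
import random, math

def stable_matching(points, compatible):
    """Greedy construction of the unique stable matching of a finite
    point set with distinct pairwise distances: repeatedly match the
    closest compatible pair of currently unmatched points."""
    n = len(points)
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if compatible(i, j):
                pairs.append((math.dist(points[i], points[j]), i, j))
    pairs.sort()
    match = {}
    for _, i, j in pairs:
        if i not in match and j not in match:
            match[i] = j
            match[j] = i
    return match

# Asymmetric two-type rule: red-red and red-blue allowed, blue-blue not.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(300)]
colour = ['blue' if random.random() < 0.3 else 'red' for _ in pts]
ok = lambda i, j: not (colour[i] == 'blue' and colour[j] == 'blue')
M = stable_matching(pts, ok)
unmatched_blue = sum(1 for i in range(len(pts))
                     if colour[i] == 'blue' and i not in M)
print(unmatched_blue)
```

The greedy order guarantees stability: any compatible pair $\{x,y\}$ not in the matching was skipped only because one of $x,y$ was already matched at a shorter distance.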
We next consider a multi-type symmetric model.
Suppose that there are $k$ different colours,
and each point of the Poisson process independently
receives colour $i$ with probability $p_i$, where $(p_1, \dots, p_k)$ is a probability vector.
Any two points of different colours can be matched,
but two points of the same colour may not be matched.
Again, Proposition \ref{prop:uniquestable} will yield that
there exists a unique stable matching. If $p_1>1/2$,
then with probability 1, some points of colour 1 will remain
unmatched in this stable matching.
We conjecture that this remains true whenever
$p_1>\max_{2\leq i\leq k}p_i$.
We show that
for a given collection $p_1,\dots,p_k$, this
is true for sufficiently large $d$.
\begin{thm}
\label{thm:symmetricRd}
Fix a probability vector $(p_1,\dots, p_k)$.
Consider the multi-type symmetric stable matching
of a Poisson process of rate 1 in ${\mathbb R}^d$
with $k$ colours,
where $p_i$ gives the probability of colour $i$.
Suppose $p_1>\max_{2\leq i\leq k}p_i$.
Then there exists some strictly positive $\lambda=\lambda(p_1,\dots,p_k)$
such that the intensity of unmatched type-1 points
converges to $\lambda$ as $d\to\infty$.
In the case where the $p_i$ are all equal for $i\geq 2$, we have
$\lambda=(p_1-p_2)^{k-1} p_1^{-(k-2)}$.
\end{thm}
Our method to prove Theorems \ref{thm:asymmetricRd} and \ref{thm:symmetricRd}
involves the analysis of stable matchings on the
\textit{Poisson-weighted infinite tree} (or \textit{PWIT}).
The PWIT was introduced by Aldous and Steele \cite{AldousSteele}
and has been used in many contexts to provide a scaling limit
of complete graphs with independent edge-weights,
for example in the setting of
minimal-weight spanning trees and invasion percolation
\cite{ABGriffKang,AldousSteele,SteeleMST},
random assignment problems \cite{AldousRA, SalezShah},
and random matrices \cite{BCC1, BCC2, BordenaveChafai}.
Here we show how it also arises naturally as a limit
of Poisson processes in high-dimensional Euclidean space,
under appropriate rescaling; as far as we know,
this is the first such application of the PWIT.
We make some brief observations illustrating some of the difficulty of obtaining results about the models considered in Theorems \ref{thm:asymmetricRd}
and \ref{thm:symmetricRd}.
In the symmetric model of Theorem \ref{thm:symmetricRd},
note that for any $i$, the probability that there
are unmatched points of type $i$ is 0 or 1, by ergodicity. If $p_i=p_j$
then by symmetry this probability is the same for $i$ and for $j$; but there cannot be unmatched points of two different colours (two such points would form an unstable pair),
so the probability must be 0. Consider for example $k=3$ and $p_1<p_2=p_3$. By the above argument, with probability 1 there are no unmatched points of type $2$ or $3$. One would naturally conjecture that also no points of type $1$
(which has lower intensity) are unmatched -- however,
there is no obvious monotonicity for the model in Euclidean space, and this
conjecture does not seem easy to prove, even taking $d$ large. (Our comparison with the PWIT could be used
to show that the intensity of unmatched points of type 1 can be made
as small as desired by taking $d$ sufficiently large, but it is not clear that it will help in showing that the density is 0 for some $d$.)
Meanwhile for the asymmetric two-type model, Theorem \ref{thm:asymmetricRd}
implies that for given $p_0>0$,
for sufficiently large $d$ there exist
unmatched blue points with probability 1
for any probability $p>p_0$ of blue points.
One might naturally imagine that for any fixed $d$,
if there are unmatched blue points for some given $p_0$,
then the same is true for any $p>p_0$.
However, this monotonicity property also appears difficult to prove.
(Note that one can easily find finite
configurations of points such that removing a
blue point increases the number of unmatched blue points.)
\subsection{Asymmetric two-type matching on a
hierarchical graph}\label{subsec:hierarchical}
We also consider the asymmetric two-type model in a case where
the distance function is given by a hierarchical metric.
In this case we can indeed show that there are unmatched points
for every value of $p$ (as we conjectured above for the stable matching
with respect to Euclidean distance in ${\mathbb R}^d$).
Consider the distance on ${\mathbb R}_+$ defined by
\begin{equation}\label{rhodef}
\rho(x,y)=2^{-\sup\{k\in{\mathbb Z}:\lfloor2^{k}x\rfloor=\lfloor2^{k}y\rfloor\}}.
\end{equation}
This is an ultrametric (that is, a metric such that
$\rho(x,z)\leq\max\{\rho(x,y), \rho(y,z)\}$ for all $x,y,z$).
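For concreteness, $\rho$ is easy to compute: the set of $k$ with $\lfloor2^{k}x\rfloor=\lfloor2^{k}y\rfloor$ is closed downwards (two points in the same dyadic interval of length $2^{-k}$ also share the coarser interval of length $2^{-k+1}$), so scanning $k$ from fine to coarse, the first agreement attains the supremum. A minimal sketch (illustrative only, with a truncated range of scales):

```python
from math import floor

def rho(x, y, kmax=52, kmin=-52):
    """Hierarchical distance rho(x,y) = 2^{-sup{k : floor(2^k x) == floor(2^k y)}}.
    The set of such k is closed downwards, so scanning k from large to small,
    the first agreement attains the supremum (range truncated to [kmin, kmax])."""
    if x == y:
        return 0.0
    for k in range(kmax, kmin - 1, -1):
        if floor(2.0 ** k * x) == floor(2.0 ** k * y):
            return 2.0 ** (-k)
    raise ValueError("points agree at no scale in the truncated range")

print(rho(0.3, 0.45))   # prints 0.25: they first share a dyadic interval at scale k = 2
```

One can check on random triples that the ultrametric inequality $\rho(x,z)\leq\max\{\rho(x,y),\rho(y,z)\}$ holds.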
Let $p\in(0,1)$.
Consider a Poisson process of rate $\lambda>0$ on ${\mathbb R}_+$.
As above, let each point independently be coloured
blue with probability $p$ and red with probability $1-p$;
red-red and red-blue matches are allowed, but not blue-blue.
\begin{thm}
\label{thm:hierarchical}
Fix $\lambda>0$ and $p\in(0,1)$.
Consider the two-type asymmetric stable matching model for
a Poisson process on ${\mathbb R}_+$ with rate $\lambda$,
in which each point is independently blue with
probability $p$ and
red with probability $1-p$.
With probability 1, there
are infinitely many stable matchings
with respect to the hierarchical metric $\rho$,
and all these matchings have infinitely many
unmatched blue points.
In fact, there exists $c=c(\lambda, p)>0$
such that, with probability 1, for all large enough $R$
the number of unmatched blue points
in the interval $[0,R]$ is at least $cR$ for
all stable matchings.
\end{thm}
There exist multiple stable matchings with respect to $\rho$
since a point can be equidistant from several others.
However, these stable matchings are all closely related
to each
other in the following way.
Take a dyadic interval $[2^km,2^k(m+1))$ for some integers $m\geq0$ and $k$,
and, for a given stable matching,
consider the set of points in the interval that are
not matched to another point in the interval
(equivalently, that are not matched at distance $2^k$ or less).
This set cannot include both a red point and a blue point,
and also cannot include two red points.
We write $N_k(m)$ for the number of blue points minus the number
of red points, which is in $\{-1,0,1,2,\dots\}$.
Then we have
\begin{equation}
\label{hierarchicalrecursion}
N_k(m)=g\left(N_{k-1}(2m), N_{k-1}(2m+1)\right)
\end{equation}
where $g:\{-1,0,1,2,\dots\}^2 \to \{-1,0,1,2,\dots\}$
is defined by
\[
g(a,b)=
\begin{cases}0&\text{if }a=b=-1\\a+b&\text{otherwise}\end{cases}.
\]
Also $N_k(m)$ is uniquely determined whenever the
interval has at most one point. By starting from a partition into dyadic intervals, each of which contains at most one point,
and applying (\ref{hierarchicalrecursion}) recursively, we obtain that the values $N_k(m)$ are in fact the same for any stable matching.
By translation invariance, $N_k(m), m\in{\mathbb Z}$ are i.i.d.\
for every $k$, and (\ref{hierarchicalrecursion}) yields a recursion
in $k$ for the distribution of $N_k(m)$, which we analyse
in order to prove Theorem \ref{thm:hierarchical}.
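The recursion (\ref{hierarchicalrecursion}) is also easy to simulate; note that if $N_k(m)\geq 1$ then the interval contains exactly $N_k(m)$ blue points unmatched within it. The sketch below (illustrative only; we approximate the Poisson process by a fixed number of uniform points, i.e.\ by conditioning on the total count) computes the value for a large dyadic interval by recursive subdivision.

```python
import random

def N(points, lo, hi):
    """Blue-minus-red count among the points of [lo, hi) not matched within
    the interval, computed via the recursion N = g(N_left, N_right) with
    g(-1, -1) = 0 and g(a, b) = a + b otherwise."""
    pts = [q for q in points if lo <= q[0] < hi]
    if not pts:
        return 0
    if len(pts) == 1:
        return 1 if pts[0][1] == 'blue' else -1
    mid = (lo + hi) / 2
    a, b = N(pts, lo, mid), N(pts, mid, hi)
    return 0 if a == b == -1 else a + b

random.seed(1)
R, p = 4096, 0.2   # interval [0, 2^12), blue probability p
pts = [(random.uniform(0, R), 'blue' if random.random() < p else 'red')
       for _ in range(R)]          # rate-1 process, conditioned on its count
print(max(N(pts, 0.0, float(R)), 0) / R)   # density of unmatched blue points
```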
One could also consider a version in ${\mathbb R}^d$ for $d\geq 2$
based on dyadic $d$-cubes rather than dyadic intervals,
or indeed a more general $p$-adic model for any integer $p\geq 2$.
The result and the analysis would be very similar.
\subsection{Related work}
The recent article \cite{amir-angel-holroyd} treats general (not necessarily stable)
\linebreak[4]
translation-invariant matchings of Poisson processes of multiple colors under arbitrary color-matching rules (and indeed generalized matchings in which three or more points may be matched to each other). The optimal tail behavior of such matchings in ${\mathbb R}^d$ is analyzed in terms of the dimension $d$, the matching rule, and the color intensities. It turns out that convex geometry in the space of intensity vectors plays a key role.
In \cite{frogs}, stable matchings of various kinds are shown to be intimately tied to two-player games. In particular, our Theorem~\ref{thm:asymmetricRd} implies that a certain game is first-player win in sufficiently high dimension.
\subsection{Plan of the paper}
In Section \ref{sec:uniqueness} we set up
a formal framework for stable matchings on weighted graphs,
and give a result which guarantees existence and
uniqueness of the stable matching for several of the
multi-type models that we consider in the paper. The
required conditions are that the weights involving
any given vertex are distinct and have no accumulation points, and that there are no
infinite \textit{descending paths}
(in the sense of \cite{DaleyLast}).
In Section \ref{sec:PWIT} we describe the PWIT
and explain how it arises as a scaling
limit of high-dimensional Poisson processes. We then
investigate various stable matching models on the PWIT
(where the recursive structure of the graph makes
certain exact computations possible).
The comparison between the stable matching models for the PWIT
and for the Poisson process in ${\mathbb R}^d$ is formally developed in
Section \ref{sec:coupling}.
In Sections \ref{sec:conclusionRd}, \ref{sec:conclusionhierarchical}
and \ref{sec:stableproof} we complete the proofs
of the main results. We conclude the article with some open problems.
\section{Stable matching on general weighted graphs}
\label{sec:uniqueness}
In this section we give a result guaranteeing the
existence and uniqueness of a stable matching
for a general class of models based on a symmetric
distance function (which need not be a metric),
or, in other words, an edge-weighted graph.
In particular, the framework will
cover the various multi-type models considered above.
(A pair of points whose types are incompatible
will not be joined by an edge in the graph.)
Suppose we have a set $V$ and a symmetric function
$\ell:V\times V\to {\mathbb R}_+\cup\{\infty\}$
with $\ell(x,x)=\infty$ for all $x$.
We call $\ell(x,y)$ the \textbf{weight} associated
to the pair $\{x,y\}$;
we think of it as the weight of the edge $\{x,y\}$
in a weighted graph with vertex set $V$
and edge set $E_\ell:=
\big\{\{x,y\}:\ell(x,y)<\infty\big\}$,
with the case $\ell(x,y)=\infty$
corresponding to the absence of an edge.
Let a \textbf{matching} of $(V,E_\ell)$
be a function $M:V\to V$ that is an involution,
i.e.\ $M(M(x))=x$ for all $x\in V$,
and such that $\ell(x,M(x))<\infty$ whenever
$M(x)\ne x$.
If $x\ne y$ and $M(x)=y$ (in which case also $M(y)=x$) then
we say that $x$ and $y$ are \textbf{matched} by $M$
(or that $y$ is the \textbf{partner} of $x$ in $M$);
if $M(x)=x$ then we say that $x$ is \textbf{unmatched} by $M$.
Given a matching $M$, define $d_M:V\to{\mathbb R}_+\cup\{\infty\}$ by
\begin{equation}\label{dMdef}
d_M(x)=
\ell\big(x,M(x)\big).
\end{equation}
The matching $M$ of $(V,E_\ell)$ is \textbf{stable} (with
respect to the function $\ell$)
if
\begin{equation}
\label{stablecondition}
\ell(x,y)\geq\min\left(d_M(x), d_M(y)\right)
\text{ for all $x$ and $y$.}
\end{equation}
We can interpret this definition as follows. Each point $x$ has
an order of preference among the other points; it prefers
to have a partner $y$ such that $\ell(x,y)$ is as
small as possible (but will remain unmatched rather than
being matched to a point $y$ with $\ell(x,y)=\infty$).
\begin{prop}\label{prop:uniquestable}
Let $V$ be finite or countably infinite.
Suppose that the function $\ell$ satisfies the following conditions.
\begin{itemize}
\item[(i)]Distinct weights:
there are no $x,y,z\in V$ with $y\ne z$ such that $\ell(x,y)=\ell(x,z)<\infty$.
\item[(ii)]
Locally finite: for all $x\in V$ and all $r<\infty$,
the set $\{y\in V:\ell(x,y)<r\}$ is finite.
\item[(iii)]
No infinite descending paths:
there is no sequence of elements $x_0, x_1, x_2, \dots$ of $V$
such that $\ell(x_0, x_1)>\ell(x_1, x_2)>\ell(x_2, x_3)>\dots$.
\end{itemize}
Then there exists a unique stable matching $M$ of $(V,E_\ell)$.
If $x$ and $y$ are two points both left unmatched by $M$, then $\ell(x,y)=\infty$.
\end{prop}
For the asymmetric two-type model of Theorem \ref{thm:asymmetricRd},
for two points $x$ and $y$ of the Poisson process
we take $\ell(x,y)=|x-y|$ unless $x$ and $y$ are both blue,
in which case $\ell(x,y)=\infty$. Similarly for the symmetric
multi-type model of Theorem \ref{thm:symmetricRd},
let $\ell(x,y)=|x-y|$ unless $x$ and $y$ have the same colour,
in which case $\ell(x,y)=\infty$.
In both those cases, conditions (i) and (ii) in Proposition
\ref{prop:uniquestable} hold with probability 1
by basic properties of
the Poisson process. The fact that the Poisson process
has no infinite descending paths
with probability 1 is a special case
of Theorem 4.1 of Daley and Last \cite{DaleyLast},
so condition (iii) also holds. Hence indeed,
for the models of Theorems \ref{thm:asymmetricRd} and \ref{thm:symmetricRd},
with probability 1 there exists
a unique stable matching.
We will also apply Proposition \ref{prop:uniquestable} to the PWIT
in Section \ref{sec:PWIT} and to a variant of the
hierarchical metric on ${\mathbb R}$ in Section \ref{sec:conclusionhierarchical}.
We also make a useful observation about the
information necessary to determine whether or not
a given vertex $x$ is matched within some given distance $R$.
A \textbf{descending path} from $x$ with weights less than $R$
is a sequence $x_0, x_1, x_2, \dots, x_k$ with $x_0=x$
and
\begin{equation}\label{descendingdef}
R>\ell(x_0, x_1)>\ell(x_1, x_2)>\dots>\ell(x_{k-1}, x_k).
\end{equation}
Let $V^{\downarrow}_R(x)$ and $E^{\downarrow}_R(x)$ be the sets of
all vertices and respectively all edges
that are contained in any such path.
\begin{prop}\label{prop:whoismatched}
If conditions (i), (ii), (iii) of Proposition \ref{prop:uniquestable}
hold, then for all $x\in V$ and all $R>0$, the set $E^{\downarrow}_R(x)$ is finite. To determine whether $x$ is matched along an
edge of weight less than $R$ in the stable matching
(i.e.\ whether $d_M(x)<R$ where $M$ is the stable matching)
it suffices to know $E^{\downarrow}_R(x)$
and the collection of edge-weights
$\{\ell(y,z): \{y,z\}\in E^{\downarrow}_R(x)\}$.
\end{prop}
Finally, note that the definition of stable matching
only uses the relative ordering of edge-weights. If
the weights are rescaled
by applying the same strictly increasing
function to each finite weight, the set of stable
matchings does not change. Combining this with Proposition
\ref{prop:whoismatched}, we can in fact transfer information
about stable matchings from one graph to another, if
the local structure of the sets of descending paths
agrees in a suitable sense:
\begin{prop}\label{prop:stablerescaled}
Suppose that the set
$V$, with associated edge-weight function $\ell$, and
the set $\widetilde{V}$, with associated edge-weight function
$\widetilde{\ell}$, both satisfy conditions (i), (ii), (iii)
of Proposition \ref{prop:uniquestable}.
Let $M$ and $\widetilde{M}$ be the stable matchings of $V$
and $\widetilde{V}$ respectively.
Given $\widetilde{x}\in\widetilde{V}$ and $\widetilde{R}>0$, define
$\widetilde{V}_{\widetilde{R}}^{\downarrow}(\widetilde{x})\subseteq \widetilde{V}$
and $\widetilde{E}_{\widetilde{R}}^{\downarrow}(\widetilde{x})$
with respect to the function
$\widetilde{\ell}$ in the same way that $V_R^{\downarrow}(x)\subseteq V$
and
$E_R^{\downarrow}(x)$
were defined
with respect to the function $\ell$.
Let $f$ be a strictly increasing function $f:{\mathbb R}_+\cup\{\infty\}
\to{\mathbb R}_+\cup\{\infty\}$ such that $f(\infty)=\infty$.
Suppose that for some $x\in V$ and $R>0$ there is a bijection
$\phi$ from $V_R^{\downarrow}(x)$ to $\widetilde{V}_{f(R)}^{\downarrow}(\widetilde{x})$
with $\phi(x)=\widetilde{x}$,
such that for each $u,v\in V_R^{\downarrow}(x)$,
$\{u,v\}\in E_R^{\downarrow}(x)$
iff $\{\phi(u),\phi(v)\}
\in \widetilde{E}_{f(R)}^{\downarrow}(\widetilde{x})$,
and in that case
$\widetilde{\ell}(\phi(u), \phi(v))= f(\ell(u,v))$. Then
$\ell(x,M(x))<R$ iff
$\widetilde{\ell}(\widetilde{x},
\widetilde{M}({\widetilde{x}}))<f(R)$.
\end{prop}
We prove Propositions \ref{prop:uniquestable}, \ref{prop:whoismatched} and \ref{prop:stablerescaled}
in Section \ref{sec:stableproof}.
The proof is based on an inductive construction to identify
edges which must be included in any stable matching,
related to the approach used in \cite{HPPS}
for the special cases of one-type and symmetric two-type
matchings in ${\mathbb R}^d$ with weights given by Euclidean distance.
\section{The Poisson-weighted infinite tree}
\label{sec:PWIT}
\subsection{Definition of the PWIT}
\label{subsec:PWITdef}
The \textbf{Poisson-weighted infinite tree}, or PWIT,
is an edge-weighted graph with vertex set
\[
{\mathbb N}^{\downarrow}=
\{\emptyset,1,2,\dots,11,12,\dots,21,22,
\dots,
111,112,\dots\},
\]
and edges $\{v,vj\}$ for each $v\in{\mathbb N}^{\downarrow}$ and $j\in{\mathbb N}$.
We say that $vj$ is a \textbf{child} of $v$.
For each $v\in{\mathbb N}^{\downarrow}$, let $(t^{(v)}_j: j=1,2,3,\dots)$
be the points of a Poisson process of rate 1 on ${\mathbb R}_+$
in increasing order,
and let these processes be independent for different $v$.
Then let $t^{(v)}_j$ be the weight associated to the edge
$\{v,vj\}$
(which we will also sometimes write as $t(v,vj)$).
The PWIT was introduced by Aldous and Steele \cite{AldousSteele}
and often arises in applications as a scaling
limit of the complete graph with edges weighted
by i.i.d.\ random variables.
We will explain how it also gives a scaling limit
of a Poisson process in high-dimensional Euclidean space.
Stable matchings on the PWIT can be analysed quite precisely,
and we will be able to use them to study the behaviour of stable matchings
in ${\mathbb R}^d$ for large $d$.
First we mention briefly the way in which the PWIT
arises as a limit of the weighted complete graph.
This can be formalised in many different ways;
in particular the framework
of \textit{local weak convergence} is often
used (see for example \cite{AldousSteele}),
but the following less technical approach gives
the essential idea. Consider the complete graph $K_n$
with i.i.d.\ weights attached to the edges
which are, say, exponential with rate $1/n$.
Fix some vertex $v$ and some ``radius" $R>0$,
and consider the subgraph of $K_n$ created by
the collection of all paths from $v$ which have total weight at most $R$.
Similarly we can consider the subtree of the PWIT
created by the collection of all paths from
the root which have total weight at most $R$.
Then for any given $R$, we can couple
the complete graph with the PWIT so that, with
probability tending to 1 as $n\to\infty$, there is an
isomorphism between these two subgraphs which
identifies $v$ with the root of the PWIT, and which preserves the edge weights.
Now we motivate informally the idea of the PWIT
as a limit of the Poisson process in ${\mathbb R}^d$ as $d\to\infty$.
Let
\[
\omega_d=\frac{\pi^{d/2}}{\Gamma\left(1+\frac{d}{2}\right)}.
\]
Then the volume of a ball of radius $r$ in ${\mathbb R}^d$ is $\omega_d r^d$.
Consider a Poisson process of rate $1$ in ${\mathbb R}^d$, as seen from a ``typical point", located at the origin and denoted by $O$. (We will make this notion precise by considering the Palm version of the Poisson process in Section
\ref{sec:coupling}.) The point $O$ will correspond to the root
of the PWIT. Let $x_1, x_2, x_3,\dots$ be the other points of the process,
written in order of their distance from $O$.
Then the sequence $\omega_d|x_1|^d, \omega_d|x_2|^d, \omega_d|x_3|^d,\dots$
forms a Poisson process of rate 1 on ${\mathbb R}_+$. Rescaled in this
way, these distances correspond to the weights
$t^{(\emptyset)}_1,
t^{(\emptyset)}_2,
t^{(\emptyset)}_3,\dots$
on the edges connecting the root of the PWIT to its children.
Note that $\omega_d^{1/d}|x_1|$ converges in probability to 1 as $d\to\infty$. In fact, for any $\epsilon>0$, the probability that there exists a point
of the process within distance $\omega_d^{-1/d}(1-\epsilon)$ of $O$ decays
exponentially with $d$, while the expected number of points within
distance $\omega_d^{-1/d}(1+\epsilon)$ increases exponentially.
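To get a feeling for the scale $\omega_d^{-1/d}$, note that Stirling's formula gives $\omega_d^{-1/d}\sim\sqrt{d/(2\pi e)}$ as $d\to\infty$. A quick numerical check (computed via the log-gamma function, since $\Gamma(1+d/2)$ itself overflows floating point for large $d$):

```python
import math

def nn_scale(d):
    """omega_d^{-1/d}: the natural length scale for nearest-neighbour
    distances in a rate-1 Poisson process in R^d. Computed via lgamma
    to avoid overflow of Gamma(1 + d/2)."""
    return math.exp((math.lgamma(1 + d / 2) - (d / 2) * math.log(math.pi)) / d)

for d in (2, 10, 100, 1000):
    print(d, nn_scale(d), math.sqrt(d / (2 * math.pi * math.e)))
```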
Now consider in turn the points closest to $x_1$. Other than the origin,
let these points be $x_{1,1}, x_{1,2}, x_{1,3},\dots$
in order of distance from $x_1$. Similarly rescaled,
their distances from $x_1$ again converge to a Poisson process,
and $\omega_d^{1/d}|x_{1,1}-x_1|$ converges in probability to 1 as $d\to\infty$.
On the other hand, for large $d$, we expect $x_1$ and $x_{1,1}-x_1$
to be approximately orthogonal, so that $x_{1,1}$ is at distance
approximately $\sqrt{2}\omega_d^{-1/d}$ from $O$. In particular, $x_{1,1}$
is not among the nearest neighbours of $O$
(we expect to find exponentially many closer points).
We can extend this reasoning by considering paths from the origin consisting of distinct points $x_0=O, x_1, x_2, \dots, x_k$, in which each $x_j$ is one of
the $m$ nearest neighbours of $x_{j-1}$. For given $k$
and $m$, with high probability
as $d\to\infty$ no such path ends in a point $x_k$ which is one of the $m$ nearest neighbours of $O$. This explains why the acyclic structure
of the PWIT gives an appropriate limit for the graph of
near neighbours in the Poisson process on ${\mathbb R}^d$.
In this way we could give a result
similar to that mentioned for the complete graph
above, comparing the structure of the PWIT
and the Poisson process restricted to
paths of total (rescaled) weight $R$,
corresponding to local weak convergence.
In fact, to analyse stable matchings we need a
different mode of convergence,
concerning the subgraph obtained
by taking all descending paths from the
root (in the PWIT) or the origin (in ${\mathbb R}^d$)
with weights (or distances) less than $R$;
in the same way as at (\ref{descendingdef}), a descending
path is a path such that the successive edge
weights (or distances) form a decreasing sequence.
We show that with high probability as $d\to\infty$,
the collection of descending paths in the two models
can be coupled so that (after rescaling of distance)
their graph structure is identical in the sense of
Proposition \ref{prop:stablerescaled}.
This will allow us to approximate certain intensities
in the stable matching model in ${\mathbb R}^d$ (for example, the intensity of points of a given type which are not matched by the stable matching) by probabilities involving the matching of the root of the PWIT.
Note that while the edge weights of the PWIT correspond to rescaled distances, we do not think of these weights as defining a graph
distance, or indeed giving any metric. The weight
(corresponding to rescaled distance)
between $\emptyset$ and its child $1$ in the PWIT is $t_1^{(\emptyset)}
\sim\textrm{Exp}(1)$,
and that between $1$ and its child $11$ is $t_1^{(1)}\sim\textrm{Exp}(1)$,
but, asymptotically as $d\to\infty$, the rescaled distance between
$\emptyset$ and its ``grandchild" $11$ goes to $\infty$.
\subsection{Stable matchings on the PWIT}
In the rest of the section
we analyse various stable matching problems where
the set of points is given by the vertices of the PWIT.
From elementary properties of the Poisson process,
with probability 1, all the weights in the PWIT are
distinct, and any vertex has only finitely many edges
with weights falling in any given compact interval.
This gives conditions (i) and (ii) of Proposition
\ref{prop:uniquestable} for all the models of stable matching
on the PWIT that we consider below, and condition (iii)
on the absence of infinite descending paths
will be given by Lemma \ref{lemma:treesize} below.
So in all the cases we will consider, with probability 1 there
exists a unique stable matching.
Given a probability vector $(p_1, \dots, p_k)$,
let each vertex of the PWIT have type (or colour) $i$ with
probability $p_i$, independently for different vertices.
(Note then that for each $v$, the weights of edges from
$v$ to its children of colour $i$ form a Poisson process
of rate $p_i$, independently for different $i=1,\dots, k$
and $v\in{\mathbb N}^{\downarrow}$.) As in the spatial models already considered,
the colours determine which pairs of points are allowed to be matched.
The PWIT has a recursive structure.
The subtrees rooted at each child $1,2,\dots$ of the root
$\emptyset$ have the structure of independent copies of the PWIT.
We can consider the stable matching problem on any of these subtrees.
If $j$ is a child of the root, along an edge with weight $t_j^{(\emptyset)}$,
say that $j$ is ``available" (to the root) if $j$ is not matched
along an edge with weight
less than $t_j^{(\emptyset)}$ in the stable matching of
the subtree rooted at $j$.
Then the root is matched to the nearest of its children
which is both available and has a compatible colour
(and is unmatched if no such child exists).
\subsubsection{One-type matching}
As an introduction we start with the simplest case, where there is only one type of point (so $p_1=1$) and any pair $\{v, vj\}$ with $v\in {\mathbb N}^{\downarrow}$, $j\in {\mathbb N}$ may be matched. That is, $\ell$ is the symmetric function given by
\begin{equation}\label{elldefPWIT}
\ell(v,vj)=t^{(v)}_j,
\end{equation}
and $\ell(v,w)=\infty$ whenever $v$ and $w$ are
not joined by an edge in the PWIT.
For $t\geq 0$, let $x(t)$ be the probability that the root is not matched along an edge with weight less than $t$. By the recursive structure
of the PWIT, conditional on the weights
$t_1^{(\emptyset)}=t_1$, $t_2^{(\emptyset)}=t_2,\dots$
from the root to its children,
these children are available independently with
probabilities $x(t_1), x(t_2), \dots$.
In fact, the process of available children of the root
forms an inhomogeneous Poisson process with rate $x(t)$
on ${\mathbb R}_+$.
The root is matched to the first point of this process.
So $x(t)$ is given by the probability that this process has no points
in $[0,t)$, giving
\[
x(t)=\exp\left\{-\int_0^t x(u) du\right\}.
\]
Hence we have
\begin{equation}
\label{one-type-equation}
x'(t)=-x(t)^2.
\end{equation}
Using $x(0)=1$ gives the exact solution
$x(t)=1/(t+1)$. Since $x(t)\to0$ as $t\to\infty$,
the root is matched with probability 1. (However,
since $\int_0^{\infty}x(t)dt=\infty$,
the weight of the edge along which it is matched has infinite mean.)
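The equation (\ref{one-type-equation}) and its solution are easily checked numerically; a minimal sketch (forward Euler, for illustration only):

```python
def euler(f, x0, T, n):
    """Forward Euler for the scalar ODE x' = f(x) on [0, T], n steps."""
    x, h = x0, T / n
    for _ in range(n):
        x += h * f(x)
    return x

# x' = -x^2 with x(0) = 1 should give x(t) = 1/(t+1)
approx = euler(lambda x: -x * x, 1.0, 10.0, 100_000)
print(approx, 1 / 11)
```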
\subsubsection{Asymmetric two-type matching}
\label{subsubsec:asymmPWIT}
Now we study the asymmetric two-type model
corresponding to the one studied in Theorem
\ref{thm:asymmetricRd} for points in ${\mathbb R}^d$.
Let each vertex of the PWIT independently be red with probability
$1-\epsilon$ and blue with probability $\epsilon$.
Red-red and red-blue
matches are allowed, but blue-blue matches are not.
That is, the function $\ell$ is defined as at (\ref{elldefPWIT})
except that now $\ell(v,vj)=\infty$ if $v$ and $vj$ are both blue.
Let $r(t)$ be the probability that the root is red
and not matched along an edge of weight less than $t$, and $b(t)$
the probability that the root is blue and not matched
along an edge of weight less than $t$.
By an analogous argument to the one-type case above,
the processes of available red and blue children of the root
form independent inhomogeneous Poisson processes
of rates $r(t)$ and $b(t)$ on ${\mathbb R}_+$.
We have
\begin{gather*}
r(t)=(1-\epsilon)\exp\left\{-\int_0^t \big[r(u)+b(u)\big]du\right\}\\
b(t)=\epsilon\exp\left\{-\int_0^t r(u) du\right\},
\end{gather*}
so that
\begin{align}
\nonumber
r'(t)&=-r(t)\big(r(t)+b(t)\big)\\
\label{rb-equation}
b'(t)&=-b(t)r(t),
\end{align}
with initial conditions $r(0)=1-\epsilon$
and $b(0)=\epsilon$.
Certainly $r(t)\to 0$ as $t\to\infty$ (for example by comparison
to (\ref{one-type-equation}), we have $r(t)<1/(t+1)$).
We can ask whether or not there are unmatched blue points;
that is, does $b(t)$ also converge to 0 as $t\to\infty$?
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{rb0-25.pdf}
\caption{
\label{fig:rb}
Numerical solution of the system (\ref{rb-equation})
describing the evolution of the asymmetric two-type system on the PWIT,
with $r(0)=0.75$ and $b(0)=0.25$.
As $t\to\infty$, $r(t)$ decays exponentially to 0
and $b(t)$ converges to $b(\infty)\approx 0.0124$.
}
\end{center}
\end{figure}
Write $R(t)=-\log r(t)$ and $B(t)=-\log b(t)$.
Then $R$ and $B$ are increasing with $t$,
and $R(t)\to\infty$ as $t\to\infty$.
We can derive $B$ as a function of $R$, and ask whether
$B\to\infty$ as $R\to\infty$.
We have
\begin{align*}
R'&=-\frac{r'}{r}=r+b=e^{-R}+e^{-B}\\
B'&=-\frac{b'}{b}=r=e^{-R},
\end{align*}
which gives
\[
\frac{dR}{dB}=\frac{e^{-R}+e^{-B}}{e^{-R}}=1+e^{R-B},
\]
leading to $\frac{d(R-B)}{dB}=e^{R-B}$
which has general solution
\[
R-B=-\log(-B+c).
\]
We see that as $B\uparrow c$, $R\to\infty$.
Hence for the original system $B(t)\to c$ as $t\to\infty$.
To find $c$, we use
$r(0)=1-\epsilon$ and $b(0)=\epsilon$,
so that $R(0)=-\log(1-\epsilon)$ and $B(0)=-\log \epsilon$; this
gives $c=\frac{1-\epsilon}{\epsilon}+\log\frac1\epsilon$.
Then $b(t)\to b(\infty)=e^{-B(\infty)}=e^{-c}=\epsilon e^{-1/\epsilon+1}
=e^{-1/\epsilon+1}b(0)$.
So we see that a proportion $e^{-1/\epsilon+1}$ of the blue
points remain unmatched (or, more formally, this is the
conditional probability
that the root remains unmatched, given that it is blue).
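The system (\ref{rb-equation}) can also be integrated numerically to confirm this limit. The sketch below (a standard fourth-order Runge--Kutta scheme, illustrative only) reproduces $b(\infty)=\epsilon e^{-1/\epsilon+1}\approx0.0124$ for $\epsilon=0.25$, in agreement with Figure \ref{fig:rb}.

```python
import math

def rb_limit(eps, T=2000.0, n=200_000):
    """RK4 integration of r' = -r(r+b), b' = -br with r(0) = 1-eps,
    b(0) = eps; returns b(T), an approximation of b(infinity)."""
    f = lambda r, b: (-r * (r + b), -b * r)
    r, b, h = 1.0 - eps, eps, T / n
    for _ in range(n):
        k1 = f(r, b)
        k2 = f(r + h / 2 * k1[0], b + h / 2 * k1[1])
        k3 = f(r + h / 2 * k2[0], b + h / 2 * k2[1])
        k4 = f(r + h * k3[0], b + h * k3[1])
        r += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        b += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return b

eps = 0.25
print(rb_limit(eps), eps * math.exp(-1 / eps + 1))   # both approximately 0.0124
```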
\subsubsection{Symmetric multi-type matching}
\label{subsubsec:symm}
Now we turn to the model corresponding to the setting of
Theorem \ref{thm:symmetricRd}, in which there are $k$ types
with probabilities $p_1, \dots, p_k$, and two points may be matched
if their types are different.
Now we define $\ell$ as at (\ref{elldefPWIT}) unless
$v$ and $vj$ have the same colour, in which case $\ell(v,vj)$ is infinite.
Let $x_i(t)$ be the probability that the root itself has type $i$
and is not matched along an edge of weight less than $t$.
As before, the weights of edges from the root leading to available children
of type $i$ form inhomogeneous Poisson processes
of rates $x_i(t)$, independently for $i=1,2,\dots,k$.
Then $x_i(0)=p_i$ and
\begin{equation}\label{symmetric-equation}
x'_i(t)=-x_i(t)\sum_{j\ne i}x_j(t).
\end{equation}
Writing $X_i(t)=-\log x_i(t)$ this gives
\begin{align}\label{deriv}
X_i'(t)&=\frac{-x_i'(t)}{x_i(t)}\\
\nonumber
&=\sum_{j\ne i}x_j(t),
\end{align}
and so
\begin{align*}
X_i'(t)-X_j'(t)&=x_j(t)-x_i(t)\\
&=e^{-X_j(t)}-e^{-X_i(t)}.
\end{align*}
If $x_j(t)<x_i(t)$,
or equivalently $X_i(t)<X_j(t)$, then the derivative
of $X_j(t)-X_i(t)$ is strictly positive. Hence
in particular if $p_j<p_i$ then $x_j(t)<x_i(t)$ for all $t$.
Suppose the maximum initial density is
attained by at least two types; say $p_1=p_2\geq p_j$ for all $j$.
Then by symmetry $x_1(t)=x_2(t)$ for all
$t$. It's impossible for a positive proportion of points
of two different types to remain unmatched
(otherwise, the root would have an unmatched child
of a different colour with probability 1, and this would contradict
the final statement of Proposition \ref{prop:uniquestable}).
Hence in this case all points are matched.
Suppose on the other hand that
there is a unique type with highest initial probability.
Then we will show that a positive proportion of
points of this type remain unmatched:
\begin{prop}
If $p_1>p_j$ for all $j>1$, then
$\lim_{t\to\infty} x_1(t)>0$.
\end{prop}
\begin{proof}
Without loss of generality assume that $p_2\geq p_j$ for all $j\geq 2$,
so that also $x_j(t)\leq x_2(t)$ for all $j\geq 3$ and all $t$.
Since only the type with maximum initial probability can have points left
unmatched, we know that $x_2(t)\to 0$, i.e.\ $X_2(t)\to\infty$.
Now write $Z(t)=X_2(t)-X_1(t)$.
From (\ref{deriv}), we get
\begin{align*}
\frac{dZ(t)}{dX_2(t)}
&=\frac{d(X_2(t)-X_{1}(t))}{dX_{2}(t)}\\
&=\frac{x_1(t)-x_2(t)}
{\sum_{j\ne 2}x_j(t)}\\
&=\frac{\exp(-X_1(t))-\exp(-X_{2}(t))}
{\exp(-X_1(t))+\sum_{j\geq 3}\exp(-X_j(t))}\\
&\geq
\frac{\exp(-X_1(t))-\exp(-X_2(t))}
{\exp(-X_1(t))+(k-2)\exp(-X_2(t))}\\
&=\frac{\exp(-X_{2}(t))(\exp(Z(t))-1)}
{\exp(-X_{2}(t))(\exp(Z(t))+(k-2))}\\
&=\frac{\exp(Z(t))-1}{\exp(Z(t))+(k-2)}\\
&=1-\frac{k-1}{\exp(Z(t))+(k-2)}.
\end{align*}
This derivative is always positive since
$Z(t)>0$ for all $t$; hence in fact $Z(t)$
is increasing as a function of $X_{2}(t)$,
and this derivative is bounded away from 0.
So $Z(t)\to\infty$ as $X_{2}(t)\to\infty$,
i.e.\ as $t\to\infty$.
This gives that $X_{2}(t)-X_1(t)\to\infty$,
i.e. that $x_{2}(t)/x_1(t)\to0$.
Similarly in fact $x_j(t)/x_1(t)\to0$ for all $j>1$.
So for some $t$,
\begin{equation}\label{moretype1}
x_1(t)>x_2(t)+\dots+x_k(t).
\end{equation}
Now the intuition is that since, looking at points unmatched
within weight $t$,
the density of type-1 points
is higher than the density of all other types put together,
it is impossible to match all the type-1 points.
To see this directly, one can use (\ref{deriv})
to observe that the derivative of $x_1(t)-\sum_{j\geq 2}x_j(t)$
is always non-negative (heuristically,
this reflects the fact that each match involves at most one type-1 point and
at least one point of another type); combining with (\ref{moretype1})
gives that $x_1(t)$ stays bounded away from 0 as $t\to\infty$.
\end{proof}
\begin{remark}
In the case where $p_i, 1\leq i\leq k$ take only two
distinct values, we can solve exactly.
Consider for example the case where
$p_1>p_2=\dots=p_k$, which (up to reordering) is the only such case
where some points will remain unmatched.
Then by symmetry between all coordinates except the first,
\begin{align*}
x_1'(t)&=-(k-1)x_1(t)x_2(t)\\
x_2'(t)&=-(x_1(t) x_2(t) +(k-2)x_2(t)^2).
\end{align*}
We get
\[
\frac{dx_2}{dx_1}=\frac{1}{k-1}+\frac{k-2}{k-1}\frac{x_2}{x_1},
\]
which is solved by
\[
x_2(t)=x_1(t)-cx_1(t)^{\frac{k-2}{k-1}}.
\]
From the initial values $x_1(0)=p_1$, $x_2(0)=p_2$
we obtain $c=(p_1-p_2)p_1^{-\frac{k-2}{k-1}}$.
Then considering $t\to\infty$ and using $x_2(\infty)=0$,
we have
\begin{align*}
x_1(\infty)&=(p_1-p_2)^{k-1} p_1^{-(k-2)}\\
&=\left(1-\frac{p_2}{p_1}\right)^{k-1} x_1(0).
\end{align*}
We can interpret the quantity $x_1(\infty)/x_1(0)$
as the ``proportion of points of type 1 left unmatched"
(more precisely, the probability that the root is unmatched,
given that it has type 1).
Looking for asymptotics as the difference between the
initial probabilities becomes small,
we can put for example
$p_1=\frac1k+(k-1)\delta$
and $p_2=\frac1k-\delta$. Then we obtain
\begin{align*}
\frac{x_1(\infty)}{x_1(0)}
&=\left(\frac{k^2\delta}{1+k(k-1)\delta}\right)^{k-1}
\\
&\sim \,\,\,k^{2(k-1)}\delta^{k-1}
\text{ as } \delta\downarrow 0.
\end{align*}
\end{remark}
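The closed form is easy to check numerically (again an illustration of ours): integrating (\ref{symmetric-equation}) with $k=3$, $p_1=1/2$ and $p_2=p_3=1/4$ should give $x_1(t)\to(1-p_2/p_1)^{k-1}x_1(0)=1/8$. A Python sketch:

```python
def integrate_symmetric(p, t_max=600.0, dt=0.05):
    """RK4 for x_i' = -x_i * sum_{j != i} x_j, started from x_i(0) = p_i."""
    k = len(p)
    def f(x):
        s = sum(x)
        return [-x[i] * (s - x[i]) for i in range(k)]
    x = list(p)
    for _ in range(int(t_max / dt)):
        k1 = f(x)
        k2 = f([x[i] + dt / 2 * k1[i] for i in range(k)])
        k3 = f([x[i] + dt / 2 * k2[i] for i in range(k)])
        k4 = f([x[i] + dt * k3[i] for i in range(k)])
        x = [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(k)]
    return x

p = [0.5, 0.25, 0.25]                                  # k = 3, p_1 > p_2 = p_3
x = integrate_symmetric(p)
predicted = (1 - p[1] / p[0]) ** (len(p) - 1) * p[0]   # = 0.125
```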
\section{Coupling the PWIT and a Poisson process in ${\mathbb R}^d$}
\label{sec:coupling}
\subsection{Palm version}
\label{subsec:Palm}
Consider a simple point process in ${\mathbb R}^d$ with finite intensity.
The \textbf{Palm version} of the process is obtained, informally speaking, by conditioning on the presence of a point at the origin. One can also describe the Palm version as giving the distribution of the process ``as seen from
a typical point". This notion can be formalised in various equivalent
ways. For example, let $\Pi$ be the point process, let
$[\Pi]$ denote the set of its points, and, for $y\in {\mathbb R}^d$,
let $\theta^y(\Pi)$ denote the process obtained by translating
$\Pi$ by $y$; then the probability of an event $A$
for the Palm version $\Pi^{\downarrow}$ of $\Pi$ can be defined
by
\[
\P(\Pi^{\downarrow}\in A)=
\frac{
{\mathbb E}\#\left\{x\in[\Pi]\cap[0,1]^d : \theta^{-x}(\Pi)\in A\right\}
}
{
{\mathbb E}\#\left\{x\in[\Pi]\cap[0,1]^d\right\}
}.
\]
In the case of a Poisson process, the Palm version
has a particularly straightforward description; it can be obtained
simply by adding a point at the origin to a configuration
drawn from the original measure. See for example
Chapter 11 of Kallenberg \cite{Kallenberg} for extensive details.
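As a concrete illustration of the Palm description (our example, not needed for the arguments below): for a rate-1 Poisson process on the line, the Palm version adds a point at the origin, whose distance to its nearest neighbour is the minimum of two independent Exp(1) gaps, i.e.\ Exp(2) with mean $1/2$. Averaging nearest-neighbour distances over the points of one long simulated realization recovers this:

```python
import random

random.seed(1)
L = 100000.0
# rate-1 Poisson process on [0, L], built from i.i.d. Exp(1) gaps
pts = []
x = random.expovariate(1.0)
while x < L:
    pts.append(x)
    x += random.expovariate(1.0)

# nearest-neighbour distance of each interior ("typical") point
nn = [min(pts[i] - pts[i - 1], pts[i + 1] - pts[i])
      for i in range(1, len(pts) - 1)]
mean_nn = sum(nn) / len(nn)   # Palm prediction: E min(Exp(1), Exp(1)) = 1/2
```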
Our multi-type models
add information about the colours of the points of the Poisson process.
In the language of \cite{Kallenberg}, this information
can be taken as a \textit{stationary background}. To obtain the Palm
version, we add a point at the origin whose colour is again
drawn according to the same distribution as the other points
(and independently of the rest of the configuration).
Intensities of various types of point in the original process
can then be related to probabilities involving the point at the origin in
the Palm version. In particular, the intensity of
points of type $i$ which are unmatched in the stable matching
is given by the probability that the point at the origin in the Palm
version has type $i$ and is unmatched in the stable matching.
\subsection{Descending paths}
In the PWIT, let a
\textit{descending path from the root with weights less than $T$}
be a sequence
$v_0, v_1, \dots, v_k$ of points of ${\mathbb N}^{\downarrow}$,
where $v_0$ is the root, where
$v_i$ is a child of $v_{i-1}$ for $i=1,2,\dots, k$, and where
\[
T>t(v_0, v_1)>t(v_1,v_2)>\dots>t(v_{k-1}, v_k),
\]
where $t(v_{i-1}, v_i)$ is the weight of the edge between
$v_{i-1}$ and $v_i$.
For the Palm version of the
Poisson process in ${\mathbb R}^d$,
let a \textit{descending path from the origin with distances less than $R$}
be a sequence of distinct points of the process $x_0, x_1, \dots, x_k$ where $x_0$ is the origin and
where
\[
R>|x_0-x_1|>|x_1-x_2|>\dots>|x_{k-1}-x_k|.
\]
For finite $T$, with probability 1 the set of descending paths from
the root with weights less than $T$ in the PWIT is finite and contains only
finite paths (see Lemma \ref{lemma:treesize}).
(The analogous property is also true for the
Palm version of the Poisson process in ${\mathbb R}^d$;
this is a special case of Theorem 4.1 of
Daley and Last \cite{DaleyLast}.)
From Proposition \ref{prop:whoismatched},
we know that for the stable matching on the PWIT,
the event that the root is matched
to a child along an edge with weight less than $T$
is in the sigma-algebra generated by
the graph of descending paths from the root
with weights less than $T$ (including the information
about the colours of points); similarly in ${\mathbb R}^d$ for the event
that the origin is matched to a point at distance less than $R$.
\subsection{Description of the coupling}
Throughout this section, we consider the Palm version
of the Poisson process in ${\mathbb R}^d$.
Suppose $T$ and $R$ are related via $T=f_d(R):=\omega_d R^d$
so that a ball of radius $R$ in ${\mathbb R}^d$ has volume $T$.
We aim to couple the collection of descending paths from the root
with weights less than $T$ in the PWIT with
the collection of descending paths from the origin
with distances less than $R$ in ${\mathbb R}^d$,
in such a way that their graph structure is identical,
and such that the weight $t$ of an edge in the PWIT
and the distance $r$ between the corresponding points in ${\mathbb R}^d$
are related by $t=f_d(r)=\omega_d r^d$.
Specifically, we want to arrange that
the hypotheses of Proposition
\ref{prop:stablerescaled} are satisfied
with high probability.
As in Section \ref{sec:uniqueness},
let $V_R^\downarrow(O)$ and $E_R^\downarrow(O)$
be the sets of points, and respectively edges,
contained in some descending path from the origin
with distances less than $R$ in ${\mathbb R}^d$,
and let $\widetilde{V}_T^\downarrow(\emptyset)$
and $\widetilde{E}_T^\downarrow(\emptyset)$
be the set of points,
and respectively edges,
contained in some descending path
from the root in the PWIT with weights less than $T$.
\begin{prop}\label{prop:couplingsuccess}
There exist absolute constants $\alpha>0$ and $c$ such that the following holds.
Let $T\geq 1$, let $d>cT$, and let $R=f_d^{-1}(T)$.
Then we can couple the PWIT model
with the Palm version of the Poisson model
in ${\mathbb R}^d$ such that with probability at least
$1-e^{-\alpha T}$,
there exists a bijective map
$\phi:
V_R^\downarrow(O) \to
\widetilde{V}_T^\downarrow(\emptyset)$
with the following properties:
\begin{itemize}
\item[(i)]
$x$ and $\phi(x)$ have the same colour for all $x\in V_R^\downarrow(O)$;
\item[(ii)]
$x_0, x_1, \dots, x_k$ is a descending
path from the origin with distances less than
$R$ in ${\mathbb R}^d$ if and only if $\phi(x_0), \phi(x_1), \dots ,\phi(x_k)$
is a descending path from the root with weights less than
$T$ in the PWIT, and if so then
$t(\phi(x_{i-1}), \phi(x_i))=
f_d(|x_{i-1}-x_i|)$
for $i=1,2,\dots,k$.
\end{itemize}
\end{itemize}
\end{prop}
Using this result we can apply
Proposition \ref{prop:stablerescaled} to obtain
that (with probability at least $1-e^{-\alpha T}$)
the origin in ${\mathbb R}^d$ is matched within distance $R$ if and only if the root of the PWIT is matched along an edge of weight less than $T$.
Our strategy is to couple a procedure that
explores the
collection of descending paths in ${\mathbb R}^d$ with one
that generates the collection for the PWIT,
aiming to maintain the bijection as described above.
If certain events occur for the Poisson configuration in ${\mathbb R}^d$,
the coupling will fail (and we terminate the procedure -- on this set we couple
the two processes in an arbitrary way so as to
maintain the required marginals);
but if the procedure reaches the end then it is guaranteed that
a bijection as described above exists. We will give a lower bound
for the probability that the coupling reaches the end successfully.
First we describe the procedure to explore the collection in ${\mathbb R}^d$.
We will abandon this
exploration if it ever discovers a point in
${\mathbb R}^d$ that can be arrived at via two different descending paths
from the origin within distance $R$
(if this happens, it is certainly impossible to couple
successfully, since in the case of the PWIT the
collection of descending paths has a tree structure).
For a point $x\in V_R^\downarrow(O)$, other than the origin,
we say the \textit{parent} of $x$ is the point that precedes it
in the descending path from the origin to $x$ with
distance less than $R$. This is unambiguously defined for as long as the procedure keeps running, since if more than one such path is ever discovered, the procedure stops.
In fact, we will be more conservative. If we find two points of $V_R^\downarrow(O)$
which are closer than $R$ to each other, and neither is the parent of the other, we will abandon the procedure. (Note that this must occur if there is any point that can be arrived at via two different descending paths from the origin within distance $R$.) Furthermore, we will also abandon the procedure if it ever finds a parent and child which are closer than
$R/2$ to each other.
We explore space gradually, discovering points of $V_R^\downarrow(O)$
as we proceed. We maintain an ordered list
of points which we have discovered, say $x_0, x_1, \dots, x_k$,
where $x_0$ is the origin.
Let $r_0=R$ and for $j>0$, let $r_j$ be the distance to $x_j$ from its parent, which is $x_i$ for some $i<j$; note that $r_j<r_i$ by the descending path property.
We ``process" the points in order; to process $x_j$, we look for new points in the open ball $B_j:=B(x_j, r_j)$, and add any such new points to the end of the list. These are the points whose parent is $x_j$, i.e.\ the points which can follow $x_j$ in a descending path.
Suppose our list is $x_0, \dots, x_k$,
and we are currently processing point $x_j$ where $j\leq k$.
This means that
we have already processed $x_0, \dots, x_{j-1}$, and so
the region $A_j$ defined by
\begin{equation}\label{Ajdef}
A_j:=B_j\cap\bigcup_{0\leq i<j} B_i
\end{equation}
has already been explored,
and is known to contain no Poisson points
other than $x_j$ (if there had been any such
point, the procedure would have terminated
at an earlier stage since
that point and $x_j$
would have been too close to each other.)
Now we describe how to couple this exploration procedure
with a process which generates
the tree of descending paths with weights less than $T$
in the PWIT. First a useful observation:
\begin{lemma}\label{lemma:simplescaling}
Let $r>0$ and $t=f_d(r)$.
Let $x\in{\mathbb R}^d$ and let $x_1, x_2, \dots, x_k$
be the points of a Poisson process of rate 1 in $B(x,r)$.
Let $t_i=f_d(|x_i-x|)$. Then $t_1, t_2, \dots, t_k$
are the points of a Poisson process of rate 1 on $[0,t]$.
\end{lemma}
\begin{proof}
This is immediate from basic properties of the Poisson process,
and the fact that the ball
$\{y:f_d(|y-x|) <s\}=B(x,f_d^{-1}(s))$
has volume $s$.
\end{proof}
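A small Monte Carlo illustration of Lemma \ref{lemma:simplescaling} (ours; here $d=3$ and $r=1$, so $t=\omega_3=4\pi/3$): for points sampled uniformly in $B(0,r)$, the values $f_d(|y|)=\omega_d|y|^d$ should be uniform on $[0,t]$:

```python
import random, math

random.seed(2)
d, r = 3, 1.0
omega_d = 4 * math.pi / 3          # volume of the unit ball in R^3
t = omega_d * r**d

# rejection-sample points uniformly in the ball B(0, r)
vals = []
while len(vals) < 50000:
    y = [random.uniform(-r, r) for _ in range(d)]
    s2 = sum(c * c for c in y)
    if s2 <= r * r:
        vals.append(omega_d * s2 ** (d / 2))   # f_d(|y|) = omega_d |y|^d

mean_val = sum(vals) / len(vals)
frac_below_half = sum(1 for u in vals if u < t / 2) / len(vals)
# if f_d(|y|) is uniform on [0, t]: mean ≈ t/2 and frac ≈ 1/2
```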
We start off with $x_0$ the origin in ${\mathbb R}^d$ and $v_0$ the
root of the PWIT. For as long as the coupling is successful,
at each stage we have a set of points $v_0, \dots, v_k$
which have been discovered in the PWIT, and which correspond
to the points $x_0, \dots, x_k$ discovered in ${\mathbb R}^d$. The root
of the PWIT is $v_0$ and corresponds to the origin in ${\mathbb R}^d$,
which is $x_0$.
If $v_i$ is the parent of
$v_j$ in the PWIT, let $t_j$ be the weight of the edge
between $v_i$ and $v_j$;
then $x_i$ is the parent of $x_j$ in the sense described
earlier for ${\mathbb R}^d$, and $t_j=f_d(|x_i-x_j|)$.
At the same time as processing $x_j$ in ${\mathbb R}^d$, we process $v_j$ in the PWIT. Processing $v_j$ involves generating
the children of $v_j$ which are connected to $v_j$
along edges with weights in $[0,t_j)$.
Notice that $t_j$ is precisely the volume of $B(x_j, r_j)$.
Hence we can couple the children of $v_j$ in the interval $[0,t_j)$
with a Poisson process of rate 1 in $B_j=B(x_j, r_j)$ in
such a way that the weights on the edges from $v_j$ and
the distances of the points from $x_j$ are related according
to the scaling in Lemma \ref{lemma:simplescaling}.
Notice that at this stage of the exploration procedure,
the new points we discover are not a Poisson process on the
whole of $B_j$; as observed above at (\ref{Ajdef}),
the subset $A_j$ of $B_j$ has already been explored. Hence
to generate a set of children and edge weights according
to the correct distribution, we supplement the new
points in $B_j\setminus A_j$ (which are independent of everything
seen in the procedure so far, since this region
has not yet been explored) with an extra Poisson process
of rate 1 in $A_j$, again chosen independently of the
points in $B_j$ and of everything else seen so far.
In this way we obtain a Poisson process of rate 1 in $B_j$,
which is independent of the previous history of the procedure,
and we use the correspondence in Lemma \ref{lemma:simplescaling}
to derive the weights to children of $v_j$ which lie in $[0,t_j)$.
If in fact the extra Poisson process in $A_j$ contains at least one point,
we are in trouble, because we cannot maintain the correspondence between the new points found in ${\mathbb R}^d$ and the new vertices added
to the PWIT. In this case we abandon the procedure.
However, if this supplementary process in $A_j$ is empty,
then we can maintain the bijection and the procedure continues.
If the procedure finishes (i.e.\ runs out of new points in ${\mathbb R}^d$
to process) without abandoning, then it provides a bijection between
$V_R^\downarrow(O)$ and $\widetilde{V}_T^\downarrow(\emptyset)$ as required for Proposition \ref{prop:couplingsuccess}.
We summarise the ways that the procedure may fail at step $j$, i.e.\
at the step where we process the point $x_j$:
\begin{itemize}
\item[(1)] Within $B_j\setminus A_j$, we find a child of $x_j$
which is within distance $R/2$ of $x_j$.
\item[(2)] Within $B_j\setminus A_j$, we find two children of $x_j$
which are within distance $R$ of each other.
\item[(3)] Within $B_j\setminus A_j$, we find a child of $x_j$
which is within distance $R$ of a previously discovered point.
\item[(4)] The supplementary Poisson process of rate 1 on $A_j$
contains one or more points.
\end{itemize}
For $m=1,2,3,4$, let us write $\mathcal{E}_j^{(m)}$ for the event
that the procedure successfully completes steps $1,\dots, j-1$,
and then failure type $(m)$ above occurs at step $j$.
(Under this definition it is possible that $\mathcal{E}_j^{(m)}$ and $\mathcal{E}_j^{(m')}$
both occur for different $m$ and $m'$, but it is not possible that
$\mathcal{E}_j^{(m)}$ and $\mathcal{E}_{j'}^{(m')}$ both occur for
$m,m'\in\{1,2,3,4\}$ and for different $j$ and $j'$.)
In the next section we bound the probabilities of each of these types of failure.
If the procedure does fail at step $j$, we do not proceed to step $j+1$.
For the sake of being specific about the coupling, let us say that
we generate the rest of the subtree of the PWIT spanned by $\widetilde{V}_T^\downarrow(\emptyset)$
according to its distribution conditional on the part of the structure
already created at steps $1,\dots, j$, and independently of any further
information about the process in ${\mathbb R}^d$.
\subsection{Bounding the probability of failure of the coupling}
As above, throughout this section we set $T=\omega_d R^d$, the volume of a ball
of radius $R$ in ${\mathbb R}^d$.
\label{boundsection}
\begin{lemma}\label{lemma:E1bound}
For all $j$, $\P(\mathcal{E}_j^{(1)})\leq \left(\frac12\right)^d T$.
\end{lemma}
\begin{proof}
$\mathcal{E}_j^{(1)}$ is the event that the procedure
reaches step $j$, and we then find at least one new point
in $B(x_j, R/2)\setminus A_j$. Since (independently of everything
seen so far) the points in that set form a Poisson process of rate 1,
the probability of this event is bounded above by the volume of $B(x_j, R/2)$,
which is $(1/2)^d T$ as required.
\end{proof}
\begin{lemma}
If $y,z\in {\mathbb R}^d$ with $|y-z|\geq R/2$,
then
\begin{equation}\label{volbound}
\operatorname{vol}\big(B(y,R)\cap B(z,R)\big)
\leq \left(\frac{15}{16}\right)^{d/2} T.
\end{equation}
\end{lemma}
\begin{proof}
Any point $x$ in the intersection of the two balls
is at distance at most $h$ from the midpoint of $y$ and $z$,
where $h^2=R^2-(|y-z|/2)^2\leq (15/16)R^2$.
(This can be easily checked by considering the plane which contains $x$, $y$ and $z$.)
Hence the intersection is contained in a ball of radius $h$,
whose volume is $(h/R)^d T\leq (15/16)^{d/2} T$.
\end{proof}
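For intuition about how conservative (\ref{volbound}) is, one can compare it with the exact lens area in the extreme case $d=2$, $|y-z|=R/2$ (a check of ours, using the standard formula for the intersection area of two discs of equal radius):

```python
import math

R = 1.0
a = R / 2                          # extreme case |y - z| = R/2
# exact area of the intersection of two discs of radius R at centre distance a
lens = (2 * R * R * math.acos(a / (2 * R))
        - (a / 2) * math.sqrt(4 * R * R - a * a))
T = math.pi * R * R                # area of a disc of radius R (d = 2)
bound = (15 / 16) ** (2 / 2) * T   # the lemma's right-hand side with d = 2
```

The exact area is about $2.15\,R^2$, against the bound's $2.95\,R^2$; the gain over the trivial bound $T$ grows exponentially in $d$.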
\begin{lemma}\label{lemma:E2bound}
For all $j$,
$\P(\mathcal{E}_j^{(2)}\setminus \mathcal{E}_j^{(1)})\leq \left(\frac{15}{16}\right)^{d/2}T^2$.
\end{lemma}
\begin{proof}
If $\mathcal{E}_j^{(2)}$ happens but $\mathcal{E}_j^{(1)}$ does not, then
at step $j$ we find a pair of new points, say $y$ and $y'$,
which are both between distance $R/2$ and $R$ from $x_j$, and are within distance
$R$ of each other.
The expected number of such pairs is no more than
\[
\int_y I\big(y\in B(x_j, R)\setminus B(x_j, R/2)\big)
\int_{y'} I\big(y'\in B(x_j, R)\cap B(y,R) \big)dy' dy.
\]
Using (\ref{volbound}), this is bounded above by
\[
\big[\operatorname{vol} B(x_j, R)-\operatorname{vol} B(x_j, R/2)
\big]
\left(\frac{15}{16}\right)^{d/2} T
\]
which is less than $(15/16)^{d/2}T^2$ as desired.
\end{proof}
\begin{lemma}\label{lemma:E3bound}
For all $j$ and $K$,
$\P(\mathcal{E}_j^{(3)} \cap \{|\widetilde{V}_T^\downarrow(\emptyset)|\leq K\})
\leq (K-1)\left(\frac{15}{16}\right)^{d/2}T$.
\end{lemma}
\begin{proof}
If the procedure runs successfully to step $j$,
and $|\widetilde{V}_T^\downarrow(\emptyset)|\leq K$, then at step $j$ (when we come to process the point $x_j$),
the set of already discovered points is $x_0, x_1, \dots, x_k$ for some $k\leq K-1$.
We want to bound the probability that we then find a new point inside $B_j$
which is within distance $R$ of some $x_i$, $i\ne j$, $i\leq k$.
This is at most
\[
\operatorname{vol}
\bigcup_{i\leq k, i\ne j} \big(B(x_i, R)\cap B(x_j, R)\big).
\]
But if indeed the procedure has been successful so far, then
in particular $|x_j-x_i|\geq R/2$ for all such $i$.
Then using (\ref{volbound}), the probability is at most
$k\left(\frac{15}{16}\right)^{d/2}T\leq (K-1)\left(\frac{15}{16}\right)^{d/2}T$,
which gives the desired bound.
\end{proof}
\begin{lemma}\label{lemma:E4bound}
For all $j$,
$\P(\mathcal{E}_j^{(4)})
\leq j\left(\frac{15}{16}\right)^{d/2}T$.
\end{lemma}
\begin{proof}
This case is very similar to Lemma \ref{lemma:E3bound}.
We wish to bound the probability that at step $j$, the
``supplementary" Poisson process of rate 1 in the set $A_j$ defined
by (\ref{Ajdef}) is non-empty. Using the same argument as above,
if the procedure has run successfully up to step $j$, then
each point $x_i$, $i<j$, is at distance at least $R/2$ from $x_j$.
Then using (\ref{volbound}), the volume of the set in (\ref{Ajdef}) is at most
$j\left(\frac{15}{16}\right)^{d/2}T$.
\end{proof}
\begin{lemma}\label{lemma:treesize}
${\mathbb E}\big(|\widetilde{V}_T^\downarrow(\emptyset)|\big)=e^T$, and hence
\begin{align}
\label{Markovbound}
\P\big(|\widetilde{V}_T^\downarrow(\emptyset)|>e^{2T}\big)&\leq e^{-T}.
\end{align}
\end{lemma}
\begin{proof}
For $k\geq 0$,
the expected number of descending paths $v_0, v_1,\dots, v_k$
with weights less than $T$ in the PWIT, where $v_0$ is the root, is given by
\[
\int_{0<t_k<\dots<t_1<T}dt_1\dots dt_k
\]
which is $T^k/k!$.
Since in the PWIT each point is the endpoint of at most one
such path, we can sum over $k$ to get
${\mathbb E}\big(|\widetilde{V}_T^\downarrow(\emptyset)|\big)=e^T$.
The bound in
(\ref{Markovbound})
then follows by Markov's inequality.
\end{proof}
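The identity ${\mathbb E}\big(|\widetilde{V}_T^\downarrow(\emptyset)|\big)=e^T$ can also be checked by simulation (an illustration of ours): the tree of descending paths can be generated recursively, since the weights of edges below a vertex entered along an edge of weight $t$ form a rate-1 Poisson process on $[0,t)$:

```python
import math, random

random.seed(3)

def tree_size(t):
    """Size of the tree of descending paths with weights below t:
    children arrive as a rate-1 Poisson process on [0, t), and the
    subtree below a child reached at weight s is again tree_size(s)."""
    size = 1
    s = random.expovariate(1.0)
    while s < t:
        size += tree_size(s)
        s += random.expovariate(1.0)
    return size

T = 1.0
n = 40000
mean_size = sum(tree_size(T) for _ in range(n)) / n   # should be close to e^T
```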
\begin{proof}[Proof of Proposition \ref{prop:couplingsuccess}]
Using the estimate in (\ref{Markovbound}), we can combine
the four previous bounds using a union bound.
If the procedure fails, then either it does so at step $j$ for some $j\leq e^{2T}$,
or $|\widetilde{V}_T^\downarrow(\emptyset)|>e^{2T}$. Then we can combine all the bounds
in Lemmas \ref{lemma:E1bound}, \ref{lemma:E2bound}, \ref{lemma:E3bound},
\ref{lemma:E4bound} and \ref{lemma:treesize}
to give
\begin{align*}
\P\left(
\bigcup_{j=1}^\infty \bigcup_{m=1}^4 \mathcal{E}_j^{(m)}\right)
&\leq
\P\left(|\widetilde{V}_T^\downarrow(\emptyset)|>e^{2T}\right)
+ \P\left(\bigcup_{1\leq j\leq e^{2T}} \bigcup_{m=1}^4 \mathcal{E}_j^{(m)}, |\widetilde{V}_T^\downarrow(\emptyset)|\leq e^{2T}\right)\\
&\leq
e^{-T}+\sum_{1\leq j\leq e^{2T}}\sum_{m=1}^4 \P\left(\mathcal{E}_j^{(m)}, |\widetilde{V}_T^\downarrow(\emptyset)|\leq e^{2T}\right)
\\
&\leq e^{-T}+ 4\big(e^{2T}\big)^2\left(\frac{15}{16}\right)^{d/2}T^2
\end{align*}
(assuming $T\geq 1$).
For some constants $c$ and $\alpha$, this upper bound
is less than $e^{-\alpha T}$ for all $T\geq 1$ and
all $d>cT$,
as required for Proposition \ref{prop:couplingsuccess}.
\end{proof}
\section{Euclidean model: proof of Theorems
\ref{thm:asymmetricRd} and \ref{thm:symmetricRd}}
\label{sec:conclusionRd}
\begin{prop}\label{prop:larged}
Consider stable matching for the asymmetric two-type
model in ${\mathbb R}^d$ where each point is red with probability $1-\epsilon$
and blue with probability $\epsilon$.
Fix any $\delta>0$. Then
there exists $c'=c'(\delta)$ such that for
all small enough $\epsilon$,
and all $d>c'\frac1\epsilon e^{1/\epsilon}$,
the density of blue points which
remain unmatched is in
$[(1-\delta)\epsilon e^{-1/\epsilon+1},
(1+\delta)\epsilon e^{-1/\epsilon+1}]$.
\end{prop}
\begin{proof}
Recall from Section \ref{subsubsec:asymmPWIT}
that $r(t)$ and $b(t)$ are the probabilities
that the root of the PWIT is red (or respectively blue)
and is not matched within distance $t$.
We have $r(0)=1-\epsilon$ and $b(0)=\epsilon$,
and as $t\to \infty$, $r(t)\to 0$ and $b(t)\to b(\infty)=
\epsilon e^{-1/\epsilon+1}$.
Correspondingly, write $r^{(d)}(t)$
and $b^{(d)}(t)$ for the probability that,
in the Palm version of the model in ${\mathbb R}^d$,
the point at the origin is red (or respectively blue)
and is not matched within distance $f_d(t)$.
By ergodicity of the Poisson process, the sets of points in ${\mathbb R}^d$
which are red, or respectively blue, and unmatched
within distance $f_d(t)$ have densities $r^{(d)}(t)$
and $b^{(d)}(t)$ with probability 1.
Set also $b^{(d)}(\infty)=\lim b^{(d)}(t)$;
then the set of blue points which remain unmatched
for ever has density $b^{(d)}(\infty)$ with probability 1.
The density of blue points matched at distance
greater than $t$ cannot be greater than the
density of red points matched at distance greater than $t$
(by a mass transport argument). Hence we have that for any $t$,
\begin{equation}\label{bdinfbounds}
b^{(d)}(t)-r^{(d)}(t)
\leq
b^{(d)}(\infty)
\leq b^{(d)}(t).
\end{equation}
From (\ref{rb-equation}),
$b(t)-r(t)$ has positive derivative at all times, so that
$b(t)-b(\infty)\leq r(t)-r(\infty)=r(t)$ for all $t$.
Combining with the bound on $r(t)$ just after (\ref{rb-equation}),
we have that for all $t$,
\begin{equation}
b(t)-b(\infty)\leq r(t) \leq \frac{1}{t}.
\label{btbound}
\end{equation}
Now fix some $\gamma>1$, and let
$T=\gamma\frac1\epsilon e^{1/\epsilon}=\gamma e/b(\infty)>1$.
Suppose that $d>cT$, where $c$ and $\alpha$ are given by Proposition \ref{prop:couplingsuccess}.
Using the coupling from that proposition, we then have that
\begin{equation}
\left|r^{(d)}(T)-r(T)\right|+
\left|b^{(d)}(T)-b(T)\right|<e^{-\alpha T}.
\label{couplingbound}
\end{equation}
Combining all of (\ref{bdinfbounds}),
(\ref{btbound}), and
(\ref{couplingbound}), we get
\begin{align*}
b^{(d)}(\infty)
&\geq b^{(d)}(T)-r^{(d)}(T)\\
&\geq b(T)-r(T)
-\left|b^{(d)}(T)-b(T)\right|-\left|r^{(d)}(T)-r(T)\right| \\
&\geq b(\infty)-\frac{1}{T}-e^{-\alpha T},
\\
\intertext{and}
b^{(d)}(\infty)
&\leq b^{(d)}(T)\\
&\leq b(\infty) + \big(b(T)-b(\infty)\big) +\left|b^{(d)}(T)-b(T)\right|\\
&\leq b(\infty)+\frac{1}{T}+e^{-\alpha T}.
\end{align*}
Since $T=\gamma e/b(\infty)$,
for given $\delta$
we can choose $\gamma$
sufficiently large that $b^{(d)}(\infty)$
lies in
$[(1-\delta)\epsilon e^{-1/\epsilon+1},
(1+\delta)\epsilon e^{-1/\epsilon+1}]$.
Taking $c'=c\gamma$ then completes the proof.
\end{proof}
\begin{proof}[Proof of Theorems \ref{thm:asymmetricRd}
and \ref{thm:symmetricRd}]
The statement of Theorem
\ref{thm:asymmetricRd} follows immediately from
Proposition \ref{prop:larged}.
A similar argument leads to Theorem \ref{thm:symmetricRd}
for the symmetric multi-type model.
Let $x_i^{(d)}(t)$ be the density of points of type $i$
which are not matched within distance $f_d(t)$, and let
$x_1^{(d)}(\infty)$ be the density of points of type 1 which
remain unmatched for ever.
As in Section \ref{subsubsec:symm}, define
$x_i(t)$ to be the probability that the
root of the PWIT has type $i$ and is not matched along an edge of weight less than $t$.
Then as $t\to\infty$, $x_i(t)\to 0$ for $i>1$ and
$x_1(t)\to x_1(\infty)>0$.
As at (\ref{bdinfbounds}) above, we have
\[
x_1^{(d)}(t)-\sum_{i>1} x_i^{(d)}(t)
\leq x_1^{(d)}(\infty)
\leq x_1^{(d)}(t).
\]
For any $\delta>0$, if $T$ is large enough, then
\[
x_i(T)<\delta/(2k)
\]
for all $i>1$, and also
\[
x_1(T)-x_1(\infty)<\delta/(2k).
\]
Finally, again if $T$ is large enough, and $d>cT$
where $c$ is given by Proposition \ref{prop:couplingsuccess},
then
\[
\sum_i|x_i^{(d)}(T)-x_i(T)|<e^{-\alpha T} <\delta/(2k).
\]
Combining all these bounds, we obtain that if
$T$ is large enough and $d>cT$, then
$|x_1^{(d)}(\infty)-x_1(\infty)|<\delta$.
Putting $\lambda=x_1(\infty)$,
this gives the conclusion of Theorem \ref{thm:symmetricRd}.
\end{proof}
\section{Hierarchical model: proof of Theorem \ref{thm:hierarchical}}
\label{sec:conclusionhierarchical}
Recall that in the context of Theorem \ref{thm:hierarchical},
we have a Poisson process of rate $\lambda$ on ${\mathbb R}_+$,
in which each point is coloured blue with probability $\epsilon$
and red with probability $1-\epsilon$. Red-red and red-blue matches
are allowed, but not blue-blue. Given the hierarchical distance
$\rho$ defined by (\ref{rhodef}),
let $\ell(x,y)=\rho(x,y)$, except when both points $x$ and $y$ are blue,
in which case $\ell(x,y)=\infty$.
Now we cannot apply Proposition \ref{prop:uniquestable} directly,
since with probability 1 there will be points which are
equidistant from others, and so condition (i) does not hold,
and the stable matching for $\ell$ will not be unique.
We can consider instead the distance $\widetilde{\rho}$ on ${\mathbb R}$
given by
\[
\widetilde{\rho}(x,y)=\rho(x,y)+|x-y|.
\]
Correspondingly, for two Poisson points $x$ and $y$,
let $\widetilde{\ell}(x,y)=\widetilde{\rho}(x,y)$ unless
both points are blue, in which case $\widetilde{\ell}(x,y)=\infty$.
Now with probability 1, the function $\widetilde{\ell}$ does satisfy
the conditions of Proposition \ref{prop:uniquestable}, so
that there exists a unique stable matching for $\widetilde{\ell}$.
One can easily show that if $\widetilde{\ell}(u,v)\leq \widetilde{\ell}(x,y)$,
then also $\ell(u,v)\leq \ell(x,y)$.
Using the definition of stable matching, it follows
that if $M$ is
stable for $\widetilde{\ell}$, then it is also stable for $\ell$.
So at least one stable matching for $\ell$ exists.
Recall that we write $N_k(m)$ for the excess of blue points
over red points in the interval $[2^k m, 2^k (m+1)]$, out
of those which are not matched to another point in the interval,
i.e.\ which are not matched at distance $k$ or less.
We use the recursion at (\ref{hierarchicalrecursion}) for
the distribution of $N_k(m)$ as $k$ varies.
Define
\begin{align*}
\beta_k&=\P(N_k(m)\text{ is even})\\
\gamma_k&=\P(N_k(m)\text{ is odd and positive})\\
\delta_k&=\P(N_k(m)\text{ is even and positive}).
\end{align*}
First note that for any $k$,
\begin{align*}
\beta_{k}&=\beta_{k-1}^2+\left(1-\beta_{k-1}\right)^2\\
&\geq 1/2.
\end{align*}
Then
\begin{align*}
\gamma_{k+1}&=2\gamma_{k}\beta_k
+ 2\delta_{k}(1-\beta_{k}-\gamma_{k})
\\
&\geq \gamma_{k}
\\
\intertext{and}
\delta_{k+1}&\geq\gamma_k^2,
\\
\intertext{so that}
\gamma_{k+2}
&= 2\gamma_{k+1}\beta_{k+1}
+2\delta_{k+1}(1-\beta_{k+1}-\gamma_{k+1})
\\
&\geq 2\gamma_k\beta_{k+1}
+2\gamma_k^2(1-\beta_{k+1}-\gamma_{k+1})
\\
&=
2\gamma_k \frac12 + 2(\gamma_k-\gamma_{k}^2)(\beta_{k+1}-\frac12)
+2\gamma_k^2(\frac12-\gamma_{k+1})
\\
&\geq
2\gamma_k\frac12 +2\gamma_k^2(\frac12-\gamma_{k+1})\\
&\geq
\gamma_k + \frac{\gamma_{k}^2}{3}
\end{align*}
as long as $\gamma_{k+1}\leq 1/3$.
So $\gamma_k$ will eventually reach $1/3$ for some $k$.
In fact, for any constant $c>0$, the
number of iterations of the recursion
$x\mapsto x+x^2/3$ required to exceed the value $c$ starting from
the value $x_0$ is $3(1+o(1))x_0^{-1}$ as $x_0\downarrow 0$.
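This iteration count is easy to confirm numerically (a quick check of ours):

```python
def steps_to_exceed(x0, c=1.0 / 3):
    """Number of iterations of x -> x + x^2/3 needed to exceed c from x0."""
    x, n = x0, 0
    while x <= c:
        x += x * x / 3
        n += 1
    return n

x0 = 1e-4
n = steps_to_exceed(x0)
ratio = n * x0 / 3   # should be close to 1, by the 3(1+o(1))/x0 asymptotic
```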
Here we have $\gamma_0>e^{-1}\epsilon e^{-\epsilon}=\epsilon e^{-(1+\epsilon)}$,
since this is the probability that an interval of length 1
contains no red points and exactly one blue point.
Hence for some function $k_0=k_0(\epsilon)$ for which
\begin{equation}\label{k0}
k_0(\epsilon)=3e(1+o(1))\epsilon^{-1} \text{ as } \epsilon\to 0,
\end{equation}
we have that $\gamma_k\geq 1/3$ for all $k\geq k_0(\epsilon)$.
Then also $E N_k(m)\geq1/6$ for $k\geq k_0(\epsilon)$,
since $\P(N_k(m)\geq 1)\geq 1/3$, while
$\P(N_k(m)=-1)=\P(N_k(m) \text{ odd})-\gamma_k\leq 1/2-1/3=1/6$.
Note that, more weakly than (\ref{hierarchicalrecursion}),
we have that $N_k(m)\geq N_{k-1}(2m)+N_{k-1}(2m+1)$.
In particular, for any $r>0$, the quantity $N_{k_0+r}(m)$
is bounded below by a sum of $2^r$ independent copies
of the random variable $N_{k_0}(0)$, which has finite mean and is bounded below.
Hence, using standard large deviations results, there is some constant $\theta>0$ such that for all $k\geq k_0$,
\[
\P(N_k(m)< \theta 2^k)\leq e^{-\theta 2^k}.
\]
In particular, the sum of the right-hand sides over all $k\geq k_0$ is finite. We obtain
that with probability 1, there exists some $K$ such that
\begin{equation}\label{allblue}
N_K(0)\geq \theta 2^K, \text{ and } N_k(1)\geq \theta 2^k \text{ for all } k\geq K.
\end{equation}
The quantities $N_K(0)$ and $\{N_k(1): k\geq K\}$ relate
to the intervals $[0,2^K)$ and $\{[2^k, 2^{k+1}): k\geq K\}$
which form a partition of ${\mathbb R}_+$. If indeed (\ref{allblue})
holds, then (for any stable matching) none of these intervals contains
a red point which is matched outside the interval. Hence, for any
of these intervals, all the blue points which are not matched within the interval
are not matched at all.
Hence in fact the number of unmatched blue points in $[0,2^k)$ is at least
$\theta 2^k$ for all $k\geq K$. Taking $R=2^K$ and $c=\theta/2$ then gives the result of Theorem \ref{thm:hierarchical}.
\section{Existence and uniqueness of a stable matching:
proof of Propositions \ref{prop:uniquestable}, \ref{prop:whoismatched} and \ref{prop:stablerescaled}}
\label{sec:stableproof}
Before proving Propositions \ref{prop:uniquestable} and
\ref{prop:whoismatched}, we first note a useful characterisation
of a stable matching which holds under the assumption
that the weights of edges from any given point $x$ are
all distinct.
For convenience we repeat here the definition
given at
(\ref{stablecondition}); a matching $M$
is stable if
\begin{equation}
\label{stablecondition2}
\ell(x,y)\geq\min\left(d_M(x), d_M(y)\right)
\text{ for all $x$ and $y$.}
\end{equation}
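On a finite configuration this condition can be checked mechanically; a minimal Python sketch (the point set, distance weight, and partner maps below are illustrative):

```python
def is_stable(points, weight, partner):
    """Check l(x, y) >= min(d_M(x), d_M(y)) for every pair of distinct points."""
    inf = float("inf")

    def d(x):  # distance from x to its partner, or infinity if unmatched
        y = partner.get(x)
        return weight(x, y) if y is not None else inf

    return all(weight(x, y) >= min(d(x), d(y))
               for x in points for y in points if x != y)

pts = [0, 1, 3, 10]
dist = lambda a, b: abs(a - b)
assert is_stable(pts, dist, {0: 1, 1: 0, 3: 10, 10: 3})
assert not is_stable(pts, dist, {0: 3, 3: 0, 1: 10, 10: 1})  # 0 and 1 prefer each other
```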
\begin{lemma}\label{lemma:stableprop}
Suppose condition (i) of Proposition \ref{prop:uniquestable}
holds (the distinct weights condition). Then a matching $M$
is stable iff for all $x\in V$ and $R\in(0,\infty)$,
\begin{equation}
\label{dMprop}
\{d_M(x)<R\} \Leftrightarrow \{\exists y \text{ such that }
\ell(x,y)<R \text{ and } d_M(y)\geq \ell(x,y)\}.
\end{equation}
\end{lemma}
\begin{proof}
We will show that if the distinct weights condition holds,
then (\ref{stablecondition2}) and (\ref{dMprop}) are equivalent.
Suppose the matching is not stable, so (\ref{stablecondition})
fails for some $x,y$. Then we can
choose $R$ with $\ell(x,y)<R<\min\{d_M(x), d_M(y)\}$,
in which case the right
side of (\ref{dMprop}) holds but the left side does not.
Hence (\ref{dMprop}) also fails.
On the other hand, suppose that (\ref{stablecondition2}) holds.
If $d_M(x)\geq R$ but the right side of (\ref{dMprop}) were true,
then $x$ and $y$ are not partners (since $\ell(x,y)<d_M(x)$).
But by the distinct weights condition, $y$ does not have a partner
$z$ with $\ell(z,y)=\ell(x,y)$. Hence in fact $d_M(y)>\ell(x,y)$
strictly; then $\ell(x,y)<\min\{d_M(x), d_M(y)\}$
contradicting (\ref{stablecondition2}).
Meanwhile if $d_M(x)<R$ then $x$
has a partner $y$ with $\ell(x,y)<R$ and $d_M(y)=d_M(x)$,
so the right side of (\ref{dMprop}) holds.
Hence under (\ref{stablecondition2}), the
left and right sides of (\ref{dMprop}) are equivalent, as required.
\end{proof}
\begin{proof}[Proof of Propositions \ref{prop:uniquestable}
and \ref{prop:whoismatched}]
The underlying idea is essentially the same
as was used
in \cite{HPPS} for the special cases of one-type
and symmetric two-type models with weights given by distances
in ${\mathbb R}^d$. However we will present the construction rather differently,
so as to make explicit the way in which the stable matching is determined by the collections of descending paths as stated in
Proposition
\ref{prop:whoismatched}. (The argument in \cite{HPPS} is
phrased in terms of the following recursive construction. Call two
points $x$ and $y$
\textit{mutually closest} if $\ell(x,y)<\ell(x,z)$ for all $z\ne y$
and $\ell(x,y)<\ell(z,y)$ for all $z\ne x$. Now, given the point
configuration, match all mutually closest pairs of points to each other, and then remove them from the configuration. Now match all
pairs which are mutually closest in the remaining set of points;
repeat indefinitely. Lemma 15 of \cite{HPPS} shows that
for the models under consideration, this
recursive construction yields a stable matching, which is
in fact unique.)
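For concreteness, that recursive construction can be sketched in a few lines of Python; the one-type configuration on the line and the distance weight below are illustrative:

```python
def recursive_mutually_closest(points, weight):
    """Repeatedly match mutually closest pairs and remove them (one-type version)."""
    remaining = set(points)
    matching = {}
    while True:
        # nearest neighbour of each remaining point
        nearest = {}
        for x in remaining:
            others = [y for y in remaining if y != x]
            if others:
                nearest[x] = min(others, key=lambda y: weight(x, y))
        pairs = [(x, y) for x in remaining for y in remaining
                 if x < y and nearest.get(x) == y and nearest.get(y) == x]
        if not pairs:
            break
        for x, y in pairs:
            matching[x], matching[y] = y, x
            remaining.discard(x)
            remaining.discard(y)
    return matching

m = recursive_mutually_closest([0, 1, 3, 10], lambda a, b: abs(a - b))
assert m == {0: 1, 1: 0, 3: 10, 10: 3}
```

On a finite set with distinct pairwise weights, the pair achieving the globally minimal weight is always mutually closest, so each round of the loop makes progress until at most one point remains.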
We begin by justifying the first assertion of
Proposition \ref{prop:whoismatched}.
Consider the subgraph spanned by the edges of $E_R^{\downarrow}(x)$,
obtained by taking the union of
all descending paths from $x$ with weights less than $R$.
From condition (ii), any vertex has finite degree in this
subgraph (since any vertex is incident to only finitely
many edges with weight less than $R$). If there were
infinitely many vertices in this subgraph, then there
would be arbitrarily long descending paths from $x$,
and then, by compactness, an infinite descending path.
But this is excluded by condition (iii), so indeed
$E_R^{\downarrow}(x)$ is finite.
Now we argue that in fact we
can determine whether $d_M(x)<R$ by inspecting the set
$E^{\downarrow}_R(x)$, as required for Proposition \ref{prop:whoismatched}.
Using Lemma \ref{lemma:stableprop}, it is enough to determine
whether $d_M(y)< \ell(x,y)$ for all $y$ with $\ell(x,y)<R$.
We proceed by induction on the size of
$E^{\downarrow}_R(x)$. Consider $y$ with $\ell(x,y)<R$.
Then $E_{\ell(x,y)}^{\downarrow}(y)\subset E_R^{\downarrow}(x)$ (the inclusion is strict
since the edge $(x,y)$ is included in the second set but not in
the first).
So for the induction step, we may assume that
we know whether $d_M(y)<\ell(x,y)$ for each such $y$,
and hence indeed we can deduce whether $d_M(x)<R$.
The base of the induction is the case where $x$ is incident
to no edges of weight less than $R$, in which case certainly
$d_M(x)\geq R$.
This completes the proof of Proposition \ref{prop:whoismatched}.
The weights of edges in $E_R^{\downarrow}(x)$ determine whether
$d_M(x)<R$, and hence the weights of all the edges determine
the value of $d_M(x)$ and so, since the weights of edges
incident to $x$ are distinct by (i), in fact determine $M(x)$.
Hence there is at most one stable matching.
To show the existence of a stable matching, note
that the inductive procedure above can be used to \textit{define}
a function $d_M(x)$ for $x\in V$ which satisfies
(\ref{dMprop}). We need to show that this function
actually corresponds to a matching $M$ in the sense of (\ref{dMdef}).
Suppose $d_M(u)=s<\infty$. Then applying (\ref{dMprop})
with $x=u$ and considering both
$R\leq s$ and $R>s$, we obtain that
for some $v$, $\ell(u,v)=s$ and $d_M(v)\geq s$.
Then in turn we can apply (\ref{dMprop}) with $x=v$
and any $R>s$; because we have $u$ with $\ell(v,u)<R$ and
$d_M(u)\geq \ell(u,v)$ it follows that $d_M(v)<R$,
and hence in fact $d_M(v)=s$. Thus there is a point $v$
satisfying $\ell(u,v)=d_M(v)=s$, and this point is unique
by condition (i). Then define $M(u)=v$ (and similarly $M(v)=u$).
Meanwhile if $d_M(u)=\infty$, define $M(u)=u$.
Then indeed $M$ is a matching, and satisfies (\ref{dMprop}),
and so is stable.
Further, suppose $x$ and $y$ are both unmatched by $M$,
so that $d_M(x)=d_M(y)=\infty$. If $\ell(x,y)$ were finite,
then for any $R>\ell(x,y)$, again the right side of (\ref{dMprop})
would hold but the left side would not. Hence indeed $\ell(x,y)=\infty$ as required for the final statement of Proposition
\ref{prop:uniquestable}.
Finally, in Proposition \ref{prop:stablerescaled}
there is a bijection from $V_R^{\downarrow}(x)\subset V$
to $\widetilde{V}^{\downarrow}_{f(R)}(\widetilde{x})\subset \widetilde{V}$
which maps $x$ to $\widetilde{x}$ and under which
the edge-weights are related by the strictly increasing function $f$. The definition of stable matching,
and the equivalent condition in (\ref{dMprop}),
use only information about relative orderings of edge-weights;
such orderings are preserved when the edge-weights
are rescaled by $f$.
Hence the inductive procedures
for determining whether $d_M(x)<R$ for the stable matching
$M$ of $V$, and whether $d_{\widetilde{M}}(\widetilde{x})<f(R)$
for the stable matching $\widetilde{M}$ of $\widetilde{V}$,
proceed identically, and so indeed $d_M(x)<R$ if and only if
$d_{\widetilde{M}}(\widetilde{x})<f(R)$.
\end{proof}
\section*{Open Problems}
Consider a homogeneous Poisson process in ${\mathbb R}^d$ in which each point is independently assigned a colour according to a fixed probability vector.
\begin{enumerate}[(i)]
\item For the asymmetric two-type stable matching (with
only red-blue and red-red matches allowed), do there
exist red and blue probabilities for which all points
are matched? The question is open for every $d\geq
1$.
\item For the asymmetric two-type stable matching in a
fixed dimension $d$, is the intensity of
unmatched blue points non-decreasing in the initial
probability of blue points? Is it strictly
increasing?
\item For the symmetric three-type stable matching (where
points of any two distinct colours are allowed to
match), suppose that the probabilities $p_2,p_3$ of
two of the colours are equal. Symmetry and
ergodicity imply that either all points are matched,
or only points of colour $1$ are unmatched. Are all
points matched when $p_1<p_2=p_3$? Are some points
unmatched when $p_1>p_2=p_3$? Again, these questions
are open for all $d$.
\item More generally, for which matching restrictions, probability vectors, and dimensions are all points matched?
\item Can the PWIT provide information about matching distance in high dimensions? For example, in the case of two-color stable matching (where only red-blue matches are allowed), with equal probability of red and blue points, the probability for a typical point to be matched at distance at least $r$ is known \cite{HPPS} to be between $r^{-\alpha}$ and $r^{-\beta}$ as $r\to\infty$ where $\alpha(d),\beta(d)\in(0,\infty)$, but the bounds on these constants are far apart except when $d=1$. What can be said about their asymptotic behaviour as $d\to\infty$?
\end{enumerate}
\section*{Acknowledgments}
We thank Robin Pemantle for helpful conversations at an early stage of this work. JBM thanks the Theory Group of Microsoft Research for their support and hospitality; this work was carried out while he was a visiting researcher. JBM was also supported by EPSRC Fellowship EP/E060730/1.
\section{Introduction}
\input{intro}
\section{Preliminaries}\label{sec:prelims}
\input{prelim}
\section{Proof of \cref{thm:main}}\label{sec:main}
\input{theproof}
\section{Wrapping up the proof of \cref{thm:conj}}\label{sec:wrap-up}
\input{sec-wrap-up}
\section{Conclusion}
\input{outro}
\bibliographystyle{abbrv}
\section{Introduction}
With the progress of learning-based computer vision, recent research efforts have been extended from image tasks to the more challenging video domain. Video tasks, such as object detection \cite{deng2009imagenet}, video instance segmentation~\cite{yang2019video}, and multi-object tracking and segmentation \cite{voigtlaender2019mots}, hold valuable potential for real-world applications \cite{9206716, voigtlaender2019mots,liu2020video, liu2020visual} (e.g., autonomous driving or video surveillance).
\begin{figure}[!bt]
\centering
\subfigure[Visualization of feature aggregation process]{
\includegraphics[width=8cm]{img/currAgg2.png}
}
\subfigure[Current aggregation methods]{
\includegraphics[width=3cm]{img/currAgg.png}
}
\subfigure[Our aggregation methods]{
\includegraphics[width=3cm]{img/webAgg.png}
}
\caption{Comparison of feature aggregation methods. (a) Features from the neighboring frames are weighted equally during aggregation. (b) The current aggregation methods only reason about the relations between the current frame and its neighboring frames. (c) Our proposed method computes relations between every pair of frames in the neighborhood during the aggregation process.}
\label{fig:problem}
\end{figure}
A primary challenge of video object detection is to tackle the feature degradation on video frames caused by camera jitter or fast motion. Under such circumstances, detection algorithms designed for still images are ill-suited to video tasks. Nonetheless, video carries rich temporal information, in which the same object may appear in multiple frames over a certain time span. The value of such temporal information is explored in prior studies using the post-processing paradigm \cite{han2016seq,kang2017t,lee2016multi}. These methods firstly perform still-image detection on single frames and then assemble the detection results across the temporal dimension using a disjoint post-processing step (i.e., motion estimation and object tracking). None of the above methods, therefore, operates in an end-to-end fashion. Moreover, if detection on single frames produces weak predictions, the assembling approach cannot improve the detection results.\\
\indent Alternatively, there have been several attempts to boost the performance of video detection using feature aggregation. \cite{liu2020video,voigtlaender2019mots,zhu2017deep} leverage optical flow to model the feature movement across frames and propagate temporal features to increase the feature representation for detection. With stronger features, the detection results are significantly improved. However, such temporal features are exploited by an intuitive lumping operation, which is oversimplified.\\
\indent In terms of how to organize features in aggregation, we recognize two important predecessors, FGFA \cite{zhu2017flow} and SELSA \cite{wu2019sequence}. Compared to the lumping solution \cite{liu2020video,voigtlaender2019mots,zhu2017deep}, both methods use similarity scores to select more helpful features for aggregation. The aggregated feature is organized with an adaptive weight at every spatial location
(as shown in Figure \ref{fig:problem}(a)). Albeit superior to the prior efforts, FGFA \cite{zhu2017flow} and SELSA \cite{wu2019sequence} face several obstacles that keep them from achieving optimal performance:
1) They focus on modeling the global relation for every neighboring frame while ignoring the preservation of local spatial information during aggregation;
2) They primarily consider the global feature relations to the current frame, while imposing no constraint on feature learning among the neighboring frames (see Figure \ref{fig:problem}(b)); 3) They take a fixed number of neighboring frames for feature aggregation, which is heuristic rather than general.
In this work, we take a deeper look at video object detection and improve its performance by organizing temporal information under a more rigorous principle. Inspired by \cite{zhu2017flow,wu2019sequence, chen20mega}, we propose TF-Blender to organically model features in two ranges. Specifically, we reinforce local similarity in feature space on sequential video frames to depict the continuity and coherence of visual patterns, while identifying semantic correspondences across frames, which makes the temporal representations robust to appearance variations, shape deformations, and local occlusions. With this design, TF-Blender generalizes feature aggregation by enriching the video representation and capturing helpful visual content to improve detection performance.
Concretely, we make the following contributions:
\begin{itemize}
\item We propose a framework called TF-Blender, which depicts the temporal feature relations and blends valuable neighboring features to increase the temporal-spatial feature representation across frames.
\item In TF-Blender, we devise a temporal relation module to manage temporal information and a feature adjustment module to add constraints in feature learning to preserve spatial information during feature aggregation. We, therefore, organize the feature learning between every pair of frames and aggregate features over the whole neighborhood (see Figure \ref{fig:problem}(c)).
\item Our method is general and flexible, and can be built on top of any detection network. With our novel feature enhancement strategy, we obtain an absolute gain of more than $0.7\%$ in mAP on the ImageNet VID benchmark and $1.5\%$ in mAP on the YouTube-VIS benchmark for recent state-of-the-art methods.
\end{itemize}
\begin{figure*}[!hbt]
\centering
\includegraphics[width=17cm]{img/framework.png}
\caption{Our TF-Blender framework includes three key modules: 1) \textbf{Temporal relation module:} Feature relation function $g\left(f_i,f_j\right)$ is used as input to learn adaptive weights $\mathcal{W}\left(f_i,f_j\right)$ used for the feature blender. 2) \textbf{Feature adjustment module:} Every neighboring frame feature $f_j$ is aggregated with the other neighboring features to generate the feature representative $\mathcal{F}\left(f_i, f_j\right)$. 3) \textbf{Feature blender module:} The results of $\mathcal{W}\left(f_i, f_j\right)$ and $\mathcal{F}\left(f_i, f_j\right)$ are combined to aggregate the feature of the current frame with a dynamic number of neighboring frames.}
\label{fig:framework}
\end{figure*}
\section{Related Works}
\subsection{Video Object Detection}
\textbf{Video object detection.} Different from image object detection, video object detection faces challenging cases (e.g., motion blur, occlusion, and defocus) which rarely occur in images \cite{CORES2021104179, geng2020objectaware, 10.1145/3422852.3423477}. To handle the challenges in video domains, several works \cite{Kang_2016, kang2017t, han2016seq} use post-processing techniques on top of still-image detectors. For instance, Seq-NMS \cite{han2016seq} links bounding boxes across frames with an IoU threshold and re-ranks the linked bounding boxes; TCN \cite{Kang_2016} introduces tubelet modules and applies a temporal convolutional network to embed temporal information to improve the detection across frames; T-CNN \cite{kang2017t} applies image object detectors to generate results and then uses optical flow to associate the detected results. Although achieving improvements, none of them is trained end-to-end and their performance is still sub-optimal.
Another focus of the recent works \cite{zhu2017deep, zhu2017flow, wu2019sequence, deng2019relation, chen20mega, zhu2017high, xiao2018video} is to aggregate temporal features to improve the feature representation for detection. These methods can be divided into three categories: local aggregation, global aggregation, and combination aggregation. Local aggregation methods \cite{zhu2017flow, Wang_2018_ECCV, deng2019relation, liu2020video, zhu2017high, xiao2018video, feichtenhofer2018detect, bertasius2018object} usually focus on propagating features over a short range of the video sequence. Among them, FGFA \cite{zhu2017flow} and MANet \cite{Wang_2018_ECCV} are representatives which use optical flow \cite{ilg2016flownet, fischer2015flownet} to calibrate and aggregate features across local frames. On the contrary, global aggregation methods \cite{wu2019sequence, 9010864, 9011008} rely on long-range semantic information. One seminal work is SELSA \cite{wu2019sequence}, which computes the semantic similarity between the current frame and its neighbours across the whole video in order to perform temporal feature aggregation. Different from the methods which exploit features locally or globally, MEGA \cite{chen20mega} introduces a memory module to use both local and global features to enhance the visual representation of the current frame. The aggregation methods achieve further performance gains over the post-processing methods, but they generally focus on higher-level video frame selection instead of exploring lower-level temporal feature exploitation.
\textbf{Video instance segmentation.} Similar to video object detection, MaskTrack R-CNN \cite{yang2019video} extends instance segmentation \cite{yolact-iccv2019, yolact-plus-tpami2020} from image domain to video domains which requires segmenting and tracking instances across frames. However, most of the current methods like MaskProp \cite{bertasius2020classifying}, EnsembleVIS \cite{Luiten_2019_ICCV} focus on how to track instances across frames rather than how to generate high-quality features for detection, segmentation, and tracking. In this work, we, therefore, propose a more principled solution, which effectively transforms and exploits valuable temporal features for the video object detection task.
\subsection{Relation Learning}
Relation learning is widely used for different tasks (i.e., point cloud analysis \cite{liu2019relationshape, CUI2021300} and image understanding \cite{liu2021densernet, yan2021hierarchical}) to describe the relationship between the current feature and its neighbors. RS-CNN \cite{liu2019relationshape} extends regular grid CNN to capture local point cloud features using geometric topology constraints among points. Similarly, PointConv \cite{wu2020pointconv} models the feature relation by computing both the local coordinates and point cloud density. Both methods capture local features in geometric space. On the contrary, DGCNN \cite{wang2019dynamic} defines EdgeConv which captures local point relation in high-dimensional feature space and updates the neighborhood for the kernel dynamically at each layer.\\
\indent Similarly, some recent works attempt to leverage relation learning for object detection. Inspired by \cite{hu2018relation} which proposes an object relation module for still-image object detection, RDN \cite{deng2019relation} introduces a relation distillation network to aggregate features based on object relations to improve the features for video object detection. MEGA \cite{chen20mega} extends the relation learning from RDN and proposes a memory-enhanced global-local aggregation network, which organically manages long-range (global) features and short-range (local) features for aggregation in order to increase the feature representation of the current frame for detection. However, the focuses of the above methods \cite{hu2018relation,deng2019relation,chen20mega} are the selection of higher-level video frames for aggregation rather than modeling lower-level temporal relations to increase the feature representation.\\
\indent Different from these methods, we propose a more general approach for relation learning in feature aggregation. Our TF-Blender can robustly depict the salient correspondences between the feature of the current frame and neighboring frames and exploit only valuable features for a stronger detection.
\section{TF-Blender}
\subsection{Preliminary and Overall Pipeline}
The conventional feature aggregation methods~\cite{zhu2017flow,wu2019sequence, liu2020video, Wang_2018_ECCV} generally work in a constrained fashion. Given a set of neighboring frames $\textbf{F}_j$ of the current frame $\textbf{F}_i, \forall \textbf{F}_j \in \mathcal{N}\left(\textbf{F}_i\right)$, their corresponding features $f_j$ are weighted equally based on the feature similarity to $\textbf{F}_i$ in order to aggregate the temporal feature $\Delta{f}_{i}$:
\begin{equation}
\begin{aligned}
\Delta f_i &= \sum_{\textbf{F}_j \in \mathcal{N}\left(\textbf{F}_i\right) }({w}_{ij}\times{f}_{j}).
\end{aligned}
\label{aggEq}
\end{equation}
The principal problem of feature aggregation, therefore, is to calculate weights $w_{ij}$ and select representative neighboring feature $f_j$.
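Equation (\ref{aggEq}) is simply a weighted sum over neighbouring features; a minimal sketch with 1-D feature vectors standing in for feature maps:

```python
def aggregate(neighbour_feats, weights):
    """Weighted sum of neighbouring feature vectors, one global weight per frame."""
    agg = [0.0] * len(neighbour_feats[0])
    for w, f in zip(weights, neighbour_feats):
        for p in range(len(f)):
            agg[p] += w * f[p]
    return agg

# Two neighbours with global weights 0.25 and 0.75.
assert aggregate([[1.0, 2.0], [3.0, 4.0]], [0.25, 0.75]) == [2.5, 3.5]
```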
Different from the above simple paradigm, we exploit the temporal features from a general perspective. To achieve this goal, our TF-Blender builds on three novel architectural modules, the temporal relation module, the feature adjustment module, and the feature blender module, to boost the detection performance (see Figure \ref{fig:framework}).
\begin{figure}[!bt]
\centering
\subfigure[Input frames]{
\includegraphics[width=8cm]{img/inputFrames.png}
}
\subfigure[Feature maps of input frames]{
\includegraphics[width=8cm]{img/inputFeatures.png}
}
\subfigure[Results of temporal relation]{
\includegraphics[width=8cm]{img/inputAtt.png}
}
\caption{An example of the problem of feature aggregation with global weights: a) shows two neighboring frames where the moving car (the green rectangles) is smaller than the traffic cone (the red rectangles). b) visualizes the feature maps of the two frames where the traffic cone also has a high response besides the car. With global weights, the high response feature of the traffic cone (the red rectangles) cannot be suppressed unless the global weights have very small values. c) shows the results of our proposed temporal relation module which assigns every pixel in the feature map with an adaptive weight and can suppress the irrelevant features.}
\label{fig:localGlobal}
\end{figure}
\subsection{Temporal Relation}
Our temporal relation models the correspondences between the keyframe and its neighbors. To achieve this goal,
existing methods use $\mathbb{W}\left(f_i, f_j\right)$ to compute a global weight shared by every pixel in the feature map. This approach ignores the local spatial information of the feature map during aggregation, which leads to severe outliers in the aggregated feature map.
As shown in Figure \ref{fig:localGlobal}(a), two neighboring frames contain a fast-moving car and a still traffic cone, marked with green and red rectangles respectively.
The feature maps of the input frames are visualized in Figure \ref{fig:localGlobal}(b), where the features of the traffic cone are outliers for car detection. With global weights, if the weight between the paired frames is non-zero, the irrelevant features cannot be removed during aggregation (see Figure \ref{fig:localGlobal}(b)). This problem occurs frequently when dealing with occlusions or small-scale objects.\\
\indent To address this issue, our temporal relation module generates adaptive weights $\mathcal{W}\left(f_i, f_j\right)$ for every pixel on the feature map in replace of the global weights $\mathbb{W}\left(f_i, f_j\right)$.
We model $\mathcal{W}\left(f_i, f_j\right)$ as a tensor with the same size as the feature representatives for aggregation. For every neighboring frame $\textbf{F}_j$ of the current frame $\textbf{F}_i$, we use temporal relation module to calculate adaptive weights $\mathcal{W}\left(f_i, f_j\right)$ (see Figure \ref{fig:framework}). The process is formulated as:
\begin{equation}
\mathcal{W}\left(f_i, f_j\right) = \mathcal{M}\left(g\left(f_i, f_j\right)\right),
\label{temporalRelation}
\end{equation}
where $g$ is a feature relation function to describe the temporal relation between $f_i$ and $f_j$ and $\mathcal{M}$ is a masking function to calculate the adaptive weight based on $g$. As shown in Figure \ref{fig:localGlobal}(c), our temporal relation can enhance the feature representations from the region of interest and suppress the irrelevant features.\\
\indent More concretely, we compute $\mathcal{M}$ in Eq.~\ref{temporalRelation} using a mini-network (see Figure \ref{fig:miniNetwork}). Compared with the CoefNet in LMP \cite{10.1145/3422852.3423477}, our feature adjustment module is built on a lighter architecture, which makes our TF-Blender computationally efficient. The inputs of the module are $f_i$ and $f_j$, marked as red and blue cuboids respectively. Feature relation function $g$ describes the relation between $f_i$ and $f_j$ and generates the input (the gray cuboid) of the mini-network $\mathcal{M}$. Afterward, we apply three convolution layers (the yellow cubes) to generate the final adaptive weights $\mathcal{W}\left(f_i, f_j\right)$ (the purple cuboid). The selection of the feature relation function $g$ will be discussed in Section~\ref{ID}.
\begin{figure}
\centering
\includegraphics[width=5cm]{img/miniNetwork.png}
\caption{Visualization of temporal relation module. The input feature of $f_i$ and $f_j$ are visualized as blue and red cuboids respectively. Feature relation function $g$ models the temporal relation between $f_i$ and $f_j$ (the gray cuboids). In the mini-network, convolution layers (the yellow cubes) are applied to generate the final results (the purple cuboids). The results of the mid-layers are visualized as brown cuboids. }
\label{fig:miniNetwork}
\end{figure}
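The effect of per-pixel adaptive weights can be illustrated with a toy example; here $e^{-|f_i-f_j|}$ is a hand-picked stand-in for the learned weights $\mathcal{M}\left(g\left(f_i,f_j\right)\right)$, which in TF-Blender are produced by the mini-network:

```python
import math

def global_agg(f_j, w):
    # baseline: a single scalar weight shared by every pixel of the neighbour
    return [w * v for v in f_j]

def adaptive_agg(f_i, f_j):
    # per-pixel weights: pixels that disagree with the current frame are damped
    return [math.exp(-abs(a - b)) * b for a, b in zip(f_i, f_j)]

f_i = [0.0, 0.0, 0.0, 0.0]   # current frame: no response
f_j = [0.0, 0.0, 5.0, 0.0]   # neighbour: spurious high response (the "traffic cone")
assert adaptive_agg(f_i, f_j)[2] < global_agg(f_j, 1.0)[2]  # outlier suppressed
```

With a global weight, the spurious response survives aggregation at full strength unless the whole frame is down-weighted; the per-pixel weight suppresses only the disagreeing location.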
\subsection{Feature Adjustment}
Our feature adjustment module aims to represent the feature consistency and salience of the neighboring frames for feature aggregation. A simple solution \cite{zhu2017flow, wu2019sequence, chen20mega} is to directly use feature $f_j$ from frame $\textbf{F}_j$ as follows:
\begin{equation}
\mathcal{F}\left(f_i,f_j\right) = f_{j\rightarrow i}.
\end{equation}
However, $f_j$ cannot be guaranteed to be valuable for aggregation as there are no constraints on these neighboring features. Therefore, we aggregate every neighboring frame feature $f_j$ before aggregating the current frame feature $f_i$. We get the feature representative $\mathcal{F}\left(f_i, f_j\right)$ by aggregating $f_j$ with the other neighboring features $f_m, \forall \textbf{F}_m \in \mathcal{N}\left(\textbf{F}_i\right), \textbf{F}_m \neq \textbf{F}_j$ (see Figure \ref{fig:framework}). During feature adjustment, we use the temporal relation module to generate adaptive weights for neighbouring feature aggregation and the process can be expressed as:
\begin{equation}
\mathcal{F}\left(f_i, f_j\right) = \sum_{\substack{\textbf{F}_m\in \mathcal{N}\left(\textbf{F}_i\right) \\ \textbf{F}_{m} \neq\textbf{F}_{j}}} \mathcal{W}\left(f_j,f_m\right)\otimes f_j
\end{equation}
where $\otimes$ is element-wise multiplication, $f_m$ is the feature of a neighboring frame other than $\textbf{F}_j$, and $\mathcal{W}\left(f_j, f_m\right)$ follows Eq~\ref{temporalRelation}, which can be expressed here as:
\begin{equation}
\begin{aligned}
\mathcal{W}\left(f_j, f_m\right) &= \mathcal{M}\left(g\left(f_j, f_m\right)\right) \\
\forall \textbf{F}_m &\in \mathcal{N}\left(\textbf{F}_i\right), \textbf{F}_m \neq \textbf{F}_j
\end{aligned}
\end{equation}
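A sketch of this adjustment step; the exponential relation weight below is a hand-picked stand-in for the learned per-pixel weight $\mathcal{W}\left(f_j, f_m\right)$:

```python
import math

def relation_weight(a, b):
    # stand-in for the learned per-pixel weight produced by the mini-network
    return [math.exp(-abs(x - y)) for x, y in zip(a, b)]

def adjust(f_j, other_neighbours):
    """Aggregate f_j against the remaining neighbours to form its representative."""
    rep = [0.0] * len(f_j)
    for f_m in other_neighbours:
        w = relation_weight(f_j, f_m)
        for p in range(len(f_j)):
            rep[p] += w[p] * f_j[p]
    return rep

# Identical neighbours give weight one per pixel, so the representative
# is the sum of two copies of f_j.
assert adjust([1.0, 2.0], [[1.0, 2.0], [1.0, 2.0]]) == [2.0, 4.0]
```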
\begin{figure*} [!ht]
\centering
\includegraphics[width=17cm]{img/exampleVIS.png}
\caption{Qualitative examples comparing methods without and with our TF-Blender integrated, on the ImageNet VID and YouTube-VIS benchmarks.
}
\label{fig:quatitiveResults}
\end{figure*}
\subsection{Feature Blender}
In our feature blender module, we first enhance the results of the temporal relation module with the non-linear function ReLU so that the contrast between the areas of interest and the background can be captured (see the blender module in Figure \ref{fig:framework}). We formulate this process as:
\begin{equation}
\hat{\mathcal{W}}\left(f_i, f_j\right) = \texttt{ReLU}\left(\mathcal{W}\left(f_i, f_j\right)\right).
\end{equation}
Meanwhile, we normalize the results of the feature adjustment module with the softmax function over all the channels to improve the generalization of our model. At the top of the feature blender module in Figure \ref{fig:framework}, blue dots are normalized to green dots by the softmax function, as indicated by the purple double arrows. The process can be expressed as:
\begin{equation}
\hat{\mathcal{F}}\left(f_i, f_j\right) = \texttt{softmax}\left(\mathcal{F}\left(f_i, f_j\right)\right).
\label{softmas}
\end{equation}
In our feature blender module, we force $\hat{\mathcal{W}}\left(f_i, f_j\right)$ to be $0$ if the adjusted neighboring feature is very similar to the feature of the current frame, shown as dashed purple double arrows in the feature blender part of Figure \ref{fig:framework}. We use the cosine similarity to measure the closeness between $\hat{\mathcal{F}}\left(f_i, f_j\right)$ and $f_i$; if it is greater than $\delta$, $\hat{\mathcal{W}}\left(f_i, f_j\right)$ is forced to be $0$. We define this process as:
\begin{equation}
\hat{\mathcal{W}}\left(f_i, f_j\right) = 0, \quad \textbf{if} \quad \frac{\hat{\mathcal{F}}\left(f_i, f_j\right)\cdot f_i}{|\hat{\mathcal{F}}\left(f_i, f_j\right)||f_i|} > \delta.
\label{delta}
\end{equation}
We have this design because most current feature aggregation-based methods \cite{chen20mega, wu2019sequence, deng2019relation, zhu2017flow} use a fixed number of neighboring frames in aggregation. However, for neighboring frames that suffer from severe motion blur or defocus, aggregating them is irrelevant and redundant, which may cause unwanted ambiguity.
Finally, we use element-wise multiplication to combine the results from Eq.~\ref{softmas} and Eq.~\ref{delta} to perform the feature aggregation:
\begin{equation}
\begin{aligned}
\Delta f_i &= \sum_{\textbf{F}_j \in \mathcal{N}\left(\textbf{F}_i\right) }\left(\hat{\mathcal{W}}\left(f_i, f_j\right)\otimes \hat{\mathcal{F}}\left(f_i, f_j\right)\right).
\end{aligned}
\end{equation}
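The blending steps above (ReLU, channel-wise softmax, the cosine-similarity gate of Eq.~\ref{delta}, and the element-wise weighted sum) can be sketched as follows for flattened feature vectors. The function and variable names are illustrative, not taken from the released code:

```python
import numpy as np

def blend_features(f_i, adjusted, weights, delta=0.7):
    """Sketch of the feature blender: ReLU on the temporal-relation
    weights, softmax over the adjusted neighboring features, a
    cosine-similarity gate with threshold `delta`, and an
    element-wise weighted sum. `adjusted` and `weights` are lists
    over the neighboring frames; features are 1-D vectors here."""
    w_hat = [np.maximum(w, 0.0) for w in weights]   # ReLU on W(f_i, f_j)
    f_hat = []
    for f in adjusted:                              # softmax on F(f_i, f_j)
        e = np.exp(f - f.max())
        f_hat.append(e / e.sum())
    delta_f = np.zeros_like(f_i)
    for w, f in zip(w_hat, f_hat):
        cos = float(f @ f_i) / (np.linalg.norm(f) * np.linalg.norm(f_i) + 1e-8)
        if cos > delta:     # near-duplicate neighbor: weight forced to 0
            continue
        delta_f += w * f    # element-wise multiply, then sum over neighbors
    return delta_f
```

A neighbor nearly identical to the current frame contributes nothing, while a dissimilar one enters the weighted sum.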
\begin{table}[!bt]
\centering
\begin{tabular}{c|c|c}
\toprule
Methods & mAP(\%) & Runtime(FPS)\\
\midrule
FGFA\cite{zhu2017flow} & 77.8 & 7.3\\
SELSA\cite{wu2019sequence} & 81.5 & 10.6\\
RDN\cite{deng2019relation} & 81.7 & -\\
MEGA\cite{chen20mega} & 82.9 & 5.3 \\
\midrule
FGFA(Ours) & 79.3$_{\uparrow1.5}$ & 6.9\\
SELSA(Ours) & 82.5$_{\uparrow1.0}$ & 10.1\\
RDN(Ours) & 82.4$_{\uparrow0.7}$ & -\\
MEGA(Ours) & 83.8$_{\uparrow0.9}$ & 4.9\\
\bottomrule
\end{tabular}
\caption{Performance comparison with the recent state-of-the-art video object detection models on ImageNet VID validation set. The backbone is ResNet-101 and runtime is tested on a single RTX 2080Ti GPU.}
\label{tab:vod}
\end{table}
\section{Experiments}
\subsection{Implementation Details}\label{ID}
\textbf{Evaluation metrics.}
Following \cite{zhu2017deep,zhu2017flow}, we report all results using the mean average precision (mAP).\\
\indent \textbf{Video object detection setup.} We evaluate our method with MEGA~\cite{chen20mega}, SELSA~\cite{wu2019sequence}, FGFA~\cite{zhu2017flow}, and RDN~\cite{deng2019relation}, four state-of-the-art systems. We perform our training and evaluation on the ImageNet VID benchmark \cite{russakovsky2015imagenet}, which contains 3,862 videos for training and 555 videos for validation. Following the widely used protocols in \cite{zhu2017flow, chen20mega, wu2019sequence}, we train our model on a combination of the ImageNet VID and DET datasets. We implement our method mainly based on the source code of each original method. The whole network is trained on 8 RTX 2080Ti GPUs with SGD. During training and inference, each GPU holds one set of images or frames. During training, the encoder parameters are frozen and an NMS threshold of 0.5 IoU is adopted to suppress detection redundancy.
\textbf{Video instance segmentation setup.} We also evaluate our proposed method with the state-of-the-art MaskTrack R-CNN~\cite{lin2020video} and SipMask~\cite{cao2020sipmask}. We perform our training and evaluation on the YouTube-VIS benchmark \cite{yang2019video}, which contains 3,471 videos for training and 507 videos for validation. During the training process, we use weights pretrained on MS-COCO \cite{lin2014microsoft} and train on 8 RTX 6000 GPUs with SGD. In both training and evaluation, the original frames are resized to $640 \times 360$.
\textbf{Parameters.} For the mini-network $\mathcal{M}$ in Eq. (\ref{temporalRelation}), a three-layer CNN is introduced to adapt the channels for feature aggregation. The feature relation function $g$ is defined as a concatenated tensor of $f_i$, $f_j$, $f_i - f_j$, $f_j - f_i$, and the $\delta$ in Eq. (\ref{delta}) is set to 0.7.
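As a rough illustration of these parameter choices, the relation function $g$ and a three-layer $1\times1$-convolution mini-network $\mathcal{M}$ can be sketched as below. The weight shapes are placeholders, not the trained values:

```python
import numpy as np

def relation_tensor(f_i, f_j):
    """Feature relation function g: concatenation of f_i, f_j,
    f_i - f_j and f_j - f_i along the channel axis of (C, H, W)
    feature maps, as described in the Parameters paragraph."""
    return np.concatenate([f_i, f_j, f_i - f_j, f_j - f_i], axis=0)

def mini_network(x, layer_weights):
    """Three 1x1 convolutions with ReLU in between. On a (C, H, W)
    tensor a 1x1 convolution is a per-pixel linear map over the
    channel axis, written here as an einsum. The weight shapes in
    `layer_weights` are placeholders, not trained values."""
    for k, w in enumerate(layer_weights):
        x = np.einsum('oc,chw->ohw', w, x)
        if k < len(layer_weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU between layers
    return x
```

With $C=2$ input channels, $g$ yields an $8$-channel tensor, which $\mathcal{M}$ then maps down to the aggregation weights.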
\subsection{Main Results}
\indent \textbf{Results on ImageNet VID benchmarks.} We compare state-of-the-art systems crafted on our method with their original implementations. For a fair comparison, we use the code provided by the original papers and re-implement each method with our proposed module. The results are shown in Table \ref{tab:vod}. Our proposed method substantially improves the performance of every compared method listed in the table with the same backbone.\\
\indent For head-to-head comparisons,
all the methods with the same backbone leverage our proposed method to improve their detection accuracy by around $0.7\%$--$1.5\%$. Among them, FGFA with our proposed method shows the highest improvement. Local and global aggregation methods such as FGFA \cite{zhu2017flow} and SELSA \cite{wu2019sequence} benefit more from our proposed method than combination aggregation methods such as RDN \cite{deng2019relation} and MEGA \cite{chen20mega}. We argue that the limited gains of the combination aggregation methods arise because they consider both local and global features, which already makes detection more robust to issues like motion blur in videos.\\
\indent Figure \ref{fig:quatitiveResults} shows some examples of detection results with our method integrated. Based on the examples, we can see that our proposed method helps resolve weak detections in rare-pose and partial-occlusion situations.
\begin{table*}[!bt]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
Methods & Category & AP & AP$_{50}$ & AP$_{75}$ & AR$_1$ & AR$_{10}$ & FPS\\
\midrule
Stem-Seg \cite{athar2020stemseg} & \multirow{6}{*}{One-stage} & 30.6 & 50.7 & 33.5 & 31.6 & 37.1 & 12.1 \\
Stem-Seg(Ours) & & 31.3 & 51.5 & 34.1 & 32.1 & 37.9 & 11.3\\
SipMask \cite{cao2020sipmask} & & 33.7 & 54.1 & 35.8 & 35.4 & 40.1 & 28.0\\
SipMask(Ours) & & 35.1 & 55.5 & 36.9 & 36.1 & 41.3 & 26.6\\
SG-Net \cite{Liu_2021_CVPR} & & 34.8 & 56.1 & 36.8 & 35.8 & 40.8 & 22.9\\
SG-Net(Ours) & & 35.7 & 57.1 & 37.6 & 36.6 & 42.0 & 21.3\\
\midrule
MaskTrack R-CNN \cite{lin2020video} & \multirow{2}{*}{Two-stage} & 30.3 & 51.1 & 32.6 & 31.0 & 35.5 & 10.0\\
MaskTrack R-CNN(Ours) & & 31.4 & 52.3 & 33.5 & 31.9 & 36.5 & 9.4\\
\bottomrule
\end{tabular}
\caption{Performance comparison with the recent state-of-the-art video instance segmentation models on YouTube-VIS validation set. The backbone is ResNet-50-FPN and the models are pretrained on MS-COCO. The runtime is tested on a single RTX TITAN GPU.}
\label{tab:vis}
\end{table*}
\textbf{Experiments on YouTube-VIS benchmark.} We also evaluate our proposed method on the YouTube-VIS dataset \cite{yang2019video} and report our results on the validation set as in \cite{yang2019video, cao2020sipmask, athar2020stemseg}. Most current video instance segmentation methods focus on generating high-quality masks and linking the same objects across frames with features extracted by backbones like ResNet, while only a few pay attention to improving the features used for mask generation and object tracking. We add our proposed module to these video instance segmentation methods to evaluate the effectiveness of TF-Blender on issues like motion blur and defocus in videos. The results with ResNet-50 as the backbone are shown in Table \ref{tab:vis}.
From Table \ref{tab:vis}, our proposed method achieves competitive results under all evaluation metrics: MaskTrack R-CNN and SipMask improve by $1.1\%$ and $1.4\%$ on the AP metric respectively. The bottom part of Figure \ref{fig:quatitiveResults} shows an example of detection and segmentation results with our method integrated.
\begin{table}[!bt]
\centering
\begin{tabular}{c|c|c|c|c}
\toprule
Method & TR & FA & FB &mAP(\%) \\
\midrule
a & & & & 77.8 \\
b & \checkmark & & & 78.5\\
c & & \checkmark & & 78.1\\
d & & & \checkmark & 78.3\\
e & \checkmark & \checkmark & & 78.6\\
f & \checkmark & & \checkmark & 78.8\\
g & & \checkmark & \checkmark & 78.5\\
h & \checkmark & \checkmark & \checkmark & 79.3\\
\bottomrule
\end{tabular}
\caption{Impact of integrating every functional module into the baseline to the accuracy. TR, FA, and FB stand for temporal relation module, feature adjustment module, and feature blender module respectively.}
\label{tab:ablationStudy}
\end{table}
\subsection{Ablation Study}
We carry out extensive ablation studies using FGFA \cite{zhu2017flow} to discover the optimal settings of our system.
\textbf{Analysis of contributing components.} We first conduct experiments on the effect of every component in our proposed method, with the results shown in Table \ref{tab:ablationStudy}. The baseline model a is the original FGFA. Every component of our proposed method (temporal relation, feature adjustment, and feature blender) contributes towards improving the overall detection accuracy. By introducing the temporal relation module, the performance of model b improves by $0.7\%$. Model c adds our feature adjustment module to the baseline and gets an improvement of $0.3\%$ over the baseline model a. We add our feature blender module to model a to generate dynamic numbers of neighboring frames for feature aggregation and get model d, which is $0.5\%$ better than the original model on the mAP metric. Models e, f, and g combine pairs of the modules introduced in models b, c, and d. As shown in Table \ref{tab:ablationStudy}, combining every two of our proposed modules further improves video object detection performance. Compared with the baseline model a, our full model h obtains an absolute gain of $1.5\%$ in video object detection accuracy.
\textbf{Analysis of temporal relation.}
We conduct ablation studies on the choice of $g$ in Eq. (\ref{temporalRelation}). During these experiments, all other experimental settings are kept the same. We first try different combinations of $f_i$ and $f_j$ for $g$ on FGFA \cite{zhu2017flow}, as shown in Table \ref{tab:ablationStudyRB}. A naive idea is to use just $f_i$ and $f_j$ as input, which yields a $0.5\%$ improvement on FGFA. We believe the performance is limited because only individual frame features are taken into account, which is not enough to describe the relationship between $f_i$ and $f_j$. Thus, we introduce the difference between $f_i$ and $f_j$ into $g$ and obtain an improvement of $0.8\%$ for FGFA. We then use the summation of $f_i$ and $f_j$ in $g$ to generate $\mathcal{W}\left(f_i, f_j\right)$, but there is only a $0.1\%$ improvement. We also combine $f_i + f_j$ with the other choices mentioned above (such as $f_i$, $f_j$, and $f_i - f_j$), but the results are worse than those without it. We believe $f_i + f_j$ is not suitable to describe the relation between $f_i$ and $f_j$ because it works like an average filter, mixing the pixels with higher responses and those with lower responses in the feature map. Besides the experiments mentioned above, we also try $f_i, f_j, f_i - f_j$ and obtain an improvement of $1.1\%$. Finally, we choose $f_i, f_j, f_i - f_j, f_j - f_i$ as our feature relation function $g$, which achieves the highest detection accuracy. Since $f_i$ and $f_j$ denote the current and adjacent features respectively, frame $\textbf{F}_j$ can be a frame before or after the current frame $\textbf{F}_i$. Thus, it is imperative to calculate both $f_i - f_j$ and $f_j - f_i$, as they model different temporal correspondences and consistencies.
\begin{table}[!bt]
\centering
\begin{tabular}{c|c}
\toprule
$g$ & mAP(\%)\\
\midrule
$f_i, f_j$ & 78.3\\
$f_i - f_j$ & 78.6\\
$f_i + f_j$ & 77.9\\
$f_i, f_j, f_i + f_j$ & 78.1\\
$f_i - f_j, f_i + f_j$ & 78.5\\
$f_i, f_j, f_i - f_j$ & 78.9\\
$f_i, f_j, f_i - f_j, f_j - f_i$ & 79.3\\
\bottomrule
\end{tabular}
\caption{Results of different designs on feature relation function $g$.}
\label{tab:ablationStudyRB}
\end{table}
\textbf{Experiments on $\mathcal{M}$.} We conduct experiments on the design of $\mathcal{M}$ for the temporal relation module, especially on the number of layers of the mini-network. Model a is the simplest design, with only one convolution layer of kernel size $1\times1$. By keeping the kernel size fixed and adding one more convolution layer, model b increases the mAP by $0.2\%$. With three convolution layers of kernel size $1\times1$, the detection accuracy reaches $79.2\%$ (model c). However, adding more convolution layers, as in model d, causes the detection accuracy to decrease. We argue that the increasing number of convolution layers introduces excessive parameters in the mini-network, which causes overfitting. In model e, we change the kernel size from $1\times1$ to $3\times3$ and obtain a further improvement of $0.1\%$ in detection accuracy.
\begin{table}[!ht]
\centering
\begin{tabular}{c|c|c}
\toprule
model & \# of layers & mAP(\%)\\
\midrule
a & 1 & 78.8\\
b & 2 & 79.0\\
c & 3 & 79.2\\
d & 4 & 79.1\\
e & 3 & 79.3 \\
\bottomrule
\end{tabular}
\caption{Impact of the number of layers for $\mathcal{M}$. Models a--d use $1\times1$ kernels; model e uses three layers with $3\times3$ kernels.}
\label{tab:mAblation}
\end{table}
\textbf{Analysis of object sizes and motion speeds.} We also investigate the effect of our TF-Blender with respect to object sizes and motion speeds. We use the same definitions as MS-COCO \cite{lin2014microsoft} and FGFA \cite{zhu2017flow} for object sizes and motion speeds respectively. We use mAP as the evaluation metric and visualize the performance improvement on objects with different sizes and motion speeds in Figure \ref{fig:scaleAndSpeed}.
We notice that our method yields different improvements for objects with various motion speeds. As shown in Figure \ref{fig:scaleAndSpeed} (a), the improvement is higher for objects with slow motion than for those with fast and medium speeds. We think there may be two reasons. One is that, even though our proposed method helps improve detection accuracy for fast-moving objects, it is still challenging to obtain sufficiently accurate detections for all of them. Another is that objects with slow motion account for $37.9\%$ of the ImageNet VID benchmark, while those with medium and fast motion speeds account for $35.9\%$ and $26.2\%$ respectively.
Another critical observation from our experiments is that our method offers the highest improvement for detection of large objects, as shown in Figure \ref{fig:scaleAndSpeed}(b). This resonates with the assumption behind our proposed method: since large objects have larger feature maps, each corresponding pixel can benefit more from an individual weight for fine-grained feature encoding. For small objects, whose feature maps are small, the aggregation weights contribute less to improving the feature representation.
\begin{figure}[!bt]
\centering
\subfigure[Motion speed]{
\includegraphics[width=3.8cm]{img/motionSpeed.png}
}
\subfigure[Object sizes]{
\includegraphics[width=3.8cm]{img/objectSize.png}}
\caption{Improvement of performance with different motion speeds and object sizes.}
\label{fig:scaleAndSpeed}
\end{figure}
\textbf{Speed-accuracy tradeoff.} The computational loads of conventional methods (i.e., FGFA~[\textcolor{green}{47}] and SELSA~[\textcolor{green}{41}]) stem from two major sources: (1) the feature extraction (encoding) network $\mathcal{N}_{ex}$; (2) the task network $\mathcal{N}_{tk}$. Thus, the runtime complexity of the above methods is:
\begin{equation}
\begin{aligned}
\mathcal{O}\big(\mathcal{N}_{ex}\big)+\mathcal{O}\big(\mathcal{N}_{tk}\big)
\label{11}
\end{aligned}
\end{equation}
\indent When the proposed TF-Blender approach is adopted, the computational cost becomes:
\begin{equation}
\begin{aligned}
\mathcal{O}\big(\mathcal{N}_{ex}\big)+ i\cdot\mathcal{O}\big(\mathcal{N}_{tf}\big)+\mathcal{O}\big(\mathcal{N}_{tk}\big)
\end{aligned}
\end{equation}
where $\mathcal{N}_{tf}$ is the cost for the TF-Blender module and $i$ is the number of aggregated frames.
Typically, $\mathcal{O}\big(\mathcal{N}_{tk}\big) \ll \mathcal{O}\big(\mathcal{N}_{ex}\big)$ and $\mathcal{O}\big(\mathcal{N}_{tf}\big) \ll \mathcal{O}\big(\mathcal{N}_{ex}\big)$. Thus, the cost ratio $r$ can be expressed as:
\begin{equation}
\begin{aligned}
r=1+\frac{i\cdot\mathcal{O}\big(\mathcal{N}_{tf}\big)}
{\mathcal{O}\big(\mathcal{N}_{ex}\big)+\mathcal{O}\big(\mathcal{N}_{tk}\big)}\\
\end{aligned}
\label{13}
\end{equation}
This increase in computational cost is affordable because the impact of $i\cdot\mathcal{O}\big(\mathcal{N}_{tf}\big)$ is negligible.
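The cost ratio of Eq.~\ref{13} is simple to evaluate numerically. The cost values below are illustrative placeholders, not measured runtimes:

```python
def cost_ratio(o_ex, o_tk, o_tf, i):
    """Cost ratio r of Eq. (13): relative runtime with TF-Blender
    enabled, given the costs of the feature extraction network,
    the task network, the TF-Blender module, and the number of
    aggregated frames i. Costs are in arbitrary consistent units."""
    return 1.0 + i * o_tf / (o_ex + o_tk)

# Illustrative placeholder costs: a heavy extractor dominates, so even
# 20 aggregated frames raise the total cost by only about 9%.
r = cost_ratio(o_ex=100.0, o_tk=10.0, o_tf=0.5, i=20)
```

Since $\mathcal{O}(\mathcal{N}_{tf})$ is small relative to $\mathcal{O}(\mathcal{N}_{ex})$, $r$ grows only slowly with $i$.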
We visualize the speed-accuracy tradeoff of FGFA~[\textcolor{green}{47}] as an example (cf. Figure \ref{fig:speed_acc_tradeoff}). With an increasing number of input frames, FGFA with TF-Blender achieves significant accuracy improvements while the runtime increase remains in an affordable range.
\begin{figure}[!ht]
\centering
\includegraphics[width=8cm]{img/speed_acc.png}
\caption{Demonstration of a speed-accuracy tradeoff with and without TF-Blender on FGFA with ResNet-50.}
\label{fig:speed_acc_tradeoff}
\end{figure}
\section{Conclusion}
In this paper, we discuss the problems of video object detection and introduce a framework named TF-Blender, which contains temporal relation, feature adjustment, and feature blender modules to address feature degradation in video frames. Our method is flexible and general and can be adopted by any learning-based detection network to achieve improved performance. Extensive experiments demonstrate that, with the integration of our proposed method, current state-of-the-art methods improve video object detection accuracy on the ImageNet VID and YouTube-VIS benchmarks by a large margin. We believe that TF-Blender can be a valuable addition to existing methods for temporal feature aggregation in video detection and that it can be extended to other video analysis tasks like video instance segmentation.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
One of the most puzzling aspects of cosmology is the unknown reason for the
dominance of matter over anti-matter in our Universe. Within the Standard
Model of particle physics there is no explanation for this observation and
hence a new mechanism has to be responsible. A favored model called
leptogenesis~\cite{davidson} links the matter dominance to the nature of
neutrinos and to the violation of lepton number, i.e. the total number of
electrons, muons, taus and neutrinos minus the number of their anti-particles.
In most extensions of the Standard
Model~\cite{mohapatra06,mohapatra07,rodejohann15} neutrinos are assumed to be
their own anti-particles (Majorana particles). This might lead to lepton
number violating processes at the TeV energy scale observable at the
LHC~\cite{rodejohann15} and would result in neutrinoless double beta
($0\nu\beta\beta$) decay where a nucleus of mass number $A$ and charge $Z$
decays as $(A,Z) \rightarrow (A,Z+2) + 2\,e^-$. Lepton number violation has
not been unambiguously observed so far. There are several experimental
$0\nu\beta\beta$ decay programs ongoing using for example
$^{76}$Ge~\cite{gerda:2013:prl,2015:mjd},
$^{130}$Te~\cite{2015:cuore,2016:sno} or
$^{136}$Xe~\cite{kamland:2016,exo:2014,next:2016}. They all measure the sum
of the electron energies released in the decay which corresponds to the mass
difference $Q_{\beta\beta}$ of the two nuclei. The $0\nu\beta\beta$ decay
half-life is at least 15 orders of magnitude longer than the age of the
universe. Its observation therefore requires the best possible suppression of
backgrounds.
In the GERmanium Detector Array (\gerda) experiment bare germanium detectors
are operated in liquid argon (LAr). The detectors are made from germanium
with the $^{76}$Ge isotope fraction enriched from 7.8\,\% to about 87\,\%.
Since source and detector of $0\nu\beta\beta$ decay are identical in this
calorimetric approach the detection efficiency is high.
This Article presents the first result from \gerda\ Phase~II. In the first
phase of data taking (Phase~I), a limit of $T_{1/2}^{0\nu}>2.1\cdot10^{25}$~yr
(90\,\% C.L.) was found~\cite{gerda:2013:prl} for an exposure of
21.6~kg$\cdot$yr and a background of 0.01~\ctsper\ at
$Q_{\beta\beta}=(2039.061\pm0.007)$~keV~\cite{qbb}. At that time, the result
was based on data from 10 detectors (17.6~kg total mass). In December 2015,
Phase~II started with 37 detectors (35.6~kg) from enriched material. The mass
is hence doubled relative to Phase~I. The ambitious goal is an improvement of
the half-life sensitivity to $>10^{26}$~yr for about 100 kg$\cdot$yr exposure
by reducing the background level by an order of magnitude. The latter is
achieved by vetoing background events through the detection of their energy
deposition in LAr and the characteristic time profile of their signals in the
germanium detectors. The expected background is less than one count in the
energy region of interest up to the design exposure which means that
\gerda\ will be the first ``background free'' experiment in the field.
We will demonstrate in this Article that \gerda\ has reached the envisioned
background level which is the world-best level if weighted by our superior
energy resolution. \gerda\ is therefore best suited to not only quote limits
but to identify with high confidence a $0\nu\beta\beta$ signal.
\section{The experiment}
The \gerda\ experiment~\cite{gerda:2013:tec} is located at the underground
Laboratori Nazionali del Gran Sasso (LNGS) of INFN, Italy. A rock overburden
of about 3500~m water equivalent removes the hadronic components of cosmic ray
showers and reduces the muon flux at the experiment by six orders of magnitude
to 1.2~$\mu$/(m$^2\cdot$h).
The basic idea is to operate bare germanium detectors in a radiopure cryogenic
liquid like LAr for cooling to their operating temperature of $\sim$90~K and
for shielding against external radiation originating from the walls (see
Extended Data Fig.~\ref{extfig:setup} for a sketch of the
setup)~\cite{heusser}. In \gerda, a
64~m$^3$ LAr cryostat is inside a 590~m$^3$ water tank. The clean water
completes the passive shield. Above the water tank is a clean room with a
glove box and lock for the assembly of germanium detectors into strings and
the integration of the liquid argon veto system.
\gerda\ deploys 7 coaxial detectors from the former
Heidelberg-Moscow~\cite{klapdor1} and IGEX~\cite{igex} experiments and 30
broad energy (BEGe) detectors ~\cite{gerda:2015:bege}. All diodes have p-type
doping (see Extended Data Fig.~\ref{extfig:detectors}). Electron-hole pairs
created in the 1--2~mm thick n$+$ electrode mostly recombine such that the
active volume is reduced. A superior identification of the event topology and
hence background rejection is available for the BEGe type (see below). The
enriched detectors are assembled into 6 strings surrounding the central one
which consists of three coaxial detectors of natural isotopic composition.
Each string is inside a nylon cylinder (see Extended Data
Fig.~\ref{extfig:string}) to limit the LAr volume from which radioactive ions
like $^{42}$K can be collected to the outer detector
surfaces~\cite{gerda:2014:bkg}.
All detectors are connected to custom made low radioactivity charge sensitive
amplifiers~\cite{cc3} (30~MHz bandwidth, 0.8~keV full width at half maximum
(FWHM) resolution) located in LAr about 35~cm above the detectors. The charge
signal traces are digitized with 100~MHz sampling rate and stored on disk for
offline analysis.
In background events some energy is often also deposited in the argon. The
resulting scintillation light~\cite{lar1} can be detected to veto them. In
Phase~II, a cylindrical volume of 0.5~m diameter and 2.2~m height around the
detector strings (see Extended Data Fig.~\ref{extfig:setup}
and~\ref{extfig:larcaps}) is instrumented with light sensors. The central
0.9~m of the cylinder are defined by a curtain of wavelength shifting fibers
which surround the 0.4~m high detector array. The fibers are read out at both
ends with 90~silicon photomultipliers (SiPMs)~\cite{lar2}. Groups of six
$3\times3$~mm$^2$ SiPMs are connected together to a charge sensitive
amplifier. Sixteen 3'' low-background photomultipliers (PMTs) designed for
cryogenic operation are mounted at the top and bottom surfaces of the
cylindrical volume. The distance to any detector is at least 0.7~m to limit
the PMT background contribution from their intrinsic Th/U radioactivity. All
LAr veto channels are digitized and read out together with the germanium
channels if at least one detector has an energy deposition above
$\sim$100~keV.
The nylon cylinders, the fibers, the PMTs and all surfaces of the instrumented
LAr cylindrical volume are covered with a wavelength shifter to shift the LAr
scintillation light from 128~nm to about 400~nm to match the peak quantum
efficiency of the PMTs and the absorption maximum of the fibers.
The water tank is instrumented with 66~PMTs to detect Cherenkov light from
muons passing through the experiment. On top of the clean room are three
layers of plastic scintillator panels covering the central 4$\times$3~m$^2$ to
complete the muon veto~\cite{gerda:muon}.
\section{Data analysis}
The data analysis flow is very similar to that of Phase~I. The offline
analysis of the digitized germanium signals is described in
Refs.~\cite{gerda:2013:prl,acat,gelatio}.
A data blinding procedure is again applied. Events with a reconstructed
energy in the interval $Q_{\beta\beta}\pm 25$~keV are not analyzed but only
stored on disk. After the entire analysis chain has been frozen, these
blinded events have been processed.
The gain stability of each germanium detector is continuously monitored by
injecting charge pulses (test pulses) into the front-end electronics with a
rate of 0.05~Hz. The test pulses are also used to monitor leakage current and
noise. Only data recorded during stable operating conditions (e.g.~gain
stability better than 0.1\,\%) are used for the physics analysis. This
corresponds to about 85\,\% of the total data written on disk.
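A minimal sketch of this run-selection criterion, assuming each run is summarized by its recorded test-pulse amplitudes (the real analysis applies further quality criteria beyond gain stability):

```python
def stable_runs(test_pulse_gains, threshold=1e-3):
    """Keep only runs whose test-pulse gain is stable to better than
    0.1% (threshold = 1e-3) relative to the run's mean gain.
    `test_pulse_gains` maps a run id to the list of test-pulse
    amplitudes recorded during that run. Illustrative sketch only."""
    selected = []
    for run, amplitudes in test_pulse_gains.items():
        mean = sum(amplitudes) / len(amplitudes)
        if all(abs(a - mean) / mean <= threshold for a in amplitudes):
            selected.append(run)
    return selected
```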
Signals originating from electrical discharges in the high voltage line or
bursts of noise are rejected during the off\-line event reconstruction by a
set of multi-parametric cuts based on the flatness of the baseline, polarity
and time structure of the pulse. Physical events at $Q_{\beta\beta}$ are
accepted with an efficiency larger than 99.9\,\% estimated with $\gamma$ lines
in calibration data, test pulse events and template signals injected in the
data set. Conversely, a visual inspection of all events above 1.6~MeV shows
that no unphysical event survives the cuts.
The energy deposited in a germanium detector is reconstructed offline with an
improved digital filter~\cite{gerda:2015:zac}, whose parameters are optimized
for each detector and for several periods. The energy scale and resolution
are determined with weekly calibration runs with $^{228}$Th sources. The
long-term stability of the scale is assessed by monitoring the shift of the
position of the 2615~keV peak between consecutive calibrations. It is
typically smaller than 1~keV for BEGe detectors and somewhat worse for some
coaxial ones. The FWHM resolution at 2.6~MeV is between 2.6--4.0~keV for BEGe
and 3.4--4.4~keV for coaxial detectors. The width of the strongest $\gamma$
lines in the physics data (1525~keV from $^{42}$K and 1460~keV from $^{40}$K)
is found to be 0.5~keV larger than the expectation for the coaxial detectors
(see Fig.~\ref{fig:Eres}). An additional noise term is therefore included
when estimating the expected energy resolution at $Q_{\beta\beta}$.
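Assuming the additional noise term is combined in quadrature with the calibration resolution, as is standard for independent resolution contributions (the text states only that an additional term is included, not how it is combined), the estimate would read:

```python
import math

def fwhm_at_qbb(fwhm_calibration, extra_noise):
    """Estimate the expected FWHM at Q_bb from the calibration
    resolution and an additional noise term motivated by the
    ~0.5 keV excess width of the potassium lines in physics data.
    The quadrature combination is an assumption of this sketch."""
    return math.sqrt(fwhm_calibration**2 + extra_noise**2)
```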
For $0\nu\beta\beta$ decays in the active part of a detector volume, the total
energy of $Q_{\beta\beta}$ is detected in 92\,\% of the cases in this
detector. Multiple detector coincidences are therefore discarded as
background events. Two consecutive candidate events within 1~ms are also
rejected (dead time $\sim$$10^{-4}$) to discriminate time-correlated decays
from primordial radioisotopes, as e.g.~the radon progenies $^{214}$Bi and
$^{214}$Po. Candidate events are also rejected if a muon trigger occurred
within 10~$\upmu$s prior to a germanium detector trigger. More than 99\,\% of
the muons that deposit energy in a germanium detector are rejected this way.
The induced dead time is $<$0.1\,\%.
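The three event-level cuts described above can be summarized schematically as follows; the event fields, helper name, and units are illustrative assumptions, not the actual analysis code:

```python
def passes_event_cuts(event, prev_candidate_time_s, muon_trigger_times_us):
    """Schematic summary of the event-level cuts: reject
    multi-detector coincidences, events within 1 ms of the previous
    candidate (time-correlated decays such as Bi-214/Po-214), and
    events with a muon trigger in the 10 us before the germanium
    trigger. Field names and units are illustrative assumptions."""
    if event['n_detectors_hit'] > 1:       # multiplicity cut
        return False
    if event['time_s'] - prev_candidate_time_s < 1e-3:  # 1 ms anti-coincidence
        return False
    for t_mu in muon_trigger_times_us:     # muon veto window
        if 0.0 <= event['trigger_time_us'] - t_mu <= 10.0:
            return False
    return True
```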
The traces from PMTs and SiPMs are analyzed offline to search for LAr
scintillation signals in coincidences with a germanium detector trigger. An
event is rejected if any of the light detectors record a signal of amplitude
above 50\,\% of the expectation for a single photo-electron within 5~$\upmu$s
from the germanium trigger. 99\,\% of the photons occur in this window.
Accidental coincidences between the LAr veto system and germanium detectors
create a dead time of $(2.3\pm0.1)$\,\% which is measured with test pulse
events and cross checked with the counts in the $^{40}$K peak.
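The veto decision itself reduces to a simple amplitude-and-time condition. The sketch below assumes signal times measured relative to the germanium trigger and amplitudes in units of the single photo-electron response; the exact placement of the coincidence window is an assumption:

```python
def lar_veto(light_signals, spe_amplitude=1.0, window_us=5.0):
    """Schematic LAr veto decision: the event is vetoed if any light
    channel records a signal above 50% of the single photo-electron
    (SPE) amplitude within 5 us of the germanium trigger.
    `light_signals` is a list of (time_us, amplitude) pairs
    relative to the germanium trigger."""
    for t_us, amplitude in light_signals:
        if 0.0 <= t_us <= window_us and amplitude > 0.5 * spe_amplitude:
            return True   # event rejected by the LAr veto
    return False
```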
\begin{figure}
\includegraphics[width=\columnwidth]{fig1-pII-2016.pdf}
\caption{\label{fig:Eres}
Average energy resolution (FWHM) for $\gamma$ lines of the calibration
spectrum (filled symbols) and the $^{42}$K line from physics data
(open symbols) for BEGe (symbols and solid line in blue) and coaxial
(symbols and dashed line in red) detectors. The insets show the K
lines and the 2615~keV calibration peak.
}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig2-pII-2016.pdf}
\caption{\label{fig:allE}
Energy spectra of Phase~II data sets before (open histogram) and
after argon veto cut (filled histogram). The blue lines are the
expected $2\nu\beta\beta$ spectra for our recent half-life
measurement. The inset shows the BEGe spectrum in the energy region
around the two potassium lines. Various background contributions are
labeled in the bottom panel.
}
\end{figure*}
Fig.~\ref{fig:allE} shows the energy spectra for BEGe and coaxial detectors of
Phase~II with and without the LAr veto cut. Below $\sim$$500$~keV the spectra
are dominated by $^{39}$Ar $\beta$ decays, up to 1.7~MeV by events from double
beta decay with two neutrino emission ($2\nu\beta\beta$), above 2.6~MeV by
$\alpha$ decays on the detector surface and around $Q_{\beta\beta}$ by a
mixture of $\alpha$ events, $^{42}$K $\beta$ decays and those from the
$^{238}$U and $^{232}$Th decay chains. The two spectra are similar except for
the number of $\alpha$ events which is on average higher for coaxial
detectors. The number of $\alpha$ counts shows a large variation between the
detectors. The power of the LAr veto is best demonstrated by the $^{42}$K
line at 1525~keV which is suppressed by a factor $\sim$5 (see inset) due to
the $\beta$ particle depositing up to 2~MeV energy in the LAr. The figure
also shows the predicted $2\nu\beta\beta$ spectrum from $^{76}$Ge using our
Phase~I result for the half-life of
$T_{1/2}^{2\nu}=(1.926\pm0.094)\cdot10^{21}$~yr~\cite{gerda:2015:2nbb}.
The time profile of the germanium detector current signal is used to
discriminate $0\nu\beta\beta$ decays from background events. While the former
have point-like energy deposition in the germanium (single site events, SSE),
the latter have often multiple depositions (multi site events, MSE) or
depositions on the detector surface. The same pulse shape discrimination (PSD)
techniques of Phase~I~\cite{gerda:2013:psd} are applied.
Events in the double escape peak (DEP) and at the Compton edge of 2615~keV
gammas in calibration data have a similar time profile as $0\nu\beta\beta$
decays and are hence proxies for SSE. These samples are used to define the PSD
cuts and the related detection efficiencies. The latter are cross checked
with $2\nu\beta\beta$ decays.
The geometry of BEGe detectors makes it possible to apply a simple mono-parametric PSD
based on the maximum of the detector current pulse $A$ normalized to the total
energy $E$~\cite{aovere,aovere2}. The energy dependence of the mean and the
resolution $\sigma_{ae}$ of $A/E$ are measured for every detector with
calibration events. After correcting for these dependences and normalizing the
mean $A/E$ of DEP events to~1, the acceptance range is determined for each
detector individually: the lower cut is set to keep 90\,\% of DEP events and
the upper position is twice the low-side separation
from~1. Fig.~\ref{fig:aovere} shows a scatter plot of the PSD parameter
$\zeta=(A/E-1)/\sigma_{ae}$ versus energy and the projection to the energy
axis. Events marked in red survive the PSD selection. Below 1.7~MeV
$2\nu\beta\beta$ events dominate with a survival fraction of
$(85_{-1}^{+2})$\,\%. The two potassium peaks and Compton scattered photons
reconstruct at $A/E<1$ (below the SSE band). All 234~$\alpha$ events at
higher energies exhibit $A/E > 1$ and are easily removed. The average
$0\nu\beta\beta$ survival fraction~\cite{phd:vici} is $(87\pm2)$\,\%. The
uncertainty takes into account the systematic difference between the $A/E$
centroids of DEP and $2\nu\beta\beta$ events and different fractions of MSE in
DEP and $0\nu\beta\beta$ events.
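The construction of the acceptance window can be sketched as follows; this is a minimal illustration under our own assumptions (function name, use of an empirical quantile), not the collaboration's implementation, which in addition corrects the energy dependence of the $A/E$ mean and resolution.

```python
import numpy as np

def aoe_acceptance_window(dep_aoe):
    """Illustrative A/E acceptance window for one detector (hypothetical helper).

    The lower cut is placed so that 90% of the DEP events survive it;
    the upper cut is set twice the low-side separation away from 1.
    """
    aoe = np.asarray(dep_aoe, dtype=float)
    aoe = aoe / aoe.mean()            # normalize the mean A/E of DEP events to 1
    low = np.quantile(aoe, 0.10)      # 90% of DEP events lie above this value
    high = 1.0 + 2.0 * (1.0 - low)    # upper cut: twice the low-side separation
    return low, high
```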
\begin{figure}
\includegraphics[width=\columnwidth]{fig3-pII-2016.pdf}
\caption{\label{fig:aovere}
For all BEGe detectors, PSD parameter $\zeta = (A/E -
1)/\sigma_{ae}$ versus energy for physics data and the
projection to the energy axis. Red circles and red spectrum
represent events that pass the selection. Since the cuts
are detector specific, the accepted $\zeta$ ranges differ.
}
\end{figure}
For coaxial detectors a mono-parametric PSD is not sufficient since SSE do not
have a simple signature~\cite{gerda:2013:psd}. Instead two neural network
algorithms are applied to discriminate SSE from MSE and from $\alpha$ surface
events. The first one is identical to the one used in Phase~I. The cut on the
neural network qualifier is set to yield a survival fraction of DEP events of
90\,\% for each detector. For the determination of the $0\nu\beta\beta$
efficiency, $2\nu\beta\beta$ events in physics data and a complete Monte Carlo
simulation~\cite{kirschphd} of physics data and calibration data are used. The
simulation considers the detector and the electronics response to energy
depositions including the drift of charges in the crystal~\cite{adl}. We find
a survival fraction for $0\nu\beta\beta$ events of $(85\pm5)$\,\% where the
error is derived from variations of the simulation parameters.
The second neural network algorithm is applied for the first time and
identifies surface events on the p$+$ contact. Training is done with physics
data from two different energy intervals. After the LAr veto cut, events in the
range 1.0--1.3~MeV are almost exclusively from $2\nu\beta\beta$ decay and hence
signal-like. Events above 3.5~MeV are almost all from $\alpha$ decays on the
p$+$ electrode and represent background events in the training. As
$0\nu\beta\beta$ efficiency we measure a value of $(93\pm1)$\,\% for a
$2\nu\beta\beta$ event sample not used in the training. The combined PSD
efficiency for coaxial detectors is $(79\pm5)$\,\%.
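Schematically, the training-sample selection by energy window reads as follows; the function name and the labeling convention are our illustration, not the actual analysis code.

```python
def label_training_events(energies_mev):
    """Assign training labels from the reconstructed energy alone (after the
    LAr veto): events at 1.0-1.3 MeV are treated as signal-like (2vbb decays),
    events above 3.5 MeV as background-like (alpha decays on the p+ contact);
    everything else is left out of the training sample."""
    labels = []
    for e in energies_mev:
        if 1.0 <= e <= 1.3:
            labels.append("signal")       # almost exclusively 2vbb
        elif e > 3.5:
            labels.append("background")   # almost all alphas on the p+ electrode
        else:
            labels.append(None)           # not used for training
    return labels
```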
\section{Results}
This analysis includes the data sets used in the previous
publication~\cite{gerda:2013:prl,gerda:2015:taup13}, an additional coaxial
detector period from 2013 (labeled ``PI extra'') and the Phase~II data from
December 2015 until June 2016 (labeled ``PIIa coaxial'' and ``PIIa
BEGe''). Table~\ref{tab:datasets} lists the relevant parameters for all data
sets. The exposures in the active volumes of the detectors for $^{76}$Ge are
234 and 109 mol$\cdot$yr for Phase~I and II, respectively. The
efficiency is the product of the $^{76}$Ge isotope fraction (87\,\%), the
active volume fraction (87--90\,\%), the $0\nu\beta\beta$ event fraction
reconstructed at full energy in a single crystal (92\,\%), pulse shape
selection (79--92\,\%) and the live time fraction (97.7\,\%). For the Phase~I
data sets the event selection including the PSD classification is
unchanged. An improved energy reconstruction~\cite{gerda:2015:zac} is applied
to the data as well as an updated value for the coaxial detector PSD
efficiency of the neural network analysis of $(83\pm3)$\,\%~\cite{kirschphd}.
\begin{table}
\caption{\label{tab:datasets}
List of data sets, exposures (for total mass), energy resolutions in
FWHM, efficiencies (including enrichment, active mass, reconstruction
efficiencies and dead times) and background indices (BI) in the analysis
window.
}
\begin{tabular}{ccccc} \hline
data set & exposure & FWHM & efficiency & BI \\
& [kg$\cdot$yr] & [keV] & & $10^{-3}$\ctsper \\ \hline
PI golden & 17.9 & $4.3(1)$ & $0.57(3)$ & $11\pm2$~~ \\
PI silver & 1.3 & $4.3(1)$ & $0.57(3)$ & $30\pm10$ \\
PI BEGe & 2.4 & $2.7(2)$ & $0.66(2)$ & $5_{-3}^{+4}$ \\
PI extra & 1.9 & $4.2(2)$ & $0.58(4)$ & $5_{-3}^{+4}$ \\ \hline
PIIa coaxial & 5.0 & $4.0(2)$ & $0.53(5)$ & $3.5_{-1.5}^{+2.1}$ \\
PIIa BEGe & 5.8 & $3.0(2)$ & $0.60(2)$ & $0.7_{-0.5}^{+1.1}$ \\ \hline
\end{tabular}
\end{table}
Fig.~\ref{fig:spectrum} shows the spectra for the combined Phase~I data sets
and the two Phase~II sets. The analysis range is from 1930 to 2190~keV
without the intervals $(2104\pm5)$~keV and $(2119\pm5)$~keV of known peaks
predicted by our background model~\cite{gerda:2014:bkg}. For the coaxial
detectors four events survive the cuts, which means that the background is
reduced by a factor of three compared to Phase~I (see ``PI golden'' in
Tab.~\ref{tab:datasets}). Due to the better PSD performance, only one event
remains in the BEGe data which corresponds to a background of
$0.7_{-0.5}^{+1.1}\cdot 10^{-3}$ \ctsper. Consequently, the Phase~II
background goal is reached.
We perform both a Frequentist and a Bayesian analysis based on an unbinned
extended likelihood function~\cite{gerda:2015:taup13}. The fit function for
every data set is a flat distribution for the background (one free parameter
per set) and for a possible signal a Gaussian centered at $Q_{\beta\beta}$
with a width according to the corresponding resolution listed in
Tab.~\ref{tab:datasets}. The signal strength is calculated for each set
according to its exposure, efficiency and the inverse half-life $1/T$, which is
a common free parameter.
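Schematically (in our own notation, which the text does not spell out), the expected numbers of signal and background events in data set $k$ follow from its exposure $\mathcal{E}_k$, total efficiency $\epsilon_k$ and background index ${\rm BI}_k$ as
\[
\mu_k^{S} = \ln 2\, \frac{N_A}{m_{\rm enr}}\, \epsilon_k\, \mathcal{E}_k\, \frac{1}{T},
\qquad
\mu_k^{B} = {\rm BI}_k \cdot \Delta E \cdot \mathcal{E}_k,
\]
where $N_A$ is Avogadro's number, $m_{\rm enr}$ the molar mass of the enriched detector material, and $\Delta E$ the width of the analysis window.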
\begin{figure}
\includegraphics[width=\columnwidth]{fig4-pII-2016.pdf}
\caption{\label{fig:spectrum}
Combined Phase~I data (top), Phase~II coaxial (middle) and BEGe
detector spectra (bottom) in the analysis window. The binning is
2~keV. The exposures are given in the panels. The red histogram is
the final spectrum, the filled grey one without pulse shape
discrimination and the open one in addition without argon veto cut.
The blue line is the fitted spectrum together with a hypothetical
signal corresponding to the 90\,\% C.L. limit of $T_{1/2}^{0\nu} >
5.3\cdot10^{25}$~yr.
}
\end{figure}
Systematic uncertainties like a 0.2~keV uncertainty of the energy scale at
$Q_{\beta\beta}$ are included in the analysis as pull terms in the likelihood
function. The implementation takes correlations into account.
The Frequentist analysis uses the Neyman construction of the confidence
interval and the standard two-sided test statistics~\cite{pdg14,cowan} with
the restriction to the physical region $1/T\ge0$: the frequency distribution
of the test statistic is generated using Monte Carlo simulations for different
assumed $1/T$ values. The limit is determined by finding the largest value
of $1/T$ for which at most 10\,\% of the simulated experiments have a value of
the test statistic more unlikely than the one measured in our data (see
Extended Data Fig.~\ref{extfig:analysis}). Details of the statistical
analysis can be found in the appendix. The best fit yields zero signal events
and a 90\,\% C.L.~limit of 2.0 events in 34.4~kg$\cdot$yr total exposure or
\begin{equation}
T_{1/2}^{0\nu} > 5.3 \cdot 10^{25}\,{\rm yr.}
\end{equation}
The (median) sensitivity assuming no signal is $4.0\cdot10^{25}$~yr (see
Extended Data Fig.~\ref{extfig:analysis}). The systematic errors weaken the
limit by $<$1\,\%.
For a prior flat in $1/T$ between 0 and $10^{-24}$~yr$^{-1}$, the Bayesian fit
yields a limit of $T_{1/2}^{0\nu} > 3.5\cdot10^{25}$~yr (90\,\%
C.I.). The sensitivity assuming no signal is $3.1\cdot10^{25}$~yr.
\section{Discussion}
The second phase of \gerda\ has been collecting data since December 2015 under
stable conditions with all channels working. The background at $Q_{\beta\beta}$ for
the BEGe detectors is $(0.7_{-0.5}^{+1.1})\cdot10^{-3}$~\ctsper. This is a
major achievement since the value is consistent with our ambitious design
goal.
We find no hint for a $0\nu\beta\beta$ decay signal in our combined data and
place a limit of $T_{1/2}^{0\nu} ({\rm ^{76}Ge})>5.3\cdot10^{25}$~yr (90\,\%
C.L., sensitivity $4.0\cdot10^{25}$~yr). For light Majorana neutrino exchange
and a nuclear matrix element range for $^{76}$Ge between 2.8 and 6.1
\cite{mene09,horoi16,bar15,suh15,fae13,rod13,yao15} the \gerda\ half-life
limit converts to $m_{\beta\beta}<$0.15--0.33~eV (90\,\% C.L.).
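For reference, the conversion relies on the standard rate formula for light Majorana neutrino exchange,
\[
\left(T_{1/2}^{0\nu}\right)^{-1} = G^{0\nu} \left|M^{0\nu}\right|^2 \frac{m_{\beta\beta}^2}{m_e^2},
\qquad
m_{\beta\beta} = \Big| \sum_i U_{ei}^2\, m_i \Big|,
\]
where $G^{0\nu}$ is the phase-space factor, $M^{0\nu}$ the nuclear matrix element, $m_e$ the electron mass, $U$ the lepton mixing matrix and $m_i$ the neutrino masses; the quoted interval in $m_{\beta\beta}$ reflects the spread of the matrix element calculations.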
We expect only a fraction of a background event in the energy region of
interest (1 FWHM) at the design exposure of 100~kg$\cdot$yr. \gerda\ is hence the
first ``background free'' experiment in the field. Our sensitivity therefore
grows almost linearly with exposure, instead of with its square root as for
competing experiments, and reaches $10^{26}$~yr for the half-life limit within
3 years of continuous operation. With the same exposure we have a 50\,\% chance
to detect a signal with $3\sigma$ significance if the half-life is below
$10^{26}$~yr.
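This scaling can be made explicit with a standard approximation (not spelled out in the text): the half-life sensitivity behaves as
\[
T_{1/2}^{\rm sens} \propto
\left\{ \begin{array}{ll}
\epsilon\, \mathcal{E} & \mbox{(``background free'' regime)}, \\[4pt]
\epsilon\, \sqrt{\mathcal{E}/({\rm BI}\cdot \Delta E)} & \mbox{(background dominated)},
\end{array} \right.
\]
with exposure $\mathcal{E}$, total efficiency $\epsilon$, background index BI and energy resolution $\Delta E$.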
Phase~II has demonstrated that the concept of background suppression by
exploiting the good pulse shape performance of BEGe detectors and by detecting
the argon scintillation light works. The background at $Q_{\beta\beta}$ is at
a world-best level: it is lower by typically a factor of 10 compared to
experiments using other isotopes after normalization by the energy resolution
and total efficiency $\epsilon$; i.e. (BI$\cdot$FWHM/$\epsilon$) is superior.
This is the reason why the \gerda\ half-life sensitivity of
$4.0\cdot10^{25}$~yr for an exposure of 343~mol$\cdot$yr is similar to the one
of KamLAND-Zen for $^{136}$Xe of $5.6\cdot10^{25}$~yr based on a more than
10-fold exposure of 3700~mol$\cdot$yr~\cite{kamland:2016}.
A discovery of $0\nu\beta\beta$ decay would have far reaching consequences for
our understanding of particle physics and cosmology. Key features for a
convincing case are an ultra low background with a simple flat distribution,
excellent energy resolution and the possibility to identify the events with
high confidence as signal-like as opposed to an unknown $\gamma$-line from a
nuclear transition. The latter is achieved by the detector pulse shape
analysis and possibly a signature in the argon. The concept of operating bare
germanium detectors in liquid argon has proven to offer the best performance
for a discovery, which motivates future extensions of the program. The
\gerda\ cryostat can hold 200~kg of detectors. Such an experiment will remain
background-free up to an exposure of 1000~kg$\cdot$yr, provided the background
can be further reduced by a factor of five. The discovery sensitivity would
then improve by an order of magnitude to a half-life of $10^{27}$~yr. The
200~kg setup is conceived as a first step for a more ambitious 1~ton
experiment which would ultimately boost the sensitivity to $10^{28}$~yr
corresponding to the $m_{\beta\beta}<$10--20~meV range. Both extensions
are being pursued by the newly formed LEGEND Collaboration
(http://www.legend-exp.org).
\section{Introduction}
In the present article we are interested in developing a finite element method for solving the Stokes problem with surface tension on an immersed \textcolor{black}{interface}. \textcolor{black}{This problem concerns the simulation} of the motion of a soap bubble, for example. We consider that the soap bubble has no thickness, and is represented by a hypersurface. Its presence inside the fluid is modeled by a Neumann-type force which generates a jump of the normal trace of the stress tensor. This force is proportional to the mean curvature of the surface. In particular, at equilibrium, this force indicates the difference of pressures inside and outside the soap bubble. Away from equilibrium, this force impacts the behavior of the surrounding fluid on both sides, and the response of the fluid is a velocity on the hypersurface, assuming the equality of velocities at the interface. This velocity determines the evolution of the interface, and thus this is how the dynamics of the soap bubble is coupled to its own geometry. Addressing the question of existence of weak solutions for such models can be a difficult task. For more details on the mathematical aspects, we refer to~\cite{Abels2016}.
We focus our interest on the linear Stokes problem, which constitutes the cornerstone of more complex models like the Navier-Stokes equations. Inside a domain $\Omega \subset \R^d$ ($d=2$ or $3$), we consider an immersed interface $\Gamma$, that we assume to be a closed smooth oriented manifold of codimension 1 without boundary, let us say a smooth perturbation of the sphere, for the sake of simplicity. The hypersurface $\Gamma$ splits the domain $\Omega$ into two connected subsets $\Omega^+$ and $\Omega^-$, as described in Figure~\ref{fig1}. We thus have $\Omega = \Omega^+ \cup \overline{\Omega^-}$. The velocity-pressure couples are denoted by $(\bu^+,p^+)$ and $(\bu^-,p^-)$ inside $\Omega^+$ and $\Omega^-$, respectively.
\begin{figure}[!h]
\begin{center}
\scalebox{0.5}
{ \input{./images/transmission.tex} }
\caption{A \textcolor{black}{surface} force applied on an interface, separating the fluid domain into two parts.}\label{fig1}
\end{center}
\end{figure}
\FloatBarrier
The system we are interested in is the following:
\begin{eqnarray*}
\left\{ \begin{array} {rcl}
-\nu^+ \Delta \bu^+ + \nabla p^+ = \bff^+ & & \text{in } \Omega^+, \\
-\nu^- \Delta \bu^- + \nabla p^- = \bff^- & & \text{in } \Omega^-, \\
\divg \bu^+ = 0 & & \text{in } \Omega^+, \\
\divg \bu^- = 0 & & \text{in } \Omega^-, \\
\bu^+ = 0 & & \text{on } \p \Omega, \\
\bu^+ - \bu^- = 0 & & \text{across } \Gamma, \\
\sigma(\bu^+, p^+)\bn^+ + \sigma(\bu^-, p^-)\bn^- = \bgg & & \text{across } \Gamma.
\end{array} \right.
\end{eqnarray*}
The notation $\sigma(\bu,p) = \nu(\nabla \bu + \nabla \bu ^T) -p\, \I$ refers to the stress tensor, where $\nu =\nu^+$ or $\nu^-$ stands for the viscosities. The data on which we focus our interest is a general \textcolor{black}{surface} force $\bgg$. We also consider volume forces $\bff^+$ and $\bff^-$ in the right-hand-side of the first two equations in the system above, but their consideration in finite element formulations does not involve any specific difficulty. We denote by $\bn^+$ and $\bn^-$ the outward unit normal on $\Gamma$ of $\Omega^+$ and $\Omega^-$, respectively. The goal of this paper is to define a robust approximation of a Poincar\'e-Steklov operator (of type {\it Neumann-to-Dirichlet}), that computes the trace of the velocity on $\Gamma$ from the surface tension force $\bgg$.
The state-of-the-art of finite element methods developed for solving two-phase flow problems with surface tension forces can be divided into different types of strategies. The first one consists of methods that adapt the mesh to the shape of the interface. Among them, adaptive methods were developed in~\cite{Dufour1998, Kou2014, Xie2016}. Another strategy consists in deforming the mesh according to the deformation of the interface. A Lagrangian framework was considered in~\cite{Peric2001}. {\it Arbitrary Lagrangian Eulerian} (ALE) formulations are better known for fluid-structure interaction models. However, we can mention~\cite{Navti1997, Kou2014, Anjos2014, Liu2017, Anjos2018} where ALE-FEM methods are developed in the context of surface tension models. Finally, the use of unfitted meshes -- which is our concern -- was considered in~\cite{Zhang2016}. \textcolor{black}{These techniques require local treatments and specific approximations of the forces on the interface, in particular when the latter is implemented with a level-set function. In this fashion, we can mention the works~\cite{Gross2007-2, Gross2007}, where enrichments of the basis functions are provided.}
\textcolor{black}{Following a fictitious domain approach, other strategies can be mentioned for capturing an interface in the context of two-phase flows. Level-set methods for multiphase flows are developed in~\cite{Engquist2002, Mesri2016, Turek2018}. Discontinuous Galerkin methods are used in~\cite{Whiteley2015, Moortgat2016}. The problem of determining the position of the interface and tracking it constitutes the wide family of interface-capturing methods. In the context of the present work, we can cite~\cite{Ohmori1997, Devals2007}, and more recently~\cite{Owkes2013, Denner2014, Dhar2015, Park2018, Duret2018, Heinrich2018}. In our case, a level-set function helps us in practice to determine the position of the interface, by saying on which side of the interface a point of the domain is located, in the same fashion as~\cite{Mesri2016}.
}
\textcolor{black}{In the present work the focus is on the introduction of dual variables, namely multipliers for taking into account the interface conditions. This is achieved with the consideration of a judicious Lagrangian functional, from which the finite element formulation is derived. When the optimality conditions for this Lagrangian functional are satisfied, these multipliers correspond to velocities and forces on the interface, whose values are unknowns of the problem, and so their approximation is of interest for the consideration of coupled systems.}
\textcolor{black}{Another} originality of our method lies in the fact that the \textcolor{black}{interface} $\Gamma$ is taken into account with a fictitious domain approach. That means that the \textcolor{black}{interface} does not fit the mesh, and so the latter is chosen independently of the \textcolor{black}{interface} (Cartesian mesh, structured mesh...). Our approach is inspired by the eXtended Finite Element Method (XFEM) introduced by~\cite{Moes1999}. This method consists in enriching the set of basis functions with singular functions, in order to handle variables (defined on the \textcolor{black}{interface}) whose degrees of freedom are independent of the mesh edges. See~\cite{reviewXfem} for a review of the applications of the method. Applications of XFEM to the context of two-phase flows were tested in~\cite{Chessa2003, Chessa2003-2, Reusken2007, Reusken2008, Gross2011, Sauerland2011, Cheng2012, Liao2012, Sauerland2013, Fahsi2017}, for instance. In our case, \textcolor{black}{one of the main differences with the XFEM approach lies in the fact that} we do not provide singular functions as enrichment, but merely the trace on the \textcolor{black}{interface} of the standard basis functions (see section~\ref{sec-fict} for the details). The price to pay -- in comparison with XFEM -- is a lack of robustness with respect to the geometry, and a lack of quality for the convergence of the dual variables. We circumvent this drawback by performing a stabilization technique with an augmented Lagrangian \`a la Barbosa-Hughes~\cite{Barbosa1, Barbosa2}. See also~\cite{Tezduhar2003}. The present strategy was first introduced in~\cite{Renard2009}, and next adapted to fluid mechanics in~\cite{Court2014, Court2015} in the context of Fluid-Structure Interactions \textcolor{black}{based on Dirichlet boundary conditions}. This approach has also shown its capacities in~\cite{Court2016}, where complex non-planar cracks in 3D were taken into account in the context of Geophysics.
In this fashion, let us also mention that Nitsche type methods were developed for solving this kind of problems, in several works~\cite{Hansbo2002, Hansbo2005, Hansbo2005-2, Hansbo2005-3, Hansbo2010, Hansbo2011, Hansbo2014}. This family of methods does not require the introduction of Lagrange multipliers for the boundary conditions, whose consideration can be made with overlapping meshes, for instance.
\textcolor{black}{In the present work, Taylor-Hood elements will be used for the choice of pairs between primal and dual variables. We will then focus our interest on the use of structured meshes, since unstructured meshes can lead to instabilities in that case (see~\cite{Case2011, Gonzalez2015} for instance)}.
By developing such a method, our goal is to have a tool which enables us to perform unsteady simulations involving a moving interface with \textcolor{black}{surface} forces, in complex situations where sparing computation time is crucial. The purpose can be the study of the movement of a bubble-soap, or the simulation of the stabilization of this latter, by the use of an electric field as a control function for instance (see~\cite{Sato1998}), that acts on the interface as a surface tension type force. The interest of our method lies in the fact that, for updating the geometry between two time-steps, we only need to update a number of objects which is of the same range as the number of degrees of freedom chosen for describing the interface (see section~\ref{sec-smartupdate} for more details).
We illustrate the capacity of the method and our underlying motivation by performing simulations for an unsteady coupled model. The initial geometric configuration, given by the \textcolor{black}{interface} $\Gamma$ at time $t=0$, generates a surface tension force $\bgg = -\mu \kappa \bn^-$ where $\mu$ is a coefficient and $\kappa$ is the mean curvature of the \textcolor{black}{interface}. While solving the problem for this force, we obtain the trace of the velocity on the \textcolor{black}{interface}. This velocity enables us to update the \textcolor{black}{interface} for the next time step. This is a so-called {\it partitioned} method. \textcolor{black}{In this fashion, let us mention~\cite{Turek2004}, which treats of an other type of coupled problems}.\\
The plan is organized as follows: In section~\ref{sec-setting} we set the problem and its variational formulation, by introducing a judicious Lagrangian functional. The discretization is described in section~\ref{sec-fict}: \textcolor{black}{the} fictitious domain method is explained in section~\ref{subsec-fict}, and the theoretical analysis is provided in section~\ref{subsec-theor} (without stabilization) and in section~\ref{subsec-theorstab} (with stabilization). Explanations about the practical implementation are given in section~\ref{sec-impl}. Numerical tests are provided in section~\ref{sec-numtests}. Convergence and accuracy are tested with and without the stabilization technique in section~\ref{subsec-cvstab0} and section~\ref{subsec-cvstab1}, respectively. In particular, robustness with respect to the geometry is demonstrated in section~\ref{subsec-cvrobust}. The unsteady simulations are given in section~\ref{sec-unsteady}. Conclusions are given in section~\ref{sec-conclusion}. \textcolor{black}{The Appendix is devoted to the proofs of technical results.}
\paragraph{Notation.} The symbol $\pm$ will be used for indicating that we consider both symbols $+$ and $-$, for the sake of concision. The jump of a variable $\varphi$ across $\Gamma$ will be denoted by $\left[ \varphi \right]$, equal to $\varphi^+ - \varphi^-$. As unit normal of reference, we denote $\bn = \bn^-$, and so $\bn^+ = -\bn$. We denote by $\sigma(\bv,q) = 2\nu \varepsilon(\bv) - q\, \I$ the stress tensor, where $\varepsilon(\bv) = \frac{1}{2}(\nabla \bv + \nabla \bv^T)$ is the symmetrized gradient tensor and $\I$ is the identity matrix of $\R^{d\times d}$. When $\divg \bv = 0$, we recall that $-\divg \sigma(\bv,q) = -\nu \Delta \bv + \nabla q$. We denote by $|\cdot |$ the Euclidean norm of $\R^d$ or $\R^{d\times d}$, and by $A:B = \trace(A^T B)$ the scalar product in $\R^{d\times d} \times \R^{d\times d}$.
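For completeness, the recalled identity follows from $\divg(\nabla \bv^T) = \nabla(\divg \bv)$:
\begin{eqnarray*}
-\divg \sigma(\bv,q) = -\nu\, \divg\left(\nabla \bv + \nabla \bv^T\right) + \nabla q
= -\nu \Delta \bv - \nu\, \nabla(\divg \bv) + \nabla q = -\nu \Delta \bv + \nabla q,
\end{eqnarray*}
the middle term vanishing when $\divg \bv = 0$.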
\section{Setting of the problem} \label{sec-setting}
Let us consider the following system:
\begin{eqnarray} \label{sysjump} \label{mainsys} \label{syscont}
\left\{ \begin{array} {rcl}
-\divg \sigma^\pm(\bu^\pm,p^\pm) = \bff^\pm & & \text{in } \Omega^\pm, \label{eqstokes}\\
\divg \bu^\pm = 0 & & \text{in } \Omega^\pm, \\
\bu^+ = 0 & & \text{on } \p \Omega, \\
\left[ \bu \right] = 0 & & \text{across } \Gamma, \label{eqjump} \\
\left[\sigma(\bu, p)\right]\bn = \bgg & & \text{across } \Gamma. \label{eqjumpg}
\end{array} \right.
\end{eqnarray}
The notation $\sigma^\pm(\bu^\pm,p^\pm) := 2\nu^\pm \varepsilon(\bu^\pm) - p^\pm\I$ is introduced for considering different (constant) viscosities. Assuming that $\Gamma$ is closed, we define the following function spaces:
\begin{eqnarray*}
\begin{array} {lcl}
\mathbf{V}^{+} = \displaystyle \left\{\bv \in \mathbf{H}^1(\Omega^+)\mid \ \bv_{|\p \Omega} = 0\right\}, & &
\mathbf{V}^- = \mathbf{H}^1(\Omega^-), \\
\Q^{\pm} = \displaystyle \left\{q \in \LL^2(\Omega^\pm) \mid \ \int_{\Omega^\pm} q \, \d \Omega^\pm = 0 \right\}, & & \\[10pt]
\WW = \HH^{-1/2}(\Gamma), & & \ZZ = \WW' = \HH^{1/2}(\Gamma).
\end{array}
\end{eqnarray*}
\textcolor{black}{We will denote by $\langle \, \cdot \, ; \cdot \, \rangle_{\WW;\WW'}$ the duality bracket between $\WW'$ and its dual space $\WW'' \equiv \WW$.} The equality of velocities in the third equation of~\eqref{mainsys} suggests the existence of a function $\Phi$ such that $\bu^+ = \Phi$ and $\bu^- = \Phi$. We take into account these two equalities by introducing two multipliers denoted by $\blambda^\pm$. More specifically, we look for a weak solution of system~\eqref{sysjump} as a critical point of the following Lagrangian functional:
\begin{eqnarray}
\mathscr{L}_0(\bu^+, p^+, \blambda^+, \bu^-, p^-, \blambda^-, \Phi) & = &
\nu^+ \int_{\Omega^+} |\varepsilon(\bu^+)|^2\d \Omega^+ + \nu^- \int_{\Omega^-} |\varepsilon(\bu^-)|^2\d \Omega^- \nonumber\\
& & - \int_{\Omega^+} \bff^+\cdot \bu^+\d \Omega^+ - \int_{\Omega^-} \bff^-\cdot \bu^-\d \Omega^- \nonumber\\
& & - \int_{\Omega^+} p^+\divg \bu^+ \d \Omega^+ - \int_{\Omega^-} p^-\divg \bu^- \d \Omega^- \nonumber\\
& & - \langle \blambda^+ ; \bu^+ - \Phi \rangle_{\mathbf{W};\mathbf{W}'}
- \langle \blambda^- ; \bu^- - \Phi\rangle_{\mathbf{W};\mathbf{W}'}
- \langle \bgg ; \Phi \rangle_{\mathbf{W};\mathbf{W}'}. \nonumber
\end{eqnarray}
\textcolor{black}{When the derivatives of $\mathscr{L}_0$ with respect to $\bu^\pm$ vanish, integration by parts yields the first equation of~\eqref{eqstokes} and also $\blambda^{\pm} = \sigma^\pm(\bu^{\pm},p^{\pm})\bn^{\pm}$. Next, the stationarity of $\mathscr{L}_0$ with respect to $\blambda^\pm$ and $\Phi$ yields the transmission conditions of~\eqref{eqstokes}.}
For the sake of concision, we will denote
\begin{eqnarray*}
\mathfrak{u} = (\bu^+,p^+, \blambda^+,\bu^-, p^-, \blambda^-, \Phi)
& \text{ and } &
\mathfrak{v} = (\bv^+,q^+,\bmu^+,\bv^-,q^-, \bmu^-, \varphi).
\end{eqnarray*}
The first-order optimality conditions satisfied by a saddle-point of $\mathscr{L}_0$ then yield the following variational formulation:
\begin{eqnarray}
& & \text{Find
$\mathfrak{u} \in \VV^+ \times \Q^+ \times \mathbf{W}\times \VV^-\times \Q^- \times \mathbf{W} \times \ZZ$ such that} \nonumber \\
& & \left\{ \begin{array} {lcl}
\mathcal{A}_0^\pm(\mathfrak{u};\bv) =
\mathcal{F}^\pm(\bv) & &
\forall \bv \in \VV^\pm, \\[5pt]
\mathcal{B}_0^\pm(\mathfrak{u};q) = 0 & &
\forall q \in \Q^\pm, \\[5pt]
\mathcal{C}_0^\pm(\mathfrak{u};\bmu) = 0 & &
\forall \bmu \in \WW, \\[5pt]
\mathcal{D}_0^\pm(\mathfrak{u};\bvarphi) =
\mathcal{G}(\bvarphi) & &
\forall \bvarphi \in \ZZ .
\end{array} \right.
\end{eqnarray}
In this formulation we introduced the following bilinear forms
\begin{eqnarray*}
\mathcal{A}_0^\pm(\mathfrak{u};\bv) =
\int_{\Omega^{\pm}} \sigma^\pm(\bu^{\pm},p^{\pm}): \varepsilon(\bv) \, \d \Omega^{\pm}
- \langle \blambda^{\pm} ; \bv \rangle_{\mathbf{W};\mathbf{W}'}, & &
\mathcal{F}^\pm(\bv) =
\int_{\Omega^\pm} \bff^\pm \cdot \bv \, \d \Omega^\pm, \\
\mathcal{B}_0^\pm(\mathfrak{u};q) = -\int_{\Omega^\pm} q^\pm \divg \bu^\pm \d \Omega^\pm,
& & \\[5pt]
\mathcal{C}_0^\pm(\mathfrak{u};\bmu) =
-\langle \bmu ; \bu^\pm - \Phi \rangle_{\mathbf{W};\mathbf{W}'}, & & \\
\mathcal{D}_0^\pm(\mathfrak{u};\bvarphi) =
\langle \blambda^{+} + \blambda^- ; \bvarphi \rangle_{\mathbf{W}; \mathbf{W}'}, & &
\mathcal{G}(\bvarphi) = \langle \bgg ; \bvarphi \rangle_{\mathbf{W}; \mathbf{W}'}.
\end{eqnarray*}
\section{Discrete formulation} \label{sec-fict}
In the rest of the paper, we will consider a Cartesian mesh, and we will denote the mesh parameter by $h = \max_{T\in \mathcal{T}_h} h_T$, where $h_T$ is the diameter of a triangle $T$, and $\mathcal{T}_h$ is the set of the triangles of the mesh.
\subsection{The fictitious domain method} \label{subsec-fict}
We first consider global functions on the whole domain $\Omega$, that we discretize with a structured mesh. On this mesh we define discrete finite element spaces, $\tilde{\VV}_h \subset \HH^1(\Omega)$, $\tilde{\Q}_h \subset \L^2_0(\Omega)$, $\tilde{\WW}_h \subset \LL^2(\Omega)$ and $\tilde{\ZZ}_h \subset \HH^1(\Omega)$. We set
\begin{eqnarray*}
\tilde{\VV}_h = \left\{\bv_h \in \mathscr{C}(\overline{\Omega}) \mid \ {\bv_h}_{|\p \Omega} = 0, \ {\bv_h}_{\left| T\right.} \in P(T), \ \forall T \in \mathcal{T}_h\right\},
\end{eqnarray*}
where $P(T)$ denotes a finite dimensional space of smooth functions that contains polynomial functions of degree $k \geq 1$ on a triangle $T$, taken in the set $\mathcal{T}_h$ of triangles of the mesh. We refer to~\cite{Ern} for details.
The fictitious finite element spaces are defined as follows:
\begin{eqnarray*}
\VV^\pm_h := {\tilde{\mathbf{V}}{}_h}_{\left| \Omega^{\pm} \right.}, \quad
\Q^\pm_h := {\tilde{\Q}{}_h}_{\left| \Omega^{\pm} \right.}, \quad
\WW_h := {\tilde{\WW}{}_h}_{\left| \Gamma \right.}, \quad
\ZZ_h := {\tilde{\ZZ}{}_h}_{\left| \Gamma \right.}.
\end{eqnarray*}
Note that these spaces are the intuitive discretizations of spaces $\VV^\pm$, $\Q^\pm$, $\WW$ and $\ZZ$, respectively. The corresponding selection of degrees of freedom is explained in Figure~\ref{fig-fictistyle}. \textcolor{black}{For this task, we need to know whether a node lies on one side of the interface or the other, and this can be realized in practice with the use of a level-set function. This object represents the interface, and at this stage it is used only for implementation purposes.} In particular, the points of intersection of the edges of the mesh with the \textcolor{black}{interface} (a circle in Figure~\ref{fig-fictistyle}) are determined, in order to define an approximation of the \textcolor{black}{level-set} (by piecewise polynomial functions), as well as degrees of freedom for the multipliers. This technique of discretization is inspired by XFEM, with the difference that here we do not provide enrichment of the standard basis elements with specific singular functions; we only take into account the standard basis functions multiplied by the Heaviside functions ($H(\mathrm{x}) =1$ when $\mathrm{x} \in \Omega^\pm$, $H(\mathrm{x}) = 0$ otherwise). The resulting products appear in the integrals of the variational formulation, during the assembly procedure (see section~\ref{sec-smartupdate}).
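The node classification and the edge--interface intersection points can be computed from the nodal values of a level-set function $\phi$; the following minimal sketch (our illustration, not the actual implementation) assumes $\phi$ is linear along each edge and uses a sign convention that is our own choice.

```python
import numpy as np

def node_side(phi):
    """Classify a node by the sign of its level-set value: '+' for Omega^+,
    '-' for Omega^- (sign convention assumed, zero assigned to '-')."""
    return "+" if phi > 0.0 else "-"

def edge_intersection(xa, xb, phi_a, phi_b):
    """Intersection of the zero level set with the edge [xa, xb], by linear
    interpolation of phi along the edge (the edge must change sign)."""
    if phi_a * phi_b > 0.0:
        raise ValueError("the edge is not cut by the interface")
    t = phi_a / (phi_a - phi_b)   # solves phi(xa + t (xb - xa)) = 0
    return np.asarray(xa) + t * (np.asarray(xb) - np.asarray(xa))
```

These intersection points serve as the degrees of freedom of the multipliers on the interface.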
\begin{minipage}{\linewidth}
\begin{center}
\includegraphics[trim = 10cm 4.5cm 10cm 4.5cm, clip, scale=0.22]{./images/Scutmesh2.png}
\hspace*{5pt}
\includegraphics[trim = 10cm 4.5cm 10cm 4.5cm, clip, scale=0.22]{./images/Scutmesh3.png}
\hspace*{5pt}
\includegraphics[trim = 10cm 4.5cm 10cm 4.5cm, clip, scale=0.22]{./images/Scutmesh1.png}
\vspace*{-10pt}
\begin{figure}[H]
\caption{Degrees of freedom used for each space: $\VV_h^+$ and $\Q_h^+$ (left), $\VV_h^-$ and $\Q_h^-$ (center), $\WW_h$ and $\ZZ_h$ (right). The blue base nodes are chosen for the degrees of freedom of the \textcolor{black}{fluid variables} ($\bu^\pm$ and $p^\pm$), while the red ones are just kept for localizing the \textcolor{black}{interface}, and cutting the standard basis functions. The green nodes are determined for defining the basis functions of the multipliers ($\blambda^\pm$ and $\Phi$) on the \textcolor{black}{interface}.}\hfill \\
\label{fig-fictistyle}
\end{figure}
\end{center}
\end{minipage}
Then Problem~\eqref{mainsys} is approximated as follows:
\begin{eqnarray}
& & \text{Find } (\bu_h^\pm,p_h^\pm, \blambda_h^\pm, \Phi_h) \in \VV_h^\pm \times \Q_h^\pm \times \WW_h \times \ZZ_h \text{ such that } \nonumber \\
& & \left\{ \begin{array} {rcl}
a_0^\pm(\bu_h^\pm,\bv_h^\pm) + b_0^\pm(\bv_h^\pm,p_h^\pm) + c_0(\bv_h^\pm,\blambda_h^\pm) = \mathcal{F}^\pm(\bv^\pm_h) & &
\forall \bv_h^\pm \in \mathbf{V}_h^\pm, \\[5pt]
b_0^\pm(\bu_h^\pm,q_h^\pm) = 0 & & \forall q_h^\pm \in \Q_h^\pm, \\[5pt]
c_0(\bu_h^\pm, \bmu_h^\pm) - c_0(\Phi_h, \bmu_h^\pm) = 0 & & \forall \bmu_h^\pm \in \mathbf{W}_h, \\[5pt]
c_0(\bvarphi_h, \blambda_h^+ + \blambda_h^-) = \mathcal{G}(\bvarphi_h) & &
\forall \bvarphi_h \in \mathbf{Z}_h,
\end{array}\right. \label{mainsysapprox} \label{sysdisc}
\end{eqnarray}
where we denote
\begin{eqnarray*}
a_0^\pm(\bu,\bv) = 2\nu^\pm\int_{\Omega^\pm} \varepsilon(\bu):\varepsilon(\bv) \, \d \Omega^\pm,
\quad b_0^\pm(\bu, q) = -\int_{\Omega^\pm} q\divg \bu \, \d \Omega^\pm,
\quad c_0(\bvarphi, \blambda) = - \int_{\Gamma} \bvarphi \cdot \blambda \, \d \Gamma.
\end{eqnarray*}
Note that the duality bracket $\langle \, \cdot \, ; \cdot \, \rangle_{\WW; \WW'}$, with $\WW = \HH^{-1/2}(\Gamma)$, has been replaced by the inner product of $\LL^2(\Gamma)$. \textcolor{black}{The aim is to avoid having to define an approximation of} the Laplace–Beltrami operator on $\Gamma$, which is a non-trivial task because of the fictitious domain approach (see~\cite{Massing2017} for instance). Under stronger regularity assumptions for the data $\bff^\pm$ and $\bgg$ \textcolor{black}{(for instance $\bff^\pm \in \LL^2(\Omega^\pm)$ and $\bgg \in \LL^2(\Gamma)$)}, we can reasonably consider this simplification, and thus we now set
\begin{eqnarray*}
\WW \equiv \WW' \equiv \ZZ = \LL^2(\Gamma),
\quad \WW_h \subset \LL^2(\Gamma),
\quad \ZZ_h \subset \LL^2(\Gamma).
\end{eqnarray*}
However, for the mathematical analysis, we keep the abstract formalism involving the notation $\WW$ and $\WW'$.
\subsection{Theoretical convergence} \label{subsec-theor}
For the mathematical analysis, we make the following assumptions:
\begin{itemize}
\item[$(\mathbf{H1})$:] There exists a constant\footnote{Throughout the paper, $C$ denotes a generic positive constant independent of the mesh size $h$.} $C >0$ independent of $h$ such that
\begin{eqnarray*}
\inf_{q_h^\pm \in \Q^\pm_h\setminus\{0\}} \sup_{\bv_h^\pm \in \VV^\pm_{0,h}\setminus\{0\}}
\frac{b_0^\pm(\bv^\pm_h,q^\pm_h)}{\| \bv^\pm_h \|_{\VV_h^\pm} \| q_h^\pm \|_{\Q_h^\pm}} & \geq & C,
\end{eqnarray*}
where we denote $\VV_{0,h}^\pm := \left\{\bv^\pm_h \in \VV_h^\pm \mid c_0(\bv_h^\pm, \bmu_h^\pm) = 0 \ \forall \bmu_h^\pm \in \WW_h \right\}$.
\item[$(\mathbf{H2})$:] \quad If $\overline{\bmu}_h \in \mathbf{W}_h$ satisfies $c_0(\bv_h^+, \overline{\bmu}_h) = 0$ for all $\bv_h^+ \in \mathbf{V}_h^{+}$, or $c_0(\bv_h^-, \overline{\bmu}_h) = 0$ for all $\bv_h^- \in \mathbf{V}_h^{-}$, then $\overline{\bmu}_h = 0$.
\end{itemize}
Assumption~$(\mathbf{H1})$ is a discrete inf-sup condition for the couple {\it velocity}/{\it pressure}. It implies in particular the following property: If $\overline{q}^\pm_h \in \Q^\pm_h$ satisfies $b^\pm_0(\bv_h^\pm, \overline{q}^\pm_h) = 0$ for all $\bv_h^\pm \in \VV_{0,h}^\pm$, then $\overline{q}^\pm_h = 0$. Assumption~$(\mathbf{H2})$ is weaker than an inf-sup condition for the couple $\bu^\pm / \blambda^\pm$. It demands only that the spaces $\VV^\pm_h$ be rich enough with respect to the space $\WW_h$.\\
Now we define the space
\begin{eqnarray*}
\mathbb{V}^0_h & = & \left\{
(\bv_h^+,\bv_h^-) \in \mathbf{V}_h^+ \times \mathbf{V}_h^- \mid \quad c_0(\bv_h^+-\bv_h^-, \bmu_h)=0 \quad \forall \bmu_h \in \mathbf{W}_h \right\}.
\end{eqnarray*}
\begin{lemma} \label{lemma-coer}
The bilinear form
\begin{eqnarray*}
((\bu^+,\bv^+),(\bu^-,\bv^-)) & \mapsto &
a_0^+(\bu^+,\bv^+) + a_0^-(\bu^-,\bv^-)
\end{eqnarray*}
is uniformly $\mathbb{V}_h^0$-elliptic, in the sense that there exists a constant $C>0$ independent of $h$ such that
\begin{eqnarray*}
a_0^{+}(\bv_h^+,\bv_h^+) + a_0^{-}(\bv_h^-,\bv_h^-) \geq
C \left(\|\bv_h^+\|^2_{\mathbf{V}^{+}_h} + \|\bv_h^-\|^2_{\mathbf{V}^{-}_h} \right), \quad
\forall (\bv_h^+,\bv_h^-) \in \mathbb{V}_h^0.
\end{eqnarray*}
\end{lemma}
\begin{proof}
This result is an application of the Peetre-Tartar lemma. From Korn's inequality we have
\begin{eqnarray*}
\| \bv_h^+\|_{\VV^+}^2 + \| \bv_h^-\|_{\VV^-}^2 & \leq &
a_0^{+}(\bv_h^+,\bv_h^+) + a_0^{-}(\bv_h^-,\bv_h^-) +
\| \bv_h^+\|_{\LL^2(\Omega^+)}^2 + \| \bv_h^-\|_{\LL^2(\Omega^-)}^2.
\end{eqnarray*}
Since from the Rellich-Kondrachov theorem the embeddings $\VV^\pm \hookrightarrow \LL^2(\Omega^\pm)$ are compact, we just have to verify that \textcolor{black}{$a_0^{+}(\bv_h^+,\bv_h^+) + a_0^{-}(\bv_h^-,\bv_h^-) = 0 \Rightarrow (\bv_h^+,\bv_h^-) = 0$} in $\mathbb{V}^0_h$. The equality on the left-hand side of this implication is equivalent to $a_0^{\pm}(\bv_h^\pm,\bv_h^\pm) = 0$, and so in particular $\varepsilon(\bv_h^\pm) = 0$ in $\Omega^\pm$. From~\cite[page~18]{Temam}, the functions $\bv_h^\pm$ reduce to affine functions. The function $\bv_h^+$ is actually $0$, because ${\bv_h^+}_{\mid \p \Omega } =0$. Since $(\bv_h^+,\bv_h^-) \in \mathbb{V}^0_h$, we deduce that ${\bv^-_h}_{\mid \Gamma} = 0$. Indeed, it is easy to see that the intersection of $\mathbb{V}^0_h$ with the space of affine functions is actually contained in the space $\left\{
(\bv_h^+,\bv_h^-) \in \mathbf{V}_h^+ \times \mathbf{V}_h^- \mid \ {\bv_h^+}_{\mid \Gamma} = {\bv_h^-}_{\mid \Gamma} \right\}$.
Thus $\bv_h^- = 0$ in $\Omega^-$, which completes the proof.
\end{proof}
\begin{proposition}
Assume that assumptions~$(\mathbf{H1})-(\mathbf{H2})$ hold. Then system~\eqref{sysdisc} admits a unique solution that we denote by $(\bu_h^+,p_h^+,\blambda_h^+, \bu_h^-,p_h^-,\blambda_h^-,\Phi_h)$.
\end{proposition}
\begin{proof}
Since system~\eqref{sysdisc} is linear and of finite dimension, existence is equivalent to uniqueness. Let us prove uniqueness by showing that $(\bu_h^\pm,p_h^\pm, \blambda_h^\pm, \Phi_h) = 0$ when $\mathcal{F}^\pm \equiv 0$ and $\mathcal{G} \equiv 0$. In that case, taking $\bv^\pm_h = \bu_h^\pm$ in the first equation of~\eqref{sysdisc}, combined with its second equation taken with $q_h^\pm = p_h^\pm$, yields $a_0^\pm(\bu^\pm_h,\bu_h^\pm) + c_0(\bu^\pm_h, \blambda_h^\pm) = 0$. Using the third equation then leads us to $a_0^\pm(\bu^\pm_h,\bu_h^\pm) + c_0(\Phi_h, \blambda_h^\pm) = 0$. By summing these two identities, and by using the fourth equation with $\bvarphi_h = \Phi_h$, we obtain
\begin{eqnarray*}
a_0^+(\bu^+_h,\bu_h^+) + a_0^-(\bu^-_h,\bu_h^-) = 0,
\end{eqnarray*}
and so $\bu_h^\pm = 0$ in $\VV^\pm_h$ by Lemma~\ref{lemma-coer}. Next, still from the first equation, we get $b_0^\pm(\bv_h^\pm, p_h^\pm) = 0$ for all $\bv^\pm_h \in \VV^\pm_{0,h}$, and then $p_h^\pm = 0$ from assumption~$(\mathbf{H1})$. There then remains only $c_0(\bv_h^\pm, \blambda_h^\pm) =0$, valid for all $\bv_h^\pm \in \VV_h^\pm$, and this yields $\blambda_h^\pm =0$ from assumption~$(\mathbf{H2})$. Finally, we obtain $\Phi_h=0$ by using assumption~$(\mathbf{H2})$ in the third equation.
\end{proof}
\begin{proposition} \label{prop-cvnaive}
Assume that assumptions~$(\mathbf{H1})-(\mathbf{H2})$ hold. Denote by $(\bu^+,p^+,\blambda^+, \bu^-,p^-,\blambda^-, \Phi)$ and \\$(\bu_h^+,p_h^+,\blambda_h^+, \bu_h^-,p_h^-,\blambda_h^-,\Phi_h)$ the respective solutions of system~\eqref{syscont} and system~\eqref{sysdisc}. Then
\begin{eqnarray}
& & \|\bu^+-\bu_h^+\|_{\mathbf{V}^+} + \|p^+-p_h^+\|_{\L^2(\Omega^+)} +
\|\bu^--\bu_h^-\|_{\mathbf{V}^-} + \|p^--p_h^-\|_{\L^2(\Omega^-)} \nonumber\\
& & \leq C\left( \inf_{(\bv_h^+,\bv_h^-)\in \mathbb{V}^0_h}
\left(\|\bu^+-\bv_h^+ \|_{\VV^+} + \|\bu^--\bv_h^- \|_{\VV^-}\right) +
\inf_{q_h^+\in \Q_h^+}\|p^+-q_h^+\|_{\L^2(\Omega^+)} + \inf_{q_h^-\in \Q_h^-}\|p^--q_h^-\|_{\L^2(\Omega^-)} \right. \nonumber \\
& & \left. \qquad + \inf_{(\bmu_h^+,\bmu_h^-) \in \WW_h \times \WW_h} \left(\|\blambda^+ - \bmu_h^+\|_{\WW} +
\|\blambda^- - \bmu_h^-\|_{\WW}\right) \right), \label{est-naive}
\end{eqnarray}
where the constant $C>0$ is independent of $h$.
\end{proposition}
\textcolor{black}{The proof of Proposition~\ref{prop-cvnaive} is given in section~\ref{App-A1}. It} provides us with an estimate for the velocities and the pressures. We have no such estimate for the multipliers $\Phi$ and $\blambda^\pm$. Note that in the right-hand side of estimate~\eqref{est-naive}, no term involving the variable $\Phi$ appears. The accuracy on the velocities and the pressures is therefore not conditioned by the approximation of the variable $\Phi$: these variables can be well approximated without necessarily having a good approximation of $\Phi$.
\paragraph{On the limitation of the order of convergence.}
Besides the lack of information on the convergence for the dual variables, let us mention a theoretical result that limits the rate of convergence for the velocities and the pressures.
\begin{proposition} \label{prop-limit}
Assume that assumptions $(\mathbf{H1})-(\mathbf{H2})$ hold. With the notation of Proposition~\ref{prop-cvnaive}, assume that $\bu^\pm \in \HH^{1+d/2+\eta}(\Omega^\pm) \cap \VV^\pm$ for some $\eta >0$, and that
\begin{eqnarray*}
\inf_{\bmu_h \in \WW_h}\| \blambda^\pm - \bmu_h\| & \leq & Ch^\delta
\end{eqnarray*}
for some $\delta \geq 1/2$. Then
\begin{eqnarray}
\| \bu^\pm - \bu^\pm_h \|_{\VV^\pm}
+ \| p^\pm - p_h^\pm \|_{\Q^\pm}
& \leq &
Ch^{1/2}. \label{est-limit}
\end{eqnarray}
\end{proposition}
For the sake of concision, we do not provide the proof of this result, since the proofs given in~\cite{Renard2009} (Proposition~3) and~\cite{Court2014} (Proposition~3) can be straightforwardly transposed to our case. In the context of fictitious domain approaches, it is classical to observe theoretically this kind of limitation on the order of convergence. Actually, for our approach, in view of the numerical tests presented in section~\ref{subsec-cvstab0}, estimate~\eqref{est-limit} seems not to be sharp. In practice, it is tricky to find a numerical test for which this rate is observed. Such a test was provided in~\cite{Renard2009} for the Poisson problem, in a very specific configuration, and simply not found in~\cite{Court2014} for a Stokes problem. A more advanced theoretical analysis of problem~\eqref{sysdisc} is not our main concern, and so we do not comment further on this point. In the following subsection, we modify the variational formulation in order to obtain an optimal convergence result that concerns all the variables.
\subsection{Stabilization technique} \label{subsec-theorstab}
The Lagrangian $\mathscr{L}$ is augmented with quadratic terms as follows:
\begin{eqnarray*}
\mathscr{L}(\bu^+, p^+, \blambda^+, \bu^-,p^-, \blambda^-, \Phi) & = &
\mathscr{L}_0(\bu^+, p^+, \blambda^+, \bu^-,p^-, \blambda^-, \Phi) \\
& & + \frac{\alpha_0}{2} \left( \| \Phi - \bu^+ \|^2_{\ZZ} + \| \Phi - \bu^- \|^2_{\ZZ} \right) \\
& & - \frac{\gamma}{2} \left( \| \sigma(\bu^+,p^+)\bn^+ -\blambda^+ \|^2_{\LL^2(\Gamma)} +
\| \sigma(\bu^-,p^-)\bn^- -\blambda^- \|_{\LL^2(\Gamma)}^2 \right) .
\end{eqnarray*}
The additional terms constitute the so-called stabilization technique. \textcolor{black}{The first terms, proportional to the coefficient $\alpha_0 >0$, are} introduced in order to enforce the convergence for the variable $\Phi$. The other terms, those which are proportional to the coefficient $\gamma>0$, are added in order to enforce the convergence of the multipliers $\blambda^\pm$ towards the normal traces of the stress tensor. \textcolor{black}{For practical purposes,} it is convenient to choose the coefficient $\gamma$ proportional to the mesh size:
\begin{eqnarray*}
\gamma = \gamma_0h, & & \text{where $\gamma_0 >0$ is independent of $h$.}
\end{eqnarray*}
The first-order derivatives of $\mathscr{L}$ lead to the following stabilized formulation:
\begin{eqnarray}
& & \text{Find
$\mathfrak{u} \in \VV^+ \times \Q^+ \times \mathbf{W} \times \VV^-\times \Q^- \times \mathbf{W} \times \ZZ$ such that} \nonumber \\
& & \left\{ \begin{array} {rcl}
\mathcal{A}^\pm(\mathfrak{u};\bv) =
\mathcal{F}(\bv) & &
\forall \bv \in \VV^\pm, \\[5pt]
\mathcal{B}^\pm(\mathfrak{u};q) = 0 & &
\forall q \in \Q^\pm, \\[5pt]
\mathcal{C}^\pm(\mathfrak{u};\bmu) = 0 & &
\forall \bmu \in \WW, \\[5pt]
\mathcal{D}^\pm(\mathfrak{u};\bvarphi) =
\mathcal{G}(\bvarphi) & &
\forall \bvarphi \in \ZZ ,
\end{array} \right.
\end{eqnarray}
where
\begin{eqnarray*}
\mathcal{A}^\pm(\mathfrak{u};\bv) & = &
\mathcal{A}_0^\pm(\mathfrak{u};\bv)
-\alpha_0 \int_\Gamma \Phi \cdot \bv^\pm \d \Gamma
- \gamma \int_\Gamma 2\nu^\pm \varepsilon(\bv^\pm)\bn^\pm \cdot \left(\sigma(\bu^\pm, p^\pm)\bn^\pm - \blambda^\pm \right) \d \Gamma,\\
\mathcal{B}^\pm(\mathfrak{u};q) & = & \mathcal{B}_0^\pm(\mathfrak{u};q)
+ \gamma \int_\Gamma q^\pm \bn^\pm \cdot \left(\sigma(\bu^\pm, p^\pm)\bn^\pm - \blambda^\pm \right) \d \Gamma,\\
\mathcal{C}^\pm(\mathfrak{u};\bmu) & = &
\mathcal{C}_0^\pm(\mathfrak{u};\bmu) + \gamma \int_\Gamma \bmu^\pm \cdot \left(\sigma(\bu^\pm, p^\pm)\bn^\pm - \blambda^\pm \right) \d \Gamma,\\
\mathcal{D}^\pm(\mathfrak{u};\bvarphi) & = &
\mathcal{D}_0^\pm(\mathfrak{u};\bvarphi)
+ 2\alpha_0 \int_\Gamma \Phi \cdot \bvarphi \, \d \Gamma .
\end{eqnarray*}
With the introduction of the stabilization terms, the goal is to prove a theoretical result, namely Theorem~\ref{th-infsup}, leading to the optimal convergence of all the variables, in particular the multipliers. For that purpose, we make a list of assumptions:
\begin{itemize}
\item[$(\mathbf{A1})$:] There exists a constant $C >0$ independent of $h$ such that
\begin{eqnarray*}
\inf_{q^{\pm}_h \in \Q^{\pm}_h\setminus\{0\}} \sup_{\bv_h^{\pm} \in \mathbf{V}^{\pm}_{0,h}\setminus\{0\}}
\frac{b_0^\pm(\bv^{\pm}_h, q^{\pm}_h)}{\|q^{\pm}_h\|_{\Q^{\pm}_h} \|\bv^{\pm}_h\|_{\mathbf{V}^{\pm}_{h}}} & \geq & C.
\end{eqnarray*}
\item[$(\mathbf{A2})$:] There exists $C>0$ independent of $h$ such that for all $\bv^{\pm}_h \in \mathbf{V}^{\pm}_h$ one has
\begin{eqnarray*}
\qquad h\|\varepsilon(\bv^{\pm}_h)\bn^\pm\|^2_{\LL^2(\Gamma)} \leq
C\| \bv^{\pm}_h \|^2_{\mathbf{V}^{\pm}_h}.
\end{eqnarray*}
\item[$(\mathbf{A3})$:] There exists $C>0$ independent of $h$ such that for all $q^{\pm}_h \in \Q^{\pm}_h$ one has
\begin{eqnarray*}h\|q^{\pm}_h\|^2_{\mathbb{L}^2(\Gamma)} \leq
C\| q^{\pm}_h \|^2_{\Q^{\pm}_h}.
\end{eqnarray*}
\end{itemize}
Assumptions~$(\mathbf{A2})-(\mathbf{A3})$ are of the same fashion as those made in~\cite{Court2014}. In practice, we can consider that they are satisfied when the intersections of $\Omega^\pm$ with the simplices of the mesh are not too small. This question is discussed in~\cite[Section~6 and Appendix~B]{Renard2009}, where an alternative stabilization technique is proposed in order to avoid situations for which the geometric configuration would not allow the fulfillment of these assumptions. However, in practice, the frequency of these \textcolor{black}{geometric} situations \textcolor{black}{can be reduced (by refining the mesh locally, for instance)}, and their impact on the accuracy of the method is quite negligible, so that it is reasonable to consider these assumptions. On the other hand, assumption~$(\mathbf{A1})$ (reproduced from assumption~$(\mathbf{H1})$) is considered, but there is {\it a priori} no reason for it to be satisfied in a general geometric configuration. Its fulfillment could be enforced with an additional specific stabilization technique, but this point is not of interest in this work. In practice, assumption~$(\mathbf{A1})$ can be addressed by choosing pairs of elements of type $P_{k+1}$/$P_k$ for the couple {\it velocity}/{\it pressure}, such as the so-called Taylor-Hood elements.\\
For the sake of concision, we now denote
\begin{eqnarray*}
\mathfrak{u}_h = (\bu_h^+,p_h^+,\blambda_h^+, \bu_h^-,p_h^-,\blambda_h^-,\Phi_h)
& \text{ and } &
\mathfrak{v}_h = (\bv_h^+,q_h^+,\bmu_h^+, \bv_h^-,q_h^-,\bmu_h^-,\bvarphi_h).
\end{eqnarray*}
The weak formulation of the approximated stabilized problem~\eqref{pbstabilized} is given in a compact form, as follows:
\begin{eqnarray}
& & \text{Find $\mathfrak{u}_h \in \VV_h^+ \times \Q_h^+ \times \mathbf{W}_h \times \VV_h^-\times \Q_h^- \times \mathbf{W}_h \times \ZZ_h$ such that} \nonumber \\
& & \mathcal{M}(\mathfrak{u}_h, \mathfrak{v}_h) = \mathcal{H}(\mathfrak{v}_h) \qquad
\text{for all } \mathfrak{v}_h \in \VV_h^+ \times \Q_h^+ \times \mathbf{W}_h \times \VV_h^-\times \Q_h^- \times \mathbf{W}_h \times \ZZ_h,
\label{pbstabilized}
\end{eqnarray}
where
\begin{eqnarray*}
\mathcal{M}(\mathfrak{u};\mathfrak{v}) & = &
2\nu^+ \int_{\Omega^+}\varepsilon(\bu^+):\varepsilon(\bv^+)\, \d \Omega^+ + 2\nu^- \int_{\Omega^-}\varepsilon(\bu^-):\varepsilon(\bv^-)\, \d \Omega^- \\
& & - \int_{\Omega^+} \left( p^+\divg \bv^+ + q^+\divg \bu^+ \right)\d \Omega^+ - \int_{\Omega^-} \left( p^-\divg \bv^- + q^-\divg \bu^- \right)\d \Omega^- \\
& & - \int_{\Gamma} \left( \blambda^+ \cdot (\bv^+-\bvarphi) + \bmu^+\cdot (\bu^+ - \Phi)\right)\d \Gamma - \int_{\Gamma} \left( \blambda^- \cdot (\bv^--\bvarphi) + \bmu^-\cdot (\bu^- - \Phi)\right)\d \Gamma \\
& & + \alpha_0 \int_{\Gamma} (\bu^+ -\Phi) \cdot (\bv^+ - \bvarphi) \, \d \Gamma
+ \alpha_0 \int_{\Gamma} (\bu^- -\Phi) \cdot (\bv^- - \bvarphi) \, \d \Gamma \\
& & -\gamma_0 h\int_{\Gamma} (2\nu^+ \varepsilon(\bu^+)\bn^+ - p^+\bn^+ - \blambda^+) \cdot
(2\nu^+ \varepsilon(\bv^+)\bn^+ - q^+\bn^+ - \bmu^+)\, \d \Gamma \\
& & -\gamma_0 h\int_{\Gamma} (2\nu^- \varepsilon(\bu^-)\bn^- - p^-\bn^- - \blambda^-) \cdot
(2\nu^- \varepsilon(\bv^-)\bn^- - q^-\bn^- - \bmu^-)\, \d \Gamma, \\
\mathcal{H}(\mathfrak{v}) & = & \int_{\Omega^+} \bff^+ \cdot \bv^+ \d \Omega^+
+ \int_{\Omega^-} \bff^- \cdot \bv^- \d \Omega^-
+ \int_{\Gamma} \bgg \cdot \bvarphi \, \d \Gamma.
\end{eqnarray*}
Here again, in the approximated problem we have replaced the duality brackets $\langle \, \cdot \, ; \cdot \, \rangle_{\WW', \WW}$ by $\LL^2(\Gamma)$ scalar products. We are now in a position to \textcolor{black}{establish} a discrete inf-sup condition for the stabilized problem.
\begin{theorem} \label{th-infsup}
Assume that $(\mathbf{A1})$--$(\mathbf{A3})$ hold. Then, for $\alpha_0$ and $\gamma_0$ small enough, there exists a constant $C>0$ independent of the mesh size $h$ such that
\begin{eqnarray*}
\inf_{\mathfrak{u}_h \in \mathfrak{V}_h} \sup_{\mathfrak{v}_h \in \mathfrak{V}_h}
\frac{\mathcal{M}(\mathfrak{u}_h;\mathfrak{v}_h)}
{|||\, \mathfrak{u}_h\, |||\ |||\, \mathfrak{v}_h\, |||} & \geq & C,
\end{eqnarray*}
where $\mathfrak{V}_h =\VV_h^+ \times \Q_h^+ \times \mathbf{W}_h \times \VV_h^-\times \Q_h^- \times \mathbf{W}_h \times \ZZ_h$, and where the norm $|||\, \cdot \, |||$ is defined as follows:
\begin{eqnarray*}
|||\, \mathfrak{u}\, |||^2 & = & \|\bu^+\|^2_{\mathbf{V}^+} + \|p^+\|^2_{\Q^{+}}
+ \|\bu^-\|^2_{\mathbf{V}^-} + \|p^-\|^2_{\Q^{-}}
+h \|\blambda^+\|^2_{\mathbf{L}^2(\Gamma)} +h \|\blambda^-\|^2_{\mathbf{L}^2(\Gamma)} \\
& & + h\left(\|\varepsilon(\bu^+)\bn^+\|^2_{\mathbf{L}^2(\Gamma)} +
\|\varepsilon(\bu^-)\bn^-\|^2_{\mathbf{L}^2(\Gamma)} +
\|p^+\|^2_{\mathbf{L}^2(\Gamma)} + \|p^-\|^2_{\mathbf{L}^2(\Gamma)}\right) \\
& & +\frac{1}{h}\|\bu^+-\Phi\|^2_{\mathbf{L}^2(\Gamma)}
+\frac{1}{h}\|\bu^--\Phi\|^2_{\mathbf{L}^2(\Gamma)} + \| \Phi \|^2_{\mathbf{W}}.
\end{eqnarray*}
\end{theorem}
\textcolor{black}{For the sake of clarity, we give the proof of Theorem~\ref{th-infsup} in section~\ref{App-A2}.} The consequence of this result is the optimal order of convergence for the multipliers, stated as follows:
\begin{corollary} \label{supercoro}
Denote by $k_{\bu}$, $k_p$, $k_\lambda$ and $k_{\Phi}$ the respective degrees of standard finite elements used for the velocities $\bu^\pm$, the pressures $p^\pm$ and the multipliers $\blambda^\pm$ and $\Phi$. Then
\begin{eqnarray*}
& & \max\left(\| \bu^\pm - \bu^\pm_h\|_{\VV^\pm}, \|p^\pm-p_h^\pm\|_{\Q^\pm}, h\|\blambda^\pm-\blambda^\pm_h\|_{\WW}, \|\Phi - \Phi_h\|_{\ZZ} \right) \\
& & \leq C\left( h^{k_{\bu}}\|\bu^+\|_{\HH^{k_{\bu}+1}(\Omega^+)}
+h^{k_{p}+1}\|p^+\|_{\H^{k_p+1}(\Omega^+)}
+h^{k_{\blambda}+1} \|\blambda^+\|_{\HH^{k_{\blambda}+1/2}(\Gamma)} \right. \\
& & \left. + h^{k_{\Phi}+1} \| \Phi\|_{\HH^{k_{\Phi}+1/2}(\Gamma)}
+h^{k_{\bu}}\|\bu^-\|_{\HH^{k_{\bu}+1}(\Omega^-)}
+ h^{k_{p}+1}\|p^-\|_{\H^{k_p+1}(\Omega^-)}
+h^{k_{\blambda}+1} \|\blambda^-\|_{\HH^{k_{\blambda}+1/2}(\Gamma)}\right).
\end{eqnarray*}
\end{corollary}
\begin{proof}
On the space $\mathfrak{V}_h = \VV_h^+ \times \Q_h^+ \times \WW_h \times \VV_h^- \times \Q_h^- \times \WW_h \times \ZZ_h$ endowed with the norm $||| \, \cdot \, |||$, the bilinear form $\mathcal{M}$ is continuous, uniformly with respect to the mesh size $h$. Theorem~\ref{th-infsup} combined with a C\'ea-type lemma (see~\cite{Ern} for instance) then yields the following error estimate:
\begin{eqnarray*}
||| \, \mathfrak{u}- \mathfrak{u}_h \, |||& \leq & C
\inf_{\mathfrak{v}_h \in \mathfrak{V}_h} ||| \, \mathfrak{u}- \mathfrak{v}_h \, |||.
\end{eqnarray*}
We now invoke the extension theorem for the Sobolev spaces, the standard
estimates for the nodal finite element interpolation operators, and the trace inequality
\begin{eqnarray*}
\| \bvarphi \|^2_{\LL^2(\Gamma \cap T)} & \leq &
C \left(\frac{1}{h}\|\bvarphi\|^2_{\LL^2(T)} + h\|\nabla\bvarphi\|^2_{\LL^2(T)} \right)
\end{eqnarray*}
on any element $T \in \mathcal{T}_h$, to complete the proof. We refer to~\cite[Appendix~A,~page~1496]{Renard2009} for more details.
\end{proof}
\section{On the implementation of the method} \label{sec-impl}
This section is dedicated to remarks about the practical implementation of the method. We mention the libraries we use for the writing of the code, and we underline their advantages for the efficiency of the method.
\subsection{Libraries used for the implementation} \label{sec-libimpl}
Our finite element code is written with the Getfem++ Library (see \cite{Getfem}). \textcolor{black}{The method follows} the approach initially introduced for the Poisson problem in~\cite{Renard2009}, where functionalities of the library were used to define the fictitious domain approach. It was next extended to the Stokes problem in~\cite{Court2014, Court2015} for standard Dirichlet conditions. In dimensions 2 and 3, solving the global system can be done with the solver MUMPS (see~\cite{MUMPS1, MUMPS2}), through the linear algebra Gmm++ Library (installed inside the Getfem++ library).
For the consideration of boundary conditions where the boundary is independent of the mesh, the library Getfem++ enables us to overcome several difficulties, such as the following:
\begin{itemize}
\item Defining basis functions of $\mathbf{W}_h$ from traces on $\Gamma$ of the standard basis functions of $\tilde{\WW}_h$. Note that the linear independence of these basis functions is not automatically satisfied {\it a priori}, and so redundant degrees of freedom may need to be eliminated {\it a posteriori} (with a range basis algorithm), if we do not want to deal with singular systems.
\item Detecting the interface $\Gamma$. For that, we can consider a level-set function, and we use functionalities of the library that compute objects related to this level-set function (values, gradient, unit normal vector...). If we choose analytically a general expression of a level-set function for localizing the interface, then a piecewise polynomial approximation is carried out in the implementation. We can specify the (polynomial) degree of this approximation.
\item During the assembly procedure, computing accurately the integrals over the elements that are intersected by this interface. This is done with a call to the Qhull Library (see~\cite{Qhull}).
\end{itemize}
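The elimination of redundant multiplier degrees of freedom mentioned in the first item can be illustrated as follows (a minimal Python sketch of one possible range basis algorithm, based on a greedy rank test on a Gram matrix; this is an illustration of the principle, not the algorithm actually implemented in the library):

```python
import numpy as np

def independent_dofs(G, tol=1e-10):
    """Greedily select a maximal set of linearly independent columns of the
    Gram matrix G (G_ij = L2(Gamma) inner product of the i-th and j-th
    candidate multiplier basis functions). The discarded columns correspond
    to redundant degrees of freedom that would make the system singular."""
    kept = []
    for j in range(G.shape[1]):
        trial = G[:, kept + [j]]
        if np.linalg.matrix_rank(trial, tol) > len(kept):
            kept.append(j)
    return kept

# Example: three candidate traces on Gamma, the third being the sum of the
# first two (hence redundant); its Gram matrix has rank 2.
G = np.array([[2.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])
kept = independent_dofs(G)  # the third dof is eliminated
```

In practice the Gram matrix is sparse and the rank tests are performed incrementally, but the selection principle is the same.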
\subsection{Efficient update of matrices when the geometry evolves} \label{sec-smartupdate}
The most important interest of fictitious domain methods is to avoid re-meshing when the \textcolor{black}{interface} has to be modified, and thus to save computation time and resources. Indeed, re-meshing implies re-assembly of the whole system, and the computation of the numerous integration terms of the matrices, during the assembly procedure of a complex simulation, is the most costly part in terms of computation time. To avoid this, let us explain how to update a restricted number of terms between two different geometric configurations.
First we compute a stiffness matrix on the whole domain, independent of the \textcolor{black}{interface} $\Gamma$: Denoting by $\{ \tilde{\bvarphi}_i \}$ the basis functions of the discrete space $\tilde{\VV}_h$, we assemble the following matrix:
\begin{eqnarray*}
\tilde{\mathbf{A}}_{ij} & = & \int_{\Omega} \varepsilon(\tilde{\bvarphi_i}): \varepsilon(\tilde{\bvarphi_j}) \, \d \Omega.
\end{eqnarray*}
This matrix is assembled once and for all, and stored. Given an interface $\Gamma$ immersed in $\Omega$, the goal is now to construct efficiently the stiffness matrices effectively used for solving the system in the corresponding geometric configuration, namely the matrices
\begin{eqnarray*}
\mathbf{A}^\pm_{ij} & = & \int_{\Omega^\pm} \varepsilon(\bvarphi^\pm_i): \varepsilon(\bvarphi^\pm_j) \, \d \Omega^\pm,
\end{eqnarray*}
where $\{ \bvarphi_i^\pm \}$ denote the basis functions of the spaces $\VV^\pm_h$. The indices of these basis functions can be deduced from those of the standard ones $\{ \tilde{\bvarphi_i} \}$ by the use of reduction matrices $\mathbf{R}^\pm$. These matrices are sparse and binary, and so can be used inexpensively. Analogously we define the extension matrices $\mathbf{E}^\pm$, which satisfy the following properties:
\begin{eqnarray*}
\mathbf{E}^\pm = {\mathbf{R}^\pm}^T, & & \mathbf{R}^\pm\mathbf{E}^\pm = \mathbf{I}.
\end{eqnarray*}
From there, we can define the following partial stiffness matrices:
\begin{eqnarray*}
\tilde{\mathbf{A}}^+ = {\mathbf{R}}^+ \tilde{\mathbf{A}} {\mathbf{E}}^+, & &
\tilde{\mathbf{A}}^- = {\mathbf{R}}^- \tilde{\mathbf{A}} {\mathbf{E}}^-.
\end{eqnarray*}
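The reduction/extension mechanism can be sketched in matrix terms as follows (a small dense Python illustration of the identities above; in the actual implementation the matrices are sparse, and the index set is provided by the library, not hand-picked as here):

```python
import numpy as np

def reduction_matrix(n_total, kept):
    """Binary reduction matrix R selecting the kept degrees of freedom
    among the n_total standard ones; the extension matrix is E = R^T."""
    R = np.zeros((len(kept), n_total))
    R[np.arange(len(kept)), kept] = 1.0
    return R

n = 6
A_tilde = np.arange(n * n, dtype=float).reshape(n, n)  # stands for the stored full matrix
kept_plus = [0, 1, 4]                # dofs localized in Omega^+ (illustrative choice)
R_plus = reduction_matrix(n, kept_plus)
E_plus = R_plus.T                    # E = R^T
A_plus_tilde = R_plus @ A_tilde @ E_plus   # partial stiffness matrix R A E
```

One checks that $\mathbf{R}^+\mathbf{E}^+ = \mathbf{I}$ and that the partial matrix is simply the submatrix of $\tilde{\mathbf{A}}$ extracted on the kept indices.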
However, these reduction matrices only enable us to select the indices of the family $\{ \tilde{\bvarphi_i} \}$ that concern and localize the domains $\Omega^\pm$ (see Figure~\ref{fig-fictistyle}). The definition of the functions $\{ \bvarphi_i \}$ from the functions $\{ \tilde{\bvarphi}_i \}$ requires the identification of the triangles of the mesh that are cut by $\Gamma$. The latter is localized with a level-set function, and the approximation of this level-set is made with the use of piecewise polynomial functions. The way the approximated level-set cuts the mesh defines subsimplices with corresponding Heaviside functions (see section~\ref{subsec-fict}). Thus we obtain the functions $\{\bvarphi_i\}$ by multiplying $\{ \tilde{\bvarphi_i} \}$ with the Heaviside functions on \textcolor{black}{these} subsimplices. The matrices $\mathbf{A}^\pm$ are then obtained by a local re-assembly of the integration terms of the matrices $\tilde{\mathbf{A}}^\pm$, including the Heaviside functions in the terms concerned by the aforementioned subsimplices. By these steps, we claim that we update a number of objects that is of the same order as the number of mesh elements intersected by the \textcolor{black}{interface}. Of course, the same procedure can be transposed to the other matrix blocks of the system.
\begin{algorithm}[htpb]
\begin{description}
\item[First Assembly:] Compute the matrix $\tilde{\mathbf{A}}$, independent of the interface, once and for all, and store it.
\item[Initialization: $k=0$.] For a given parameterized geometry, initialize
\begin{description}
\item[1:] the level-set expression \texttt{ls-value}, and next the level-set object \texttt{ls},
\item[2:] the partial integration methods \texttt{mim} and the partial finite element methods \texttt{mf}.
\item[3:] From \texttt{mim}, identify the indices of \texttt{mf} concerned by the interface.
\item[4:] From $\tilde{\mathbf{A}}$, define $\tilde{\mathbf{A}}^\pm$, and next $\mathbf{A}$ by reassembling the terms concerned by the interface.
\item[5:] First solve, use of MUMPS.
\end{description}
\item[Iteration $k\geq 0$.] From the first solve, define a new geometric configuration, and update
\begin{description}
\item[1:] the level-set expression, and next the level-set object: \texttt{ls.adapt();}
\item[2:] the partial integration methods and the partial finite element methods: \texttt{mim.adapt();} \texttt{mf.adapt();}
\item[3:] From the former \texttt{mim} and the new \texttt{mim}, identify the indices of the new \texttt{mf} concerned by the change of interface.
\item[4:] From $\tilde{\mathbf{A}}$, define $\tilde{\mathbf{A}}^\pm$, and next $\mathbf{A}$ by reassembling the terms concerned by the change of interface.
\item[5:] Solve, use of MUMPS.
\end{description}
\end{description}
\caption{Efficient update algorithm}\label{UCG}
\end{algorithm}
\FloatBarrier
\textcolor{black}{Note that in particular cases where storing an LU decomposition of the matrix $\tilde{\mathbf{A}}$ is possible, the same reduced update procedure could be performed for updating this decomposition without redoing it entirely. Note that the stabilization matrices have to be reassembled anyway, but the corresponding computation time is negligible compared with the time needed for the assembly of $\tilde{\mathbf{A}}$.}
\section{Numerical tests} \label{sec-numtests}
In order to illustrate the theoretical analysis, and in particular to underline the result of Corollary~\ref{supercoro}, which guarantees the optimal convergence rates, we propose numerical tests. The convergence tests will be performed on the square domain $\Omega = [0,1]^2$ ($d=2$), with interfaces $\Gamma$ represented by level-set functions of the type
\begin{eqnarray*}
\begin{array} {lcl}
\mathrm{\ell s}(x,y) = ( x-x_c)^2 + ( y-y_c)^2 - R^2 & & \text{if } d=2,
\end{array}
\end{eqnarray*}
where $(x_c,y_c)$ denotes the coordinates of the center of a \textcolor{black}{circle}, and where $R>0$ denotes its radius. We will use the following exact solutions:
\begin{eqnarray*}
& & \begin{array} {lcl}
\bu_{ex}(x,y) = \left( \begin{array} {c}
\cos(\pi x)\sin(\pi y) \\
-\sin(\pi x)\cos(\pi y)
\end{array} \right)
& & \\
\text{\textcolor{black}{$p_{ex}(x,y)$}} = c_p\left((y-y_c)\cos(2\pi x)+(x-x_c)\sin(2\pi y)\right) & &
\text{with $c_p= 1$ if \textcolor{black}{$\ell s(x,y) < 0$},} \\
& & \text{and $c_p = 3$ if $\ell s(x,y) > 0$.}
\end{array}
\end{eqnarray*}
Note that these solutions satisfy $\divg \bu_{ex} = 0$, and when the interface is a circle of center $(x_c,y_c)$, the pressures automatically satisfy $\displaystyle \int_{\Omega^{\pm}} p_{ex}^{\pm}\, \d \Omega^{\pm} = 0$. The following data for system~\eqref{mainsys} are thus considered in the numerical tests:
\begin{eqnarray*}
\bff^\pm = -\nu^{\pm}\Delta\bu_{ex} + \nabla p_{ex}^\pm, & &
\bgg = 2(\nu^+-\nu^-)\varepsilon(\bu_{ex})\bn - (p^+_{ex}-p^-_{ex})\bn.
\end{eqnarray*}
We choose $(x_c,y_c) = (0.5,0.5)$, $R = 0.23$, $\nu^+ = 2.0$ and $\nu^- = 1.0$.
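As a sanity check on these data, the divergence-free property of the chosen velocity can be verified numerically (a minimal Python sketch using central finite differences; the sample points and tolerances are arbitrary):

```python
import numpy as np

def u_ex(x, y):
    # Exact velocity: divergence-free by construction.
    return np.array([np.cos(np.pi * x) * np.sin(np.pi * y),
                     -np.sin(np.pi * x) * np.cos(np.pi * y)])

def p_ex(x, y, xc=0.5, yc=0.5, R=0.23):
    # Piecewise pressure: c_p = 1 inside the circle (ls < 0), 3 outside.
    ls = (x - xc)**2 + (y - yc)**2 - R**2
    c_p = 1.0 if ls < 0 else 3.0
    return c_p * ((y - yc) * np.cos(2 * np.pi * x)
                  + (x - xc) * np.sin(2 * np.pi * y))

def divergence(u, x, y, h=1e-5):
    # Central finite differences: du1/dx + du2/dy.
    return ((u(x + h, y)[0] - u(x - h, y)[0])
            + (u(x, y + h)[1] - u(x, y - h)[1])) / (2 * h)
```

At any sample point the discrete divergence vanishes up to the truncation error of the finite differences.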
\subsection{Convergence rates without stabilization} \label{subsec-cvstab0}
For the different variables, we compute the relative errors with respect to the exact solutions given previously. Results are given in Figure~\ref{fig-cv1}, Figure~\ref{fig-cv2} and Figure~\ref{fig-cv3}, where the rates of convergence (obtained for different mesh sizes) are computed by linear regression. The notation P2/P1/P0, for instance, indicates that P2 elements are chosen for the velocities $\bu^\pm$, P1 elements for the pressures $p^\pm$, and P0 elements for the multipliers $\Phi$ and $\blambda^\pm$. The quantities represented in Figure~\ref{fig-cv1} and Figure~\ref{fig-cv2} are relative errors computed on the global variables $\bu_h$ and $p_h$ defined as follows:
\begin{eqnarray*}
\bu_h(\bx) = \left\{ \begin{array} {ll}
\bu_h^+(\bx) & \text{if } \bx \in \Omega^+, \\
\bu_h^-(\bx) & \text{if } \bx \in \Omega^-,
\end{array} \right.
& &
p_h(\bx) = \left\{ \begin{array} {ll}
p_h^+(\bx) & \text{if } \bx \in \Omega^+, \\
p_h^-(\bx) & \text{if } \bx \in \Omega^-.
\end{array} \right.
\end{eqnarray*}
The quantities represented in Figure~\ref{fig-cv3} correspond to a mean value of the errors made on $\blambda^+$ and $\blambda^-$, namely the square root of the quantity given below:
\begin{eqnarray}
\frac{
\| \blambda_h^+ - \sigma^+(\bu_{ex}^+,p_{ex}^+)\bn^+ \|^2
+ \| \blambda_h^- - \sigma^-(\bu_{ex}^-,p_{ex}^-)\bn^- \|^2
}
{
\| \sigma^+(\bu_{ex}^+,p_{ex}^+)\bn^+ \|^2
+ \| \sigma^-(\bu_{ex}^-,p_{ex}^-)\bn^- \|^2
}. \label{formula-error-lambda}
\end{eqnarray}
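The combined relative error~\eqref{formula-error-lambda} can be sketched as follows. This is only an illustrative helper, not part of the actual finite element code, and the norm values fed to it are hypothetical placeholders:

```python
import math

def combined_relative_error(err_plus, err_minus, ref_plus, ref_minus):
    # Square root of (|e^+|^2 + |e^-|^2) / (|ref^+|^2 + |ref^-|^2),
    # mirroring the combined error formula for lambda^+ and lambda^-.
    return math.sqrt((err_plus**2 + err_minus**2)
                     / (ref_plus**2 + ref_minus**2))

# Hypothetical component norms: absolute errors 0.01 and 0.02 against
# reference norms 1.0 and 2.0 give a combined relative error of 1%.
print(combined_relative_error(0.01, 0.02, 1.0, 2.0))  # ~0.01
```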
\begin{minipage}{\linewidth}
\begin{center}
\hspace*{-20pt}\begin{tabular} {r|l}
\includegraphics[trim = 0cm 0cm 1.0cm 0cm, clip, scale=0.35]{./courbes_cv/stab0_uL2.png}
&
\includegraphics[trim = 0cm 0cm 1.0cm 0cm, clip, scale=0.35]{./courbes_cv/stab0_uH1.png}
\end{tabular}
\begin{figure}[H]
\vspace*{-15pt}
\caption{$\LL^2$ and $\HH^1$-relative errors (in \%) on the velocity $\bu$ as a function of the mesh size, and estimation of convergence rates with the slopes of the curves obtained by linear regression.}
\label{fig-cv1}
\end{figure}
\end{center}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{center}
\hspace*{-20pt}\begin{tabular} {c}
\includegraphics[trim = 0cm 0cm 1.0cm 0cm, clip, scale=0.35]{./courbes_cv/stab0_pL2.png}
\end{tabular}
\begin{figure}[H]
\vspace*{-15pt}
\caption{$\LL^2$-relative errors (in \%) on the pressure $p$ as a function of the mesh size, and estimation of convergence rates with the slopes of the curves obtained by linear regression.}
\label{fig-cv2}
\end{figure}
\end{center}
\end{minipage}
\FloatBarrier
\begin{minipage}{\linewidth}
\begin{center}
\hspace*{-20pt}\begin{tabular} {r|l}
\includegraphics[trim = 0cm 0cm 1.0cm 0cm, clip, scale=0.35]{./courbes_cv/stab0_mfL2.png}
&
\includegraphics[trim = 0cm 0cm 1.0cm 0cm, clip, scale=0.35]{./courbes_cv/stab0_mlL2.png}
\end{tabular}
\begin{figure}[H]
\vspace*{-15pt}
\caption{$\LL^2(\Gamma)$-relative errors (in \%) on the multipliers $\Phi$ and $\blambda^\pm$ as a function of the mesh size, and estimation of convergence rates with the slopes of the curves obtained by linear regression.}
\label{fig-cv3}
\end{figure}
\end{center}
\end{minipage}
\FloatBarrier
\hfill \\ \hfill \\
In Figure~\ref{fig-cv1}, we observe that the optimal convergence rates for the velocity seem to be reached for the Q3/Q2/Q1 triplet of elements. The rates for the triplet P3/P2/P1 are slightly degraded, while a limitation seems to occur when we choose P0 or Q0 elements for the multipliers. Indeed, in those cases the computed rate for the $\LL^2$-norm of the velocity is around $2$, and the one for the $\HH^1$-norm is about $1.5$. However, the limitation on the quality of convergence for the $\HH^1$-norm is not as severe as the one announced by Proposition~\ref{prop-limit}. In Figure~\ref{fig-cv2}, we can make the same observations on the quality of convergence for the pressure. The optimal rate is here again achieved when we choose the Q3/Q2/Q1 triplet, but also when we choose the P2/P1/P0 triplet. The effective rate of convergence for the pressure is significantly degraded for the P3/P2/P0 and Q3/Q2/Q0 triplets. In Figure~\ref{fig-cv3}, we see that the optimal rate of convergence for the variable $\Phi$ seems to be achieved in all cases, while for the variables $\blambda^\pm$ the accuracy appears to be particularly poor when we choose the P3/P2/P1 triplet.
\subsection{With stabilization} \label{subsec-cvstab1}
In this subsection we study the influence of the stabilization technique on the accuracy of the method. First, in all the tests we performed, we did not see any significant influence of the stabilization technique on the variable $\Phi$: for all the values we considered for the parameter $\alpha$, for all mesh sizes and all geometric configurations, the difference observed with and without stabilization is negligible. With $\alpha_0 = 0$, we already observe in Figure~\ref{fig-cv3} the optimal rates of convergence for the variable $\Phi$. This may be due to a gap between the theoretical analysis and the numerical realization. The lack of theoretical convergence announced in section~\ref{subsec-theor} seems to be too pessimistic; perhaps the optimal theoretical convergence for $\Phi$ is automatically guaranteed even without the stabilization terms that concern $\Phi$.
Thus the rest of the numerical tests will be performed with $\alpha_0 = 0$, and we will focus on the parameter $\gamma$ for the stabilization of the variables $\blambda^\pm$. Without stabilization of the variables $\blambda^\pm$, the accuracy shown in Figure~\ref{fig-cv3} is already satisfying, {\it a priori}, but we are going to see that in some situations the stabilization technique is crucial. Getting a good approximation of these quantities can be of interest for fluid-structure models with other \textcolor{black}{interface} conditions, for control problems where their expressions can appear in adjoint systems, or simply for physical reasons.
\paragraph{Choice of the stabilization parameter.}
Recall that the stabilization parameter is chosen to be proportional to the mesh size, as $\gamma = \gamma_0h$. Let us choose the parameter $\gamma_0$ judiciously: an overly large stabilization parameter would degrade the coerciveness of the system. For this task, for different values of $\gamma_0 >0$, we compute the $\LL^2(\Gamma)$ relative errors on the multipliers $\blambda^\pm$ (as explained in section~\ref{subsec-cvstab0}, formula~\eqref{formula-error-lambda}) for the P2/P1/P0 triplet of elements, with the mesh size $h=0.1$, and with the geometric configuration and the exact solutions described at the beginning of section~\ref{sec-numtests}. The results are presented in Figure~\ref{fig-choice}.
\begin{center}
\hspace*{-10pt}\includegraphics[trim = 0cm 0cm 0cm 0cm, clip, scale=0.35]{./images/choice.png}
\vspace*{-30pt}
\begin{figure}[H]
\caption{$\LL^2(\Gamma)$ relative errors on the multipliers $\blambda^\pm$, for different values of the stabilization parameter $\gamma_0$.}
\label{fig-choice}
\end{figure}
\end{center}
\FloatBarrier
We observe in Figure~\ref{fig-choice} that serious instabilities appear for values of $\gamma_0$ larger than $0.1$. Smaller instabilities actually occur for smaller values, leading us to choose $\gamma_0$ around $0.02$, even if for this range of values the influence of the stabilization technique on the accuracy is {\it a priori} negligible (see the comments of Figure~\ref{fig-robust} for further details). For computational domains of this size, and for this range of values of the viscosities, we will choose in the rest of the paper values \textcolor{black}{of $\gamma_0$ smaller than $0.02$}. \textcolor{black}{Note that the range of acceptable values for $\gamma_0$ does not depend on the mesh size $h$, as predicted by the theory. Performing the same tests on finer meshes leads to the same range of values.} \\
In practice, for this value of $\gamma_0$, the differences between the errors computed with and without the stabilization technique are not significant, for all the variables, and so we do not show the convergence curves obtained with the stabilization technique, since qualitatively the observations are the same. Actually, we show in the next paragraph that the main interest of the stabilization technique lies in its capacity to handle various geometric configurations.
\paragraph{Robustness with respect to the geometry.} \label{subsec-cvrobust}
Let us study the behavior of the stabilization technique for different geometric configurations. The goal is to anticipate an unsteady framework in which the level-set would cut the mesh randomly, and to demonstrate that this stabilization technique provides a qualitatively constant behavior in terms of accuracy. For that purpose, we propose to \textcolor{black}{compute and compare the relative errors on $\blambda^\pm$ (as in the previous paragraph), with and without stabilization}, for different values of the abscissa $x_c$ of the center of the circle. The results are presented in Figure~\ref{fig-robust}.
\begin{center}
\hspace*{-10pt}\includegraphics[trim = 0cm 0cm 0cm 0cm, clip, scale=0.35]{./courbes_cv/robust2.png}
\vspace*{-30pt}
\begin{figure}[H]
\caption{$\LL^2(\Gamma)$ relative errors on the multipliers $\blambda^\pm$, for different geometric configurations, without stabilization (in blue) and with stabilization (in red) with $\gamma_0 = 0.02$.}
\label{fig-robust}
\end{figure}
\end{center}
\FloatBarrier
In Figure~\ref{fig-robust}, we observe that the stabilization technique enables us to prevent the instabilities that occur for some values of $x_c$ without stabilization. \textcolor{black}{Actually these instabilities are also fixed for very small values of $\gamma_0$ (like $\gamma_0 = 5\times 10^{-4}$).} Let us mention that for larger values of $\gamma_0$ (like $\gamma_0 = 0.03$), other instabilities appear for some values of $x_c$. Besides, the errors obtained with stabilization are almost always better than those computed without stabilization. This improvement is in general not really significant; the main interest of the stabilization technique lies of course in its robustness with respect to the geometry.
\section{An unsteady case: Deformation of an ellipsoid coupled with surface tension} \label{sec-unsteady}
This section is devoted to testing the capacities of the method in an unsteady \textcolor{black}{simplified} framework. \textcolor{black}{The aim is to illustrate that the method enables us to solve a problem for which the time evolution of an interface is coupled with its own geometry.} The \textcolor{black}{interface} $\Gamma$ will depend on time, and from now on we denote it by $\Gamma(t)$. It splits the domain $\Omega$ into two parts that we denote by $\Omega^+(t)$ and $\Omega^-(t)$, playing the role of $\Omega^+$ and $\Omega^-$ respectively, as previously. The unknowns are now $(\bu^{\pm},p^{\pm})$ and the deformation of $\Gamma(t)$, which we denote by $X$. While the velocity and the pressure are described in Eulerian coordinates, the description of the deformation $X$ is Lagrangian:
\begin{eqnarray*}
\Gamma(t) = X(\Gamma(0),t).
\end{eqnarray*}
We consider a system which couples the different unknowns mentioned above, and for which a good approximation of the variable $\Phi = \bu^{\pm}_{| \Gamma(t)}$ is essential.
\subsection{\textcolor{black}{A test at low Reynolds number}} \label{subsec-simustokes}
We consider the following system, for all $t \in [0,T]$:
\begin{eqnarray}\label{syscoupled}
\left\{ \begin{array} {rcll}
-\divg \sigma(\bu,p) & = & 0 & \text{in } \Omega^{\pm}(t), \\
\divg \bu & = & 0 & \text{in } \Omega^{\pm}(t), \\%[5pt]
\bu(\cdot,t) & = & \displaystyle\frac{\p X}{\p t}(X^{-1}(\cdot,t),t) & \text{on } \Gamma(t),
\label{sysc3}\\[5pt]
\left[ \sigma(\bu,p) \right]\bn & = & -\mu \kappa \bn, & \text{across } \Gamma(t). \label{sysc4}
\end{array} \right.
\end{eqnarray}
The parameter $\mu >0$ is the surface tension parameter, assumed to be constant. The scalar function $\kappa$ denotes the mean curvature of the surface $\Gamma(t)$, related to $X$ through the formula
\begin{eqnarray*}
\left(\Delta_{\Gamma_0} X\right)\circ X^{-1} & = & \kappa \bn.
\end{eqnarray*}
The notation $\Delta_{\Gamma_0}$ refers to the Laplace-Beltrami operator on the manifold $\Gamma_0 := \Gamma(0)$. The evolution of $\Gamma(t)$ is ruled by the following coupling: \textcolor{black}{the} geometry of $\Gamma(t)$ imposes the jump condition in the fourth equation of~\eqref{syscoupled}. The response of the surrounding fluid is the trace of the velocity $\bu^{\pm}$ on $\Gamma(t)$. It determines, in the third equation of~\eqref{syscoupled}, the time-derivative of the deformation $X$, and thus the evolution of $\Gamma(t)$.
We restrict the set of deformations $X$ to the case of an ellipsoid centered at the point of coordinates $x_{c,i} = 0.5$, for $i = 1\dots d$. \textcolor{black}{We denote by $\mathrm{y} = (y_i)_{i=1\dots d}$ the space variable in the reference configuration, and by $\mathrm{x} = (x_i)_{i=1\dots d}$ the one in the deformed configuration}. The deformation is parameterized by its semi-axes $(a_i)_{i=1\dots d}$, and its expression is explicit, given by
\begin{eqnarray*}
X(\mathrm{y},t) & = & \left(x_{c,i} + (y_i-x_{c,i})\frac{a_i(t)}{a_i(0)} \right)_{i=1\dots d}.
\end{eqnarray*}
We calculate easily
\begin{eqnarray}
\frac{\p X}{\p t}(X^{-1}(\mathrm{x},t),t) & = &
\left((x_i - x_{c,i})\frac{a_i'(t)}{a_i(t)} \right)_{i=1\dots d}. \label{eqvelodisc}
\end{eqnarray}
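This identity can be checked numerically. The sketch below (an illustrative one-dimensional check, with a hypothetical semi-axis evolution $a(t)$ chosen for the test) compares a central finite difference of $\p X/\p t$ composed with $X^{-1}$ against the closed-form right-hand side of~\eqref{eqvelodisc}:

```python
import math

# Hypothetical 1D check of (dX/dt)(X^{-1}(x,t), t) = (x - xc) a'(t)/a(t)
# for the deformation X(y,t) = xc + (y - xc) a(t)/a(0).
xc, a0 = 0.5, 0.3
a  = lambda t: a0 * (1.0 + 0.5 * t)   # some smooth semi-axis evolution
da = lambda t: 0.5 * a0               # its exact time derivative

def X(y, t):     return xc + (y - xc) * a(t) / a0
def X_inv(x, t): return xc + (x - xc) * a0 / a(t)

x, t, h = 0.8, 0.2, 1e-6
y = X_inv(x, t)
lhs = (X(y, t + h) - X(y, t - h)) / (2 * h)  # central difference of dX/dt at y
rhs = (x - xc) * da(t) / a(t)
print(abs(lhs - rhs) < 1e-8)  # True
```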
The \textcolor{black}{interface} is implemented with a level-set function, whose expression is given by
\begin{eqnarray*}
\ell s(\mathrm{x}) & = & -1 + \sum_{i=1}^d \left( \frac{x_i-x_{c,i}}{a_i} \right)^2.
\end{eqnarray*}
\textcolor{black}{In practice, this function is approximated with piecewise polynomial functions of degree 2 (as mentioned in section~\ref{sec-libimpl}), and thus this approximation is exact in that case.} Recall that the outward unit normal vector is given by $\bn = \nabla \ell s/ |\nabla \ell s|_{\R^d}$, and the curvature of the ellipse obeys the formula $\kappa = -\divg \bn$, leading to the \textcolor{black}{formula $\kappa(\mathrm{x}) = \frac{-1}{a_1^2a_2^2}\left(\frac{(x_1-x_{c,1})^2}{a_1^4} + \frac{(x_2-x_{c,2})^2}{a_2^4} \right)^{-3/2}$.}
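As a sanity check (not part of the method itself), the closed-form curvature can be compared with a finite-difference evaluation of $-\divg(\nabla \ell s/|\nabla \ell s|)$ at a point of the ellipse; the semi-axes below are the initial values of Table~\ref{table1}, and the test point is an arbitrary choice:

```python
import math

# Check that, on the ellipse ls(x) = -1 + ((x1-xc)/a1)^2 + ((x2-yc)/a2)^2 = 0,
# kappa = -div(grad ls / |grad ls|) matches the closed-form
# kappa = -(1/(a1^2 a2^2)) * (u^2/a1^4 + v^2/a2^4)^(-3/2), u = x1-xc, v = x2-yc.
xc, yc, a1, a2 = 0.5, 0.5, 0.3537, 0.2037

def normal(x1, x2):
    gx, gy = 2 * (x1 - xc) / a1**2, 2 * (x2 - yc) / a2**2
    g = math.hypot(gx, gy)
    return gx / g, gy / g

def kappa_divergence(x1, x2, h=1e-5):
    # central differences of the two components of the unit normal
    dnx = (normal(x1 + h, x2)[0] - normal(x1 - h, x2)[0]) / (2 * h)
    dny = (normal(x1, x2 + h)[1] - normal(x1, x2 - h)[1]) / (2 * h)
    return -(dnx + dny)

def kappa_formula(x1, x2):
    u, v = x1 - xc, x2 - yc
    return -1.0 / (a1**2 * a2**2) * (u**2 / a1**4 + v**2 / a2**4)**(-1.5)

# A point on the ellipse, parameterized by theta:
theta = 0.7
x1, x2 = xc + a1 * math.cos(theta), yc + a2 * math.sin(theta)
print(abs(kappa_divergence(x1, x2) - kappa_formula(x1, x2)) < 1e-5)  # True
```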
\paragraph{Numerical scheme for the time evolution.}
Denoting by $\Phi(t)$ the multiplier taking into account the jump condition on $\Gamma(t)$ (last equation of~\eqref{syscoupled}), we know that its value is the trace on $\Gamma(t)$ of the velocity fields $\bu^{\pm}(\cdot,t)$. Then the third equation of~\eqref{syscoupled} becomes $\Phi(t) = \displaystyle \frac{\p X}{\p t}(X^{-1}(\cdot,t),t)$. Combining this equality with~\eqref{eqvelodisc}, we deduce componentwise, for $i=1\dots d$, the equality
\begin{eqnarray*}
\Phi_i(t) & = & (x_i - x_{c,i})\frac{a_i'(t)}{a_i(t)},
\end{eqnarray*}
where $\Phi_i$ denotes the $i$-th component of the vector field $\Phi$. This equality has to be considered in the variational sense. Taking the $\L^2(\Gamma(t))$ scalar product with the scalar functions $\Phi_i(t)$, for $i=1\dots d$, we deduce
\begin{eqnarray*}
a_i'(t) \langle x_i-x_{c,i}; \Phi_i\rangle_{\L^2(\Gamma(t))} & = &
a_i(t) \| \Phi_i\|^2_{\L^2(\Gamma(t))}.
\end{eqnarray*}
From a time-stepping $(t_n)_{n=0\dots N}$ with a constant time-step $\Delta t$, we discretize this differential equation semi-implicitly, as follows:
\begin{eqnarray*}
(a_i^{(n+1)} - a_i^{(n)}) \langle x_i-x_{c,i}; \Phi_i(t_n)\rangle_{\L^2(\Gamma(t_n))} & = &
(\Delta t )\, a_i^{(n+1)} \| \Phi_i(t_n)\|^2_{\L^2(\Gamma(t_n))}.
\end{eqnarray*}
This yields the following scheme:
\begin{eqnarray*}
a_i^{(n+1)} & = &
\left(1 - \Delta t\, \frac{\|\Phi_i(t_n)\|_{\L^2(\Gamma(t_n))}^2}{\langle x_i - x_{c,i};\Phi_i(t_n)\rangle_{\L^2(\Gamma(t_n))}}\right)^{-1} a_i^{(n)}.
\end{eqnarray*}
We can write $\Phi(t_n) = \mathcal{K}_n(-\mu \kappa(t_n)\bn(t_n))$, where $\mathcal{K}_n$ denotes the Poincar\'e-Steklov operator defined by the solution of system~\eqref{sysjump}, with $\Omega = \Omega(t_n)$ and $\bgg=-\mu \kappa(t_n) \bn(t_n)$. The term $\kappa (t_n)\bn(t_n)$ is entirely determined by the geometry of $\Gamma(t_n)$, namely the parameters $(a_i^{(n)})_{i=1\dots d}$. Thus we obtain the explicit scheme given by
\begin{eqnarray} \label{scheme-Stokes}
a_i^{(n+1)} & = &
\left(1 - \Delta t\, \frac{\|\left(\mathcal{K}_n(-\mu \kappa(t_n)\bn(t_n))\right)_i\|_{\L^2(\Gamma(t_n))}^2}{\langle x_i - x_{c,i};\left(\mathcal{K}_n(-\mu \kappa(t_n)\bn(t_n))\right)_i\rangle_{\L^2(\Gamma(t_n))}}\right)^{-1} a_i^{(n)}.
\end{eqnarray}
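The semi-axis update~\eqref{scheme-Stokes} can be sketched as follows. Here the interface velocity component is replaced by a mock linear field $c\,(x_i - x_{c,i})$, a hypothetical stand-in for $(\mathcal{K}_n(-\mu\kappa\bn))_i$, for which the ratio of the two interface integrals reduces exactly to $c$:

```python
def update_semi_axis(a_n, dt, norm_phi_sq, inner_x_phi):
    # a^{(n+1)} = (1 - dt * ||Phi_i||^2 / <x_i - x_c,i; Phi_i>)^{-1} * a^{(n)}
    return a_n / (1.0 - dt * norm_phi_sq / inner_x_phi)

# Mock interface velocity Phi_i = c * (x_i - x_c,i):
# ||Phi_i||^2 = c^2 * m2 and <x_i - x_c,i; Phi_i> = c * m2, where m2 > 0 is the
# second moment of (x_i - x_c,i) on Gamma, so the ratio reduces exactly to c.
c, m2, dt, a_n = -2.0, 1.0, 0.00025, 0.3537
a_next = update_semi_axis(a_n, dt, c**2 * m2, c * m2)
print(a_next < a_n)  # True: the semi-axis shrinks when c < 0
```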
A simulation is performed with \textcolor{black}{Q3/Q2/Q1} elements, on a Cartesian mesh with $40$ subdivisions in each space direction. The other parameters are listed in Table~\ref{table1}.
\begin{remark}
We use the whole stabilization technique (with parameters $\alpha_0$ and $\gamma_0$), even if we were not able to show that it influences the accuracy of the variable $\Phi$. The goal is to prevent potential bad situations that we were not able to detect in section~\ref{subsec-cvrobust} (like those which occurred for the variables $\blambda^\pm$ in Figure~\ref{fig-robust}), namely situations where the accuracy of the other variables (like $\Phi$) would also be degraded for some geometric configurations. Indeed, theoretically there is {\it a priori} no reason that convergence is guaranteed for the variable $\Phi$ without stabilization.
\end{remark}
\begin{table}
\begin{center}
\begin{eqnarray*}
& \begin{array} {|c|c|c|c|c|c|c|c|c|}
\hline
\nu^+ & \nu^- & \mu & a_1^{(0)} & a_2^{(0)} & T & \Delta t & \alpha_0 & \gamma_0\\
\hline
0.1 & 0.05 & 50 & 0.3537 & 0.2037 & 0.1 & 0.00025 & 0.01 & 0.01\\
\hline
\end{array} &
\end{eqnarray*}
\vspace*{-10pt}
\caption{Simulation parameters, for the Stokes model.}
\label{table1}
\end{center}
\end{table}
\FloatBarrier
\begin{center}
\hspace*{-10pt}\includegraphics[trim = 0cm 0.5cm 0cm 1.5cm, clip, scale=0.35]{./images/ellipse-Stokes.png}
\vspace*{-30pt}
\begin{figure}[H]
\caption{Time evolution of the semi-axes of the ellipse in dimension 2, for the Stokes model.}
\label{fig-ellipse-Stokes}
\end{figure}
\end{center}
\FloatBarrier
Figure~\ref{fig-ellipse-Stokes} shows that the ellipse converges to the circle. Indeed, its semi-axes tend monotonically to a common constant value. The potential energy of the interface, namely $\mu |\Gamma(t)|$, is dissipated by the viscosity forces as follows:
\begin{eqnarray*}
\frac{\d}{\d t}\left( \mu\left|\Gamma(t)\right| \right) + 2\nu^+\|\varepsilon(\bu^+(\cdot,t))\|^2_{\left[\L^2(\Omega^+(t))\right]^{d\times d}}
+ 2\nu^-\|\varepsilon(\bu^-(\cdot,t)) \|^2_{\left[\L^2(\Omega^-(t))\right]^{d\times d}} & = & 0, \qquad \forall t >0.
\end{eqnarray*}
Thus the measure $\left|\Gamma(t)\right|$ tends to a minimal value, which we know corresponds to the circle.
\subsection{\textcolor{black}{Application to the Navier-Stokes model}} \label{sec-NSE2D}
We test the capacities of the method when the inertia forces are not neglected in comparison with the viscosity forces. We replace the first equation of the Stokes system by the Navier-Stokes equation:
\begin{eqnarray*}
\rho\left(\frac{\p \bu}{\p t} + (\bu \cdot \nabla) \bu\right) -\divg \sigma(\bu,p) = 0.
\end{eqnarray*}
The density is denoted by $\rho^\pm$, and is chosen to be constant in $\Omega^+(t)$ and $\Omega^-(t)$. The other equations of system~\eqref{syscoupled} remain the same. The chosen parameters are listed in Table~\ref{table2}.
\begin{table}
\begin{center}
\begin{eqnarray*}
& \begin{array} {|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\rho^+ & \rho^- & \nu^+ & \nu^- & \mu & a_1^{(0)} & a_2^{(0)} & T & \Delta t & \alpha_0 & \gamma_0\\
\hline
0.2 & 0.1 & 0.1 & 0.05 & 50 & 0.3537 & 0.2037 & 0.1 & 0.00025 & 0.01 & 0.01\\
\hline
\end{array} &
\end{eqnarray*}
\vspace*{-10pt}
\caption{Simulation parameters, for the Navier-Stokes model in 2D.}
\label{table2}
\end{center}
\end{table}
\FloatBarrier
\textcolor{black}{The time-derivative of $\bu$ is discretized with the implicit Euler scheme, but on the explicit domains $\Omega^\pm(t_n)$. The nonlinear term $(\bu \cdot \nabla) \bu$ is treated with a Newton method:}
\begin{eqnarray*}
\left\{ \begin{array} {rcll}
\rho\left(\displaystyle \frac{\bu^{(n+1)}-\bu^{(n)}}{\Delta t} + \left(\bu^{(n+1)} \cdot \nabla\right) \bu^{(n+1)}\right) -\divg \sigma\left(\bu^{(n+1)},p^{(n+1)}\right) & = & 0 & \text{in } \Omega^\pm(t_n), \\
\divg \bu^{(n+1)} & = & 0 & \text{in } \Omega^{\pm}(t_{n}), \\
\left[ \sigma(\bu^{(n+1)},p^{(n+1)}) \right]\bn & = & -\mu \kappa(t_n) \bn(t_n), & \text{across } \Gamma(t_{n}).
\end{array} \right.
\end{eqnarray*}
Note that some degrees of freedom of $\bu^{(n+1)}$, corresponding to nodes of $\Omega^{+}(t_n)$ for instance, can next be considered in $\Omega^-(t_{n+1})$ at the next time step (and conversely). These nodes are those affected by the update of the \textcolor{black}{interface} $\Gamma(t_n)$ into $\Gamma(t_{n+1})$, and the updates of the different matrices of the system require only the re-assembly of the terms whose indexes correspond to these nodes (see section~\ref{sec-smartupdate}). For updating $\Gamma(t_n)$ (and thus $\Omega^\pm(t_n)$), we still use the scheme adopted in section~\ref{subsec-simustokes}.
\begin{minipage}{\linewidth}
\hspace{-0.05\linewidth}%
\centering
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-0.png}
\begin{center}\begin{small} $ t = 0.0 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-20.png}
\begin{center}\begin{small} $ t = 0.00675 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-29.png}
\begin{center}\begin{small} $ t = 0.00975 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\\
\hspace{-0.05\linewidth}%
\centering
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-40.png}
\begin{center}\begin{small} $ t = 0.0135 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-82.png}
\begin{center}\begin{small} $ t = 0.02725 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/pressureNSE-240.png}
\begin{center}\begin{small} $ t = 0.08 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{figure}[H]
\caption{Time evolution of the pressure outside and inside the interface for the Navier-Stokes model with surface tension forces. \textcolor{white}{}\label{fig-pressureNSE}}
\end{figure}
\end{minipage}\\
\FloatBarrier
\begin{minipage}{\linewidth}
\hspace{-0.05\linewidth}%
\centering
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-0.png}
\begin{center}\begin{small} $ t = 0.0 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-40.png}
\begin{center}\begin{small} $ t = 0.00675 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-82.png}
\begin{center}\begin{small} $ t = 0.00975 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\\
\hspace{-0.05\linewidth}%
\centering
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-129.png}
\begin{center}\begin{small} $ t = 0.0135 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-166.png}
\begin{center}\begin{small} $ t = 0.02725 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\hspace{0.0\linewidth}
\begin{minipage}{0.32\linewidth}
\begin{figure}[H]
\includegraphics[trim = 16cm 3cm 12cm 3cm, clip, scale=0.18]{./images/velocityNSE-240.png}
\begin{center}\begin{small} $ t = 0.08 $ \end{small}\end{center}
\end{figure}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{figure}[H]
\caption{Time evolution of the velocity outside and inside the interface for the Navier-Stokes model with surface tension forces. \textcolor{white}{}\label{fig-velocityNSE}}
\end{figure}
\end{minipage}\\
\FloatBarrier
\begin{center}
\hspace*{-10pt}\includegraphics[trim = 0cm 0.5cm 0cm 1.5cm, clip, scale=0.35]{./images/ellipse-NSE.png}
\begin{figure}[H]
\vspace*{-30pt}\caption{Time evolution of the semi-axes of the ellipse in dimension 2, for the Navier-Stokes model.}
\label{fig-ellipse-NSE}
\end{figure}
\end{center}
\FloatBarrier
\textcolor{black}{The respective behaviors of the pressure and the velocity are represented in Figure~\ref{fig-pressureNSE} and Figure~\ref{fig-velocityNSE}. In Figure~\ref{fig-pressureNSE} we can observe the discontinuity of the pressure and its convergence to constant values. In Figure~\ref{fig-velocityNSE} we observe that the velocity converges to zero, up to slight artifacts which appear after a long time. The time evolution of the semi-axes of the ellipsoid is represented in Figure~\ref{fig-ellipse-NSE}.} In comparison with Figure~\ref{fig-ellipse-Stokes}, Figure~\ref{fig-ellipse-NSE} shows inertia effects in the convergence of the semi-axes to a constant value. The decrease of the energy is given by the \textcolor{black}{inequality}
\begin{eqnarray*}
\frac{\d}{\d t}\left(\frac{\rho^+}{2} \| \bu^+(\cdot,t)\|^2_{\mathbf{L}^2(\Omega^+(t))} + \frac{\rho^-}{2} \| \bu^-(\cdot,t)\|^2_{\mathbf{L}^2(\Omega^-(t))} + \mu\left|\Gamma(t)\right| \right)
& \leq & 0,
\qquad \forall t >0.
\end{eqnarray*}
The conservation of mass is studied in Figure~\ref{Fig-volNSE}, where we plot the relative error of mass conservation during the time iterations, namely the quantity defined by
\begin{eqnarray*}
100 \times \frac{|M(t_n)-M(t_0)|}{M(t_0)}, & & \text{with } M(t_n) = \rho^- \pi\, a_1^{(n)}a_2^{(n)} \text{ and } t_0 = 0.
\end{eqnarray*}
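This mass indicator can be sketched as follows; the initial semi-axes come from Table~\ref{table2}, while the later semi-axis values are hypothetical placeholders chosen for illustration:

```python
import math

# Relative mass-conservation error for the 2D ellipse:
# M(t_n) = rho^- * pi * a1^(n) * a2^(n).
def mass(rho_minus, a1, a2):
    return rho_minus * math.pi * a1 * a2

def relative_mass_error_percent(rho_minus, a1_n, a2_n, a1_0, a2_0):
    m0 = mass(rho_minus, a1_0, a2_0)
    mn = mass(rho_minus, a1_n, a2_n)
    return 100.0 * abs(mn - m0) / m0

# Initial semi-axes 0.3537 and 0.2037; suppose both axes relaxed toward 0.27:
err = relative_mass_error_percent(0.1, 0.27, 0.27, 0.3537, 0.2037)
print(err)  # about a 1% mass defect in this hypothetical state
```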
\begin{center}
\hspace*{-10pt}\includegraphics[trim = 0cm 0.5cm 0cm 1.5cm, clip, scale=0.35]{./images/Mass-NSE_revised.png}
\begin{figure}[H]
\vspace*{-30pt}\caption{Relative mass conservation defect, for the Navier-Stokes model, \textcolor{black}{in red for a given mesh (40 subdivisions), in blue for a mesh twice as fine (80 subdivisions)}.}
\label{Fig-volNSE}
\end{figure}
\end{center}
\FloatBarrier
We see in Figure~\ref{Fig-volNSE} that the error can reach approximately 10\%. This error could be \textcolor{black}{slightly} improved by choosing more accurate elements, \textcolor{black}{and more substantially by choosing} a smaller time step. However, a more appropriate time-stepping should be designed, in order to obtain mass conservation numerically, in an automatic fashion. This property is crucial for more sophisticated simulations, where the goal would be to match benchmarks. \textcolor{black}{Note that the mass conservation defect is not shown for the Stokes model of section~\ref{subsec-simustokes}, since it is close to zero; this mass conservation defect is thus clearly due to the material derivative terms.}
\paragraph{\textcolor{black}{Tests at the equilibrium.}}
\textcolor{black}{Let us now consider the configuration at equilibrium. The latter is characterized by a null velocity and a constant pressure on both sides of the interface, which reduces to a circle. The Laplace-Young law predicts that the difference of pressures must satisfy
\begin{eqnarray*}
\Delta p:= p^- - p^+ & = & \mu / r,
\end{eqnarray*}
where $r$ denotes the radius of the interface. This is observed in Table~\ref{table3} and Figure~\ref{fig-Young-Laplace}, where tests are performed in the same fashion as in~\cite[section~5.1]{Turek2017}.
}\\
\begin{minipage}{\linewidth}
\hspace{-0.08\linewidth}%
\centering
\begin{minipage}{0.48\linewidth}
\begin{figure}[H]
\includegraphics[trim = 19cm 10cm 15cm 9cm, clip, scale=0.36]{./images/static-pressure35.png}
\end{figure}
\end{minipage}
\hspace{-0.02\linewidth}
\begin{minipage}{0.48\linewidth}
\begin{figure}[H]
\includegraphics[trim = 0cm 0cm 0cm 0cm, clip, scale=0.25]{./images/Young-Laplace.png}
\end{figure}
\end{minipage}
\begin{figure}[H]
\vspace*{-20pt}\centering\caption{Pressures values computed at the equilibrium for $\mu = 1$ and $h = 0.05$. Left, the radius is 0.25. Right, the difference of pressures is computed for different values of the radius.}
\label{fig-Young-Laplace}
\end{figure}
\end{minipage}
\begin{table}
\begin{center}
\begin{eqnarray*}
& \begin{array} {|c|c|c|c|c|}
\hline
h & p^+ & p^- & |\Delta p - \mu/r|/(\mu/r) & |\Delta p |/(\mu/r) \\
\hline
0.1 & -0.93593 & 3.07232 & 0.00206 & 1.00206 \\
\hline
0.05 & -0.08705 & 3.90449 & 0.00211 & 0.997885 \\
\hline
0.025 & -9.9013e-05 & 4.09994 & 0.02501 & 1.02501 \\
\hline
0.0125 & -0.00268 & 3.99513 & 0.00055 & 0.99945 \\
\hline
\end{array} &
\end{eqnarray*}
\vspace*{-10pt}
\caption{Pressure computations for static bubble, with surface tension coefficient $\mu = 1$ and radius $r = 0.25$.}
\label{table3}
\end{center}
\end{table}
\FloatBarrier
\hfill \\
\textcolor{black}{In Table~\ref{table3} the static pressures on both sides are presented as a function of the mesh size. It is shown that globally the values of the pressures converge to the correct ones, but the difference of pressures is well computed mainly for the finest mesh. In Figure~\ref{fig-Young-Laplace} we observe that the difference of pressures is well computed for different values of the radius of the interface.}
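The relative deviations from the Laplace-Young law reported in Table~\ref{table3} can be recomputed directly from the tabulated pressures ($\mu = 1$, $r = 0.25$, so $\mu/r = 4$):

```python
# Sanity check of Delta p = mu/r against the pressures of the static-bubble
# table: keys are mesh sizes h, values are the pairs (p^+, p^-).
mu, r = 1.0, 0.25
table = {0.1:    (-0.93593,     3.07232),
         0.05:   (-0.08705,     3.90449),
         0.025:  (-9.9013e-05,  4.09994),
         0.0125: (-0.00268,     3.99513)}
for h, (p_plus, p_minus) in table.items():
    dp = p_minus - p_plus
    rel = abs(dp - mu / r) / (mu / r)
    print(f"h={h}: |dp - mu/r|/(mu/r) = {rel:.5f}")
```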
\section{Conclusion} \label{sec-conclusion}
In this work we developed a mixed finite element method for the Stokes problem with jump \textcolor{black}{interface} conditions involving surface-tension-type forces. Besides, the \textcolor{black}{interface} is taken into account with a fictitious domain approach, in order to avoid remeshing when this \textcolor{black}{interface} changes. The jump condition is taken into account with a multiplier, whose role is central in coupled problems such as simulating the motion of a soap bubble in a viscous incompressible fluid. The interest of our approach lies in obtaining a good approximation of this multiplier, while sparing the computation time due to remeshing and assembly procedures. Moreover, this fictitious domain method is quite simple to implement, and it optimizes the complexity of local treatments for approximating dual variables (like the multiplier mentioned above) that are defined on the \textcolor{black}{interface}. But it requires an augmented Lagrangian approach, in order to prove theoretical convergence for these dual variables, and also to stabilize the approximation of the latter. \textcolor{black}{By this means, this article proposes an alternative to Xfem approaches, avoiding the use of additional singular basis functions, and thus simplifying the implementation. Besides}, there is a gap between the theoretical analysis and the numerical observations, \textcolor{black}{since a part of the stabilization technique, necessary for the theoretical analysis, seems to be unnecessary in practice} for stabilizing one of the multipliers. Without stabilization, our theoretical analysis seems to be too pessimistic, and perhaps convergence properties are hidden in the structure of the Lagrangian functional we rely on. This theoretical point remains to be investigated more deeply. In anticipation of an unsteady case, the crucial criterion of robustness with respect to the geometry was tested and verified (with the other part of the stabilization technique).
Illustrations of the time evolution of an ellipsoidal soap bubble were provided, showing the good behavior of the method, even in the presence of inertial effects. A more sophisticated time-stepping, improving the behavior of the method for the Navier-Stokes model (conservation of mass for instance), as well as the consideration of more general types of deformations, are expected in a forthcoming work.
\section{Introduction}
In 1961, Shockley and Queisser found the upper theoretical limit for the efficiency of p-n junction solar energy converters to be about 30\%. This is known as the Shockley-Queisser thermodynamic limit.\cite{shockley_Detailed_1961} Since then, there have been two main approaches for increasing the efficiency of solar cells by producing multiple photogenerated excitons from a single absorbed photon: multiple exciton generation (MEG), also known as carrier multiplication (CM), and singlet fission (SF).
In MEG, exciton multiplication occurs when the absorbed photon energy is at least twice
the nanocrystal band gap. This has been tested experimentally in semiconductor
nanocrystals,\cite{shabaev_Multiexciton_2006,beard_Multiple_2008,beard_Variations_2009,
beard_Multiple_2015,stewart_Comparison_2012} quantum dots,\cite{nozik_Quantum_2002,beard_Third_2013,
schaller_High_2004,mcguire_New_2008} quantum wires, and quantum rods.\cite{padilha_Aspect_2013,beard_Quantum_2014}
The effect that the size, shape, and composition of PbS, PbSe, and PbTe nanocrystals have on MEG was
studied by Padilha et al.\cite{padilha_Carrier_2013} MEG has also been shown to occur
in carbon nanotubes\cite{gabor_Extremely_2009} as well as in graphene.\cite{mcclain_Multiple_2010}
The generation of multiexcitons has been the subject of
intense theoretical research.\cite{doi:10.1021/ar3002365}
For example, the symmetry-adapted configuration
interaction method has been used to study the excited states of nanocrystals, such as lead selenide and silicon quantum dots, to determine the energetic threshold of MEG.\cite{jaeger_The_2012,akimov_Advanced_2014}
In addition to energetics requirements,
the importance of electron-phonon coupling for multiexciton
generation and multiexciton recombination (MER) in semiconductor quantum dots
has also been demonstrated.\cite{hyeon_Multiple_2012}
The second avenue to generate multiple excitons is singlet fission. In molecular
chromophores that have a triplet state energy that is close to 1/2 the energy of
the first allowed optical transition (S$_{1}$-S$_{0}$), exciton multiplication can
occur upon photoexcitation to produce two triplet states from the single singlet
state.\cite{johnson_The_2013,smith_Singlet_2010} Johnson et al. demonstrated this using
1,3-diphenylisobenzofuran as a model chromophore.\cite{johnson_Singlet_2014} Thompson
et al. showed the magnetic field dependence of singlet fission in solutions
of diphenyl tetracene.\cite{thompson_Magnetic_2015} Wu et al. showed that tetracene
is the best candidate for increasing the efficiency of silicon solar cells using SF; they
report a quantum efficiency of 127\% $\pm$ 18\%.\cite{wu_Singlet_2014}
In this work, we present a theoretical study of the effect of an external electromagnetic
field on the generation of a biexcitonic state from a single excitonic state.
The main goal of this work is to present a systematic
derivation of the
time-dependent transition probability for the $(1\mathrm{e}-1\mathrm{h}) \rightarrow (2\mathrm{e}-2\mathrm{h})$ process.
We consider a general many-electron system in the presence of an
external EM field.
The system is assumed to be excited at $t=0$ and the
state is propagated in time using the field-dependent Hamiltonian.
The form of the field-dependent Hamiltonian
and the initial conditions are described in \autoref{sec:sys_info}.
The time-propagation of the state vector is performed
using a time-ordered field-dependent propagator (\autoref{sec:time_prop}),
and the 0th, 1st, and 2nd order contributions to the
time-dependent transition amplitudes are
derived using time-dependent perturbation theory
in terms of second-quantized operators (\autoref{sec:tdpt}).
The transition amplitudes were expressed in terms
of the time-independent Hugenholtz diagrams\cite{shavitt2009many} (\autoref{sec:diag})
with time-dependent vertex amplitudes.
Finally, simplified expressions for calculating
time-dependent vertex amplitudes
that are amenable to computer implementation
were derived (\autoref{sec:vertex}).
The key results and conclusions from the
derivation are summarized in \autoref{sec:results}.
\section{System information and definition}
\label{sec:sys_info}
We define the reference effective one-particle Hamiltonian as,
\begin{align}
h_0 = \frac{-\hbar^2}{2m} \nabla^2 + v_\mathrm{ext} + v_\mathrm{eff}
\end{align}
where $v_\mathrm{eff}$ is the effective one-particle operator
and can be approximated using $v_\mathrm{HF}, v_\mathrm{KS}, v_\mathrm{ps}$,
or $v_\mathrm{model}$.
The eigenspectrum of the $h_0$ is used
for the construction of the
creation and annihilation operators
\begin{align}
h_0 \chi_p = \epsilon_p \chi_{p}.
\end{align}
The N-electron non-interacting Hamiltonian
is defined as,
\begin{align}
H_0 = \sum_{i}^N h_0(i).
\end{align}
The ground state of $H_0$
is defined as the quasiparticle vacuum,
\begin{align}
\vert 0 \rangle \equiv \Phi_0.
\end{align}
The Hamiltonian for the
interacting N-electron system
is defined as,
\begin{align}
H = H_0 + W
\end{align}
where $W$ is the residual
electron-electron interaction
not included in the one-body
operator $v_\mathrm{eff}$
\begin{align}
W = \sum_{i<j}^N r_{ij}^{-1} - \sum_{i}^N v_\mathrm{eff}(i).
\end{align}
The non-interacting electron-hole
wave function is
defined using the creation operators for
quasi-electrons and quasi-holes
\begin{align}
\vert \Phi_i^a \rangle
&=
\{a^\dagger i \} \vert 0 \rangle.
\end{align}
The correlated electron-hole wave function is defined
using a correlation operator, $\Omega_n$,
\begin{align}
\label{eq:correlationOperator}
\vert \Psi\rangle
&= \Omega_n \vert \Phi_i^a \rangle
\end{align}
where $\Omega_n$ will be defined later.
We are interested in the time-development of
the correlated wave function
under the influence of an
external electromagnetic field.
The interaction between the molecule and the EM field
is given by the time-dependent interaction operator
$V_F(t)$.\cite{cohen1992quantum} The total field dependent Hamiltonian
is defined as,
\begin{align}
H_F(t) = H_0 + V_F(t).
\end{align}
\section{Method for time-propagation}
\label{sec:time_prop}
In this work, we use Dirac's
interaction representation.
In this representation, the total
interaction potential is defined using the
following similarity-transformation,
\begin{align}
Z_I^F(t)
&=
e^{+iH_0t/\hbar} [V_F(t) + W] e^{-iH_0t/\hbar}.
\end{align}
The field-dependent time-development operator, $U_F(t,0)$,
is defined as,
\begin{align}
U_F(t,0) = 1 + \sum_{n=1} U_F^{(n)}(t)
\end{align}
where $U_F^{(n)}(t)$ is defined, with the standard Dyson-series prefactor $C_n = \frac{1}{n!}\left(\frac{-i}{\hbar}\right)^n$, as,
\begin{align}
U_F^{(n)}(t)
&=
C_n \int_0^t dt_1 dt_2 \dots dt_n
\mathcal{T}[Z_I^F(t_1) Z_I^F(t_2)\dots Z_I^F(t_n)].
\end{align}
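As a concrete sanity check of this truncated time-ordered (Dyson) expansion, the sketch below propagates a hypothetical 4-level model (illustrative $H_0$ eigenvalues, a small random Hermitian residual interaction $W$, $\hbar=1$, and the equivalent nested-integral form of the time ordering) and compares the first- and second-order propagators against the exact interaction-picture propagator; none of these numbers come from the formalism of this paper.

```python
import numpy as np

# Sanity check of the truncated Dyson expansion for a hypothetical 4-level
# model (hbar = 1).  We use the nested-integral form of the time ordering,
# U^(n) = (-i/hbar)^n int_0^t dt_1 int_0^{t_1} dt_2 ... Z(t_1)...Z(t_n).
hbar = 1.0
rng = np.random.default_rng(0)

eps = np.array([0.0, 1.0, 1.7, 2.5])          # H0 eigenvalues (illustrative)
H0 = np.diag(eps)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
W = 0.03 * (m + m.conj().T) / 2               # small Hermitian residual interaction

def expiH0(t):
    # e^{+i H0 t / hbar} for a diagonal H0
    return np.diag(np.exp(1j * eps * t / hbar))

def Z_I(t):
    # interaction-picture interaction Z_I(t) = e^{+iH0t/hbar} W e^{-iH0t/hbar}
    return expiH0(t) @ W @ expiH0(-t)

def U_exact(t):
    # exact interaction-picture propagator e^{+iH0t/hbar} e^{-i(H0+W)t/hbar}
    E, V = np.linalg.eigh(H0 + W)
    return expiH0(t) @ (V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T)

t_f, n = 2.0, 400
ts = np.linspace(0.0, t_f, n)
dt = ts[1] - ts[0]
Zs = np.array([Z_I(t) for t in ts])

U1 = (-1j / hbar) * dt * Zs.sum(axis=0)       # first-order Dyson term
U2 = np.zeros((4, 4), complex)                # second-order, time-ordered t2 < t1
for i1 in range(n):
    for i2 in range(i1):
        U2 += Zs[i1] @ Zs[i2]
U2 *= (-1j / hbar) ** 2 * dt ** 2

err1 = np.linalg.norm(U_exact(t_f) - (np.eye(4) + U1))
err2 = np.linalg.norm(U_exact(t_f) - (np.eye(4) + U1 + U2))
```

For a weak interaction the second-order truncation error is visibly smaller than the first-order one, as expected from the $\mathcal{O}(W^3)$ remainder.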
We assume that the system at $t=0$ is described by
the state vector $\Psi(0) = \Omega_n \vert \Phi_i^a \rangle$.
The time-development of this state vector to time $t$
is given by the following expression,
\begin{align}
\vert \Psi_F(t) \rangle
&=
U_F(t,0) \vert \Psi(0) \rangle.
\end{align}
The subscript $F$ in the above equation implies that the
time-development was performed under the influence of
the external field, $V_F$.
In this work, we are interested in
the 2e-2h generation from 1e-1h excitation.
\begin{align}
\textrm{(Carrier multiplication) }
P_{F,X \rightarrow X_2} (t)
&=
\vert \langle 0 \vert \{ k^\dagger j^\dagger b c\} \vert \Psi_F(t) \rangle \vert^{2}.
\end{align}
For the purpose of this derivation,
it is useful to write the transition probability
in terms of the transition amplitude $I$ as shown below,
\begin{align}
P_{F,X \rightarrow X_2} (t_f)
&=
\int_{0}^{t_f} dt \,
[I_{F} (t)] [I_{F} (t)]^\ast
\end{align}
where,
\begin{align}
I_{F} (t)
&=
\langle 0 \vert \{ k^\dagger j^\dagger b c\} \vert \Psi_F(t) \rangle.
\end{align}
In this work, we will use both Wick's contraction
and diagrammatic methods
for deriving the expression for the time-dependent transition amplitudes.
The first step in this many-step derivation is to write all the relevant quantities as vacuum expectation values.
Writing the expression in terms of time-development operator,
\begin{align}
I_F (t) &=
\langle 0 \vert \{ k^\dagger j^\dagger b c\} U_F(t,0) \Omega_X \{ a^\dagger i \}\vert 0 \rangle.
\end{align}
For the $n$th-order term in the time-development operator, we define
\begin{align}
I_F^{(n)}(t_1,t_2,\dots,t_n)
&=
\langle 0 \vert \{ k^\dagger j^\dagger b c\} \mathcal{T}[Z_I^F(t_1)Z_I^F(t_2)\dots Z_I^F(t_n)] \Omega_X \{ a^\dagger i \}\vert 0 \rangle.
\end{align}
Using Wick's theorem, we conclude that only fully contracted terms
have a non-zero contribution to the above expression
\begin{align}
\label{eq:I_F}
I_F^{(n)}(t_1,t_2,\dots,t_n)
&=
\langle 0 \vert \{ k^\dagger j^\dagger b c\} \mathcal{T}[Z_I^F(t_1)Z_I^F(t_2)\dots Z_I^F(t_n)] \Omega_X \{ a^\dagger i \}\vert 0 \rangle_{C}.
\end{align}
In this work, we evaluate the above expansion up to second-order
using diagrammatic techniques. The explicit expression for
$I_F^{(0)}, I_F^{(1)}$ and $I_F^{(2)}$ are presented in
Secs.~\ref{ssec:0orderContribution}, \ref{ssec:1orderContribution}, and \ref{ssec:2orderContribution}.
\section{Perturbative treatment of transition amplitudes}
\label{sec:tdpt}
\subsection{0th order contribution}
\label{ssec:0orderContribution}
The zeroth order term is field-independent and is given by the expression,
\begin{align}
I_F^{(0)}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} \Omega_X \{ a^\dagger i \}\vert 0 \rangle_C.
\end{align}
As expected, the above expression is independent of time.
The Wick's contraction required to evaluate this
term is denoted by the following expression,
\begin{align}
\eta^{(3a)}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} \Omega_X \{ a^\dagger i \}\vert 0 \rangle_L.
\end{align}
We note that only connected diagrams contribute to the
above expression, and this fact is denoted by the subscript ``L''.
\subsection{1st order contribution}
\label{ssec:1orderContribution}
The first-order term is:
\begin{align}
I_F^{(1)}(t_1)
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} Z_I^F(t_1) \Omega_X \{ a^\dagger i \}\vert 0 \rangle_C.
\end{align}
To evaluate the above expression, we will have to
derive the expression for the time-dependent interaction potential, $Z_I^F(t_1)$,
which is defined as,
\begin{align}
Z_I^F(t) = e^{+iH_0t/\hbar} [V_F(t) + W]e^{-iH_0t/\hbar}.
\end{align}
In this derivation, we will split the above expression into
1-body and 2-body terms,
\begin{align}
V_I^F(t) = e^{+iH_0t/\hbar} V_F(t) e^{-iH_0t/\hbar}
\end{align}
\begin{align}
W_I^F(t) = e^{+iH_0t/\hbar} W e^{-iH_0t/\hbar}.
\end{align}
The 1-body and 2-body time-dependent operators are
represented using time-dependent amplitudes,
\begin{align}
V_I^F(t)
&=
\sum_{pq} A_{pq}(t) p^\dagger q \\
&=
\sum_{pq} A_{pq}(t) \{p^\dagger q \} + \langle 0 \vert V_I^F(t) \vert 0 \rangle \\
&=
\sum_{pq} A_{pq}(t) \{p^\dagger q \} + \langle 0 \vert V_F(t) \vert 0 \rangle.
\end{align}
Similarly the 2-body term is given as,
\begin{align}
W_I^F(t)
&=
\frac{1}{2}
\sum_{pqrs} B_{pqrs}(t) p^\dagger q^\dagger s r \\
&=
\frac{1}{2}\sum_{pqrs} B_{pqrs}(t) \{ p^\dagger q^\dagger s r \}
+
\sum_{pq} C_{pq}(t) \{ p^\dagger q \}
+ \langle 0 \vert W_I^F(t) \vert 0 \rangle
\end{align}
where,
\begin{align}
C_{pq}(t) = \sum_{i}^N \left[ B_{piqi}(t)-B_{piiq}(t) \right].
\end{align}
Adding the terms and rewriting them in terms of
normal-ordered 2-body, 1-body, and vacuum expectation value terms we get,
\begin{align}
Z_I^F(t)
&=
\frac{1}{2}\sum_{pqrs} B_{pqrs}(t) \{ p^\dagger q^\dagger s r \}
+ \sum_{pq} D_{pq}(t) \{p^\dagger q \}
+ \langle 0 \vert Z_I^F(t) \vert 0 \rangle.
\end{align}
where,
\begin{align}
\mathbf{D}(t) &= \mathbf{A}(t) + \mathbf{C}(t)
\end{align}
\begin{align}
Z_I^F(t)
&=
Z_{0}(t) + Z_{D}(t) + Z_{B}(t).
\end{align}
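The one-body piece $C_{pq}(t)$ that emerges when the two-body term is brought to normal order, and the combined amplitude $\mathbf{D}(t)=\mathbf{A}(t)+\mathbf{C}(t)$, map directly onto tensor contractions. A minimal numpy sketch follows, with hypothetical orbital-space sizes and random arrays standing in for $A_{pq}(t)$ and $B_{pqrs}(t)$ at one fixed time:

```python
import numpy as np

rng = np.random.default_rng(1)
n_occ, n_orb = 3, 6                  # occupied / total spin-orbitals (illustrative)

A = rng.normal(size=(n_orb, n_orb))  # stands in for A_pq(t) at a fixed t
B = rng.normal(size=(n_orb,) * 4)    # stands in for B_pqrs(t) at a fixed t

# C_pq = sum_i [ B_piqi - B_piiq ], with i restricted to occupied orbitals
C = (np.einsum('piqi->pq', B[:, :n_occ, :, :n_occ])
     - np.einsum('piiq->pq', B[:, :n_occ, :n_occ, :]))

# D(t) = A(t) + C(t): the full normal-ordered one-body amplitude
D = A + C

# explicit-loop cross-check of the einsum contraction
C_loop = np.zeros((n_orb, n_orb))
for p in range(n_orb):
    for q in range(n_orb):
        C_loop[p, q] = sum(B[p, i, q, i] - B[p, i, i, q] for i in range(n_occ))
```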
The first-order contribution to the transition amplitude for the generation of 2e-2h from 1e-1h is given by the following expression,
\begin{align}
I_{F}^{(1)} (t)
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_0 + Z_D + Z_B] \Omega_X \{ a^\dagger i \}\vert 0 \rangle_{C}.
\end{align}
Evaluating each term separately, we get
\begin{align}
I_{F}^{(1)} (t)
=
Z_0 (t) I_F^{(0)}
+ \sum_{pq} D_{pq}(t) \eta^{(4a)}_{pq}
+ \sum_{pqrs} B_{pqrs}(t) \eta^{(4b)}_{pqrs}
\end{align}
where,
\begin{align}
\eta^{(4a)}_{pq}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\}
\{ p^\dagger q \}
\Omega_X \{ a^\dagger i \}\vert 0 \rangle_C \\
\eta^{(4b)}_{pqrs}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\}
\{ p^\dagger q^\dagger s r \}
\Omega_X \{ a^\dagger i \}\vert 0 \rangle_{C}.
\end{align}
\subsection{2nd order contribution}
\label{ssec:2orderContribution}
The second-order term for $t_1 > t_2$ is:
\begin{align}
I_{F}^{(2)} (t_1,t_2)
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} Z_I^F(t_1) Z_I^F(t_2) \Omega_X \{ a^\dagger i \}\vert 0 \rangle_C.
\end{align}
Substituting,
\begin{align}
Z_I^F(t_1) Z_I^F(t_2)
&=
[Z_0(t_1)+Z_D(t_1)+Z_B(t_1)][Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \\
&=
Z_0(t_1) [Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \notag \\
&+ Z_D(t_1) [Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \notag \\
&+ Z_B(t_1) [Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \\
&=
Z_0(t_1) [Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \notag \\
&+ [Z_D(t_1) + Z_B(t_1)] Z_0(t_2) \notag \\
&+ [Z_D(t_1)Z_B(t_2)] + [Z_B(t_1) Z_D(t_2)] \notag \\
&+ [Z_D(t_1)Z_D(t_2)] + [Z_B(t_1) Z_B(t_2)].
\end{align}
Adding and subtracting $Z_0(t_1)Z_0(t_2)$
in the following expression,
\begin{align}
[Z_D(t_1) + Z_B(t_1)] Z_0(t_2)
&=
[Z_0(t_1) + Z_D(t_1) + Z_B(t_1)] Z_0(t_2) - Z_0(t_1)Z_0(t_2).
\end{align}
Therefore,
\begin{align}
Z_I^F(t_1) Z_I^F(t_2)
&=
Z_0(t_1) [Z_0(t_2)+Z_D(t_2)+Z_B(t_2)] \notag \\
&+ [Z_0(t_1) + Z_D(t_1) + Z_B(t_1)] Z_0(t_2) \notag \\
&+ [Z_D(t_1)Z_B(t_2)] + [Z_B(t_1) Z_D(t_2)] \notag \\
&+ [Z_D(t_1)Z_D(t_2)] + [Z_B(t_1) Z_B(t_2)] - [Z_0(t_1)Z_0(t_2)].
\end{align}
We define time-reversed anti-commutation as,
\begin{align}
[A(t_1),B(t_2)]_{+}^{t} = A(t_1)B(t_2) + B(t_1)A(t_2).
\end{align}
Using the above equation, the expression for $I_{F}^{(2)} (t)$
is given as,
\begin{align}
\label{eq:amplitudes}
I_{F}^{(2)} (t_1,t_2)
&=
Z_0(t_1) I_{F}^{(1)} (t_2) + I_{F}^{(1)} (t_1)Z_0(t_2)
- Z_0(t_1)Z_0(t_2) I_F^{(0)} \notag\\
&+ \langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_D(t_1),Z_B(t_2)]^t_{+} \Omega_X \{ a^\dagger i \}\vert 0\rangle \notag \\
&+ \langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_D(t_1)Z_D(t_2)] \Omega_X \{ a^\dagger i \}\vert 0\rangle \notag \\
&+ \langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_B(t_1) Z_B(t_2)] \Omega_X \{ a^\dagger i \}\vert 0\rangle.
\end{align}
The terms in Eq.~\ref{eq:amplitudes} are evaluated as follows,
\begin{align}
\langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_D(t_1) Z_D(t_2)] \Omega_X \{ a^\dagger i \}\vert 0\rangle =
\sum_{pq} \sum_{rs}
D_{pq}(t_1)D_{rs}(t_2) \eta_{pqrs}^{(5a)}
\end{align}
\begin{align}
\langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_D(t_1),Z_B(t_2)]^t_{+} \Omega_X \{ a^\dagger i \}\vert 0\rangle =
\sum_{pqrs} \sum_{xy} G_{pqrsxy}(t_1,t_2) \eta_{pqrsxy}^{(5b)}
\end{align}
\begin{align}
\langle 0 \vert \{k^\dagger j^\dagger b c\} [Z_B(t_1) Z_B(t_2)] \Omega_X \{ a^\dagger i \}\vert 0\rangle =
\sum_{pqrs} \sum_{tuvw}
B_{pqrs}(t_1)B_{tuvw}(t_2) \eta_{pqrstuvw}^{(5c)}
\end{align}
where the time-independent components are given as,
\begin{align}
\eta_{pqrs}^{(5a)}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} \{p^\dagger q\} \{r^\dagger s\} \Omega_X \{ a^\dagger i \}\vert 0\rangle_C \\
\eta_{pqrsxy}^{(5b)}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} \{p^\dagger q^\dagger s r\} \{x^\dagger y\} \Omega_X \{ a^\dagger i \}\vert 0\rangle_C \\
\eta_{pqrstuvw}^{(5c)}
&=
\langle 0 \vert \{k^\dagger j^\dagger b c\} \{p^\dagger q^\dagger s r\} \{t^\dagger u^\dagger w v \} \Omega_X \{ a^\dagger i \}\vert 0\rangle_C
\end{align}
and
\begin{align}
G_{pqrsxy}(t_1,t_2)
&=
B_{pqrs}(t_1)D_{xy}(t_2) + D_{xy}(t_1)B_{pqrs}(t_2).
\end{align}
\begin{align}
I_{F}^{(2)} (t_1,t_2)
&=
Z_0(t_1) I_{F}^{(1)} (t_2) + I_{F}^{(1)} (t_1)Z_0(t_2)
- Z_0(t_1)Z_0(t_2) I_F^{(0)} \notag\\
&+ \sum_{pqrs} D_{pq}(t_1)D_{rs}(t_2) \eta_{pqrs}^{(5a)}
+ \sum_{pqrsxy} G_{pqrsxy}(t_1,t_2) \eta_{pqrsxy}^{(5b)}
+ \sum_{pqrstuvw} B_{pqrs}(t_1)B_{tuvw}(t_2) \eta_{pqrstuvw}^{(5c)}.
\end{align}
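Numerically, each second-order term above is a single tensor contraction of the time-dependent amplitudes against the precomputed $\eta$ tensors. A numpy sketch with a tiny, purely illustrative orbital space (random placeholders for all amplitudes and $\eta$ tensors):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                      # tiny orbital space for illustration

D1, D2 = rng.normal(size=(2, n, n))        # stand-ins for D_pq(t1), D_pq(t2)
B1, B2 = rng.normal(size=(2, n, n, n, n))  # stand-ins for B_pqrs(t1), B_pqrs(t2)
eta5a = rng.normal(size=(n, n, n, n))      # placeholder time-independent tensors
eta5b = rng.normal(size=(n,) * 6)
eta5c = rng.normal(size=(n,) * 8)

# G_pqrsxy(t1,t2) = B_pqrs(t1) D_xy(t2) + D_xy(t1) B_pqrs(t2)
G = np.einsum('pqrs,xy->pqrsxy', B1, D2) + np.einsum('xy,pqrs->pqrsxy', D1, B2)

# the three second-order contractions
term_DD = np.einsum('pq,rs,pqrs->', D1, D2, eta5a)
term_DB = np.einsum('pqrsxy,pqrsxy->', G, eta5b)
term_BB = np.einsum('pqrs,tuvw,pqrstuvw->', B1, B2, eta5c)
```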
\section{Diagrammatic evaluation of Wick's contraction}
\label{sec:diag}
In this section, we derive the expressions for the
$\eta$ terms that are needed to
evaluate the expression.
The 3-vertex terms $\eta^{(3a)}$ are given by the
set of diagrams presented in \autoref{fig:3vert}. We note that only linked-diagrams
have non-zero contribution to $\eta^{(3a)}$.
\begin{figure*}
\begin{center}
\includegraphics[trim=4cm 20cm 4cm 2.5cm,scale=1.0]{table3.pdf} \\
\caption{\textrm{3-vertex diagrams.}}
\label{fig:3vert}
\end{center}
\end{figure*}
The quantity $\eta^{(4)}$ can be
expressed as a sum of both linked and unlinked diagrams.
However, it can be shown that all unlinked diagrams
have zero contribution. Analysis of the
unlinked diagrams reveals that they
contain the following vanishing expressions,
\begin{align}
\langle 0 \vert \{ k^\dagger j^\dagger b c\} \{ a^\dagger i \} \vert 0 \rangle
\langle 0 \vert Z_{D,B} \Omega \vert 0 \rangle
&= 0.
\end{align}
The set of linked diagrams for $\eta^{(4a)}$
and $\eta^{(4b)}$ are presented in Fig.~\ref{fig:4verta} and Fig.~\ref{fig:4vertb}.
\begin{figure*}
\begin{center}
\includegraphics[trim=2.5cm 9cm 1.5cm 2.5cm,scale=1.0]{table4_3.pdf} \\
\caption{\textrm{Part A: 4-vertex diagrams.}}
\label{fig:4verta}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[trim=2.5cm 3cm 1.5cm 2.5cm,scale=1.0]{table4.pdf} \\
\caption{\textrm{Part B: 4-vertex diagrams.}}
\label{fig:4vertb}
\end{center}
\end{figure*}
The evaluation of the $\eta^{(5)}$
expressions requires both linked and unlinked diagrams.
In many cases, the unlinked 5-vertex diagrams
can be expressed in terms of the 3-vertex and 4-vertex
diagrams derived earlier.
In case of $\eta^{(5a)}$, this diagrammatic factorization
is expressed as,
\begin{align}
\eta^{(5a)}_{pqrs}
&=
\eta^{(2a)}_{pqrs} \eta^{(3a)} + \eta^{(5aL)}_{pqrs}
\end{align}
where $\eta^{(2a)} $ is the vacuum bubble
\begin{align}
\eta^{(2a)}_{pqrs}
&=
\langle 0 \vert \{ p^\dagger q \} \{ r^\dagger s \} \vert 0 \rangle
\end{align}
and $\eta^{(5aL)}_{pqrs}$ is the set of all linked diagrams,
which is denoted by the superscript $L$.
Using Wick's theorem,
\begin{align}
\{ p^\dagger q \} \{ r^\dagger s \}
&=
\{ p^\dagger q r^\dagger s \}
+\delta_{qr} \{ p^\dagger s \}
-\delta_{ps} \{ r^\dagger q \}
+\delta_{ps}\delta_{qr}.
\end{align}
Therefore,
\begin{align}
\eta^{(5aL)}_{pqrs}
&=
\eta^{(4b)}_{pqrs}
+\delta_{qr} \eta^{(4a)}_{ps}
-\delta_{ps} \eta^{(4a)}_{rq}
+\delta_{ps}\delta_{qr} \eta^{(3a)}.
\end{align}
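The delta-function bookkeeping in this factorization maps directly onto identity-matrix contractions. A small numpy sketch (random placeholders for the lower-vertex $\eta$ quantities) assembling $\eta^{(5aL)}$ from $\eta^{(4b)}$, $\eta^{(4a)}$, and $\eta^{(3a)}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5                                   # illustrative orbital-space size

eta3a = rng.normal()                    # scalar 3-vertex value (placeholder)
eta4a = rng.normal(size=(n, n))         # eta^{(4a)}_{pq} (placeholder)
eta4b = rng.normal(size=(n, n, n, n))   # eta^{(4b)}_{pqrs} (placeholder)

I = np.eye(n)
# eta^{(5aL)}_{pqrs} = eta^{(4b)}_{pqrs} + d_qr eta^{(4a)}_{ps}
#                      - d_ps eta^{(4a)}_{rq} + d_ps d_qr eta^{(3a)}
eta5aL = (eta4b
          + np.einsum('qr,ps->pqrs', I, eta4a)
          - np.einsum('ps,rq->pqrs', I, eta4a)
          + np.einsum('ps,qr->pqrs', I, I) * eta3a)
```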
Similarly, the diagrams associated with
$\eta^{(5b)}$ can be factored as,
\begin{align}
\eta^{(5b)}_{pqrsxy}
&=
\eta^{(1a)}_{xy} \eta^{(4b)}_{pqrs}
+ \eta^{(1b)}_{pqrs} \eta^{(4a)}_{xy}
+ \eta^{(1a)}_{xy} \eta^{(1b)}_{pqrs} \eta^{(3a)}
+\eta^{(5bL)}_{pqrsxy} \\
\eta^{(5c)}_{pqrstuvw}
&=
\eta^{(1b)}_{tuvw} \eta^{(4b)}_{pqrs}
+\eta^{(1b)}_{pqrs} \eta^{(4b)}_{tuvw}
+\eta^{(1b)}_{pqrs} \eta^{(1b)}_{tuvw} \eta^{(3a)}
+ \eta^{(5cL)}_{pqrstuvw}
\end{align}
where,
\begin{align}
\eta^{(1a)}_{pq}
&=
\langle 0 \vert \{ p^\dagger q \} \vert 0 \rangle \\
\eta^{(1b)}_{pqrs}
&=
\langle 0 \vert \{ p^\dagger q^\dagger s r \} \vert 0 \rangle.
\end{align}
In this work, we introduce a renormalization scheme
where all linked 5-vertex diagrams are
represented as 1-loop and 2-loop renormalized 3-vertex and 4-vertex diagrams. Using this approach, the diagrams associated with $\eta^{(5aL)}_{pqrs}$ and $\eta^{(5bL)}_{pqrsxy}$ are presented in Fig.~\ref{fig:5verta} and Fig.~\ref{fig:5vertb}, respectively.
\begin{figure*}
\begin{center}
\includegraphics[trim=2.5cm 3cm 1.5cm 2.5cm,scale=1.0]{table4_1loop.pdf} \\
\caption{\textrm{1-loop renormalized 4-vertex diagrams.}}
\label{fig:5verta}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[trim=2.5cm 3cm 1.5cm 2.5cm,scale=1.0]{table4_2loop.pdf} \\
\caption{\textrm{2-loop renormalized 4-vertex diagrams.}}
\label{fig:5vertb}
\end{center}
\end{figure*}
\section{Evaluation of time-dependent vertex amplitudes}
\label{sec:vertex}
\subsection{Evaluation of time-dependent amplitudes associated with bare 1-body vertex}
In this section, we evaluate the expression for the
time-dependent amplitude $A_{pq}(t)$ associated with the
bare 1-body vertex. This amplitude is
defined by the following equation,
\begin{align}
\label{eq:amplitude}
e^{+iH_0t/\hbar} V_F(t) e^{-iH_0t/\hbar}
&=
\sum_{pq} A_{pq}(t) p^\dagger q.
\end{align}
We will start by writing the second-quantized (SQ) representation of the
$V_F(t)$ operator
\begin{align}
V_F(t)
&=
\sum_{pq} v_{pq}^F(t) p^\dagger q.
\end{align}
Since $v_{pq}^F(t)$ is just a number, we are interested in
evaluating the SQ operator $e^{+iH_0t/\hbar} p^\dagger q e^{-iH_0t/\hbar}$.
We will start by inserting identity in this expression,
\begin{align}
e^{+iH_0t/\hbar} p^\dagger q e^{-iH_0t/\hbar}
&=
e^{+iH_0t/\hbar} p^\dagger e^{-iH_0t/\hbar} e^{+iH_0t/\hbar} q e^{-iH_0t/\hbar}.
\end{align}
The time-dependent creation and annihilation operators are defined as,
\begin{align}
p^\dagger(t)
&=
e^{+iH_0t/\hbar} p^\dagger e^{-iH_0t/\hbar} \\
q(t)
&=
e^{+iH_0t/\hbar} q e^{-iH_0t/\hbar}.
\end{align}
Using BCH expansion,
\begin{align}
q(t)
&=
q + \frac{it}{\hbar}[q,H_0]
+\frac{1}{2!} \left( \frac{it}{\hbar}\right)^2 [[q,H_0],H_0] + \dots
\end{align}
Using the results from Eq. \eqref{eq:1_body_comm},
derived in Appendix~\ref{sec:CommutatorIdentities},
\begin{align}
[p,q^\dagger r] &= \delta_{pq} r
\end{align}
Therefore,
\begin{align}
[q,H_0]
&=
\sum_{p_1q_1} h_{p_1q_1}
[q,p^\dagger_1 q_1] \\
&=
\sum_{p_1q_1} h_{p_1q_1}
\delta_{qp_1} q_1 \\
&=
\sum_{q_1} h_{qq_1} q_{1}.
\end{align}
Hence, we have the general result,
\begin{align}
[q,H_0] = \sum_{q_1} h_{qq_1} q_{1}.
\end{align}
Similarly,
\begin{align}
[[q,H_0],H_0]
&=
\sum_{q_1} h_{qq_1} [q_1,H_0] \\
&=
\sum_{q_1 q_2} h_{qq_1} h_{q_1q_2} q_2
\end{align}
We note that the above expression can be
written in terms of the matrix product
\begin{align}
\sum_{q_1} h_{qq_1} h_{q_1q_2}
=
[\mathbf{h} \mathbf{h}]_{q q_2}
= [\mathbf{h}^2]_{q q_2}
\end{align}
Therefore, for m-terms expansion,
\begin{align}
[[q,H_0],\dots, \textrm{m-terms},H_0]
&=
\sum_{q_1 q_2 \dots q_m}
h_{q q_1} h_{q_1 q_2} h_{q_2 q_3}
\dots h_{q_{m-1},q_{m}} q_m \\
&=
\sum_{q_m}
[\mathbf{h}^m]_{q q_m} q_{m}.
\end{align}
Since $q_m$ is just a summation index,
we can rewrite the expression as,
\begin{align}
\label{eq:comm_series}
[[q,H_0],\dots, \textrm{m-terms},H_0]
&=
\sum_{q_1}
[\mathbf{h}^m]_{q q_1} q_{1}.
\end{align}
Substituting the above expression in the BCH expansion,
\begin{align}
q(t)
&=
q + \frac{it}{\hbar} \sum_{q_1} h_{qq_1} q_1
+\frac{1}{2!} \left( \frac{it}{\hbar}\right)^2 \sum_{q_1} [\mathbf{h}^2]_{qq_1} q_1 + \dots +
\frac{1}{k!} \left( \frac{it}{\hbar}\right)^k \sum_{q_1} [\mathbf{h}^k]_{qq_1} q_1
+ \dots
\end{align}
Combining all the h-terms
\begin{align}
q(t)
&=
q +
\sum_{q_1}
\left[
\frac{it}{\hbar} h_{qq_1}
+\frac{1}{2!} \left( \frac{it}{\hbar}\right)^2 [\mathbf{h}^2]_{qq_1} + \dots +
\frac{1}{k!} \left( \frac{it}{\hbar}\right)^k [\mathbf{h}^k]_{qq_1}
+ \dots
\right] q_1
\end{align}
Expressing the first term $q$ in terms of $q_1$ using
Kronecker delta,
\begin{align}
q = \sum_{q_1} \delta_{qq_1} q_1
\end{align}
we get,
\begin{align}
q(t)
&=
\sum_{q_1}
\left[ \delta_{qq_1}
+\frac{it}{\hbar} h_{qq_1}
+\frac{1}{2!} \left( \frac{it}{\hbar}\right)^2 [\mathbf{h}^2]_{qq_1} + \dots +
\frac{1}{k!} \left( \frac{it}{\hbar}\right)^k [\mathbf{h}^k]_{qq_1}
+ \dots
\right] q_{1}.
\end{align}
We recognize that the $\delta$ in the above expression
is the element of the identity matrix $\mathbf{I}$.
\begin{align}
q(t)
&=
\sum_{q_1}
\left[ I_{qq_1}
+\frac{it}{\hbar} h_{qq_1}
+\frac{1}{2!} \left( \frac{it}{\hbar}\right)^2 [\mathbf{h}^2]_{qq_1} + \dots +
\frac{1}{k!} \left( \frac{it}{\hbar}\right)^k [\mathbf{h}^k]_{qq_1}
+ \dots
\right] q_1
\end{align}
We define the matrix $\tilde{\mathbf{h}}_A(t)$ as,
\begin{align}
\tilde{\mathbf{h}}_A(t)
&=
\frac{it}{\hbar} \mathbf{h}.
\end{align}
The subscript $A$ is to remind us that it is an anti-hermitian matrix
\begin{align}
\tilde{\mathbf{h}}_A^\dagger(t)
&=
-\tilde{\mathbf{h}}_A(t).
\end{align}
Using the above definition, the sum in the square brackets can be written in terms of
matrix exponentiation,
\begin{align}
\sum_{k=0}^{\infty}
\frac{1}{k!} \tilde{\mathbf{h}}_A^k(t)
&= e^{\tilde{\mathbf{h}}_A(t) }
\end{align}
where,
\begin{align}
\tilde{\mathbf{h}}_A^0 = \mathbf{I}
\end{align}
and $\mathbf{I}$ is the identity matrix (and not the scalar 1).
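This matrix-exponential identity is easy to sanity-check numerically. The sketch below (hypothetical random Hermitian $\mathbf{h}$, $\hbar=1$) compares the truncated series $\sum_k \tilde{\mathbf{h}}_A^k/k!$ against $e^{\tilde{\mathbf{h}}_A}$ computed from the eigendecomposition of $\mathbf{h}$, and confirms the anti-Hermiticity of $\tilde{\mathbf{h}}_A$ and the unitarity of the exponential:

```python
import numpy as np

rng = np.random.default_rng(4)
hbar, t = 1.0, 0.8

# a random Hermitian one-body matrix h (illustrative)
m = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
h = (m + m.conj().T) / 2
hA = (1j * t / hbar) * h                  # anti-Hermitian: hA^dag = -hA

# matrix exponential via the eigendecomposition of the Hermitian h
eps, U = np.linalg.eigh(h)
exp_hA = U @ np.diag(np.exp(1j * t * eps / hbar)) @ U.conj().T

# truncated series sum_k hA^k / k!
series = np.eye(5, dtype=complex)
power = np.eye(5, dtype=complex)
for k in range(1, 30):
    power = power @ hA / k                # accumulates hA^k / k!
    series = series + power
```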
Therefore, the time-development of $q$ is given by,
\begin{align}
q(t)
&=
\sum_{q_1}
[e^{\tilde{\mathbf{h}}_A(t) }]_{qq_1} q_1.
\end{align}
Similarly, the time-development of $p^\dagger$ is given by,
\begin{align}
p^\dagger(t)
&=
\sum_{p_1}
[e^{-\tilde{\mathbf{h}}_A(t) }]_{pp_1} p_1^{\dagger}.
\end{align}
Therefore,
\begin{align}
e^{+iH_0t/\hbar} V_F(t) e^{-iH_0t/\hbar}
&=
\sum_{pq} v^F_{pq}(t) p^\dagger(t) q(t) \\
&=
\sum_{pqp_1q_1}
v^F_{pq}(t)
[e^{-\tilde{\mathbf{h}}_A(t) }]_{pp_1}
[e^{\tilde{\mathbf{h}}_A(t) }]_{qq_1}
p_1^\dagger q_{1}.
\end{align}
Using
\begin{align}
[e^{-\tilde{\mathbf{h}}_A(t) }]^\dagger
&=
e^{+\tilde{\mathbf{h}}_A(t) }
\end{align}
\begin{align}
e^{+iH_0t/\hbar} V_F(t) e^{-iH_0t/\hbar}
&=
\sum_{pqp_1q_1}
[e^{+\tilde{\mathbf{h}}_A(t) }]_{p_1p}
v^F_{pq}(t)
[e^{-\tilde{\mathbf{h}}_A(t) }]_{q_1q}
p_1^\dagger q_1
\end{align}
which is equal to,
\begin{align}
e^{+iH_0t/\hbar} V_F(t) e^{-iH_0t/\hbar}
&=
\sum_{p_1q_1}
[e^{+\tilde{\mathbf{h}}_A(t) }
\mathbf{v}^F(t)
e^{-\tilde{\mathbf{h}}_A(t) }]_{p_1 q_1}
p_1^\dagger q_{1}.
\end{align}
Comparing to Eq.~\ref{eq:amplitude}, we get the expression for the
$A$ amplitudes
\begin{align}
\mathbf{A}(t) = e^{+(it/\hbar)\mathbf{h}}
\mathbf{v}^F(t)
e^{-(it/\hbar)\mathbf{h}}.
\end{align}
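In the eigenbasis of $\mathbf{h}$ this result reduces to a pure phase dressing, $A_{pq}(t) = v^F_{pq}\,e^{+i(\epsilon_p-\epsilon_q)t/\hbar}$. A minimal numpy check of that statement, with illustrative orbital energies and a random placeholder for $\mathbf{v}^F(t)$ at one time:

```python
import numpy as np

rng = np.random.default_rng(5)
hbar, t, n = 1.0, 1.3, 4

eps = np.array([0.0, 0.9, 1.6, 2.4])      # orbital energies (h diagonal in its eigenbasis)
vF = rng.normal(size=(n, n))              # placeholder for v^F_pq at a fixed t

U_plus = np.diag(np.exp(+1j * t * eps / hbar))
U_minus = np.diag(np.exp(-1j * t * eps / hbar))
A = U_plus @ vF @ U_minus                 # A(t) = e^{+ith/hbar} v^F e^{-ith/hbar}

# eigenbasis form: A_pq(t) = v^F_pq exp(+i (eps_p - eps_q) t / hbar)
phase = np.exp(1j * t * (eps[:, None] - eps[None, :]) / hbar)
A_ref = vF * phase
```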
\subsection{Evaluation of time-dependent amplitudes associated with bare 2-body vertex}
In this section, we evaluate the expression for the
time-dependent amplitude $B_{pqrs}(t)$ associated with the
bare 2-body vertex. This amplitude is
defined by the following equation,
\begin{align}
e^{+iH_0t/\hbar} W e^{-iH_0t/\hbar}
&=
\sum_{pqrs} B_{pqrs}(t) p^\dagger q^\dagger s r
\end{align}
where the 2-body operator is defined as,
\begin{align}
W = \sum_{pqrs} W_{pqrs} p^\dagger q^\dagger s r.
\end{align}
Using the insertion of identity method used in the
previous section, we express the above equation
in terms of time-dependent SQ operators
\begin{align}
e^{+iH_0t/\hbar} W e^{-iH_0t/\hbar}
&=
\sum_{pqrs} W_{pqrs}
p^\dagger(t) q^\dagger(t) s(t) r(t).
\end{align}
Substituting the previously derived expression for
time-dependent SQ
\begin{align}
p^\dagger(t)
&=
\sum_{p_1}
[e^{-(it/\hbar)\mathbf{h}}]_{pp_1} p_1^\dagger
=
\sum_{p_1}
[e^{+(it/\hbar)\mathbf{h}}]_{p_1p} p_1^\dagger \\
s(t)
&=
\sum_{s_1}
[e^{+(it/\hbar)\mathbf{h}}]_{ss_1} s_1
=
\sum_{s_1}
[e^{-(it/\hbar)\mathbf{h}}]_{s_1s} s_1
\end{align}
we get,
\begin{align}
e^{+iH_0t/\hbar} W e^{-iH_0t/\hbar}
&=
\sum_{pqrs} W_{pqrs}
p^\dagger(t) q^\dagger(t) s(t) r(t) \\
&=
\sum_{p_1 q_1 r_1 s_1 pqrs}
[e^{+(it/\hbar)\mathbf{h}}]_{p_1p}
[e^{+(it/\hbar)\mathbf{h}}]_{q_1q} \\\notag
&\times W_{pqrs}
[e^{-(it/\hbar)\mathbf{h}}]_{r_1r}
[e^{-(it/\hbar)\mathbf{h}}]_{s_1s} \\\notag
&\times
p^\dagger_1 q^\dagger_1 s_1 r_{1}.
\end{align}
The above relationship implies the following
expression for the $B$ amplitudes,
\begin{align}
B_{p_1 q_1 r_1 s_1}(t)
&=
\sum_{pqrs}
[e^{+(it/\hbar)\mathbf{h}}]_{p_1p}
[e^{+(it/\hbar)\mathbf{h}}]_{q_1q}
W_{pqrs}
[e^{-(it/\hbar)\mathbf{h}}]_{r_1r}
[e^{-(it/\hbar)\mathbf{h}}]_{s_1s}.
\end{align}
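This four-index transform is a single tensor contraction. The sketch below (illustrative eigenbasis energies and a random placeholder for $W_{pqrs}$) also verifies that, in the eigenbasis of $\mathbf{h}$, the transform reduces to the phase factor $e^{+i(\epsilon_{p_1}+\epsilon_{q_1}-\epsilon_{r_1}-\epsilon_{s_1})t/\hbar}$:

```python
import numpy as np

rng = np.random.default_rng(6)
hbar, t, n = 1.0, 0.7, 4

eps = np.linspace(0.0, 3.0, n)               # orbital energies (eigenbasis of h)
Up = np.diag(np.exp(+1j * t * eps / hbar))   # [e^{+(it/hbar) h}] in the eigenbasis
Um = np.diag(np.exp(-1j * t * eps / hbar))
W = rng.normal(size=(n, n, n, n))            # bare 2-body integrals W_pqrs (placeholder)

# B_{PQRS}(t) = sum_{pqrs} [e^+]_{Pp} [e^+]_{Qq} W_pqrs [e^-]_{Rr} [e^-]_{Ss}
B = np.einsum('Pp,Qq,pqrs,Rr,Ss->PQRS', Up, Up, W, Um, Um)

# in the eigenbasis the transform is a pure phase dressing
phase = np.exp(1j * t * (eps[:, None, None, None] + eps[None, :, None, None]
                         - eps[None, None, :, None] - eps[None, None, None, :]) / hbar)
```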
\section{Results and conclusion}
\label{sec:results}
The main result of this work is the set of
explicit expressions for the
time-dependent transition amplitudes
for the generation of a
2e-2h pair from a 1e-1h pair
for excited states propagating
in time under the influence of an
external electromagnetic field.
Up to second order, the
time-dependent transition amplitude
is given by the following expression,
\begin{align}
I_F(t_f)
&=
I_F^{(0)} t_f
+ \int_0^{t_f} dt_1 \, I_F^{(1)}(t_1)
+ \int_0^{t_f} d t_1 \int_0^{t_1} d t_2 I_F^{(2)}(t_1,t_2).
\end{align}
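Schematically, assembling $I_F(t_f)$ amounts to one 1D quadrature and one time-ordered 2D quadrature over the instantaneous amplitudes. A toy numerical sketch ($\hbar=1$; the model amplitudes below are chosen purely for illustration and are not derived from the formalism):

```python
import numpy as np

# Assemble I_F(t_f) from its orders on a uniform time grid (schematic).
t_f, n = 1.0, 200
ts = np.linspace(0.0, t_f, n)
dt = ts[1] - ts[0]

I0 = 0.1 + 0.2j                                     # time-independent 0th-order amplitude
I1 = lambda t1: 0.05 * np.exp(-1j * 2.0 * t1)       # toy 1st-order amplitude
I2 = lambda t1, t2: 0.01 * np.exp(-1j * (t1 - t2))  # toy 2nd-order amplitude

term0 = I0 * t_f                                    # int_0^{t_f} dt I0
term1 = dt * sum(I1(t) for t in ts[:-1])            # int_0^{t_f} dt1 I1(t1)
term2 = dt ** 2 * sum(I2(ts[i1], ts[i2])            # time-ordered t2 < t1
                      for i1 in range(n) for i2 in range(i1))
I_F = term0 + term1 + term2
P = abs(I_F) ** 2                                   # transition probability
```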
Because of the complexity of the equation, a brute-force
approach for the calculation of this expression is computationally
prohibitive. In this work, we showed that the expressions
for $I_F^{(n)}$ can be separated into a time-dependent
component and time-independent components.
We have derived the expressions for the time-dependent
components and have shown that these quantities
can be expressed in terms of standard matrix-matrix
and tensor-tensor contractions.
The extraction of the time-independent components from the
time-propagation equation presents a
significant computational advantage because the time-independent
components can be evaluated once at the start of the calculation and
reused during the course of the time-dependent calculation.
This strategy dramatically reduces the computational complexity
of performing such calculations.
We have also presented the explicit results from the calculation
of the time-independent quantities (denoted by $\eta$)
in terms of the diagrammatic representation.
\par
One of the key results from this work is the
general treatment of electron correlation
in the derived expressions.
The inclusion of electron-electron
correlation for the excited state
is accomplished through the operator $\Omega$
in Eq.~\ref{eq:correlationOperator}. In the derivation presented
here, we have not imposed any specific form
on the electron-correlation operator.
As a consequence, the set of diagrams
presented in Figs.~\ref{fig:4verta} and ~\ref{fig:4vertb}
is the complete set of diagrams
associated with any form of $\Omega$.
If $\Omega$ is chosen to be an N-body operator
like the full-CI or coupled-cluster wave functions,
all the diagrams presented in Fig.~\ref{fig:5verta} and ~\ref{fig:5vertb} will
contribute to the transition amplitudes.
However, if $\Omega$ is chosen to be
a 2-body operator
only a subset of those diagrams will contribute.
\par
The complexity and computational cost of the evaluation
of the diagrams increase with increasing number
of vertices.
Out of the 3-vertex, 4-vertex, and 5-vertex diagrams,
the 5-vertex diagrams are most expensive to calculate.
In this derivation, we have shown that
a subset of the 5-vertex diagrams can
be factored into pre-existing 3- and 4-vertex diagrams.
We also present a renormalization scheme for
the 5-vertex diagrams by expressing them
as 1-loop and 2-loop contracted effective 4-vertex diagrams.
The renormalization method and the factorization of diagrams
exploit the reusability of pre-computed results and contribute to reducing the overall cost of the calculations.
We envision that the developed method can be used
for the investigation of time-dependent carrier multiplicity
in both semiconductor and organic photoactive systems.
\section{Acknowledgments}
We are grateful to the National Science Foundation (CHE-1349892) and Syracuse University for financial support. AC would also like to thank Prof. Heather Jaeger for insightful discussions about this work.
\section*{Appendix}
\section*{Introduction}
In the past two decades the anomalous Hall effect (AHE) $-$ one of the oldest known manifestations of magnetism in solids $-$ has acquired a major role in testing various new paradigms and phenomena in condensed matter physics~\cite{RMP-AHE}. These include, but are not limited to, the issues related to generation and manipulation of spin currents~\cite{RMP-SHE}, current-induced torques on the magnetization~\cite{Miron,Wadley,RMP-SOT}, electrical detection of topological phases of matter~\cite{Chang2013}, and the emergence of non-collinear spin states~\cite{MacDonald_noncol_2014}.
Although originally explored in ferromagnetic (FM) materials, the AHE has come to occupy a special place in the realm of antiferromagnets (AFMs) as well~\cite{PhysRevLett.87.116801,Libor2018}. While it is well known that in non-coplanar AFMs the AHE
can arise even without spin-orbit interaction, an AHE emerging in collinear AFMs has been discovered only recently~\cite{Libor2020,Libor2020-2}; this crystal Hall effect originates in the symmetry breaking brought about by the non-magnetic cage of atoms via structural chirality~\cite{Libor2020,Kartik,Tsymbal2020}.
The direct relation of the AHE to the geometry and topology of electronic states makes the AHE a natural probe for the emergence of various Berry phase properties, which has become one of the major areas of research in the past years. Here, the AHE is traditionally associated with the reciprocal $k$-space Berry phase of Bloch electrons~\cite{Niu_Berry_2010}, while its relation to the real-space Berry phases of electrons in winding spin structures is reflected in the celebrated topological Hall effect of systems which exhibit non-vanishing scalar spin chirality $\mathbf{S}_i\cdot(\mathbf{S}_j\times \mathbf{S}_k)$ among neighboring triplets of spins, such as skyrmions~\cite{Fabian-2020}. Recently it has been shown that the $k$-space and real-space Berry phases are closely linked in giving rise to the so-called chiral Hall effect of spin textures~\cite{Fabian-2020}. In contrast to the AHE in ferromagnets and the topological Hall effect of skyrmions, the chiral Hall effect is sensitive to the sense of smooth rotation, or chirality, of the magnetization in e.g.\ chiral domain walls~\cite{Fabian-2020}. On the other hand, recent studies show that the effect of spin canting on the electronic structure and the AHE in collinear antiferromagnets can be significant~\cite{Suzuki_2016,Takahashi_2018,HuaChen,Yang_2020}.
In this work we demonstrate the emergence of a distinct flavor of the AHE, which can be prominent both in ferromagnets and antiferromagnets. We show that it arises in
diverse magnetic systems upon imprinting the vector chirality $\mathbf{S}_i\times\mathbf{S}_j$ among pairs of neighboring spins by canting driven by external fields or thermal fluctuations. We demonstrate that, similarly to its twin in the world of smooth textures, the chiral Hall effect is sensitive to the sense of vector chirality exhibited by pairs of frustrated spins.
We theoretically investigate the properties of this phenomenon, show that it can be significant in diverse classes of materials, and demonstrate its clear distinction from the conventional anomalous and topological Hall effects by showing that
it has a profoundly different Berry phase origin.
Importantly, we argue that the inclusion of the chiral Hall effect into the palette of complex phenomena exhibited by ferromagnets and antiferromagnets is indispensable for providing a unified categorization of the Hall effects $-$ a prerequisite for a conclusive read-out of the crystal structure, magnetic order, and dynamics exhibited by complex magnets.
\begin{figure*}[t!]
\includegraphics[width=0.75\hsize]{./Figure1_new.png}
\caption{\label{FIG1} {\bf Sketch of the definition of crystal Hall and chiral Hall effects in canted ferromagnets and antiferromagnets}. Once collinear ferromagnetic or antiferromagnetic order (light yellow arrows in (a) and (b)) is broken by canting with positive ($+\theta$, red arrows) or negative ($-\theta$, blue arrows) sense of vector chirality, the modifications in the electronic structure result in modifications of the anomalous Hall conductivity (AHC), $\sigma_{xy}(\theta)$. The AHC can be decomposed into the crystal Hall (symmetric, $\theta$-even) part, $\sigma_{xy}^s=\left(\sigma_{xy}(+\theta)+\sigma_{xy}(-\theta)\right)/2$, (c), and the chiral Hall (anti-symmetric, $\theta$-odd) part $\sigma_{xy}^a=\left(\sigma_{xy}(+\theta)-\sigma_{xy}(-\theta)\right)/2$, (d). In (c) and (d) the red and blue arrows correspond to the direction of the Hall current for positive and negative chirality in an applied electric field $\mathbf{E}$.}
\end{figure*}
\noindent
\section*{Results}
In this work, we consider the effect of finite vector chirality on the AHE of initially collinear ferro- and antiferromagnetic two-dimensional (2D) systems, which is induced by small canting away from the initial configuration of spins, see (Fig. 1). We concentrate specifically on the case of crystals which comprise two spins in the unit cell, such as a honeycomb lattice
of magnetic atoms, and discuss how our findings can be generalized to the case of several magnetic atom types. Given the original collinear arrangement of spins on sites A and B, $\mathbf{s}_{\rm A}$ and $\mathbf{s}_{\rm B}$, along a certain axis $\hat{\mathbf{s}}_0$, we define a plane which contains this axis as well as spins canted with respect to $\hat{\mathbf{s}}_0$ by an angle $+\theta$ (for $\mathbf{s}_{\rm A}$) and $-\theta$ (for $\mathbf{s}_{\rm B}$).
With this definition, the reversal of sign in the canting angle $\theta\rightarrow -\theta$ provides a state of opposite chirality $\boldsymbol{\chi}$, which we define as $\boldsymbol{\chi}=\mathbf{s}_{\rm A}\times\mathbf{s}_{\rm B}$, with $\chi=|\sin\theta|$, where we assume that the length of the spins does not change upon canting, see (Fig. 1 a,b). In the presence of spin-orbit interaction (SOI) and upon breaking of certain crystalline symmetries, such as inversion symmetry, which is naturally broken upon depositing the 2D magnetic lattice on a surface, the electronic structure of the system with positive chirality
can be different from that with negative chirality.
The canting-driven modifications in the electronic structure inevitably modify the AHE of the system; this aspect is the focus of our work. For the 2D systems considered here, only the $xy$-component of the conductivity tensor, which we denote by $\sigma_{xy}$, encodes the magnitude of the AHE. We consider only the intrinsic part of the AHE as given by the $\mathbf{k}$-dependent Berry curvature of the occupied states, $\Omega_{xy}(\mathbf{k}) =\sum_{n\in {\rm occ}}2\Im \Braket{\partial_{k_x}u_{n\mathbf{k}}|\partial_{k_y}u_{n\mathbf{k}}}$, where the sum runs over occupied states at point $\mathbf{k}$ and $u_{n\mathbf{k}}$ is the lattice-periodic Bloch state $n$. The anomalous Hall conductivity (AHC) is given by the Brillouin zone (BZ) integral $\sigma_{xy}=\int_{\rm BZ} \Omega_{xy}(\mathbf{k})\,d\mathbf{k}$
(see the section Methods for more details).
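In practice, the BZ integral of the Berry curvature is evaluated on a discrete $k$-mesh. The following minimal Python sketch (our own illustration, not the code behind the results shown here) computes the occupied-band contribution for a hypothetical gapped two-band Bloch Hamiltonian, used as a stand-in, via the standard gauge-invariant link-variable discretization:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, m=1.0):
    # hypothetical gapped two-band Bloch Hamiltonian (stand-in model)
    return np.sin(kx)*sx + np.sin(ky)*sy + (m + np.cos(kx) + np.cos(ky))*sz

def lower_state(kx, ky):
    _, v = np.linalg.eigh(H(kx, ky))
    return v[:, 0]                      # occupied (lower) band

def chern_number(N=48):
    """BZ integral of Omega_xy evaluated via gauge-invariant link variables."""
    ks = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
    u = [[lower_state(kx, ky) for ky in ks] for kx in ks]
    C = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # product of wave-function overlaps around one k-mesh plaquette
            loop = (np.vdot(u[i][j],  u[ip][j]) *
                    np.vdot(u[ip][j], u[ip][jp]) *
                    np.vdot(u[ip][jp], u[i][jp]) *
                    np.vdot(u[i][jp], u[i][j]))
            C += np.angle(loop)         # Berry flux through the plaquette
    return C / (2.0*np.pi)              # sigma_xy of the filled band in e^2/h
```

For the gapped stand-in model above, `chern_number()` returns an integer ($\pm1$ for $0<|m|<2$), i.e., $\sigma_{xy}=C\,e^2/h$ when the lower band is filled; the same mesh-based evaluation applies band by band to the four-band model introduced below.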
In order to track the changes in $\sigma_{xy}$ with respect to canting as given by the angle $\theta$,
we introduce two key quantities $-$ the symmetric ($\sigma_{xy}^s$) and antisymmetric ($\sigma_{xy}^a$) parts of the AHC $-$ defined as follows:
\begin{equation}\label{Eq1}
\sigma_{xy}^{s(a)}(\theta) =
\frac{\sigma_{xy}(\theta) \pm \sigma_{xy}(-\theta)}{2} = \int_{\rm BZ} \Omega_{xy}^{s(a)}(\theta,\mathbf{k})\,d\mathbf{k},
\end{equation}
where the symmetric and antisymmetric parts of the Berry curvature are determined at each $\mathbf{k}$-point as $\Omega^{s(a)}_{xy}(\theta,\mathbf{k})=\left[ \Omega_{xy}(\theta,\mathbf{k}) \pm \Omega_{xy}(-\theta,\mathbf{k}) \right]/2$. The dependence of $\Omega_{xy}$ on $\theta$ arises from the dependence on canting of the electronic states whose geometry the Berry curvature measures.
According to its definition, the symmetric AHC has the same value for states of opposite chirality,~i.e.,~it is $\theta$-even: $\sigma_{xy}^{s}(\theta)=\sigma_{xy}^{s}(-\theta)$, see (Fig. 1 c). Since at zero canting the symmetric AHC is given by the AHC of the collinear system, $\sigma_{xy}^{s}(\theta=0)=\sigma_{xy}(\theta=0)=\sigma_{xy}^0$, we will refer to this part of the AHC as the crystal
Hall conductivity, as for collinear AFMs it would correspond to the situation of the crystal Hall effect~\cite{Libor2019}.
In collinear FMs it would correspond to the conventional definition of the ``ferromagnetic" AHE.
On the other hand, the antisymmetric AHC changes sign when $\theta\rightarrow -\theta$,~i.e.,~it is $\theta$-odd: $\sigma_{xy}^{a}(\theta)=-\sigma_{xy}^a(-\theta)$, see (Fig. 1 d), and it vanishes for the collinear configuration. Since this part of the AHC is sensitive to the sense of chirality $\boldsymbol{\chi}$, we refer to it as the chiral Hall conductivity.
This name is further motivated by the fact that the chirality-sensitive Hall effect has been recently discovered in systems where a finite chirality is imprinted by smooth spiral-like deformations of the spin texture~\cite{Fabian-2020}. The chiral Hall effect discussed here presents a version of the latter phenomenon where a specific sense of chirality is generated by lattice-periodic short-wavelength deformations of the spin structure.
By definition, both effects $-$ the crystal Hall and chiral Hall effects $-$ when added together, provide the total AHC of the system: $\sigma_{xy}^{s}(\theta) + \sigma_{xy}^{a}(\theta)=\sigma_{xy}(\theta)$.
However, while the crystal Hall effect picks up even powers of $\theta$ in the Taylor expansion of $\sigma_{xy}(\theta)$ around the collinear state, $\sigma_{xy}^{s}(\theta)=\sigma_{xy}^0 + a\theta^2 + ...$, the chiral Hall effect accumulates the odd terms in this expansion, $\sigma_{xy}^{a}(\theta)=b\theta +c\theta^3 + ...$, where the coefficients $a$, $b$ and $c$ depend on the electronic structure in the collinear state. This tells us that in the limit of small canting (i.e.,~to first order in $\theta$) the deviations of $\sigma_{xy}$ from $\sigma_{xy}^0$ are manifestly chiral in nature. Correspondingly, understanding the properties of the chiral Hall effect is of utmost importance for understanding the behavior of the AHE in collinear magnets whose spins are canted as a result of external electric and magnetic fields, chemical or structural tuning of exchange interactions, or thermal fluctuations.
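This parity bookkeeping can be made explicit with a minimal numerical sketch: for a hypothetical $\sigma_{xy}(\theta)=\sigma_{xy}^0+b\theta+a\theta^2+c\theta^3$ (coefficients chosen arbitrarily, not computed from the model), the decomposition of Eq.~(\ref{Eq1}) recovers exactly the even and odd parts:

```python
# hypothetical angular dependence of the AHC (arbitrary stand-in coefficients):
# sigma_xy(theta) = sigma0 + b*theta + a*theta**2 + c*theta**3
sigma0, a, b, c = 0.5, -0.2, 0.8, 0.1

def sigma_xy(theta):
    return sigma0 + b*theta + a*theta**2 + c*theta**3

def decompose(theta):
    sym  = 0.5 * (sigma_xy(theta) + sigma_xy(-theta))  # crystal (theta-even) part
    asym = 0.5 * (sigma_xy(theta) - sigma_xy(-theta))  # chiral (theta-odd) part
    return sym, asym

th = 0.1
sym, asym = decompose(th)
assert abs(sym  - (sigma0 + a*th**2)) < 1e-12   # even powers only
assert abs(asym - (b*th + c*th**3))   < 1e-12   # odd powers only
```

The symmetric part retains $\sigma_{xy}^0 + a\theta^2$ and the antisymmetric part $b\theta + c\theta^3$, in line with the expansion above.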
\begin{figure*}[ht!]
\includegraphics[width=0.90\hsize]{./Figure2.png}
\caption{\label{FIG2} {\bf The emergence of chiral and crystal Hall effect of ferro- and antiferromagnets on a honeycomb lattice}.
(a) The definition of the angles used to characterize the canted spin structure of spins $\mathbf{s}_{\rm A}$ and $\mathbf{s}_{\rm B}$. The initial direction of collinear magnetization $\hat{\mathbf{s}}_0=(\theta_0,\varphi_0)$ with polar angle $\theta_0$ and azimuthal angle $\varphi_0$ is kept constant during canting, $\hat{\mathbf{s}}_0\sim\mathbf{s}_{\rm A}+\mathbf{s}_{\rm B}$. The spins are canted in the plane of constant $\varphi_0$ by an angle $\theta$ for $\mathbf{s}_{\rm A}$ and $-\theta$ for $\mathbf{s}_{\rm B}$ with respect to $\hat{\mathbf{s}}_0$. (b-c) The changes in the bandstructure of the ferromagnetic (FM) (b) and antiferromagnetic (AFM) (c) spins initially along $\hat{\mathbf{s}}_0=(100^{\circ},10^{\circ})$ upon canting by $\pm 10^{\circ}$. The thin grey line with circles marks the initial bandstructure while the blue and red lines mark the bandstructure for $\theta= 10^{\circ}$ and $\theta=-10^{\circ}$, respectively. (d-e) The corresponding anomalous Hall conductivity (AHC), $\sigma_{xy}$, as a function of the Fermi energy is shown for the FM (d) and AFM (e) cases for positive (solid blue line) and negative (dashed red line) canting. The symmetric, $\sigma_{xy}^s$, and anti-symmetric, $\sigma_{xy}^a$, parts of the AHC are shown with dark orange and dark blue lines. All values are in $e^2/h$, where $e$ is the elementary charge and $h$ is Planck's constant.
(f-k) While for the high-symmetry direction $\hat{\mathbf{s}}_0=(100^{\circ},0^{\circ})$ the symmetry properties of the Berry curvature of the first two bands in the FM case, $\Omega^a(10^{\circ},\mathbf{k})$, lead to a vanishing overall chiral Hall effect, (f), the breaking of symmetry for $\hat{\mathbf{s}}_0=(100^{\circ},10^{\circ})$ results in a net effect, (g). The complex structure of $\Omega^a(10^{\circ},\mathbf{k})$ of the first band from (c) in $\mathbf{k}$-space, (h), is clearly correlated with the separation between the first and second bands in energy, shown in (k). }
\end{figure*}
\noindent
{\bf Model considerations.} We start by considering the existence and properties of the chiral Hall effect on a bi-partite honeycomb lattice of magnetic spins. The effective lattice tight-binding Hamiltonian reads:
\begin{equation}
\begin{split}
H = -t \sum\limits_{\langle ij \rangle\alpha} c_{i\alpha}^\dagger c_{j\alpha}^{\phantom{\dagger}} &+ i \alpha_{\rm R}\sum\limits_{\langle ij \rangle\alpha \beta} \hat{\mathbf e}_z \cdot (\boldsymbol{\sigma} \times {\mathbf d}_{ij})_{\alpha\beta}\, c_{i\alpha}^\dagger c_{j\beta}^{\phantom{\dagger}}\\
&+ \lambda_{\rm ex} \sum_{i\alpha \beta} (\hat{\mathbf s}_i\cdot \boldsymbol{\sigma})_{\alpha\beta}\, c_{i\alpha}^\dagger c_{i\beta}^{\phantom{\dagger}},
\end{split}
\label{eq:model}
\end{equation}
where $c_{i\alpha}^\dagger$ ($c_{i\alpha}^{\vphantom{\dagger}}$) denotes the creation (annihilation) of an electron with spin $\alpha$ at site $i$, $\langle ...\rangle$ restricts the sums to nearest neighbors, the unit vector $\mathbf d_{ij}$ points from $j$ to $i$, and $\boldsymbol{\sigma}$ stands for the vector of Pauli matrices. Besides the hopping with amplitude $t$, Eq.~\eqref{eq:model} contains the Rashba spin-orbit coupling of strength $\alpha_\text{R}$, originating, for example, from the surface potential gradient. The remaining term in Eq.~\eqref{eq:model} is the local exchange term, with $\lambda_{\rm ex}$ characterizing the strength of the exchange splitting and $\hat{\mathbf{s}}_i$ the direction of the spin on site $i$.
Here, we work with the following parameters of the model: $t=1.0$\,eV, $\alpha_{\rm R}=0.4$\,eV, and $\lambda_{\rm ex}=1.4$\,eV. We start with the atomic spins along a given initial direction $\hat{\mathbf{s}}_0=(\theta_0,\varphi_0)$, characterized by the polar angle $\theta_0$ and azimuthal angle $\varphi_0$, see (Fig. 2 a), with $\hat{\mathbf{s}}_{\rm A}$ and $\hat{\mathbf{s}}_{\rm B}$ along $\hat{\mathbf{s}}_0$ for a FM, and with $\hat{\mathbf{s}}_{\rm A}=-\hat{\mathbf{s}}_{\rm B}=\hat{\mathbf{s}}_0$ in the case of an AFM configuration.
Following the symmetry analysis (see Supplementary Note 1), we consider the canting plane which is orthogonal to the $xy$-plane and which contains $\hat{\mathbf{s}}_0$. Within this plane, the azimuthal angle of all spins is constant and the canting is characterized by an angle $\pm\theta$ away from $\hat{\mathbf{s}}_0$ for $\hat{\mathbf{s}}_{\rm A/B}$. A change of sign of $\theta$ corresponds to switching the sign of the chirality among $\hat{\mathbf{s}}_{\rm A}$ and $\hat{\mathbf{s}}_{\rm B}$, (Fig. 2 a).
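To make the setup concrete, the Bloch Hamiltonian of Eq.~(\ref{eq:model}) can be sketched numerically. The following minimal Python sketch is our own illustration; the nearest-neighbour vectors and the Rashba sign/gauge conventions are assumptions that may differ from the conventions behind the figures, but the qualitative chirality dependence is the point. It builds the $4\times4$ Hamiltonian in the sublattice$\,\otimes\,$spin basis and checks that opposite canting $\pm\theta$ yields different band energies at a generic $k$-point:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

t, aR, lam = 1.0, 0.4, 1.4   # hopping, Rashba, exchange (eV), as in the text
# assumed nearest-neighbour vectors from sublattice A to B (NN distance = 1)
deltas = [np.array([0.0, 1.0]),
          np.array([-np.sqrt(3)/2, -0.5]),
          np.array([np.sqrt(3)/2, -0.5])]

def spin(theta, phi):
    return np.array([np.sin(theta)*np.cos(phi),
                     np.sin(theta)*np.sin(phi),
                     np.cos(theta)])

def H_bloch(k, sA, sB):
    """4x4 Bloch Hamiltonian in the (A,B) x (spin) basis (sketch of Eq. (2))."""
    M = np.zeros((2, 2), dtype=complex)
    for d in deltas:
        # hopping plus Rashba term: z . (sigma x d) = sx*d_y - sy*d_x
        M += np.exp(1j*np.dot(k, d)) * (-t*I2 + 1j*aR*(sx*d[1] - sy*d[0]))
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = lam*(sA[0]*sx + sA[1]*sy + sA[2]*sz)   # exchange on sublattice A
    H[2:, 2:] = lam*(sB[0]*sx + sB[1]*sy + sB[2]*sz)   # exchange on sublattice B
    H[:2, 2:] = M
    H[2:, :2] = M.conj().T                             # hermiticity by construction
    return H

# FM state s0 = (100 deg, 10 deg), canted by +/- 10 deg within the phi0-plane
th0, ph0, th = np.radians(100.0), np.radians(10.0), np.radians(10.0)
k = np.array([0.7, 0.3])
Ep = np.linalg.eigvalsh(H_bloch(k, spin(th0+th, ph0), spin(th0-th, ph0)))
Em = np.linalg.eigvalsh(H_bloch(k, spin(th0-th, ph0), spin(th0+th, ph0)))
assert np.max(np.abs(Ep - Em)) > 1e-6   # opposite chirality: different bands
```

The final assertion reflects the band shifts discussed below: once all symmetries are broken, states of opposite vector chirality are not degenerate at a generic $k$-point.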
Before proceeding with the analysis of the AHE, we inspect the influence of chirality on the band structure of the model. To do this, we choose the initial collinear direction of the spins along $\hat{\mathbf{s}}_0=(100^{\circ},10^{\circ})$, which breaks all symmetries in the system. The band structures of the FM and AFM configurations for the collinear case as well as for canting by $\pm 10^{\circ}$ are shown in
(Fig. 2 b) and (c), respectively. The band structure for the FM case with $\hat{\mathbf{s}}_0=(90^{\circ},0^{\circ})$ is known to be gapped at half-filling, where the gap of the system is topologically non-trivial~\cite{Niu_mixed_2019}. Clearly, the canting-driven band dynamics is different for the two opposite chiralities, and the respective band shifts depend sensitively on the structural properties. They can be further separated into contributions which are even and odd in the Rashba strength. Among these, the ones odd in $\alpha_{\rm R}$,~i.e.,~sensitive to the sense of structural chirality, are closely related to the emergence of the Dzyaloshinskii-Moriya interaction between the spins $\mathbf{s}_{\rm A}$ and $\mathbf{s}_{\rm B}$~\cite{Dzyalosinskij_1957,Moriya_1960,Bode}.
In the FM case, the chiral band shifts observed in (Fig. 2) are directly related to the sense of inversion symmetry breaking via the Rashba term in Eq.~(\ref{eq:model}) and the corresponding structural chirality: upon changing the sign $\alpha_{\rm R}\rightarrow -\alpha_{\rm R}$ in the Hamiltonian, the bands of the configurations with opposite chirality simply exchange their energetic positions.
The latter effect can be also understood based on an effective gauge theory, applied recently to the study of orbital magnetism in chiral spin systems~\cite{Lux2018}, where the effect of canting and generally vector chirality was shown to be equivalent to an effect of a fictitious chiral magnetic field $B^{\rm eff}_{\rm R}\sim\boldsymbol{\chi}$, applied to a collinear FM system. Within the interfacial Rashba model it can be shown analytically that $B^{\rm eff}_{\rm R}\sim \alpha_{\rm R}$, implying that $B^{\rm eff}_{\rm R}$ changes sign when the sense of inversion symmetry breaking is reversed. Consequently, the corresponding band shifts of the ferromagnetic electronic states of Hamiltonian~(\ref{eq:model}), a lattice realization of the interfacial Rashba model, change sign.
In ferromagnets with broken inversion symmetry the emergence of non-vanishing chiral magnetic field generated by chiral spin canting goes hand in hand with the rise of the linear-in-chirality contribution to the Hall effect $-$ the chiral Hall effect.
Our analysis clearly reveals that the chiral Hall effect is a general effect appearing not only in smooth textures~\cite{Fabian-2020} but also in the context of canted FMs. In (Fig. 2 d) we show explicit calculations of $\sigma_{xy}$ (for $+\theta$ and $-\theta$ with $\theta=10^{\circ}$), $\sigma_{xy}^s$ and $\sigma_{xy}^a$ for $\hat{\mathbf{s}}_0=(100^{\circ},10^{\circ})$ as a function of band filling of the model.
We observe that the significant dependence of the band structure on the chirality results in a noticeable influence of the chirality on the AHC, mainly close to half-filling. The part symmetric in chirality, $\sigma_{xy}^s$, largely follows the behavior of $\sigma_{xy}^0$ over the whole range of energies, while the behavior of $\sigma_{xy}^a$ is correlated with the fine canting-driven band dynamics, reflected in a complex distribution of the antisymmetric Berry curvature in $k$-space, shown in (Fig. 2 g) for the lowest two bands.
While the latter distribution does not vanish $k$-point-wise for any direction of $\mathbf{s}_0$, except when $\theta=n\pi$, $n\in\mathbb{Z}$, the overall BZ integral of the antisymmetric Berry curvature vanishes, owing to mirror symmetry, for high-symmetry directions of $\mathbf{s}_0$ with $\varphi=n\pi/3$, see,~e.g.,~(Fig. 2 f).
The pronounced chiral Hall effect of the FM model at half-filling is closely related to the topological phase transition occurring for $\hat{\mathbf{s}}_0=(90^{\circ},0^{\circ})$. Here, as the direction of the collinear magnetization passes through the $(xy)$-plane, the quantized Hall conductance of the system changes by $2\frac{e^2}{h}$ in response to the change in the chirality of the Chern insulating state. This topological phase transition is a consequence of the presence of a so-called mixed Weyl point in the electronic structure at $E_F=0$\,eV for the in-plane magnetization~\cite{Hanke_mixed_2017-2}, whose Berry phase nature we discuss later. Correspondingly, energy-resolved calculations of the chiral Hall conductivity as a function of the angle $\theta_0$, presented in (Fig. 3 a), reveal a pronounced and very complex structure of $\sigma_{xy}^a$ next to the mixed Weyl point, which stands in contrast to the relatively smooth behavior of $\sigma_{xy}^s$ in $(\theta_0,E_F)$-space (not shown). On the other hand, the chiral Hall effect exhibits a much stronger response to the canting angle $\theta$ than $\sigma_{xy}^s$: as shown in (Fig. 3 b) for the case of half-filling, while $\sigma_{xy}^s$ changes by about 0.05\,$e^2/h$ for canting angles of up to $10^{\circ}$, in the same range of $\theta$ the corresponding change of $\sigma_{xy}^a$ is larger by an order of magnitude. In accordance with the arguments above, the general trend of $\sigma_{xy}^a$ and $\sigma_{xy}^s$ with $\theta$ is linear and quadratic, respectively, when the canting angle is sufficiently small.
In contrast to the ferromagnet, in the antiferromagnetic case the magnitudes of the crystal and chiral Hall effects are large and comparable, but they manifest themselves in different energy regions, see (Fig. 2 e). The AFM case presents another example of a correlation between the antisymmetric Berry curvature and the electronic structure: as is visible in (Fig. 2 h,k), the emergence of strong features in the Berry curvature of the first band of the model is consistent with the first and second bands coming close to each other in energy at specific points in the BZ.
In analogy to ferromagnets, this gives rise to monopoles of special type which manifest in an enhanced antisymmetric Berry curvature, as discussed below.
In analogy to the FM case considered above, the scaling of the chiral Hall effect with the canting angle can be confirmed to be linear for small $\theta$, see~e.g.~the inset of (Fig. 3 b).
Overall, as we have shown above by explicit calculations, the flavor of the Hall effect linear in spin chirality $-$ the chiral Hall effect $-$ exists and can be prominent both in FMs and AFMs. In the next two sections we uncover the nature of the chiral Hall effect as a phenomenon which can be clearly distinguished from the ``conventional" AHE associated with the change in the overall magnetization of the system. For FMs, the conceptual difference between the two is very clear, as both of the canted states used to arrive at the chiral Hall effect, (Fig. 1 a), share the same overall magnetization. How to draw the distinction for AFMs is less obvious, as the change in chirality in (Fig. 1 b) is associated with a change in sign and magnitude of the overall ``ferromagnetic" magnetization arising upon canting. Below, we formalize the classification of chiral and crystal Hall effects consistently in canted ferro- and antiferromagnets by referring to symmetry arguments.
\noindent
{\bf Symmetry analysis.}
The magnetic order is fully characterized by the staggered field $\vec{n}_-$ and the ferromagnetic field $\vec{n}_+$ which are defined according to $\vec{n}_\pm = \vec{s}_\mathrm{A} \pm \vec{s}_\mathrm{B} $.
The Hall conductivity can thus be decomposed into terms which are even and odd with respect to the interchange of $\hat{\mathbf{n}}_- \to - \hat{\mathbf{n}}_-$, i.e.,
\begin{align}
\sigma_{xy} (\vec{n}_+ ;\vec{n}_- )
& =
\sigma_{xy}^\mathrm{odd} (\vec{n}_+ ;\vec{n}_- )
+
\sigma_{xy}^\mathrm{even} (\vec{n}_+ ;\vec{n}_- ).
\end{align}
The off-diagonal components of the conductivity as they arise from the Berry curvature can be interpreted as the components of an axial vector which is odd under time-reversal.
Each of these terms can thus be further expanded as a sum over all terms which are odd under magnetization reversal:
\begin{align}
\sigma_{xy}^\mathrm{odd} &= \sum_{k,l=0}^\infty (c^\mathrm{odd}_{xy})^i : ( \vec{n}_-^{\otimes 2k+1} \otimes \vec{n}_+^{\otimes 2l} )_i
\label{eq:sigma_expansion_A}
\\
\sigma_{xy}^\mathrm{even} &= \sum_{k,l=0}^\infty (c^\mathrm{even}_{xy})^i : ( \vec{n}_-^{\otimes 2k} \otimes \vec{n}_+^{\otimes 2l+1} )_i,
\label{eq:sigma_expansion_B}
\end{align}
where $:$ denotes the tensor contraction over the multi-index $i=(i_1, \ldots, i_{2(k+l)+1})$ (we refer to Supplementary Note 1 for an explicit example).
This decomposition into odd and even parts also corresponds to the parity under magnetic sublattice interchange, which would leave $\hat{\mathbf{n}}_+$ invariant.
Therefore, the symmetry requirements for these two tensors are quite different.
In order for $\sigma_{xy}^\mathrm{even}$ to be finite, the crystal symmetry needs to support axial tensors of odd order.
\begin{figure}[t!]
\begin{center}
\rotatebox{0}{\includegraphics[width=0.45\textwidth]{Figure3.png}}
\end{center}
\caption{{\bf Properties of the chiral Hall effect}. (a) Behavior of the antisymmetric part of the anomalous Hall conductivity $\sigma_{xy}^a$ at $10^{\circ}$ canting as a function of Fermi energy and direction of collinear ferromagnetic magnetization $\mathbf{s}_0=(\theta_0,10^{\circ})$. While the fine structure of the chiral Hall effect correlates with the band structure dynamics in response to canting and rotation of the initial magnetization, the origin of the effect in the Weyl point at half filling for $\theta_0=90^{\circ}$, serving as a source of staggered mixed Berry curvature, is visible. (b) The scaling of the crystal (orange line) and chiral (violet line) Hall effects with the canting angle $\theta$ at half-filling of the ferromagnetic case $\mathbf{s}_0=(100^{\circ},10^{\circ})$. The inset displays the scaling of the chiral Hall effect with $\theta$ for Fermi energy $E_F=-1.5$\,eV in the antiferromagnetic case with the same $\mathbf{s}_0$.
}
\label{FIG3}
\end{figure}
In particular, the effect is then even under lattice inversion and in our model it is thus necessarily even in the spin-orbit coupling strength $\alpha_{\mathrm{R}}$.
The case is different for $\sigma_{xy}^\mathrm{odd}$, whose tensorial components transform either as axial or as polar tensors, depending on whether or not the symmetry under consideration interchanges the lattice sites: since $P \vec{s}_\mathrm{A/B} = \vec{s}_\mathrm{B/A}$ for the inversion operation $P$, the staggered magnetization behaves as a polar vector for our lattice, i.e., $P \vec{n}_- = - \vec{n}_-$, and not as an axial one like $\vec{n}_+$.
For small values of the spin-orbit strength, $\sigma_{xy}^\mathrm{odd}$ is therefore linear in $\alpha_{\mathrm{R}}$ (generally odd in $\alpha_{\mathrm{R}}$), which is a corollary of the general fact that polar tensors of odd rank vanish identically in centrosymmetric crystal structures, see Table~\ref{tab:categorization}.
While the general expansion in Eqs.~(\ref{eq:sigma_expansion_A}-\ref{eq:sigma_expansion_B}) is in principle complete,
a formulation in terms of the chirality $\boldsymbol{\chi}$ offers a deeper insight into the various effects which can appear in ferro- and antiferromagnets.
Based on the definitions above, the chirality itself can be rewritten as
\begin{equation}
\boldsymbol{\chi} = \vec{s}_\mathrm{A} \times \vec{s}_\mathrm{B} = \frac{1}{2} (
\vec{n}_- \times \vec{n}_+ ),
\end{equation}
which is therefore odd in both $\vec{n}_-$ and $\vec{n}_+$, but even under time-reversal.
Since $\vec{n}_+ \cdot \vec{n}_- = 0$, one has
$
\boldsymbol{\chi} \times \vec{n}_\pm = \mp \| \vec{n}_\pm \|^2 \vec{n}_\mp /2
$.
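These vector identities are straightforward to verify numerically; a short Python check with two random unit-length spins (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

# two unit-length spins (equal lengths guarantee n+ . n- = 0)
s_A, s_B = random_unit(), random_unit()
n_plus, n_minus = s_A + s_B, s_A - s_B
chi = np.cross(s_A, s_B)

# chi = (n- x n+)/2
assert np.allclose(chi, 0.5 * np.cross(n_minus, n_plus))
# n+ . n- = 0 for spins of equal length
assert np.isclose(np.dot(n_plus, n_minus), 0.0)
# chi x n+ = -||n+||^2 n- / 2  and  chi x n- = +||n-||^2 n+ / 2
assert np.allclose(np.cross(chi, n_plus), -0.5 * np.dot(n_plus, n_plus) * n_minus)
assert np.allclose(np.cross(chi, n_minus), 0.5 * np.dot(n_minus, n_minus) * n_plus)
```

The check relies only on $\vec{n}_\pm = \vec{s}_\mathrm{A} \pm \vec{s}_\mathrm{B}$ and the standard triple-product identity; it holds for any pair of spins of equal length.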
Hence, the leading order terms in the expansion of $ \sigma_{xy}^\mathrm{odd}$
and $ \sigma_{xy}^\mathrm{even}$
can be written in two equivalent ways by either replacing all appearing $\vec{n}_-$ or $\vec{n}_+$ factors in terms of chirality, i.e.,
\begin{align}
\sigma_{xy}^\mathrm{odd} &\sim \sum_i \alpha^\mathrm{FM}_i ( \hat{\mathbf{n}}_+) \chi_i =
\sum_{ij} \alpha^\mathrm{AFM}_{ij} ( \hat{\mathbf{n}}_-) \chi_i \chi_j
\\
\sigma_{xy}^\mathrm{even} &\sim \sum_i \beta^\mathrm{AFM}_i ( \hat{\mathbf{n}}_-) \chi_i =
\sum_{ij} \beta^\mathrm{FM}_{ij} ( \hat{\mathbf{n}}_+) \chi_i \chi_j,
\end{align}
where $\alpha^\mathrm{FM}_i$, $\alpha^\mathrm{AFM}_{ij}$ and $\beta^\mathrm{AFM}_i$, $\beta^\mathrm{FM}_{ij}$ are odd under time-reversal.
The choice of the $\alpha$ and $\beta$ coefficients is a matter of convention. In a weakly canted ferromagnet, for example, it is natural to formulate the change in the conductivity as a response to the chirality $\boldsymbol{\chi}$, with coefficients that depend only on the electronic structure of the unperturbed collinear system, which is solely determined by $\hat{\mathbf{n}}_+$.
For a weakly canted antiferromagnet, it makes sense to do the opposite.
This situation is summarized in Table \ref{tab:categorization}.
The chiral Hall effect can now be understood as the effect which accumulates all terms containing an odd number of factors $\chi_i$ relative to the collinear reference state.
To lowest order, these are therefore linear in $\chi_i$ and hence chiral.
This definition corresponds exactly to the way the chiral Hall effect has been defined at the beginning,
and it also corresponds to the diagonal terms in Table~\ref{tab:categorization}. In particular, as we show in Supplementary Note 1, ``topological" terms of the type $\hat{\mathbf{n}}_{+}\cdot\boldsymbol{\chi}$ do not appear explicitly in the expansions of the conductivities above, which allows one to draw a strict line between the chiral Hall effect and the topological Hall effect rooted in the scalar spin chirality. On the other hand,
the crystal Hall effect can be identified with those terms which are even in $\chi_i$ when formulated with respect to the collinear reference state.
For the canted antiferromagnet, this corresponds to the definition given in~Ref.~\cite{Libor2020}, which we extend here to the case of canted ferromagnets.
The lowest order introduced by the canting is thus bichiral,~i.e.,~quadratic in $\chi_i$. This corresponds to the off-diagonal terms in Table~\ref{tab:categorization}, which thus provides a complete characterization of the flavors of the Hall effect in terms of the chirality of the spin structure.
Note that the expansion of $\sigma_{xy}^\mathrm{even}$ in Eq.~(\ref{eq:sigma_expansion_B}) also contains the contribution from the usual anomalous Hall effect, which is the lowest order term proportional to the magnetization $\hat{\mathbf{n}}_+$.
The chiral Hall effect in AFMs and the crystal Hall effect in FMs, while being formally proportional to $\hat{\mathbf{n}}_+$, are different from the conventional AHE contribution as their structure is generally more complex, and the corresponding coefficients in the expansion (\ref{eq:sigma_expansion_B}) depend on the electronic structure in a different way than the usual AHE coefficient.
This is directly reflected in the different Berry phase nature of the two classes of phenomena.
Below, we provide the geometrical theory of the chiral Hall effect, which marks it as a playground for exploring novel types of Berry phases, not accessible in the realm of AHE of collinear magnets.
\noindent
{\bf Berry phase picture of chiral Hall effect.} We show that the chiral Hall effect allows for an elegant interpretation in geometrical terms which relate the geometry of Bloch electronic states in $k$-space with the geometry associated with spin rotations. To do this, we consider a perturbation of the system which is characterized by a parameter $\lambda(\theta)$ corresponding to staggered infinitesimal rotation of spins on two sublattices by an angle $\theta$ around a fixed direction, as defined before. This type of perturbation is distinctly different from that associated with a variation of the total magnetization of a collinear FM system, related to the change in the exchange coupling strength, when treated on the model level.
\begin{table}
\centering
\caption{Unified categorization of the various Hall effects taking place in canted ferromagnets (FM) and antiferromagnets (AFM) as a function of the ferromagnetic/staggered magnetization $\hat{\mathbf{n}}_{+/-}$ and the vector chirality $\vec{\chi}$. Here, $\alpha^\mathrm{FM}_i$, $\beta^\mathrm{FM}_{ij}$ and $\alpha^\mathrm{AFM}_{ij}$, $\beta^\mathrm{AFM}_{i}$ are expansion coefficients with respect to the FM and AFM reference states, respectively. The leading order is linear or quadratic in the Rashba spin-orbit interaction parameter $\alpha_{\rm R}$.}
\begin{tabular}{ccc}
\toprule
$\vec{s}_\mathrm{A} \leftrightarrow \vec{s}_\mathrm{B}$ & Canted ferromagnet & Canted antiferromagnet \\ \midrule
& Chiral Hall Effect & Crystal Hall Effect \\
$\sigma_{xy}^\mathrm{odd}$ & $\alpha^\mathrm{FM}_i ( \hat{\mathbf{n}}_+) \chi_i$ & $\alpha^\mathrm{AFM}_{ij} ( \hat{\mathbf{n}}_-) \chi_i \chi_j$ \\
& $\sim\alpha_{\rm R}$ & $\sim\alpha_{\rm R}$ \\
\midrule
& Crystal Hall Effect & Chiral Hall Effect \\
$\sigma_{xy}^\mathrm{even}$ & $\beta^\mathrm{FM}_{ij} ( \hat{\mathbf{n}}_+) \chi_i \chi_j$ & $\beta^\mathrm{AFM}_{i} ( \hat{\mathbf{n}}_-) \chi_i $\\
& $\sim\alpha_{\rm R}^2$ & $\sim\alpha_{\rm R}^2$ \\
\bottomrule
\end{tabular}
\label{tab:categorization}
\end{table}
We look at the evolution of the $k$-space Berry curvature $\Omega_{xy}$ with $\lambda$, which is ultimately related to the change in the AHC of the system. Namely, we single out the term linear in $\lambda$ by looking at the quantity
$\delta\Omega_{xy} = \lim_{\lambda\rightarrow 0}\partial_\lambda\Omega_{xy}$, which determines the response of the chiral Hall conductivity to an infinitesimal canting,~i.e.,~$\Omega_{xy}^a\approx \theta\cdot\delta\Omega_{xy}$. Using perturbation theory arguments, it can be shown that at zero temperature (omitting the Fermi surface contribution)
\begin{equation}\label{mixed}
\delta\Omega_{xy}
= \Im~\mathrm{tr}_\mathrm{occ}\left(
[\Omega_{xy},\mathcal{A}_{\lambda}
] +
[
\mathcal{Q}_{\lambda x},\mathcal{A}_{y}
] +
[\Omega_{y\lambda},\mathcal{A}_{x}
]\right) / 2
,
\end{equation}
antisymmetrized with respect to the $( x \leftrightarrow y)$ interchange of indices, where $\mathcal{A}_{\alpha}=i\Braket{u_n|\partial_{\alpha}u_m}$ with $\alpha=\{k_x,k_y,\lambda\}$ are the components of the Berry connection, $\mathcal{Q}_{\alpha\beta}=\partial_\alpha \mathcal{A}_\beta + \partial_\beta \mathcal{A}_\alpha$ is a quantity related to the quantum metric tensor~\cite{AA}, and
$\Omega_{x\lambda}=2\Im\Braket{\partial_{k_x}u_n|\partial_{\lambda}u_m}$ is the mixed component of the Berry curvature tensor. Details of the derivation can be found in Supplementary Note 2.
The appearance in Eq.~(\ref{mixed}) of the mixed Berry curvature, which couples the changes in the electronic states with respect to the Bloch vector to their variation in response to chiral $\theta$-canting, is worth noting. We refer to this type of Berry curvature as the staggered mixed Berry curvature, to distinguish it from the type of the mixed Berry curvature which was introduced in the past for the situation where $\lambda$
represents an infinitesimal rotation of the same sense on both atoms, and which corresponds to a coherent rotation of the ferromagnetic or staggered antiferromagnetic magnetization in collinear FMs and AFMs.
The latter type of the Berry curvature was shown to be directly related to the anti-damping spin-orbit torque that an electric field exerts on the collinear magnetization~\cite{Frank-SOT,Hanke_mixed_2017-2,Niu_mixed_2019}. The staggered mixed Berry curvature is thus directly related to the staggered spin-orbit torque, which is able to drive canting in collinear systems and which we discuss at a later point.
In fact, Eq.~(\ref{mixed}) is valid for the type of perturbation which corresponds to a coherent rotation as well, which fundamentally relates the spin-orbit torque to the linear in $\theta$ anisotropy of the anomalous Hall conductivity of the collinear system.
\begin{figure}[t!]
\begin{center}
\rotatebox{0}{\includegraphics[width=0.45\textwidth]{Figure4.png}}
\end{center}
\caption{{\bf Chiral Hall effect and staggered mixed Berry curvature in Mn$_2$Au.} (a) Crystal structure of Mn$_2$Au with Mn atoms in sublattices A and B denoted with red and blue balls, respectively, and Au atoms shown with yellow balls. The canting of spins, initially oriented along $z$, is induced by applying an exchange field along $y$. (b) Fermi energy dependence of the chiral Hall conductivity for different strengths of the exchange field (100\,meV corresponds to 2$^{\circ}$ canting). (c) Band distribution of the $\mathbf{k}$-space Berry curvature $\Omega^n_{zx}$ for electronic states between $+1.0$\,eV and $+2.0$\,eV above the Fermi energy, where the chiral Hall effect is pronounced, for the canting of $+2^{\circ}$. The dashed line indicates the doubly degenerate electronic band structure in the absence of canting. The effect of opposite canting is identical, with the sign of the Berry curvature of each band reversed. (d) Band distribution of the staggered mixed Berry curvature $\Omega^n_{\lambda x}$ for the electronic states without canting, shown with a dashed line in (c). Note that $\Omega^n_{\lambda x}$ is identical for each of the doubly degenerate bands. The correlation between the chiral Hall effect and staggered mixed Berry curvature is evident.}
\label{}
\end{figure}
The uncovered relation between the
anomalous Hall effect and
chiral Hall effect with the mixed and
staggered mixed Berry curvature, respectively,
is not too surprising. This is easiest understood by referring to the magnetic graphene model studied here. For a collinear case, this model exhibits a band degeneracy of the mixed Weyl type~\cite{Hanke_mixed_2017-2} for the in-plane direction of the magnetization, whose non-zero topological charge is determined by integrating the Berry curvature vector field, constructed out of $k$-space and mixed components of the Berry curvature tensor, around it. The two types of Berry curvature in the vicinity of the mixed Weyl point thus become intertwined with each other by non-trivial topology of the mixed Weyl point. The fundamental relation~(\ref{mixed}) is the formal generalization of this rationale to the situation of a general driving parameter $\lambda$. For our FM model, the pronounced chiral Hall effect in the vicinity of the in-plane magnetization (Fig. 3 a), which underlines the staggered mixed nature of the band degeneracy, goes hand in hand with large variation of the collinear AHE and large mixed Berry curvature around the degeneracy point, found in the past~\cite{Hanke_mixed_2017-2}.
The emergence of such staggered mixed Weyl points in the electronic structure correspondingly results in a large response of the AHE to canting, found for instance in Refs.~\cite{Suzuki_2016,Takahashi_2018,Yang_2020}, a large response in terms of the so-called chiral orbital magnetization~\cite{Lux2018,Fabian-2020}, and a large chiral Hall effect, in accordance with our calculations.
\begin{figure*}[ht!]
\begin{center}
\rotatebox{0}{\includegraphics[width=0.93\textwidth]{Figure5.pdf}}
\end{center}
\caption{{\bf Chiral and crystal Hall effect in monolayer of antiferromagnetic SrRuO$_3$ (SRO)}.
(a) Top view of the monolayer with staggered magnetization along $x$. Green, blue and orange spheres mark Sr, Ru and O atoms, respectively, with arrows representing Ru spins. Visible is the octahedral distortion of the oxygen cage surrounding the Ru atoms (rotation in the $xy$-plane and tilt with respect to the $z$-axis).
(b) Band structure of SRO monolayer with spins along $x$ (black line, open circles), and in the canted state with canting angle of $\theta = \pm 5^{\circ}$ in the $xy$-plane with respect to the $x$-axis (green and red lines for positive and negative chirality, respectively). (c) Schematic of the geometrical setup: Canted state considering the canting angle $\theta = \pm 5^{\circ}$ in the plane of the SRO film ($xy$-plane) with respect to the $x$-axis.
(d) Same as in (c) for the $xz$-plane of canting along $z$.
(e-f) Computed anomalous Hall conductivity (AHC) as a function of the Fermi level position in the collinear (along $x$) as well as in the canted state. The corresponding geometrical setup is shown schematically in (c) and (d), respectively. Shaded grey areas correspond to the AHC in the initially collinear state, $\sigma_{xy}^0$, while blue and red lines mark the AHC for positive and negative chirality.
(g-h) The symmetric, $\sigma_{xy}^{s}$ (violet line), and antisymmetric, $\sigma_{xy}^{a}$ (orange line) parts of the AHC are shown on the background of the AHC in the collinear state (shaded area).
While the crystal Hall effect ($\sigma_{xy}^{s}$) of SRO displays little variation with the canting plane, the chiral Hall effect ($\sigma_{xy}^{a}$) is extremely sensitive to the interplay of crystal symmetries and canting.
}
\label{FIG4}
\end{figure*}
\noindent
{\bf Chiral Hall effect in Mn$_2$Au.} To demonstrate the close relation of the chiral Hall effect to the staggered mixed Berry curvature, we consider the example of Mn$_2$Au. We investigate the AFM phase of this material with the spins on two Mn sublattices (A and B) aligned along the $z$-axis, see Fig.~4(a), and compute its electronic structure and transport properties by referring to ab-initio methods. The crystal structure of Mn$_2$Au possesses global inversion symmetry which prohibits the emergence of the crystal Hall effect, in accordance with the symmetry analysis presented above. The collinear AFM state of this system has $PT$-symmetry which results in degeneracy of the bands for collinear spin configuration (black dashed lines in Fig. 4c).
To simulate the effect of canting, we apply an exchange field of various magnitude along $y$, acting on the set of ab-initio Wannier states. This results in canting of spins on two sublattices in the $zy$-plane, Fig.~4a. The magnitude of the exchange field of $\pm 100$\,meV corresponds to about $\pm 2^{\circ}$ of canting away from the $z$-axis. Finite spin canting and corresponding finite chirality break the $PT$-symmetry, which results in lifting of band degeneracies at each $k$-point in the Brillouin zone, as exemplified for the case of $+ 2^{\circ}$ canting in Fig.~4c. Upon canting, each of the split bands acquires a finite $k$-space Berry curvature $\Omega^n_{zx}(\mathbf{k}) =2\Im \Braket{\partial_{k_z}u_{n\mathbf{k}}|\partial_{k_x}u_{n\mathbf{k}}}$, Fig.~4(c), which is purely anti-symmetric in nature: i.e. upon canting of the opposite sense, while the band structure remains intact, $\Omega^n_{zx}(\mathbf{k})$ retains its magnitude but switches its sign. This means that, in the case of Mn$_2$Au, the Hall conductivity, obtained by summing up positive and negative Berry curvature contributions over all bands, Fig.~4c, is manifestly chiral
in that it switches sign with changing the sense of canting. The corresponding computed chiral Hall conductivity, shown as a function of band filling and strength of canting in Fig.~4b, displays a complex structure with pronounced peaks and sizeable magnitude.
To clearly reveal the geometric origin of the chiral Hall effect in Mn$_2$Au along the lines of the Berry phase theory presented above, we calculate the band-resolved contributions to the staggered mixed Berry curvature $\Omega^n_{\lambda x}(\mathbf{k})=2\Im\Braket{\partial_{k_x}u_n|\partial_{\lambda}u_n}$, where $\lambda$ corresponds to staggered canting by angle $\theta$ of the spins on two sublattices in the $zx$-plane. At each $k$-point, the Berry curvature $\Omega^n_{\lambda x}(\mathbf{k})$, calculated in the collinear AFM state and shown in Fig.~4d, has identical values for the pairs of $PT$-symmetric bands, which is in contrast to the mixed Berry curvature corresponding to the coherent rotation of spins: as a result of $PT$ symmetry the mixed Berry curvature and corresponding damping-like spin-orbit torque vanish when summed up over a pair of $PT$-symmetric bands~\cite{Hanke_mixed_2017-2,PhysRevLett.118.106402,PhysRevLett.113.157201}. Consequently, while the non-staggered damping-like torques are inactive in $PT$-symmetric AFMs such as Mn$_2$Au, the staggered damping-like torques~\cite{PhysRevB.89.174430,PhysRevB.73.214426}, for each state proportional to $\Omega^n_{\lambda x}(\mathbf{k})$ but acting in an opposite way on spins in the A and B sublattices, are allowed and can be prominent (see also the Discussion section).
By comparing Fig.~4c and d, we observe a very close correlation between the chiral Hall effect and the staggered mixed Berry curvature. We thus numerically solidify the outcome of Eq.~(\ref{mixed}), which states that large contributions in $\Omega^n_{\lambda x}$ reflect directly on the magnitude of the chiral Hall conductivity. This correlation is particularly prominent in the vicinity of near degeneracies among the bands where large contributions to the staggered mixed Berry curvature and chiral Hall conductivity arise. While such degeneracies in Mn$_2$Au often carry an isolated monopole character, such as e.g. at $+$1.7\,eV around X or at $+$1.4\,eV along $\Gamma$M, they also occur along ``hot'' sheets of whole bands coming close to each other~\cite{PhysRevLett.106.117202}, as is the case for example along AZ, Fig.~4d.
The finding of the relation between the chiral Hall effect and staggered mixed Berry curvature $-$ and thus staggered damping-like spin-orbit torque $-$ is important as it provides a guiding principle for the material design of both phenomena, and allows one to relate observations of the Hall signal to the physics of spin-orbit torques and vice versa.
\noindent
{\bf Chiral Hall effect in SrRuO$_3$.}
We now move on to a specific material example which, upon doping, hosts pronounced crystal and chiral Hall effects at the same time.
Namely, we consider a monolayer of SrO-terminated SrRuO$_3$ (SRO) thin films grown on SrTiO$_3$, comprising two Ru spin moments which are arranged antiferromagnetically in the collinear ground state~\cite{PRL-SRO2020,Xia-PRB2009,Chang-PRL2009,Toyota-MIT2005,Kartik}, with $\hat{\mathbf{s}}_0$ along the $x$-axis in the plane of the film ($xy$-plane), see (Fig. 5 a). In the ground state, the monolayer of SRO exhibits a symmetry breaking associated with rotation and tilts of oxygen octahedra surrounding Ru atoms~\cite{Kartik}.
The band structure of the SRO monolayer around the Fermi energy is dominated by Ru-t$_{2g}$ states. The combined effect of octahedral distortion, SOI and AFM ordering on the Ru-t$_{2g}$ states leads to the formation of a 0.96\,eV gap at the Fermi energy and breaking of degeneracies among the bands present in a symmetric phase of this material, see (Fig. 5 b)~\cite{Kartik}.
The corresponding band splittings are found to be quite prominent around the energies of $-$0.60, $-$0.21 and $+$1.13\,eV, reflecting the strong effect of SOI on the states there, (Fig. 5 b).
Starting from the collinear AFM ground state of the system
we consider a small canting of staggered spins away from the $x$-axis by $\theta= 5^{\circ}$ (chirality ``$+$'') and $\theta= -5^{\circ}$ (chirality ``$-$''), both in the $xy$-plane (i.e. keeping the spins in-plane), as well as in the $xz$-plane (Fig. 5 d), showing the corresponding rearrangements of the bands for the $xy$ canting plane in Fig. 5 b.
The asymmetric effect of the canting on the electronic band structure is most prominent around the energies of $-$0.21, $-$0.60 and $+$1.13 eV, where the effect of SOI is strongest.
Here, depending on chirality and the specific Bloch vector, the initial splitting between the ``collinear'' Ru-states gets several times larger upon canting.
Next, we assess the intrinsic Berry curvature contribution to the AHE in SRO upon canting and compare it to the AHE in the collinear state (see section Methods for more details).
As was shown recently~\cite{Kartik}, in the collinear (along $x$) state considered here the SRO monolayer exhibits a significant crystal Hall effect over wide regions of energy as a result of the breaking of the combined symmetry of time reversal
and translation by half a lattice constant, arising as a consequence of the octahedral distortion.
In addition to the crystal Hall conductivity at zero canting, $\sigma_{xy}^0$, shown in (Fig. 5 e-h) with a shaded area, the canting by 5$^\circ$ with positive and negative chirality induces significant changes to the AHC, irrespective of whether the canting is performed in the $xy$- (Fig. 5 e) or $xz$-plane (Fig. 5 f). Despite a relatively modest effect on the re-distribution of the bands, the effect of small canting on the AHC is especially drastic in the energy regions of $[-0.6,-0.5]$ and $[+1.0,+1.2]$\,eV, where the
magnitude of $\sigma_{xy}^0$ gets significantly enhanced by canting, and its sign depends on chirality.
We decompose the computed AHC of the canted system into symmetric and antisymmetric components, $\sigma_{xy}^s$ and $\sigma_{xy}^a$, presenting the results in (Fig. 5 g,h). We clearly observe that for the small canting angle of 5$^\circ$ the crystal Hall conductivity $\sigma_{xy}^s$ follows the energy-dependence of $\sigma_{xy}^0$ quite closely for both canting planes, which is consistent with the perturbation theory arguments.
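Operationally, this decomposition amounts to the half-sum and half-difference of the conductivities computed for the two chiralities. A minimal sketch (the function name and the toy numbers are our own illustration, not the computed SRO data):

```python
import numpy as np

def decompose_ahc(sigma_plus, sigma_minus):
    """Split the AHC computed for opposite canting chiralities (+theta, -theta)
    into symmetric (crystal) and antisymmetric (chiral) parts."""
    sigma_plus = np.asarray(sigma_plus, dtype=float)
    sigma_minus = np.asarray(sigma_minus, dtype=float)
    sigma_s = 0.5 * (sigma_plus + sigma_minus)  # even under theta -> -theta
    sigma_a = 0.5 * (sigma_plus - sigma_minus)  # odd under theta -> -theta
    return sigma_s, sigma_a

# toy energy scan: sigma_xy(+5 deg) and sigma_xy(-5 deg), units of e^2/h
s_plus = [0.8, 1.4, -0.2]
s_minus = [0.2, -0.6, -0.2]
sigma_s, sigma_a = decompose_ahc(s_plus, s_minus)
# sigma_s = [0.5, 0.4, -0.2] (crystal part), sigma_a = [0.3, 1.0, 0.0] (chiral part)
```

By construction, the two parts recombine to the full conductivity of either chirality, $\sigma_{xy}(\pm\theta)=\sigma_{xy}^s\pm\sigma_{xy}^a$.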
On the other hand, the behavior of the chiral Hall conductivity $\sigma_{xy}^a$ stands in sharp contrast to that of $\sigma_{xy}^s$ and $\sigma_{xy}^0$.
In analogy to Mn$_2$Au, given the smallness of the canting, the magnitude of the chiral Hall effect that we observe appears gigantic, and it can be attributed to near band degeneracies, with the cross-talk among them activated by canting via the staggered mixed Berry curvature mechanism.
While all three types of conductivities originate in the same regions in energy associated with pronounced influence of SOI on the electronic structure, there is no correlation in the sign of $\sigma_{xy}^a$ and $\sigma_{xy}^s$, and the peaks in $\sigma_{xy}^a$ are often not correlated with the sharp features of crystal Hall effect, which is particularly visible for the case of $xz$-canting.
This is consistent with the picture that the states which give rise to the chiral Hall effect and which are sensitive to the canting-driven symmetry breaking are not necessarily associated with the crystal $-$ i.e. ``conventional'' anomalous $-$ Hall effect.
The comparison of $\sigma_{xy}^a$ for the two different canting planes,~(Fig. 5 g,h), reveals the extreme sensitivity of the chiral Hall effect to the crystal symmetry of the lattice.
In this sense, tracking the chiral Hall effect with respect to two independent planes of canting provides us with detailed information on the underlying crystal symmetry without the need to change the ground state direction of the staggered magnetization.
\begin{figure}[t!]
\begin{center}
\rotatebox{0}{\includegraphics[width=0.50\textwidth]{Figure6.pdf}}
\end{center}
\caption{{\bf Chiral magneto-optical effect in monolayer of antiferromagnetic SrRuO$_3$}. (a) Real (top) and imaginary (bottom) part of the magneto-optical conductivity in the collinear state (grey shaded area) as well as its symmetric and antisymmetric parts for 5$^\circ$ spin canting in the $xy$-plane evaluated for the position of the Fermi energy at $E_F=1.05$\,eV. (b) Same as in (a) but for the $xz$-plane of canting evaluated at $E_F=1.01$\,eV. The sketches
depict the canting of the spins in antiferromagnetic SRO monolayer upon an application of an external magnetic field along the $\pm y$-axis (c), and $\pm z$-axis (d).}
\label{FIG5}
\end{figure}
\noindent
{\bf Chiral Magneto-Optical Effect.} Finally, we show that the chiral contributions arise not only in the context of the AHE, but also in the realm of magneto-optical (MO) effects. In order to do this, we numerically evaluate the real and imaginary parts of the magneto-optical conductivity (see Methods section for details) of monolayer SRO, starting from the collinear AFM configuration. We further define the symmetric and antisymmetric parts of the magneto-optical conductivity, $\sigma_{xy}^s(\omega)$ and $\sigma_{xy}^a(\omega)$, by referring to the frequency-dependent version of Eq.~(\ref{Eq1}), upon canting by $5^{\circ}$ of opposite chirality in the $xy$- and $xz$-plane, in analogy to the previous section.
The results of our assessment are presented in Fig. 6, where we have chosen the Fermi energy to be positioned at the peak of the chiral Hall effect for the corresponding rotation plane as shown in (Fig. 6 c,d): at $E_F=1.05$\,eV for $xy$-, and at $E_F=1.01$\,eV for $xz$-plane of canting. Our analysis shows that, in analogy to their~d.c.~versions, the crystal magneto-optical conductivity follows quite closely the frequency distribution of the MO conductivity computed without canting, both in its real and imaginary parts. On the other hand, while the magnitude of chiral MO conductivity remains large over a wide region of frequencies, its structure is often not correlated
with the corresponding behavior of the crystal part of the conductivity in $\omega$: for example, in the case of $xz$-canting the chiral MO conductivity is very prominent on the background of an almost vanishing crystal MO conductivity in the entire range of energies. This marks the two effects as distinct magneto-optical phenomena. The chiral MO effect thus presents a unique tool to track down optically-mediated electronic transitions
which are responsive to the effect of canting. Tracing the chiral contributions to the MO conductivity makes it possible to gain valuable insight into the interplay of electronic structure with crystal symmetry and magnetic order.
\vspace{0.5cm}
\noindent
\section*{Discussion}
In this work, we promote the chiral Hall effect as a new tool to access the properties of ferromagnetic and antiferromagnetic materials. We uncovered that the chiral Hall effect has a qualitatively different Berry phase origin as compared to the conventional AHE. Based on this, we are able to understand how a gigantic chiral Hall effect can be achieved in compensated AFMs even upon a very small canting accompanied by an almost vanishing ferromagnetic component of the magnetization. Addition of the chiral Hall effect to the crystal Hall effect thus allows for drawing a unified map of Hall effects taking place in canted magnets.
As we have seen on the example of SrRuO$_3$, the chiral Hall effect is
sensitive to the details of the crystal structure, depending on the plane of canting. In a realistic situation, given a robust ground state direction of the staggered magnetization $\hat{\mathbf{s}}_0$ in an AFM, which is accompanied by a vanishing or non-vanishing crystal Hall effect, the plane of canting can be straightforwardly controlled by the direction of an externally applied magnetic field $\mathbf{B}$, and the chiral Hall effect can be estimated as the difference in the measured Hall effect between opposite directions of the field, $\mathbf{B}\rightarrow -\mathbf{B}$, see sketches in (Fig. 6). Sweeping the direction of the field in the plane orthogonal to $\hat{\mathbf{s}}_0$ would allow one to reconstruct the angular dependence of the chiral Hall conductivity and determine its nodal points (i.e.~the directions for which it turns to zero), from which the information about the details of the crystal symmetry can be deduced.
On the other hand, the response of the measured signal to the strength of the magnetic field can be used to estimate the magnitude of the Berry curvature response as given by the geometrical theory, Eq.~(\ref{mixed}). The corresponding experimental assessment of the evolution of the chiral magneto-optical conductivity, in combination with the magneto-optical spectra without the field, can be used to reconstruct the exact details of the electronic structure of a given material, especially the energetic position of the states sensitive to canting that host large staggered mixed Berry curvature.
Although the role of the chiral Hall effect in ferromagnets is more difficult to access, since states of opposite chirality are harder to realize than in AFMs (especially in systems with a collinear ground state), the chiral Hall effect, as the dominant contribution to the variation of the AHE upon canting, can contribute strongly to the evolution of the AHE with temperature via the effect of fluctuations. This is easy to understand by realizing that even in collinear ferromagnets with DMI the temperature fluctuations will promote one type of chirality over the other~\cite{Menzel}, which will prevent the opposite contributions to the AHE from the states of opposite chirality from suppressing each other.
The variation of the AHE with temperature $T$ corresponding to the chiral Hall effect is expected to behave qualitatively differently with respect to the temperature-induced magnetization change $\Delta M(T)$, which at low $T$ is proportional to $\theta^2$ with $\theta(T)$ being an effective fluctuations-driven deviation of the local spins from the equilibrium magnetization direction. Indeed, while the conventional theory of the AHE assumes that the variation of the anomalous Hall resistivity with $T$ is proportional to $\Delta M(T)$ and thus to $\theta^2(T)$, the chiral Hall effect imposes a different, linear in $\theta(T)$ behavior. The fingerprints of the chiral Hall effect can be thus uncovered from the scaling analysis of the temperature-dependent Hall measurements in FM materials.
A promising approach to induce canting between collinear spins in a ferromagnetic ground state, and thereby ignite the chiral Hall effect, lies in referring to current-induced staggered spin-orbit torques (SOTs)~\cite{RMP-SOT}, which we have shown above to be closely linked to the microscopics of the chiral Hall conductivity.
Given that an electric field applied to a ferromagnet exerts local torques $\mathbf{T}_{\rm A}$ and $\mathbf{T}_{\rm B}$ on the spins, a crucial distinction can be drawn. While the non-staggered conventional SOT, $\mathbf{T}_+=\mathbf{T}_{\rm A}+\mathbf{T}_{\rm B}$, leads to a coherent magnetization rotation~\cite{RMP-SOT,Frank-SOT,Miron,Wadley}, the staggered component of the SOT~\cite{Jakub}, defined as $\mathbf{T}_-=\mathbf{T}_{\rm A}-\mathbf{T}_{\rm B}$, will additionally attempt to induce a finite canting in the system.
In analogy to $\mathbf{T}_+$~\cite{Frank-SOT}, components of staggered SOT even and odd with respect to $\mathbf{n}_+$, $\mathbf{T}_-^{\rm even}$ and $\mathbf{T}_-^{\rm odd}$, can be distinguished.
In systems with inversion symmetry, staggered polar tensors of even rank and staggered axial tensors of odd rank are forbidden by symmetry, which means that $\mathbf{T}_-^{\rm even}$ and $\mathbf{T}_-^{\rm odd}$ are even in the Rashba strength. However, polar tensors of odd rank and axial tensors of even rank are forbidden by symmetry, and consequently both components of $\mathbf{T}_+(\mathbf{E})$ are odd in $\alpha_{\rm R}$. Therefore, in contrast to the non-staggered SOT, the staggered torques in ferromagnets do not necessarily require broken inversion symmetry. Staggered SOTs can also be used to induce canting in collinear AFMs~\cite{RMP-SOT}, in which case one has to distinguish components which are even and odd with respect to $\mathbf{n}_-$. As $\mathbf{T}_-^{\rm odd}(\mathbf{E})$ is a polar tensor and $\mathbf{T}_-^{\rm even}(\mathbf{E})$ is a staggered axial tensor, it can be shown that
$\mathbf{T}_-^{\rm odd}(\mathbf{E})$ and $\mathbf{T}_-^{\rm even}(\mathbf{E})$ are respectively odd and even in the Rashba strength.
Generally, the interplay of the chiral Hall effect with current-driven phenomena presents an exciting avenue to explore.
By referring to the mechanism of staggered torques, the chiral Hall effect can manifest as a non-linear contribution to the Hall effect, in analogy to the non-linear magnetoresistance effect used to detect the Néel vector reversal in collinear AFMs~\cite{Godinho}.
Besides the relation of the chiral Hall effects to various types of spin-torques born in the system when a current passes through it, the new flavor of the Hall effect should be also intertwined with the phenomenon of current-induced DMI, where the sense and magnitude of canting among spins can be altered upon passing a current through the sample~\cite{Karnad,Hayashi}.
Moreover, the correlation of the chiral Hall effect with the modifications in the electronic structure brought about by an external electric field,~e.g.~in multiferroic materials, should also be profound.
In our work, we have defined the crystal and chiral Hall effects with respect to the staggered ($\mathbf{n}_-$) and ferromagnetic ($\mathbf{n}_+$) components for the system consisting of two spins, which ultimately allowed for a representation in terms of the vector chirality. The generalization of this approach to multi-spin systems, for example Mn$_3$X type of systems~\cite{Nayak2016,Jakub-2017}, B20~\cite{Sergii}, FeMn-type~\cite{Wanxiang-2} or Heusler compounds~\cite{Heusler}, presents an exciting challenge. In the latter cases, the symmetry properties of the anomalous Hall effect can be scrutinized with respect to generalized AFM order parameters. In analogy to this work, different flavors of spin and structural chirality can be singled out, and their role in mediating various contributions to the AHE can be identified. Ultimately, the classification obtained from such an analysis, of which our study presents a starting toy case, could possibly be reinterpreted in terms of quantitative and qualitative multipole theory~\cite{multipoles,Oppeneer}, and its relation to various types of current-induced phenomena, such as spin torques, could be established. We believe this general direction of research to be of fundamental and practical importance to our understanding of chiral magnetism and our ability to detect and control various chiral magnetic phases and their dynamics.
\vspace{0.5cm}
\noindent
\section*{Methods}
\noindent
{\bf Tight-binding model calculation.}
From the tight-binding Hamiltonian the Berry curvature was calculated according to the standard expression
$
\Omega_{n}(\mathbf{k}) = -2\hslash^{2} \sum_{m \neq n} \operatorname{Im}\left[\langle u_{n\mathbf{k}}| \hat{v}_{x}|u_{m\mathbf{k}}\rangle \langle u_{m\mathbf{k}}|\hat{v}_{y}|u_{n\mathbf{k}}\rangle\right]/(\varepsilon_{n\mathbf{k}}-\varepsilon_{m\mathbf{k}})^{2},
$
where $\Omega_{n}(\mathbf{k})$ is the Berry curvature of band $n$, $\hslash\hat{v}_{i} ={\partial \hat{H}(\mathbf{k})}/{\partial k_{i}}$ is the $i$th component of the velocity operator, and $u_{n\mathbf{k}}$ and $\varepsilon_{n\mathbf{k}}$ are the eigenstates and eigenvalues of the Hamiltonian $\hat{H}(\mathbf{k})$, respectively. Between 2 and 4 million $k$-points in the full BZ were used to arrive at well-converged values of the anomalous Hall conductivity (AHC), determined as
$\sigma_{x y}= -\frac{e^{2}}{\hbar} \int_{BZ} \frac{d\mathbf{k}}{(2 \pi)^{2}} \Omega(\mathbf{k})$,
where $\Omega(\mathbf{k})$ is the sum (for each $\mathbf{k}$) of the Berry curvatures of the occupied bands.
All calculations were done at $T=10$\,K.
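The workflow above can be sketched in a few lines. As an illustration we evaluate the same Kubo expression for the Qi-Wu-Zhang two-band Chern-insulator model (our choice of test case, not the hexagonal Rashba model of the main text; $\hbar=e=1$), for which the BZ integral of the occupied-band Berry curvature is quantized:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

M = 1.0  # mass term; the lower band carries |Chern number| = 1 for 0 < M < 2

def h(kx, ky):
    # Qi-Wu-Zhang Hamiltonian on the square lattice
    return np.sin(kx) * sx + np.sin(ky) * sy + (M + np.cos(kx) + np.cos(ky)) * sz

def vx(kx, ky):
    return np.cos(kx) * sx - np.sin(kx) * sz  # dH/dkx

def vy(kx, ky):
    return np.cos(ky) * sy - np.sin(ky) * sz  # dH/dky

def chern_lower(n_k=100):
    # integrate the Kubo-formula Berry curvature of the occupied (lower) band
    ks = 2 * np.pi * (np.arange(n_k) + 0.5) / n_k
    total = 0.0
    for kx in ks:
        for ky in ks:
            e, u = np.linalg.eigh(h(kx, ky))
            num = (u[:, 0].conj() @ vx(kx, ky) @ u[:, 1]) * \
                  (u[:, 1].conj() @ vy(kx, ky) @ u[:, 0])
            total += -2.0 * num.imag / (e[0] - e[1]) ** 2
    return total * (2 * np.pi / n_k) ** 2 / (2 * np.pi)

print(abs(chern_lower()))  # close to 1: the AHC is quantized to e^2/h
```

The quantized result serves as a convergence check of the $k$-mesh, in the same spirit as the 2-4 million $k$-points quoted above for the hexagonal model.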
\noindent
{\bf First-principles calculation of Mn$_2$Au.} The electronic structure of Mn$_2$Au was calculated by using the density functional theory (DFT) code \texttt{FLEUR}~\cite{fleur}, which implements the full-potential linearized augmented plane wave (FLAPW) method. Exchange and correlation effects were included within the generalized gradient approximation (GGA) by using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional~\cite{pbe}. The lattice constants of the tetragonal unit cell were set to $a=6.29a_0$ and $c=16.14 a_0$, where $a_0$ is the Bohr radius. The muffin-tin radii of Mn and Au were chosen to be $2.53 a_0$ for both atoms. The plane-wave cutoff was set to $3.9 a_0^{-1}$, and the BZ was sampled on a $12\times 12\times 12$ Monkhorst-Pack ${k}$-mesh~\cite{Monkhorst-Pack1976}.
To calculate the Berry curvature, we obtained a tight-binding model of Mn$_2$Au by projecting Bloch functions onto 18 initial-guess Wannier functions (WFs) per atom -- $s$, $p$, and $d$ orbitals with spin up and down -- for both Mn and Au atoms, and constructed maximally-localized WFs (MLWFs)~\cite{Frank-WFs,Pizzi_2020}. To induce spin canting, we additionally included an exchange field along $y$ via $H_\mathrm{XC}=(J_\mathrm{XC}/\hbar)S_y$. The Berry curvature shown in Fig.~4(c) was calculated by
\begin{eqnarray}
\Omega_{zx}^n (\mathbf{k})
&=&
-2 \hbar^2
\sum_{m \neq n}
\frac{
\mathrm{Im}
\left[
\bra{u_{n\mathbf{k}}} \hat{v}_z \ket{u_{m\mathbf{k}}}
\bra{u_{m\mathbf{k}}} \hat{v}_x \ket{u_{n\mathbf{k}}}
\right]
}{ (\varepsilon_{n\mathbf{k}} - \varepsilon_{m\mathbf{k}})^2+\eta^2},
\end{eqnarray}
where we set $\eta=25\ \mathrm{meV}$. The AHC shown in Fig.~4(b) was obtained by integrating the Berry curvature over $240\times 240 \times 240$ $k$-points for the occupied states. The staggered mixed Berry curvature shown in Fig.~4(d) was evaluated by
\begin{eqnarray}
\Omega_{\lambda x}^n (\mathbf{k})
&=&
-2 \hbar
\sum_{m \neq n}
\frac{
\mathrm{Im}
\left[
\bra{u_{n\mathbf{k}}} \partial_\lambda \hat{H} \ket{u_{m\mathbf{k}}}
\bra{u_{m\mathbf{k}}} \hat{v}_x \ket{u_{n\mathbf{k}}}
\right]
}{ (\varepsilon_{n\mathbf{k}} - \varepsilon_{m\mathbf{k}})^2+\eta^2},
\end{eqnarray}
where $\lambda$ is defined as the staggered canting angle of the magnetic moments on Mn-A and Mn-B in the $zx$-plane. Note that $\partial_\lambda \hat{H}$ can be expressed through the staggered torque operator
\begin{eqnarray}
\partial_\lambda \hat{H}
&=&
\frac{\partial \hat{H}}{\partial \theta^\mathrm{A}}
-
\frac{\partial \hat{H}}{\partial \theta^\mathrm{B}}
\nonumber
\\
&=&
\frac{1}{i\hbar}
\left[
\hat{S}_y^\mathrm{A} - \hat{S}_y^\mathrm{B}, \hat{H}
\right]
\nonumber
\\
&=&
\hat{T}_y^\mathrm{A}
-\hat{T}_y^\mathrm{B}
\end{eqnarray}
where $\hat{S}_y^\mathrm{A}$ and $\hat{S}_y^\mathrm{B}$ are the spin operators on the Mn-A and Mn-B atoms, respectively.
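This operator identity can be checked numerically for the exchange part of a minimal two-site toy Hamiltonian, for which it holds exactly (all names, parameters, and the restriction to the exchange term are our own illustrative choices): the finite-difference derivative with respect to the staggered canting angle coincides with the commutator form.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
hbar, J = 1.0, 0.7  # toy units and exchange strength

# projectors onto sublattices A and B in a (site x spin) basis
PA, PB = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def n_dot_sigma(theta):
    # exchange field canted by theta away from z in the zx-plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def H(lam):
    # exchange part only, with staggered canting: theta_A = +lam, theta_B = -lam
    return J * (np.kron(PA, n_dot_sigma(+lam)) + np.kron(PB, n_dot_sigma(-lam)))

# finite-difference derivative dH/dlambda at lam = 0
eps = 1e-6
dH_fd = (H(eps) - H(-eps)) / (2 * eps)

# commutator form with the staggered spin operator S_y^A - S_y^B
Sy_stag = np.kron(PA, hbar * sy / 2) - np.kron(PB, hbar * sy / 2)
H0 = H(0.0)
dH_comm = (Sy_stag @ H0 - H0 @ Sy_stag) / (1j * hbar)

print(np.allclose(dH_fd, dH_comm, atol=1e-8))  # True
```

The check confirms that staggered canting in the $zx$-plane is generated by the staggered $y$-spin operator, which is the content of the equation above.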
\noindent
{\bf First-principles calculation of SrRuO$_3$.} DFT calculations were carried out with the FLAPW method as implemented in the film version of the \texttt{FLEUR} code~\cite{fleur}.
Using the relaxed atomic positions of the SRO monolayer, the electronic structure was calculated for different spin-canting configurations.
For the self-consistent calculations with the LAPW basis set, a plane-wave cutoff of $k_{max}= 4.2a_0^{-1}$ and a total of 24$\times$24 $k$-points in the BZ were used for the convergence of the charge density.
The muffin-tin radii for Sr, Ru, O were set to \SI{2.80}{\au}, \SI{2.32}{\au}, and \SI{1.31}{\au}, respectively.
We used the PBE~\cite{pbe} exchange-correlation functional within the GGA. The electron-electron correlation effects beyond GGA at the magnetic Ru ions were taken into account by referring to the GGA+$U$ method as implemented in the SPEX code~\cite{spex1}, resulting in a Coulomb interaction strength of $U=2.52$\,eV and an intra-atomic exchange interaction strength of $J=0.44$\,eV.
To compute the Berry curvature, we first constructed a tight-binding Hamiltonian in terms of maximally-localized Wannier functions
projected from the GGA+$U$+SOC\,[100] states using atomic-orbital-like Ru-t$_{2g}$ and Ru-e$_{g}$ states as initial guess~\cite{Frank-WFs,Pizzi_2020}. From this Hamiltonian the Berry curvature is calculated on a $50\times 50$ $k$-mesh employing an adaptive $5\times 5$ refinement scheme~\cite{Wang-Souza} at points where the value of the Berry curvature exceeded 50\,a.u. These numerical parameters provided well-converged values of the anomalous Hall conductivity.
The magneto-optical conductivity was calculated using the Kubo expression
\begin{equation}
\begin{aligned}
\sigma_{x y}(\omega)=& \hbar e^{2} \int \frac{d \mathbf{k}}{(2 \pi)^{2}} \sum_{n \neq m}\left(f_{n \mathbf{k}}-f_{m \mathbf{k}}\right) \\
& \times \frac{\operatorname{Im}\left[\left\langle u_{n \mathbf{k}}\left|\hat{v}_{x}\right| u_{m \mathbf{k}}\right\rangle\left\langle u_{m \mathbf{k}}\left|\hat{v}_{y}\right| u_{n \mathbf{k}}\right\rangle\right]}{\left(\varepsilon_{n \mathbf{k}}-\varepsilon_{m \mathbf{k}}\right)^{2}-(\hbar \omega+i \eta)^{2}},
\end{aligned}
\end{equation}
where $\hbar\omega$ is the frequency of the applied electric field, and $\eta$ is a material-dependent broadening parameter. For the calculations presented in Fig.~5 we used $\eta=10$\,meV.
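To illustrate how this Kubo expression is evaluated in practice, the following sketch applies the same formula to a hypothetical two-band model, $H(\mathbf{k}) = k_x\sigma_x + k_y\sigma_y + m\sigma_z$, rather than to the Wannier Hamiltonian of the manuscript; the model, mesh, cutoff, and units ($\hbar=e=1$, $\mu=0$) are illustrative assumptions. In the static limit the magnitude should approach the half-quantized value $1/4\pi$ expected for a single gapped Dirac cone.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sigma_xy(omega, m=0.5, eta=0.01, nk=121, kmax=4.0):
    """Kubo Hall conductivity of H(k) = kx*sx + ky*sy + m*sz,
    integrated over a square momentum patch (hbar = e = 1, mu = 0)."""
    ks = np.linspace(-kmax, kmax, nk)
    dk = ks[1] - ks[0]
    total = 0.0 + 0.0j
    for kx in ks:
        for ky in ks:
            eps, U = np.linalg.eigh(kx*sx + ky*sy + m*sz)
            vx = U.conj().T @ sx @ U          # velocity matrix elements
            vy = U.conj().T @ sy @ U
            f = (eps < 0).astype(float)       # zero-temperature occupations
            for n in range(2):
                for mm in range(2):
                    if n != mm:
                        num = np.imag(vx[n, mm] * vy[mm, n])
                        den = (eps[n] - eps[mm])**2 - (omega + 1j*eta)**2
                        total += (f[n] - f[mm]) * num / den
    return total * dk**2 / (2*np.pi)**2

s0 = sigma_xy(0.0)
print("sigma_xy(omega=0) =", s0.real)
```

The few-percent deviation from $1/4\pi$ comes from the finite momentum cutoff of the toy model.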
\vspace{0.5cm}
\noindent
\section*{Acknowledgements}
We thank Libor \v{S}mejkal for extensive discussions on the subject. We acknowledge funding under SPP 2137 ``Skyrmionics'' of the DFG.
We gratefully acknowledge financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant No. 856538, project ``3D MAGiC'').
We also gratefully acknowledge the J\"ulich Supercomputing Centre and RWTH Aachen University for providing computational resources under project Nos. jiff40 and jpgi11.
The work was also supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) $-$ TRR 173 $-$ 268565370 (project A11), TRR 288 $-$ 422213477 (project B06), and project MO 1731/10-1 of the DFG. We also acknowledge funding under the HGF-RSF Joint Research Group ``TOPOMANN''.
\section*{Competing Interests}
The authors declare no competing interests.
\section*{Data Availability Statement}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\hbadness=99999
\bibliographystyle{naturemag}
\section{Symmetry analysis of the crystal and chiral Hall effect for the hexagonal magnetic Rashba model}
The Rashba model on the hexagonal lattice which is studied in the manuscript belongs to the crystallographic point group $C_{6v}$ in the non-magnetic case.
When the system is magnetic, it is characterized by the respective magnetization vectors $\vec{s}_{i}$ at each of the sites $i=A,B$.
The direction-cosine expansion in the manuscript is expressed in terms of $\vec{n}_\pm = \vec{s}_\mathrm{A} \pm \vec{s}_\mathrm{B}$, i.e.,
\begin{align}
\sigma_{xy}^\mathrm{odd} &= \sum_{k,l=0}^\infty (c^\mathrm{odd}_{xy})^i : ( \vec{n}_-^{\otimes 2k+1} \otimes \vec{n}_+^{\otimes 2l} )_i
\label{eq:sigma_expansion_A}
\\
\sigma_{xy}^\mathrm{even} &= \sum_{k,l=0}^\infty (c^\mathrm{even}_{xy})^i : ( \vec{n}_-^{\otimes 2k} \otimes \vec{n}_+^{\otimes 2l+1} )_i .
\label{eq:sigma_expansion_B}
\end{align}
Accordingly, the lowest orders are given by the general expansions
\begin{align}
\sigma_{xy}^\mathrm{odd} &=
(c^\mathrm{odd}_{xy})^i (n_-)_i + (c^\mathrm{odd}_{xy})^{ikl} (n_-)_i (n_+)_k (n_+)_l + \mathcal{O}( n^5)
\label{eq:odd_trafo}
\\
\sigma_{xy}^\mathrm{even} &=
(c^\mathrm{even}_{xy})^i (n_+)_i + (c^\mathrm{even}_{xy})^{ikl} (n_+)_i (n_-)_k (n_-)_l + \mathcal{O}( n^5) .
\label{eq:even_trafo}
\end{align}
If a physical tensor $T$ of rank $n=k+l$ connects $k$ polar and $l$ axial vectors, its transformation law under a symmetry transformation $S$ is usually given by
\begin{equation}
T^{p_1 \cdots p_k a_1 \cdots a_l} ~~\rightarrow~~
T^{p_1 \cdots p_k a_1 \cdots a_l} |S|^l
S^{\tilde{p}_1}_{p_1}\cdots S^{\tilde{p}_k}_{p_k}
S^{\tilde{a}_1}_{a_1}\cdots S^{\tilde{a}_l}_{a_l},
\end{equation}
where $|S| = \det S$.
Whether $\hat{\mathbf{n}}_\pm$ transforms as a polar or an axial vector depends, however, on the position of the magnetic atoms and on which symmetry transformations interchange the lattice sites.
For example, while $P \hat{\mathbf{n}}_{A,B} = \hat{\mathbf{n}}_{B,A}$, one has $P \hat{\mathbf{n}}_{-} = - \hat{\mathbf{n}}_{-}$ if $P$ interchanges $A$ and $B$, which would be the case for magnetic atoms on the hexagonal lattice.
In other words, $\hat{\mathbf{n}}_{-}$ behaves as a polar vector under $P$ while $\hat{\mathbf{n}}_+$ behaves as an axial vector.
Taking the possible interchange of magnetic lattice sites into account for an arbitrary symmetry transformation $S$, the transformation law is generalized to
\begin{equation}
T^{i_1 \cdots i_n } ~~\rightarrow~~
T^{i_1 \cdots i_n} \mathrm{swap}(S)^\eta |S|^\epsilon
S^{\tilde{i}_1}_{i_1}\cdots S^{\tilde{i}_n}_{i_n},
\end{equation}
where $\mathrm{swap}(S) = -1$, if $S$ interchanges the lattice sites $A$ and $B$ and $\mathrm{swap}(S) = 1$ otherwise.
As before, a tensor with $\epsilon = 0$ is called polar and with $\epsilon = 1$ it is axial.
If $\eta = 1$ instead of $\eta=0$, the tensor is called \emph{staggered}, since it is built to compensate for the coupling to the staggered field $n_{-}$ and its unusual behavior under symmetry transformations.
In this terminology, the coefficients of $\sigma_{xy}^\mathrm{odd}$ qualify as staggered axial tensors, while the coefficients of $\sigma_{xy}^\mathrm{even}$ belong to an ordinary axial tensor.
This fixes their transformation behavior and one finds
\begin{align}
\frac{\sigma_{xy}^\mathrm{odd} - \sigma_{yx}^\mathrm{odd}}{2} & \in \mathrm{span} \lbrace
n_{-}^y n_{+}^y n_{+}^x + n_{-}^y n_{+}^x n_{+}^y
+ n_{-}^x n_{+}^y n_{+}^y - n_{-}^x n_{+}^x n_{+}^x
\rbrace
~ + \mathcal{O}( n^5) \\
\frac{\sigma_{xy}^\mathrm{even} - \sigma_{yx}^\mathrm{even}}{2} &\in \mathrm{span} \lbrace
n_{+}^z,
n_{-}^z n_{-}^x n_{+}^x + n_{-}^z n_{-}^y n_{+}^y, ~
n_{-}^x n_{-}^z n_{+}^x + n_{-}^y n_{-}^z n_{+}^y, ~
n_{-}^x n_{-}^x n_{+}^z + n_{-}^y n_{-}^y n_{+}^z
\rbrace~ + \mathcal{O}( n^5) ,
\end{align}
for the point group $C_{6v}$ (the symmetry of the Rashba model on the hexagonal lattice as presented in the manuscript), where the $x$-direction is defined as the axis which contains both atoms of the unit cell.
Since $\vec{n}_+ \cdot \vec{n}_- = 0$, one has
$
\boldsymbol{\chi} \times \vec{n}_\pm = \mp \| \vec{n}_\pm \|^2 \vec{n}_\mp /2$, which can be used to introduce the chirality to these results.
One finds
\begin{align}
n_{-}^y n_{+}^y n_{+}^x + n_{-}^y n_{+}^x n_{+}^y
+ n_{-}^x n_{+}^y n_{+}^y - n_{-}^x n_{+}^x n_{+}^x
= -\frac{2}{\| \mathbf{n}_{+} \|^2} \begin{pmatrix}
- 2n_{+}^x n_{+}^y n_{+}^z\\
( (n_{+}^y)^2-(n_{+}^x)^2) n_{+}^z \\
3 (n_{+}^x)^2 n_{+}^y - (n_{+}^y)^3
\end{pmatrix} \cdot \boldsymbol{\chi}.
\end{align}
This result can be recast into the form
\begin{equation}
\frac{\sigma_{xy}^\mathrm{odd} - \sigma_{yx}^\mathrm{odd}}{2}
\propto \vec{n}_{+}^T \Xi~ \boldsymbol{\chi} + \mathcal{O}(\chi^3),
\end{equation}
with the angular-dependent coupling matrix
\begin{equation}
\Xi
= -2
\begin{pmatrix}
- 2 \hat{n}_{+}^y \hat{n}_{+}^z & 0 & 0 \\
0 & 0 & ( 3 (\hat{n}_{+}^x)^2 - (\hat{n}_{+}^y)^2 ) \\
0 & ( (\hat{n}_{+}^y)^2-(\hat{n}_{+}^x)^2) & 0
\end{pmatrix},
\end{equation}
and where $\hat{\mathbf{n}}_{+} = \vec{n}_{+} / \| \vec{n}_+ \|$.
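These algebraic steps can be verified numerically. The sketch below draws random unit spins $\vec{s}_\mathrm{A}$, $\vec{s}_\mathrm{B}$, assumes the definition $\boldsymbol{\chi} = \vec{s}_\mathrm{A}\times\vec{s}_\mathrm{B}$ (not spelled out in this excerpt), and checks both $\boldsymbol{\chi}\times\vec{n}_\pm = \mp\|\vec{n}_\pm\|^2\vec{n}_\mp/2$ and the rewriting of the cubic invariant as the anisotropic coupling to $\boldsymbol{\chi}$:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_unit():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

err_cross = err_cubic = 0.0
for _ in range(200):
    sA, sB = random_unit(), random_unit()
    n_p, n_m = sA + sB, sA - sB          # n_+ and n_-
    chi = np.cross(sA, sB)               # vector spin chirality (assumed s_A x s_B)
    N2 = n_p @ n_p
    if N2 < 1e-3:                        # skip nearly antiparallel pairs
        continue
    # chi x n_+ = -|n_+|^2 n_-/2   and   chi x n_- = +|n_-|^2 n_+/2
    err_cross = max(err_cross,
                    np.abs(np.cross(chi, n_p) + 0.5*N2*n_m).max(),
                    np.abs(np.cross(chi, n_m) - 0.5*(n_m @ n_m)*n_p).max())
    # the cubic invariant equals -(2/|n_+|^2) * v . chi
    lhs = 2*n_m[1]*n_p[0]*n_p[1] + n_m[0]*(n_p[1]**2 - n_p[0]**2)
    v = np.array([-2*n_p[0]*n_p[1]*n_p[2],
                  (n_p[1]**2 - n_p[0]**2)*n_p[2],
                  3*n_p[0]**2*n_p[1] - n_p[1]**3])
    err_cubic = max(err_cubic, abs(lhs + (2.0/N2)*(v @ chi)))
print(err_cross, err_cubic)
```

Both residuals stay at machine precision, confirming that the identities hold for arbitrary (non-collinear) spin configurations.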
In analogy to the topological Hall effect, one might expect the emergence of an isotropic coupling of the form $\vec{n}_+ \cdot \boldsymbol{\chi}$, known as the \emph{scalar spin chirality}.
For the magnetic Rashba model however, this quantity is always zero, i.e., $\vec{n}_+ \cdot \boldsymbol{\chi}=0$.
A finite effect can still arise through anisotropic deviations from this isotropic coupling which are ultimately induced by the spin-orbit interaction.
The symmetry of the odd tensor can be visualized by applying the $\theta$-canting procedure as defined in Fig. 2a) of the manuscript.
For $\vec{n}_{-} = \mathrm{const}$, Fig.~\ref{fig:symmetry_visualization}a) explores the angular dependence of the tensor by varying the spherical angles of $\vec{n}_{+}$ and plotting the magnitude of $(\sigma_{xy}^\mathrm{odd}(\theta)-\sigma_{xy}^\mathrm{odd}(-\theta) )/2$ as a parametric surface.
The result reveals that the effect vanishes when $\vec{n}_{+}$ is contained in the $yz$ mirror plane, i.e., when $n_+^x = 0$.
Likewise, the $\theta$-canting is ineffective when $n^z_+ = 0$.
A dual situation occurs if the canting is not induced via the polar angle $\theta$, but via the azimuthal angle $\phi$.
The parametric surface which corresponds to $(\sigma_{xy}^\mathrm{odd}(\phi)-\sigma_{xy}^\mathrm{odd}(-\phi) )/2$ is shown in Fig.~\ref{fig:symmetry_visualization}b).
In this case, the effect is largest when $n^z_+ = 0$ and it vanishes when $\vec{n}_+$ is contained in the $xz$ mirror plane.
\begin{figure}[t]
\centering
\begin{subfigure}{6cm}
\centering
\includegraphics[width=\linewidth]{theta_canting.png}
\caption{\small Canting with respect to the $\theta$-angle}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{6cm}
\centering
\includegraphics[width=\linewidth]{phi_canting.png}
\caption{\small Canting with respect to the $\phi$-angle}
\label{fig:sub2}
\end{subfigure}
\caption{\small Symmetry analysis of the chiral Hall effect in ferromagnets. The two figures show the parametric surface of the antisymmetrized contributions to $\sigma_{xy}^\mathrm{odd}$, parametrized by the direction of the ferromagnetic order parameter $\vec{n}_{+}$.
The antiferromagnetic order parameter is induced by canting the ferromagnetic state either with respect to the polar angle $\theta$, or the azimuthal angle $\phi$.
The $x$-axis is defined to contain the two atoms of the unit cell.
\hspace*{\fill}
}
\label{fig:symmetry_visualization}
\end{figure}
\subsection{Influence of mirror planes}
Since the mirror planes seem to define an important characteristic of the chiral Hall effect on the hexagonal lattice, it is worthwhile to study in detail how they enter the symmetry analysis.
In a first step, one can define the auxiliary variables
\begin{equation}
m_i = \left\lbrace
\begin{array}{cc}
-1,& \text{if}~ i = y \\
1,& \text{otherwise}.
\end{array}\right. ,
\end{equation}
which are used to specify the $xz$ mirror operator, given by $(M)^i_j = \delta^i_j m_i$.
Therefore $| M | = -1$.
The transformation law for the coefficients in Eq.~(\ref{eq:odd_trafo}) and Eq.~(\ref{eq:even_trafo}) now reads
\begin{align}
c_{xy}^{i_1 \cdots i_n } \overset{!}{=}
c_{xy}^{ i_1 \cdots i_n} \mathrm{swap}(M)^\eta |M|^\epsilon m_x m_y
\prod_i m_i.
\end{align}
This means that $c_{xy}^{i_1 \cdots i_n }=0$, if
\begin{align}
\mathrm{swap}(M)^\eta |M|^\epsilon m_x m_y
\prod_i m_i &= -1.
\end{align}
Since the two atoms lie along the $x$-axis, $M$ does not swap them, i.e., $\mathrm{swap}(M)=1$.
The criterion above then reduces to
\begin{align}
\prod_i m_i &= -1
\end{align}
Couplings to an odd number of $n^y_\pm$ components are therefore forbidden by the mirror symmetry; only an even number of $n^y_\pm$ factors is allowed.
Considering instead the $yz$ mirror plane amounts to the redefinition
\begin{equation}
m_i = \left\lbrace
\begin{array}{cc}
-1,& \text{if}~ i = x \\
1,& \text{otherwise}.
\end{array}\right.
\end{equation}
In this case, however, one finds $\mathrm{swap}(M)=-1$,
and tensor elements for which
\begin{align}
(-1)^\eta \prod_i m_i &= -1,
\end{align}
are zero. If the tensor is staggered (odd in $\hat{\mathbf{n}}_{-}$), one has $\eta = 1$ and couplings to an even number of $n_\pm^x$ factors are forbidden.
On the other hand, if the tensor is not staggered (even in $\hat{\mathbf{n}}_{-}$), couplings to an odd number of $n_\pm^x$ factors are forbidden.
Consider now the case where $n_+^x = 0$. A finite staggered tensor can then only arise from an odd number of $n_{-}^x$ couplings.
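Both selection rules can be cross-checked by brute force: enumerate all rank-3 coefficient indices and keep those for which the invariance condition holds. The snippet below is a sketch of this bookkeeping for the staggered axial coefficients ($\eta=\epsilon=1$) of $\sigma_{xy}^\mathrm{odd}$:

```python
from itertools import product

def survives(indices, flipped_axis, swaps_sites, eta=1, epsilon=1):
    """Invariance condition  c = c * swap(M)^eta * det(M)^epsilon
    * m_x * m_y * prod_i m_i  for a mirror flipping `flipped_axis`."""
    m = {ax: (-1 if ax == flipped_axis else 1) for ax in "xyz"}
    sign = ((-1 if swaps_sites else 1)**eta) * ((-1)**epsilon)   # det M = -1
    sign *= m["x"] * m["y"]
    for i in indices:
        sign *= m[i]
    return sign == 1            # otherwise the coefficient must vanish

# xz mirror: flips y, leaves the two atoms on the x-axis in place
xz = [idx for idx in product("xyz", repeat=3)
      if survives(idx, "y", swaps_sites=False)]
# yz mirror: flips x and interchanges the A and B sites
yz = [idx for idx in product("xyz", repeat=3)
      if survives(idx, "x", swaps_sites=True)]

even_y = all(idx.count("y") % 2 == 0 for idx in xz)
odd_x = all(idx.count("x") % 2 == 1 for idx in yz)
print(len(xz), even_y, len(yz), odd_x)
```

For the $xz$ mirror only even powers of $y$ survive, and for the site-swapping $yz$ mirror only odd powers of $x$ do, in line with the two rules derived above.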
\section{Linear perturbation of the Berry curvature}
The goal of this section is to present a derivation of the first-order derivative of the Berry curvature, as presented in the manuscript, in a manifestly gauge-invariant way.
Consider a continuous partition of unity by a complete set of states:
\begin{equation}
\mathrm{id} = \sum_{n} \ket{n(\boldsymbol{\lambda})}\bra{n(\boldsymbol{\lambda})}.
\end{equation}
It is parametrized by a vector $\boldsymbol{\lambda}$ belonging to the phase space $\mathcal{M}$, representing for example the momentum in the first Brillouin zone and any additional adiabatic real-space coordinates.
This parametrization is inherited from the Hamiltonian, which can be considered as a map from $\mathcal{M}$ to the space of bounded operators on the Hilbert space $\mathcal{H}$:
\begin{align}
H\colon &\mathcal{M}\to \mathcal{B}(\mathcal{H})\\
&\boldsymbol{\lambda} \mapsto H_{\boldsymbol{\lambda}} .
\end{align}
We consider the more general class of these maps which form the space $C^\infty(\mathcal{M};\mathcal{B}(\mathcal{H}) ) $ of smooth phase space operators.
Given such an operator $\hat{o}$, we can differentiate it with respect to the phase space coordinates.
By simultaneously expanding in terms of the instantaneous basis set, one obtains a notion of covariant differentiation:
\begin{align}
\partial_i \hat{o}_{\boldsymbol{\lambda}} &=
\partial_i \sum_{nm} \hat{o}_{\boldsymbol{\lambda}}^{nm} \ket{n}\bra{m}
\notag \\ & =
\sum_{nm} ( \partial_i\hat{o}_{\boldsymbol{\lambda}}^{nm} ) \ket{n}\bra{m}
+ \hat{o}_{\boldsymbol{\lambda}}^{nm} \big(\partial_i \ket{n}\big)\bra{m}
+\hat{o}_{\boldsymbol{\lambda}}^{nm} \ket{n}\big(\partial_i \bra{m} \big)
\notag \\ & \equiv \nabla_i \hat{o}_{\boldsymbol{\lambda}} - i [ \mathcal{A}_i, \hat{o}_{\boldsymbol{\lambda}}] .
\end{align}
Here, we have omitted the explicit notation for the parameter dependence of the states for simplicity.
Further, we introduced the non-abelian Wilczek--Zee connection
\begin{equation}
\mathcal{A}_i^{nm} \equiv i \braket{n | \partial_i | m}.
\end{equation}
Using the adjoint representation, this result can be concisely expressed as
\begin{equation}
\partial_i = \nabla_i - i \mathfrak{A}_i,
\end{equation}
where $\mathfrak{A}_i = \mathrm{ad}~ \mathcal{A}_i = [\mathcal{A}_i, \bullet] $.
By construction, the associated curvature is flat.
The reason is that the partial derivatives commute, i.e., one has $[\partial_i, \partial_j] \hat{o}_{\boldsymbol{\lambda}} = 0$.
Nevertheless, we can construct it as
\begin{align}
\mathfrak{F}_{ij} &= i [ \partial_i, \partial_j]
\notag \\
& =i [ \nabla_i - i \mathfrak{A}_i, \nabla_j - i \mathfrak{A}_j]
\notag \\
& = [ \nabla_i , \mathfrak{A}_j] - [ \nabla_j , \mathfrak{A}_i] - i [ \mathfrak{A}_i , \mathfrak{A}_j]
\notag \\
& = \nabla_i \mathfrak{A}_j - \nabla_j \mathfrak{A}_i - i [ \mathfrak{A}_i , \mathfrak{A}_j] ,
\end{align}
which is the adjoint representation of
\begin{align}
\mathcal{F}_{ij} = \nabla_i \mathcal{A}_j - \nabla_j \mathcal{A}_i - i [\mathcal{A}_i, \mathcal{A}_j]
= \partial_i \mathcal{A}_j - \partial_j \mathcal{A}_i + i [\mathcal{A}_i, \mathcal{A}_j] = 0 .
\end{align}
Since the right-hand side evaluates to zero for the flat connection, one has the two identities
\begin{align}
\nabla_i \mathcal{A}_j - \nabla_j \mathcal{A}_i & = +i [\mathcal{A}_i, \mathcal{A}_j]
\\
\partial_i \mathcal{A}_j - \partial_j \mathcal{A}_i & = - i [\mathcal{A}_i, \mathcal{A}_j].
\label{eq:non_abelian_curvature}
\end{align}
Let $\ket{n(\boldsymbol{\lambda})}$ now represent the parameter-dependent eigenstates of the Hamiltonian.
The abelian Berry curvature tensor of the $n$-th band is then given by
\begin{equation}
\Omega_{\mu \nu}^{n}(\boldsymbol{\lambda})=i \sum_{m \neq n}
\frac{(\partial_\mu H)^{nm} (\partial_\nu H)^{mn}}{(\epsilon_n - \epsilon_m)^2} - ( \mu\leftrightarrow\nu ).
\end{equation}
Since one has $(\partial_\mu H)^{nm} = i (\epsilon_n-\epsilon_m) \mathcal{A}_\mu^{nm}$ for $n\neq m$, this can also be written as
\begin{align}
\Omega_{\mu \nu}^{n}(\boldsymbol{\lambda})= - i [\mathcal{A}_\mu, \mathcal{A}_\nu]^{nn} = (\partial_\mu \mathcal{A}_\nu - \partial_\nu \mathcal{A}_\mu)^{nn}\equiv \Omega_{\mu \nu}^{nn}(\boldsymbol{\lambda}) ,
\end{align}
which is therefore in accordance with Eq.~(\ref{eq:non_abelian_curvature}).
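The equivalence of the two expressions can be checked numerically on a hypothetical two-level model $H(\mathbf{k})=k_x\sigma_x+k_y\sigma_y+m\sigma_z$ (an illustrative choice, not the model of the manuscript): the velocity-matrix formula above is compared with a gauge-invariant Wilson-loop (plaquette) discretization of $(\partial_\mu \mathcal{A}_\nu - \partial_\nu \mathcal{A}_\mu)^{nn}$, which is insensitive to the arbitrary eigenvector phases returned by the diagonalizer.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, m=0.7):
    return kx*sx + ky*sy + m*sz

def omega_kubo(kx, ky, n):
    """Band-resolved Berry curvature from the velocity-matrix formula."""
    e, U = np.linalg.eigh(H(kx, ky))
    vx = U.conj().T @ sx @ U                  # dH/dkx in the eigenbasis
    vy = U.conj().T @ sy @ U
    return sum(-2*np.imag(vx[n, b]*vy[b, n])/(e[n] - e[b])**2
               for b in range(2) if b != n)

def omega_loop(kx, ky, n, d=1e-4):
    """Gauge-invariant plaquette discretization, with A_mu = i<u|d_mu u>."""
    def u(a, b):
        return np.linalg.eigh(H(a, b))[1][:, n]
    P = (np.vdot(u(kx, ky),     u(kx+d, ky))
       * np.vdot(u(kx+d, ky),   u(kx+d, ky+d))
       * np.vdot(u(kx+d, ky+d), u(kx, ky+d))
       * np.vdot(u(kx, ky+d),   u(kx, ky)))
    return -np.angle(P) / d**2

pts = [(0.3, -0.2), (1.1, 0.4), (-0.5, 0.9)]
max_diff = max(abs(omega_kubo(kx, ky, n) - omega_loop(kx, ky, n))
               for kx, ky in pts for n in (0, 1))
band_sum = omega_kubo(0.3, -0.2, 0) + omega_kubo(0.3, -0.2, 1)
print(max_diff, band_sum)
```

The two routes give the same number without any gauge fixing, and the curvatures of the two bands cancel, as they must for a complete basis.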
We can take this as the definition of the abelian part of the more general, but flat curvature tensor:
\begin{equation}
\mathcal{F}_{\mu\nu} = \Omega_{\mu \nu} + i [\mathcal{A}_\mu, \mathcal{A}_\nu] = 0.
\end{equation}
For the calculation of the anomalous Hall effect, one weights the Berry curvature of the respective bands with the electronic density matrix $\rho$, from which one obtains the total Berry curvature
\begin{align}
\Omega_{\mu \nu}^\mathrm{tot}(\boldsymbol{\lambda}) = -i ~\mathrm{tr} ~\rho [\mathcal{A}_\mu, \mathcal{A}_\nu] = \Im~ \mathrm{tr} ~\rho [\mathcal{A}_\mu, \mathcal{A}_\nu],
\end{align}
where the density matrix is given by
\begin{equation}
\rho(\boldsymbol{\lambda}) = \sum_{n} n_\mathrm{F}(\epsilon_n) \ket{n}\bra{n},
\end{equation}
and $n_\mathrm{F}$ is the Fermi-Dirac distribution at the chemical potential $\mu$.
Having introduced the necessary terminology, the first-order derivative of the Berry curvature tensor can now be constructed.
Several terms can contribute:
\begin{align}
\partial_\kappa \Omega_{\mu \nu}^\mathrm{tot} & =
( \Im~ \mathrm{tr} ~\partial_\kappa \rho \mathcal{A}_\mu \mathcal{A}_\nu
+ \Im~ \mathrm{tr} ~\rho \partial_\kappa \mathcal{A}_\mu \mathcal{A}_\nu
+ \Im~ \mathrm{tr} ~\rho \mathcal{A}_\mu \partial_\kappa \mathcal{A}_\nu
) - ( \mu \leftrightarrow \nu)
\notag \\
& = \Im ~\mathrm{tr} \left(
\nabla_\kappa \rho \mathcal{A}_\mu \mathcal{A}_\nu - i [\mathcal{A}_\kappa, \rho ] \mathcal{A}_\mu \mathcal{A}_\nu + \rho [ \partial_\kappa \mathcal{A}_\mu, \mathcal{A}_\nu]
\right) - ( \mu \leftrightarrow \nu) .
\end{align}
As $\Omega_{\mu \nu}^\mathrm{tot}$ is a gauge-invariant quantity, its derivative is gauge-invariant as well, and so is the right-hand side of this equation.
The first term is a Fermi surface contribution which we single out:
\begin{equation}
(\partial_\kappa \Omega_{\mu \nu}^\mathrm{tot})^\mathrm{FS} = \Im ~\mathrm{tr}
\nabla_\kappa \rho [\mathcal{A}_\mu, \mathcal{A}_\nu] .
\end{equation}
It vanishes for insulators in the zero-temperature limit, and we neglect it in the following.
The remaining terms are
\begin{align}
(\partial_\kappa \Omega_{\mu \nu}^\mathrm{tot})
& = \Im ~\mathrm{tr} \rho \left(
-i [\mathcal{A}_\mu \mathcal{A}_\nu, \mathcal{A}_\kappa ] + [\partial_\kappa \mathcal{A}_\mu , \mathcal{A}_\nu]
\right) - ( \mu \leftrightarrow \nu)
\notag \\
& = \frac{1}{2}\Im ~\mathrm{tr} \rho \left(
[\Omega_{\mu\nu}, \mathcal{A}_\kappa ]
+[\mathcal{Q}_{\kappa\mu}, \mathcal{A}_\nu ]
+[\Omega_{\kappa\mu}, \mathcal{A}_\nu ]
\right) - ( \mu \leftrightarrow \nu)
\notag \\
& = \frac{1}{2}\Im ~\mathrm{tr} \rho \left(
[\Omega_{\mu\nu}, \mathcal{A}_\kappa ]
+[\mathcal{Q}_{\kappa\mu}, \mathcal{A}_\nu ]
+[\Omega_{\nu\kappa}, \mathcal{A}_\mu ]
\right) - ( \mu \leftrightarrow \nu),
\end{align}
where we have introduced
\begin{equation}
\mathcal{Q}_{\mu\nu} =
\partial_\mu \mathcal{A}_\nu + \partial_\nu \mathcal{A}_\mu ,
\end{equation}
which is related to the quantum metric tensor.
\end{document} |
\section[Lecture 1]{Formal Aspects of Expectation-Value Theory}
\label{sec:1}
{\renewcommand{\theequation}{1.\arabic{equation}}
\subsubsection{Vocabulary}
In these lectures,
\begin{equation}
{\hat\varphi}^i
\end{equation}
denotes the quantum field. It is an operator function on a
given differentiable manifold (referred to below as the base manifold),
and $i$ is a point of this manifold.
Generally, ${\hat\varphi}^i$ is a collection of fields, and then $i$
is a set containing also the indices labelling these fields.
The hat designates an operator.
The ${\hat\varphi}^i$ is an operator in a Hilbert
space which is not granted. The workers have to build it
with their own hands
as a representation of the algebra of ${\hat\varphi}$'s. For simplicity,
${\hat\varphi}^i$ will be assumed boson and real (self-adjoint) but
otherwise arbitrary.
The starting point is an operator equation
for ${\hat\varphi}^i$
\begin{equation}
S_i({\hat\varphi})+J_i=0
\end{equation}
which is understood as an expansion. It is meant that there is
a c-number function $S_i(\varphi)$ understood as a collection of its
Taylor coefficients at some c-number point of configuration space:
\begin{equation}
S_i(\varphi)=\sum_{n=0}^{\infty}\frac{1}{n!}
S_{ij_1 \cdots j_n}(c)(\varphi-c)^{j_1}\ldots(\varphi-c)^{j_n}\;,
\end{equation}
and one replaces $\varphi^j$ in this expansion with an operator.
Which c-number field $c^j$ will be used for this expansion
does not matter because it will always sum with the operator
$({\hat\varphi}-c)^j$ to make the full quantum field. The expansion point
$c^j$ is often called "background field", and there
has been much emphasis on it. In fact it is completely
immaterial. I shall never make this expansion
explicitly but I shall keep explicit the c-number term of the
equation: a source $J_i$.
Important are only the following three points.
\begin{itemize}
\item[(1)] The function $S_i(\varphi)$ is local, i.e., it depends
only on $\varphi$ and its finite-order derivatives at the point $i$.
\item[(2)] The function $S_i(\varphi)$ is a gradient:
\begin{equation}
S_i(\varphi)=\frac{\delta}{\delta\varphi^i}S(\varphi)\;,
\end{equation}
i.e., there exists an action $S(\varphi)$ generating the
operator field equations.
For its derivatives the following notation will be used:
\begin{equation}
S_{i_1\cdots i_n}(\varphi)=\frac{\delta}{\delta\varphi^{i_1}}\cdots
\frac{\delta}{\delta\varphi^{i_n}}S(\varphi)\;.
\end{equation}
Of course, only the total action matters:
\begin{equation}
S_{\mbox{\scriptsize tot}}=S(\varphi)+\varphi^iJ_i\;.
\end{equation}
\item[(3)] There is a special condition on the matrix of second
derivatives of $S(\varphi)$. I shall refer to this continuous
matrix as $S_2$:
\begin{equation}
S_{ij}(\varphi)\equiv S_2(\varphi)\;.
\end{equation}
By locality, $S_2$ is the kernel of some differential operator
on the base manifold for which I shall use the same notation $S_2$.
It is required that $S_2$ admit a well-posed Cauchy problem
in which case it has the unique advanced and retarded inverses
(Green's functions) $G^+$ and $G^-$:
\begin{equation}
S_{ij}G^{\pm jk}=-\delta^k_i\;,\qquad G^{+jk}=G^{-kj}\;.
\end{equation}
Because $S_2$ is symmetric, the advanced inverse is the transpose
of retarded.
\end{itemize}
One may think of $S_2$ as a second-order hyperbolic operator,
which it will in fact be below, but the scheme is more general.
It is formalism-insensitive. One's field equations may have
the second-order differential form or the first-order
differential form, -- the scheme will work anyway. The importance
of the operator $S_2$ is in the fact that it determines the
linear term of the field equations and, therefore, governs
the iteration procedures. Commute ${\hat\varphi}^i$ with the
field equations. The result is a linear homogeneous
equation for the commutator $[{\hat\varphi}^i,{\hat\varphi}^j]$.
Consider the respective inhomogeneous equation and its
two iterative solutions: one with the advanced inverse
for $S_2$ and the other one with retarded. The equation for the
commutator is solved by their difference:
\begin{equation}
[{\hat\varphi}^i,{\hat\varphi}^j]={\rm i}\hbar\left(G^{+ij}(c)
-G^{-ij}(c)\right)+O({\hat\varphi}-c)\;.
\end{equation}
In this way the algebra of ${\hat\varphi}$'s is built as an operator
expansion. This is the quantization postulate.
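These properties are easy to verify in the simplest hypothetical case of a single oscillator mode, where $S_2 = -(d^2/dt^2+\omega^2)$ and the retarded inverse is $G^-(t,t')=\theta(t-t')\sin\omega(t-t')/\omega$, the advanced inverse being its transpose. The finite-difference sketch below checks $S_{ij}G^{-jk}=-\delta_i^k$ and that the difference $G^+-G^-$ (the commutator function, up to the factor ${\rm i}\hbar$) solves the homogeneous equation; the step size and frequency are arbitrary choices.

```python
import numpy as np

w, h, N = 1.3, 2e-3, 2000          # frequency, time step, grid size (arbitrary)
t = h * np.arange(N)
j = N // 2                         # source point t' = t[j]

def G_ret(ti, tp):
    """Retarded inverse G^-(t,t') of S2 = -(d^2/dt^2 + w^2)."""
    dt = ti - tp
    return np.where(dt > 0.0, np.sin(w*dt)/w, 0.0)

def apply_S2(f):
    """S2 f = -(f'' + w^2 f) by central differences, on interior points."""
    return -((f[2:] - 2.0*f[1:-1] + f[:-2])/h**2 + w**2 * f[1:-1])

g_ret = G_ret(t, t[j])             # one column of G^-
g_adv = G_ret(t[j], t)             # same column of G^+ = (G^-)^T
res = apply_S2(g_ret)

delta_weight = h * res[j - 1]      # should carry weight -1 (i.e. -delta)
offsite = np.abs(np.delete(res, j - 1)).max()
comm_max = np.abs(apply_S2(g_adv - g_ret)).max()   # homogeneous solution
print(delta_weight, offsite, comm_max)
```

The residual of $S_2$ acting on the retarded column is a spike of weight $-1$ at the source and zero elsewhere, while the difference of the two inverses is annihilated everywhere, including at the source.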
By the setting of its Cauchy problem, the operator $S_2$ introduces
the concept of causality. If $S_2$ is a second-order hyperbolic
operator, this is the usual relativistic causality. But in any
case the base manifold will be foliated with the Cauchy surfaces
of the operator $S_2$. They will be denoted as $\Sigma$.
A function of ${\hat\varphi}$ that involves ${\hat\varphi}$ on only one
Cauchy surface
\begin{equation}
Q({\hat\varphi})=Q({\hat\varphi}\Bigl|_\Sigma)
\end{equation}
will be called local observable. A state defined as an eigenstate
of local observables
\begin{equation}
Q({\hat\varphi}\Bigl|_\Sigma)|\;\;\rangle=q|\;\;\rangle
\end{equation}
will be called local state. This latter name may be confusing
because the state is, of course, a global concept, and I am using
the Heisenberg picture. But the local state is {\it associated}
with a given $\Sigma$:
\begin{equation}
|\;\;\rangle=|\Sigma,q\rangle\;.
\end{equation}
Of course, for it to be defined, one needs a complete set of
commuting local observables. I call the $Q$'s observables but
they may not even be Hermitian. And I shall consider them linear
in ${\hat\varphi}$. If they are nonlinear, I shall make a local
reparametrization of the field variables so as to make them linear.
In fact, if one has a complete set of commuting local observables,
one has already built a Hilbert space. A linear combination
\begin{equation}
|\Sigma\rangle=\int dq\,\Psi(q)|\Sigma,q\rangle
\end{equation}
is also a local state associated with $\Sigma$ provided that
the function $\Psi(q)$ is external, i.e., independent of the
quantum field ${\hat\varphi}^i$.
Our goal is to learn how to calculate expectation values of
field observables in a local state, and I shall concentrate
on the expectation value
\begin{equation}
\langle\Sigma|{\hat\varphi}^i|\Sigma\rangle\;.
\end{equation}
However, we shall save the effort if we consider another
problem first. Namely, let us recall what would we do in the case
of two local states associated with different Cauchy surfaces:
\begin{equation}
|\Sigma_1,q_1\rangle=|1\rangle\;,\qquad
|\Sigma_2,q_2\rangle=|2\rangle\;,
\end{equation}
$$
\Sigma_2>\Sigma_1\;.
$$
Here and below, ``greater'' is a notation for ``later''.
\subsubsection{The Quantum Boundary-Value Problem}
In the problem where given are two local states (1.15),
the field's expectation value is replaced
with the scalar product
\begin{equation}
\frac{\langle2|{\hat\varphi}|1\rangle}{\langle2|1\rangle}
\stackrel{\mbox{def}}{=}\langle\varphi\rangle
\end{equation}
which I shall call the mean field although it is not a mean value in any state.
If our goal was the scalar product (1.16), we would use the Schwinger
principle
\begin{equation}
\delta\langle2|1\rangle={\rm i}\langle2|
\delta S_{\mbox{\scriptsize tot}}|1\rangle\mbox{ or zero}
\end{equation}
whose meaning is this. Consider a variation in the Taylor coefficients
of the field equations, i.e., in the functional form of the total
action. The solution for ${\hat\varphi}^i$ will respond and will induce
a change in the functions $Q({\hat\varphi})$ which will induce a change
in their eigenstates, and finally there will be a change in the
amplitude $\langle2|1\rangle$ induced by a change in the action.
The Taylor coefficients are local. They can be varied in the
region between $\Sigma_1$ and $\Sigma_2$ or outside this region.
The Schwinger principle (1.17) says that, if they are varied
outside, the variation of the amplitude is zero. Otherwise, this
variation is expressed through the variation of the action
by (1.17).
The Schwinger principle is a consequence of the commutation
relations but it can also be taken for the first principle
because one does not need anything else. For many purposes
(but not all) it suffices to use a specific case of (1.17):
a freedom of varying the source $J$. The result of this use is
\begin{equation}
\frac{\delta}{\delta{\rm i}J_{j_1}}
\cdots
\frac{\delta}{\delta{\rm i}J_{j_n}}
\langle2|1\rangle=\left\{
\begin{array}{l}
\langle2|\overleftarrow{T}\left({\hat\varphi}^{j_1}\ldots
{\hat\varphi}^{j_n}\right)|1\rangle,
\mbox{ if }\Sigma_2>j_1,\ldots,j_n>\Sigma_1\;,\\
0,\mbox{ otherwise}\;.\\
\end{array}
\right.
\end{equation}
Here $T$ orders the operators ${\hat\varphi}^k$, $k\in\Sigma_k$,
chronologically, i.e., places them in the order of their
$\Sigma_k$, and the arrow
over $T$ points in the direction of growth of the time $\Sigma$.
Let us come back to the operator field equations. Since all
${\hat\varphi}$'s in these equations are at the same point,
one can formally insert in (1.2) the sign of chronological
ordering:
\begin{equation}
\overleftarrow{T}S_i({\hat\varphi})+J_i=0\;.
\end{equation}
One may worry about additional terms in (1.19) stemming from
the distinction between the chronological and ordinary operator products,
and the noncommutativity of $\overleftarrow{T}$ with the derivatives
in the Taylor coefficients of the equations. Because the operators
in the products are at the same point, these terms are
ambiguous expressions whose handling depends on
the formalisms and procedures used. There is always a happy end:
these terms cancel and help to cancel similar terms appearing
in the subsequent calculations. Therefore, it makes sense to use
such formalisms and procedures that these terms do not appear
at all. This is the approach that I shall follow.
Sandwiching the equation (1.19) between the states $\langle2|$ and
$|1\rangle$, and using (1.18), one obtains the following equation
for the amplitude:
\begin{equation}
\left(S_i\left(\frac{\delta}{\delta{\rm i}J}\right)
+J_i\right)\langle2|1\rangle=0\;.
\end{equation}
Multiply it from the left with $\langle2|1\rangle^{-1}$ and pull
the factors $\langle2|1\rangle$ in the argument of $S_i$ using
the fact that this is a unitary transformation:
\begin{equation}
\left(S_i\left(
\langle2|1\rangle^{-1}
\frac{\delta}{\delta{\rm i}J}
\langle2|1\rangle\right)
+J_i\right)1=0\;.
\end{equation}
In the argument, commute the operators:
\begin{equation}
\left(S_i\left(
\frac{\delta\ln\langle2|1\rangle}{\delta{\rm i}J}+
\frac{\delta}{\delta{\rm i}J}\right)
+J_i\right)1=0
\end{equation}
and use that by (1.18)
\begin{equation}
\frac{\delta\ln\langle2|1\rangle}{\delta{\rm i}J_k}=
\langle\varphi^k\rangle\;.
\end{equation}
The result is the following equation for the mean field:
\begin{equation}
\left(S_i\left(
\langle\varphi\rangle+
\frac{\delta}{\delta{\rm i}J}\right)
+J_i\right)1=0\;.
\end{equation}
Equation (1.24) differs from the classical field equation by the
operator addition
$\delta/\delta{\rm i}J$
to $\langle\varphi\rangle$. When this operator
addition acts on $1$, its effect is zero, but it will act also on
$\langle\varphi\rangle$ because the summands
$\langle\varphi\rangle$ and
$\delta/\delta{\rm i}J$
do not commute. Where in (1.24) is the Planck constant? It is easy
to see by dimension that $\hbar$ is just in front of
$\delta/\delta{\rm i}J$. Therefore, if one wants to expand the
equations in $\hbar$, one should expand them in
$\delta/\delta{\rm i}J$.
The problem boils down to expanding a function $f(A+B)$ in $B$
when $A$ and $B$ do not commute. It suffices to expand the
exponential function since one can write
\begin{equation}
f(A+B)=f\left(\frac{d}{dx}\right)\left.{\rm e}^{(A+B)x}\right|_{x=0}
\end{equation}
or, equivalently,
\begin{equation}
f(A+B)=\left.{\rm e}^{(A+B)d/dx}f(x)\right|_{x=0}\;.
\end{equation}
For the exponential function one has the identity
\begin{equation}
{\rm e}^{(A+B)x}={\rm e}^{Ax}\left(1+\int\limits_0^x dy\,
{\rm e}^{-Ay}B{\rm e}^{(A+B)y}\right)
\end{equation}
which makes the expansion possible. This all works well if
the series of commutators
\begin{equation}
{\rm e}^{-A}B{\rm e}^A= B+[B,A]+\frac{1}{2!}[[B,A],A]
+\frac{1}{3!}[[[B,A],A],A]+\cdots
\end{equation}
terminates somewhere as in our case. Indeed, if
$\langle\varphi\rangle=A$ and
$\delta/\delta{\rm i}J=B$, then
\begin{equation}
[[B,A],A]=0\;.
\end{equation}
Under condition (1.29) one obtains for an arbitrary function:
\begin{equation}
f(A+B)=f(A)+f'(A)B+\frac{1}{2}f''(A)[B,A]+O(B^2)\;.
\end{equation}
As compared to the ordinary Taylor expansion, there are
several additional terms with commutators at each order.
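Formula (1.30) can be tested with hypothetical finite matrices satisfying condition (1.29); below, $[B,A]$ happens to equal $A$, so $[[B,A],A]=0$, and $f=\exp$, for which $f(A)=f'(A)=f''(A)={\rm e}^A$. Replacing $B\to\varepsilon B$, the residual of (1.30) should shrink like $\varepsilon^2$, while the ordinary Taylor expansion without the commutator term is off already at order $\varepsilon$:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential by plain Taylor series (fine for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

comm = lambda X, Y: X @ Y - Y @ X

A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])   # hypothetical choice
B = np.diag([3., 2., 1.])
C = comm(B, A)                     # here [B, A] = A, hence [[B, A], A] = 0
assert np.allclose(comm(C, A), 0.0)

errs, naive_errs = [], []
for eps in (1e-2, 1e-3):
    exact = expm(A + eps*B)
    approx = expm(A) + expm(A) @ (eps*B + 0.5*comm(eps*B, A))  # Eq. (1.30)
    naive = expm(A) + expm(A) @ (eps*B)                        # no commutator term
    errs.append(np.abs(exact - approx).max())
    naive_errs.append(np.abs(exact - naive).max())
print(errs, naive_errs)
```

Shrinking $\varepsilon$ by a factor of ten reduces the residual of (1.30) by roughly a factor of a hundred, confirming that the commutator term captures the full first order.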
A use of the result above in equation (1.24) gives
\begin{equation}
S_i(\langle\varphi\rangle)+\frac{1}{2}S_{ijk}(\langle\varphi\rangle)
\frac{\delta\langle\varphi^j\rangle}{\delta{\rm i}J_k}
+O(\hbar^2)=-J_i\;,
\end{equation}
\begin{equation}
S_{ij}(\langle\varphi\rangle)
\frac{\delta\langle\varphi^j\rangle}{\delta J_k}
=-\delta^k_i+O(\hbar)\;.
\end{equation}
Here the second equation is obtained by differentiating the
first one, and it tells us what
$\delta\langle\varphi\rangle/\delta J$ is. Up to $O(\hbar)$, it
is some Green's function of the operator $S_2$. Denote this
Green's function as
\begin{equation}
\frac{\delta\langle\varphi^j\rangle}{\delta J_k}
=G^{jk}+O(\hbar)\;.
\end{equation}
One can work to any order but I shall stop here. {\it We obtain
closed equations for the mean field}:
\begin{equation}
S_i(\langle\varphi\rangle)+\frac{1}{2{\rm i}}S_{ijk}(\langle\varphi\rangle)
G^{jk}(\langle\varphi\rangle)
+O(\hbar^2)=-J_i\;,
\end{equation}
\begin{equation}
S_{ij}(\langle\varphi\rangle)
G^{jk}(\langle\varphi\rangle)
=-\delta^k_i\;.
\end{equation}
The second term in (1.34) is the loop
\begin{equation}
S_i(\langle\varphi\rangle)+{}
\parbox{30pt}{
\begin{picture}(30,20)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){$\scriptstyle i$}
\end{picture}
}
{}+O(\hbar^2)=-J_i\;,
\end{equation}
all elements of the loop being functions of
$\langle\varphi\rangle$. But two questions remain to be answered:
\begin{itemize}
\item[(i)] Which Green's function is $G$?
\item[(ii)] What are the boundary conditions to the mean-field
equations?
\end{itemize}
The answers are again in the Schwinger principle. Equation (1.18)
tells us what $G$ and $\langle\varphi\rangle$ are:
\begin{equation}
\frac{1}{{\rm i}}G^{jk}=\frac{
\langle2|\overleftarrow{T}\left({\hat\varphi}^j
{\hat\varphi}^k\right)|1\rangle}{\langle2|1\rangle}
-\langle\varphi^j\rangle\langle\varphi^k\rangle+O(\hbar)\;,
\end{equation}
\begin{equation}
\langle\varphi^j\rangle=
\frac{\langle2|{\hat\varphi}^j|1\rangle}{\langle2|1\rangle}\;.
\end{equation}
Multiply these expressions by the coefficients that make
the linear $Q$ out of $\varphi$:
\begin{equation}
Q({\hat\varphi})=k_j{\hat\varphi}^j\;,
\end{equation}
and send $j$ either to $\Sigma_1$ or to $\Sigma_2$.
By the definition of the states $|1\rangle$ and $|2\rangle$,
one obtains
\begin{equation}
Q(\langle\varphi\rangle\Bigl|_{\Sigma_1})=q_1\;,\qquad
Q(\langle\varphi\rangle\Bigl|_{\Sigma_2})=q_2\;,
\end{equation}
\begin{equation}
k_jG^{jk}\Bigl|_{j\in\Sigma_1}=0\;,\qquad
k_jG^{jk}\Bigl|_{j\in\Sigma_2}=0\;.
\end{equation}
From (1.37) it follows also that
\begin{equation}
G^{jk}=G^{kj}\;.
\end{equation}
The Green's function $G$ is symmetric and completely determined
by the boundary conditions (1.41). This completes the determination
of the mean-field equations (1.34), and for these equations one
arrives at a boundary-value problem with the boundary conditions
(1.40). As a result, the quantum boundary-value problem is reduced
to a c-number boundary-value problem. I say ``c-number'' rather than
``classical'' because there are differences, and one is the presence
of terms $O(\hbar)$ in the equations, but, as far as the setting
of the problem is concerned, there is no difference. One arrives
at the same boundary-value problem for the observable field as
in the case of the classical states.
Note that the Green's function $G$ and, thereby, the mean-field
equations do not depend on the eigenvalues $q$. The eigenvalues
appear only in the boundary conditions to the equations.
However, $G$ depends on the choice of the observables $Q$
themselves and, through them, on the choice of the states
$|1\rangle$ and $|2\rangle$. Therefore, the mean-field equations
are state-dependent.
Although the Green's function $G$ depends on the choice
of the states, it possesses
two universal properties. One has already been mentioned: $G$
is always symmetric. The other one is this. Let us make a variation
in the operator $S_2$ and find out how $G$ responds:
$$
S_2G=-1\;,
$$
$$
S_2\delta G=-\delta S_2G\;,
$$
$$
\delta G=?
$$
To answer this question, one can use the Schwinger principle again.
The result is the following {\it variational law}:
\begin{equation}
\delta G=G\delta S_2G\;,
\end{equation}
and this law is universal. It is the same for all
boundary-value problems.
The variational law (1.43) is remarkable. It is characteristic of
finite-dimensional matrices. If a matrix has a unique inverse, then the
inverse obeys this law. This law is valid, for example, for
the inverse of an elliptic operator, i.e., for the Euclidean
Green's function. It is valid also for the advanced and
retarded Green's functions:
\begin{equation}
\delta G^+=G^+\delta S_2G^+\;,\qquad
\delta G^-=G^-\delta S_2G^-\;.
\end{equation}
But it is not valid generally, and, in the case of $S_2$, it
is exceptional.
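For invertible finite-dimensional matrices the law follows by varying $S_2G=-1$, and it is easy to confirm numerically. A sketch of mine, with a generic symmetric matrix standing in for $S_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# a generic invertible symmetric matrix standing in for S_2
S2 = rng.normal(size=(n, n))
S2 = S2 + S2.T + 10.0 * np.eye(n)
dS2 = rng.normal(size=(n, n))
dS2 = dS2 + dS2.T                # symmetric variation of S_2

G = -np.linalg.inv(S2)           # the defining relation S2 G = -1

eps = 1e-6
# numerical response of G to the variation S2 -> S2 + eps dS2
dG_numeric = (-np.linalg.inv(S2 + eps * dS2) - G) / eps
dG_law = G @ dS2 @ G             # the variational law dG = G dS2 G

assert np.allclose(dG_numeric, dG_law, atol=1e-4)
```

The point of the text, of course, is that for the operator $S_2$ the law holds not merely for a unique inverse but for the particular Green's function selected by the states.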
The variational law for $G$ has an important implication.
Namely, let us differentiate the left-hand side of the
mean-field equations
\begin{equation}
\Gamma_i(\varphi)\equiv S_i(\varphi)+
\frac{1}{2{\rm i}}
S_{imn}(\varphi)G^{mn}(\varphi)+O(\hbar^2)
\end{equation}
to see if the result is symmetric. One obtains
\begin{eqnarray}
\frac{\delta\Gamma_i(\varphi)}{\delta\varphi^j}-
\frac{\delta\Gamma_j(\varphi)}{\delta\varphi^i}&\!=\!&
\frac{1}{2{\rm i}}
S_{imn}G^{m{\bar m}}G^{n{\bar n}}S_{{\bar m}{\bar n}j}
-(i\leftrightarrow j)+O(\hbar^2)\nonumber\\
&\!=\!&0+O(\hbar^2)\;.
\end{eqnarray}
This means that $\Gamma_i(\varphi)$ is a gradient, i.e., there exists
an action generating the mean-field equations:
\begin{equation}
\Gamma_i(\varphi)=\frac{\delta\Gamma(\varphi)}{\delta\varphi^i}\;.
\end{equation}
There is another way to arrive at the same conclusion.
Consider a function of the mean field defined by the
Legendre transformation
\begin{equation}
\Gamma(\langle\varphi\rangle)=
\frac{1}{{\rm i}}\ln\langle2|1\rangle-
\langle\varphi^k\rangle J_k
\end{equation}
where $J$ is to be expressed through $\langle\varphi\rangle$ by solving
equation (1.23). It is easy to see that this function satisfies
the equation
\begin{equation}
\frac{\delta\Gamma(\langle\varphi\rangle)}{\delta\langle\varphi^i\rangle}
=-J_i\;,
\end{equation}
and, therefore, its gradient is the left-hand side of the
mean-field equations.
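The verification of (1.49), spelled out for completeness: varying (1.48) and using the definition of the mean field through the generating functional, $\langle\varphi^k\rangle=\delta\ln\langle2|1\rangle/\delta{\rm i}J_k$, the terms with $\delta J$ cancel:
$$
\delta\Gamma=\langle\varphi^k\rangle\,\delta J_k
-J_k\,\delta\langle\varphi^k\rangle
-\langle\varphi^k\rangle\,\delta J_k
=-J_k\,\delta\langle\varphi^k\rangle\;.
$$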
$\Gamma(\varphi)$ is the effective action. Up to $\hbar^2$ it is
of the form
\begin{equation}
\Gamma(\varphi)=S(\varphi)+
\frac{1}{2{\rm i}}\ln\det G(\varphi)+O(\hbar^2)
\end{equation}
where the second term is the loop without external lines:
\begin{equation}
\Gamma(\varphi)=S(\varphi)+{}
\parbox{20pt}{
\begin{picture}(20,20)
\thicklines
\put(10,10){\circle{20}}
\end{picture}
}
{}+O(\hbar^2)\;.
\end{equation}
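That the second term of (1.50) generates the loop in (1.45) is worth spelling out. Using $\ln\det G={\rm Tr}\ln G$, the variational law (1.43), and $S_2G=-1$, one finds
$$
\delta\left(\frac{1}{2{\rm i}}\ln\det G\right)
=\frac{1}{2{\rm i}}\,{\rm Tr}\left(G^{-1}\,\delta G\right)
=\frac{1}{2{\rm i}}\,{\rm Tr}\left(\delta S_2\,G\right)
=\frac{1}{2{\rm i}}\,S_{imn}(\varphi)\,G^{mn}(\varphi)\,\delta\varphi^i\;,
$$
which is precisely the one-loop term of $\Gamma_i$.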
The effective action exists for any boundary-value problem
but these actions are different for different such
problems. Only in the classical approximation are the action and the
equations independent of the boundary conditions.
Let us go over to expectation values.
\subsubsection{The Quantum Initial-Value Problem}
In this problem, only one local state is given (which I shall
assume normalized). Since
the field operators are now sandwiched between the states
associated with one and the same $\Sigma$:
\begin{equation}
\langle1|(\cdots)|1\rangle\;,\qquad \langle1|1\rangle=1
\end{equation}
one cannot apply the Schwinger principle: there is no room
for varying the source. One can create this room artificially
by inserting a complete set of states associated with some
later $\Sigma$:
\begin{equation}
\langle1|1\rangle=\sum_q
\langle1|2q\rangle
\langle2q|1\rangle\;,
\end{equation}
$$
\Sigma_2>\Sigma_1
$$
but this alone will not help because the source is varied
in both amplitudes, and these variations cancel. It will
help only if the two amplitudes in (1.53) are functions of
different sources, i.e., if, instead of (1.53), one introduces
a function of two independent sources, $J$ and $J^*$:
\begin{equation}
Z(J^*,J)=\sum_q
\langle1|2q\rangle_{J^*}
\langle2q|1\rangle_J\;.
\end{equation}
This amounts to considering two copies of the quantum field:
one with the source $J$, the other one with the source $J^*$,
and using in (1.54) the amplitudes of both. Then one can vary
only one source and, after that, make the sources coincident.
Using the Schwinger principle, one obtains
\begin{equation}
\left.\frac{\delta^nZ(J^*,J)}{\delta{\rm i}J_{j_1}
\cdots
\delta{\rm i}J_{j_n}}\right|_{J^*=J}=
\langle1|\overleftarrow{T}\left({\hat\varphi}^{j_1}\ldots
{\hat\varphi}^{j_n}\right)|1\rangle\;.
\end{equation}
In this way the expectation values can be calculated.
The technique of two sources is called time-loop formalism
because in expression (1.54) one goes forward in time, from
$\Sigma_1$ to some $\Sigma_2$,
and then back from $\Sigma_2$ to $\Sigma_1$ but with another copy
of the quantum field.
For every partial amplitude in (1.54) we have equation (1.20)
\begin{equation}
\left(S_i\left(\frac{\delta}{\delta{\rm i}J}\right)
+J_i\right)\langle2q|1\rangle_J=0\;.
\end{equation}
Since the other amplitude in (1.54) does not depend on $J$,
we can linearly combine equations (1.56) to obtain
\begin{equation}
\left(S_i\left(\frac{\delta}{\delta{\rm i}J}\right)
+J_i\right)Z(J^*,J)=0\;.
\end{equation}
Only one source is active in this differential equation.
The other one is a parameter. Therefore, we can just
repeat the consideration above with $Z(J^*,J)$ in place of
$\langle2|1\rangle$, and in this way derive the mean-field
equations. We obtain the loop expansion of exactly the same
form as before:
\begin{equation}
S_i(\langle\varphi\rangle)+\frac{1}{2{\rm i}}S_{ijk}(\langle\varphi\rangle)
G^{jk}(\langle\varphi\rangle)
+O(\hbar^2)=-J_i\;,
\end{equation}
\begin{equation}
S_{ij}(\langle\varphi\rangle)
G^{jk}(\langle\varphi\rangle)
=-\delta^k_i\;,
\end{equation}
and in these loops we must make the sources coincident. There are
only two elements in all loops, $\langle\varphi\rangle$ and $G$.
Upon setting $J^*=J$, $\langle\varphi\rangle$ becomes the
genuine expectation value
\begin{equation}
\langle\varphi^k\rangle=
\left.\frac{\delta\ln Z(J^*,J)}{\delta{\rm i}J_k}
\right|_{J^*=J}=
\langle1|{\hat\varphi}^k|1\rangle\;,
\end{equation}
and the matrix $G$ is given by the expression
\begin{equation}
\frac{1}{{\rm i}}G^{jk}+O(\hbar)=
\left.\frac{\delta^2\ln Z(J^*,J)}{\delta{\rm i}J_j
\delta{\rm i}J_k}\right|_{J^*=J}=
\langle1|\overleftarrow{T}\left({\hat\varphi}^j
{\hat\varphi}^k\right)|1\rangle
-\langle\varphi^j\rangle\langle\varphi^k\rangle\;.
\end{equation}
I am using for it the same letter $G$ but it is now
a different Green's function of the operator $S_2$.
Equations (1.58) with this Green's function in all loops
are the expectation-value equations.
The solution of the expectation-value equations is specified
completely by the initial conditions on $\Sigma_1$ following
from (1.60), but it is not easy to write these conditions
down in general terms. Only half of them are obvious:
the $Q$'s on $\Sigma_1$ are given. To obtain the other half,
one would need to find the variables canonically conjugate
to $Q$'s and calculate their expectation values
on $\Sigma_1$.\footnote{Let $Q$'s be Hermitian, and let $P$'s
have c-number commutators with $Q$'s: ${[P,Q]={\rm i}}$. Then
the expectation values in the state (1.13) satisfy the initial
conditions
$$
\langle Q\Bigl|_\Sigma\rangle=\int dq\,
\overline{\Psi}(q)q\Psi(q)\;,
\qquad
\langle P\Bigl|_\Sigma\rangle={\rm i}\int dq\,
\overline{\Psi}(q)
\frac{\partial}{\partial q}\Psi(q)
$$
where the overline means complex conjugation. If both $Q({\hat\varphi})$
and $P({\hat\varphi})$ are linear, these are initial conditions
directly for $\langle\varphi\rangle$.} The same concerns
the specification of the Green's function
$G$. This issue will be considered in the next lecture
where a different approach to it will be used.
Let us consider the state-independent properties of $G$.
First, as seen from (1.61), $G$ is symmetric for any initial-value
problem:
\begin{equation}
G^{jk}=G^{kj}\;.
\end{equation}
Second, one can apply the Schwinger principle to derive the
variational law for $G$. At this point, the initial-value problem
differs significantly from the boundary-value problem. When the
operator $S_2$ is varied in the generating function (1.54), one
can no longer play with only one source because $S_2$ is
the same for both copies of the quantum field, and,
therefore, both amplitudes in (1.54) respond. As a consequence,
all four matrices of second derivatives are generally involved:
\begin{equation}
\frac{\delta^2\ln Z}{\delta{\rm i}J_j
\delta{\rm i}J_k}\;,\quad
\frac{\delta^2\ln Z}{\delta{\rm i}J^*_j
\delta{\rm i}J^*_k}\;,\quad
\frac{\delta^2\ln Z}{\delta{\rm i}J^*_j
\delta{\rm i}J_k}\;,\quad
\frac{\delta^2\ln Z}{\delta{\rm i}J_j
\delta{\rm i}J^*_k}\;,
\end{equation}
i.e., the Green's function $G^{jk}$, its complex conjugate,
and two Wightman functions:
$\langle1|{\hat\varphi}^j{\hat\varphi}^k|1\rangle$ and its transpose.
The Wightman functions can be expressed through $G^{jk}$
and the advanced or retarded Green's function:
\begin{equation}
{\rm i}\langle1|{\hat\varphi}^j
{\hat\varphi}^k|1\rangle
-{\rm i}\langle\varphi^j\rangle\langle\varphi^k\rangle=
G^{jk}-G^{+jk}+O(\hbar)=
G^{kj}-G^{-kj}+O(\hbar)\;.
\end{equation}
The result of the calculation is the following variational
law for $G$:
\begin{equation}
\delta G=
G^-\delta S_2 G+
G\delta S_2 G^+-
G^-\delta S_2 G^+\;.
\end{equation}
It is no longer the simple law (1.43) but it is, nevertheless,
universal because $G^+$ and $G^-$ are state-independent.
The variational law (1.65) is valid for any initial-value
problem.
The left-hand side of the expectation-value equations has
the form (1.45) as before but, since the variational law
for $G$ is different, the former inference about the
symmetry of $\delta\Gamma_i/\delta\varphi^j$ needs to be revised.
This inference is no longer valid. The advanced and retarded
Green's functions arrange it so that
\begin{equation}
\frac{\delta\Gamma_i(\varphi)}{\delta\varphi^j}=0\quad
\mbox{ when }i<j
\end{equation}
and
\begin{equation}
\frac{\delta\Gamma_i(\varphi)}{\delta\varphi^j}\ne0\quad
\mbox{ when }i>j\;.
\end{equation}
It follows that there is no action generating the
expectation-value equations.
The nonexistence of an action for the initial-value problem
is seen also from the consideration of the Legendre transform
of the generating function (1.54). It is now a function of two
fields:
\begin{equation}
\Gamma(\varphi^*,\varphi)=\frac{1}{{\rm i}}\ln Z(J^*,J) - \varphi J
+ \varphi^* J^*
\end{equation}
where
\begin{equation}
\varphi=\frac{\delta\ln Z(J^*,J)}{\delta{\rm i}J}\;,\qquad
\varphi^*=-\frac{\delta\ln Z(J^*,J)}{\delta{\rm i}J^*}\;.
\end{equation}
The expectation-value equations are obtained as
\begin{equation}
\varphi=\langle1|{\hat\varphi}|1\rangle\;:\qquad
\left.\frac{\delta\Gamma(\varphi^*,\varphi)}{\delta\varphi^i}
\right|_{\varphi^*=\varphi}=-J_i\;,
\hphantom{\left.\frac{\delta\Gamma(\varphi^*,\varphi)}{\delta\varphi^i}
\right|_{\varphi^*=\varphi}}
\end{equation}
and, therefore,
\begin{equation}
\Gamma_i(\varphi)=
\left.\frac{\delta\Gamma(\varphi^*,\varphi)}{\delta\varphi^i}
\right|_{\varphi^*=\varphi}\;.
\end{equation}
This is {\it not} a gradient.
}
\section[Lecture 2]{The In-Vacuum State and Schwinger--Keldysh Diagrams}
\label{sec:2}
{\renewcommand{\theequation}{2.\arabic{equation}}
\subsubsection{Specification of The State}
In order to proceed, I need to specify the state. This will be
done in several steps.
\subparagraph{Step 1.}
It will be assumed that $S_2$ is a second-order hyperbolic operator,
and the energy-momentum tensor of the field of small disturbances
$\delta\varphi^i$ with the action
\begin{equation}
\frac{1}{2}S_{ij}\delta\varphi^i\delta\varphi^j
\end{equation}
satisfies the dominant energy condition.
\subparagraph{Step 2.}
The initial-value surface will be shifted to the remote past:
\begin{equation}
\Sigma_1\to -\infty\;.
\end{equation}
Consider the operator field equations (1.2)--(1.3):
\begin{equation}
J_i+S_i(c)+S_{ij}(c)({\hat\varphi}-c)^j+
\sum^{\infty}_{n=2}\frac{1}{n!}S_{ij_1\cdots j_n}(c)
({\hat\varphi}-c)^{j_1}\ldots ({\hat\varphi}-c)^{j_n}=0\;.
\end{equation}
If $c^i$ is some classical solution:
\begin{equation}
S_i(c)=-J_i\;,
\end{equation}
and ${\hat\phi}^i$ is an operator solution of $S_2$ against
the background $c^i$:
\begin{equation}
S_{ij}(c){\hat\phi}^j=0\;,
\end{equation}
then the field
\begin{equation}
{\hat\varphi}^i=c^i+{\hat\phi}^i\;,\qquad i\in\Sigma\to -\infty
\end{equation}
solves the operator dynamical equations asymptotically in the
remote past. It is a property of $S_2$ that its
solution with smooth data of compact support, or
decreasing at spatial infinity,
decreases also in the timelike directions. Then, as
$i\in\Sigma\to -\infty$,
the nonlinear terms in (2.3) decrease
even faster and are negligible. Thus, to build a Hilbert space
of states, it suffices to build a representation of the
algebra of ${\hat\phi}$'s.
\subparagraph{Step 3.}
A Fock space will be built associated with the linear field
${\hat\phi}^i$. This amounts to expanding ${\hat\phi}^i$ in
some basis of solutions of $S_2(c)$:
\begin{equation}
S_2(c)\chi_A=0\;,
\end{equation}
\begin{equation}
{\hat\phi}^i=\chi_A^i{\hat a}_{\mbox{\scriptsize in}}{}^A+
\overline{\chi}_A^i{\hat a}^{+}_{\mbox{\scriptsize in}}{}^A
\end{equation}
where the overline means complex conjugation, and the basis functions
$\chi_A^i$ are normalized with the aid of the inner product:
\begin{equation}
(\chi_A,\chi_B)=0\;,\qquad
(\overline{\chi}_A,\chi_B)=\delta_{AB}\;,
\end{equation}
\begin{equation}
(\phi_1,\phi_2)\equiv -{\rm i}\int\limits_\Sigma
\phi_1W_\mu\phi_2\,d\Sigma^\mu\;.
\end{equation}
Here $W_\mu$ is the Wronskian of $S_2$. In this way one introduces
the concept of {\it some} particles detectable in the past.
What kind of particles these are, i.e.,
what kind of detectors detect them, depends on the
choice of the basis of solutions but, in any case, the
following functions will be chosen for the local observables $Q$:
\begin{equation}
Q^A({\hat\varphi}\Bigl|_\Sigma)=
-{\rm i}\delta^{AB}
\int\limits_\Sigma \chi_BW_\mu({\hat\varphi}-c)\,d\Sigma^\mu\;,
\end{equation}
$$
\Sigma\to -\infty\;.
$$
One needs these observables only on the initial-value surface, and,
there, they coincide with the annihilation operators of the introduced
particles:
\begin{equation}
Q^A({\hat\varphi}\Bigl|_{\Sigma\to -\infty})=
{\hat a}_{\mbox{\scriptsize in}}{}^A\;.
\end{equation}
The choice of the quantum state will be made in favour of
the zero-eigenvalue eigenstate of these observables:
\begin{equation}
{\hat a}_{\mbox{\scriptsize in}}{}^A|1\rangle=0\;.
\end{equation}
This is the vacuum of the introduced particles.
It follows from (2.6) and (2.8) that the
field's expectation value in the state (2.13), when taken in the remote
past, coincides with the classical solution $c^i$:
\begin{equation}
\langle1|{\hat\varphi}^i|1\rangle=c^i\;,\qquad i\in\Sigma\to -\infty\;.
\end{equation}
The ad hoc classical solution $c^i$ can then be eliminated completely
both from the asymptotic form of the quantum field
\begin{equation}
{\hat\varphi}^i=\langle\varphi^i\rangle+{\hat\phi}^i\;,\qquad i\in\Sigma\to -\infty
\end{equation}
and from the equation defining the Fock modes
\begin{equation}
S_{ij}(\langle\varphi\rangle){\hat\phi}^j=0\;,\qquad i\in\Sigma\to -\infty\;.
\end{equation}
Only the mean field itself figures as a background.
The specification of the state is, however, not yet complete
because the mean field in the past remains an arbitrary classical
solution:
\begin{equation}
S_i(\langle\varphi\rangle)=-J_i
\;,\qquad i\in\Sigma\to -\infty
\end{equation}
and the state itself remains the vacuum of undefined particles.
To make the final determination, one more step is needed.
\subparagraph{Step 4.}
The final choice of the state assumes one more limitation
on the original action. Namely, it will be assumed that the
external source $J_i$ and all the external fields that may
be present in the action $S$ are asymptotically static in the past.
This means that, asymptotically in the past, there exists a
vector field $\xi^\mu$ such that it is nowhere tangent to any of the
Cauchy surfaces, and the Lie derivative in the direction
of $\xi^\mu$ of all external fields is zero. Specifically,
\begin{equation}
{\cal L}_\xi J_i=0\;,\qquad i\in\Sigma\to -\infty\;.
\end{equation}
If this limitation is fulfilled, then, among the
solutions of (2.17) for the mean field in the past, there is the static
one:
\begin{equation}
{\cal L}_\xi \langle\varphi^i\rangle=0\;,
\qquad i\in\Sigma\to -\infty\;.
\end{equation}
Choose it. Next, use the fact that, with this choice, the operator
$S_2(\langle\varphi\rangle)$
commutes with the Lie derivative, and choose for the basis solutions of
$S_2(\langle\varphi\rangle)$ the functions that, asymptotically in
the past, are eigenfunctions of the Lie derivative:
\begin{equation}
{\rm i}{\cal L}_\xi \chi_A^i=\varepsilon_A \chi_A^i\;,
\quad \varepsilon_A>0\;,
\quad i\in\Sigma\to -\infty\;.
\end{equation}
This fixes both the initial conditions for the mean field and
the type of particles whose vacuum is the chosen state. These
are particles with definite energies.
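A minimal illustration of conditions (2.9) and (2.20) (my own, for a single mode rather than a field): with one degree of freedom of static frequency $\varepsilon_A$, the positive-frequency basis function is $\chi(t)={\rm e}^{-{\rm i}\varepsilon_A t}/\sqrt{2\varepsilon_A}$, and the Wronskian inner product, with the sign convention $\phi_1W\phi_2=(\partial_t\phi_1)\phi_2-\phi_1\partial_t\phi_2$ chosen so that positive-frequency modes have unit norm, reproduces the normalization (2.9):

```python
import numpy as np

eps_A = 2.0   # mode frequency, epsilon_A > 0
t = 0.7       # any instant: the Wronskian norm is conserved

def chi(s):
    # positive-frequency solution of the mode equation
    # (d/dt)^2 chi + eps_A^2 chi = 0
    return np.exp(-1j * eps_A * s) / np.sqrt(2.0 * eps_A)

def dchi(s):
    return -1j * eps_A * chi(s)

# i L_xi chi = i d chi/dt = eps_A chi : eigenfunction with positive energy
assert np.allclose(1j * dchi(t), eps_A * chi(t))

# single-mode version of (phi1, phi2) = -i int phi1 W phi2 dSigma,
# with the Wronskian taken as phi1 W phi2 = (d phi1) phi2 - phi1 (d phi2)
def inner(f, df, g, dg, s):
    return -1j * (df(s) * g(s) - f(s) * dg(s))

chibar = lambda s: np.conj(chi(s))
dchibar = lambda s: np.conj(dchi(s))

norm = inner(chibar, dchibar, chi, dchi, t)   # (chi-bar, chi) = 1
null = inner(chi, dchi, chi, dchi, t)         # (chi, chi) = 0

assert np.allclose(norm, 1.0)
assert np.allclose(null, 0.0)
```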
Since $S_2$ is a second-order hyperbolic operator, it contains
some tensor field, $g^{\mu\nu}$, contracting the second
derivatives. The inverse matrix, $g_{\mu\nu}$, can serve and does serve
in every respect as a metric on the base manifold. The metric
enters the original action $S$ either as a part of the quantum
field ${\hat\varphi}^i$ or as an external field. In both cases it
is subject to equation (2.19). When applied to the metric, this is
the Killing equation. Thus, we assume the existence, asymptotically
in the past, of a timelike Killing vector $\xi^\mu$.
The specification of the quantum initial data is now
completed. The notation for the state defined above is
\begin{equation}
|1\rangle=|\mbox{in vac}\rangle\;,
\end{equation}
and its full name is {\it relative standard in-vacuum state}. It is ``relative''
because it is relative to the background generated by an asymptotically
static source. It is ``standard'' because it refers to the
standard concept of particles. It is ``in'' because these particles
are incoming. And it is ``vacuum'' because these particles are
absent.
The state need not be chosen as the zero-eigenvalue
eigenstate. Since the expectation-value equations do not depend
on the eigenvalues, they will have the same form for any
eigenstate of the annihilation operators, i.e., for any coherent
state
\begin{equation}
{\hat a}_{\mbox{\scriptsize in}}{}^A
|\mbox{in }\alpha\rangle=\alpha^A
|\mbox{in }\alpha\rangle\;.
\end{equation}
Only the initial conditions for the mean field will be different:
\begin{equation}
\langle\alpha\mbox{ in}|{\hat\varphi}^i
|\mbox{in }\alpha\rangle=c^i+\chi_A^i\alpha^A+
\overline{\chi}_A^i\overline{\alpha}^A\;,
\qquad i\in\Sigma\to -\infty\;.
\end{equation}
In addition to the static background $c^i$ generated by a source,
the mean field in the past contains now the incoming wave of an
arbitrary profile. This is the general setting of the classical
evolution problem for an observable field like the electromagnetic or
gravitational field. The fact that the nature of the state
has changed from classical to quantum did not affect this setting.
It will be useful to keep comparing the initial-value problem
with the boundary-value problem. In the latter case, one can
define similarly the out-vacuum state and specify the quantum
boundary data as
\begin{equation}
|1\rangle=|\mbox{in vac}\rangle\;,\qquad
|2\rangle=|\mbox{out vac}\rangle\;.
\end{equation}
\subsubsection{Perturbation Theory}
With this specification of the states, let us come back to the
mean-field equations. It remains to obtain the Green's
function $G(\varphi)$ that figures in the loops. We need it for
an arbitrary background $\varphi$ but we have a variational law,
(1.43) or (1.65), which may be regarded as a differential equation
for $G(\varphi)$ with respect to $\varphi$. The only thing that is
missing and that depends on the choice of states is the
initial condition to this equation. It suffices, therefore,
to know $G$ for only one background.
Then let us do the simplest: perturbation theory around the
trivial background. A second-order hyperbolic operator with
the trivial background is the D'Alembert operator with
flat metric, $\Box_0$:
\begin{equation}
S_2(\varphi)=\Box_0+P\;.
\end{equation}
The remainder is a perturbation $P$.
In the case of the boundary-value problem, the variational law is
(1.43), and, therefore, the expansion of $G(\varphi)$ is of the form
\begin{equation}
G(\varphi)=G_0+G_0PG_0+G_0PG_0PG_0+\ldots
\end{equation}
where $G_0$ is $G$ for the trivial background. This expansion is
to be inserted in the loop in the mean-field
equations
\begin{equation}
\frac{1}{2{\rm i}}S_{ijk}(\varphi)G^{jk}(\varphi)={}
\parbox{30pt}{
\begin{picture}(30,20)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){$\scriptstyle i$}
\end{picture}
}\;\,.
\end{equation}
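In finite dimensions the expansion (2.26) is just the Neumann series for the inverse. The sketch below (an illustration of mine, with a well-conditioned matrix in place of $\Box_0$ and a small matrix in place of the perturbation) checks that the partial sums converge to the exact $G$ obeying $(\Box_0+P)G=-1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# a well-conditioned invertible matrix standing in for Box_0,
# and a small matrix standing in for the perturbation P
Box0 = rng.normal(size=(n, n)) + 5.0 * np.eye(n)
P = 0.1 * rng.normal(size=(n, n))

G0 = -np.linalg.inv(Box0)            # Box0 G0 = -1
G_exact = -np.linalg.inv(Box0 + P)   # (Box0 + P) G = -1

# partial sums of G = G0 + G0 P G0 + G0 P G0 P G0 + ...
G_series = G0.copy()
term = G0.copy()
for _ in range(40):
    term = term @ P @ G0
    G_series = G_series + term

assert np.allclose(G_series, G_exact, atol=1e-10)
```

The series converges because the perturbation is small; for the field-theoretic problem the analogous statement is the formal expansion in powers of $P$.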
For simplicity, let $P$ be a potential. One obtains the loop
expanded in powers of $P$:
\begin{equation}
\parbox{30pt}{
\begin{picture}(30,20)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,13){$x$}
\end{picture}
}
{}=\sum_{n}\int dy_1\ldots dy_n\,F(x|y_1,\ldots y_n)P(y_1)\ldots P(y_n)\;.
\end{equation}
The coefficients $F$ will be called formfactors.
The formfactors are loop diagrams
\begin{equation}
F(x|y)={}
\parbox{57pt}{
\begin{picture}(57,30)(-8,5)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(41,23){$y$}
\put(0,23){\llap{$x$}}
\end{picture}
}\;,
\end{equation}
\begin{equation}
F(x|y_1,y_2)={}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(0,20){\line(3,2){30}}
\put(0,20){\line(3,-2){30}}
\put(30,0){\line(0,1){40}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}\;,
\end{equation}
$$
\hbox to 120pt{{}\dotfill{}}
$$
with the same propagator for all lines: the trivial-background
Green's function
\begin{equation}
\parbox{30pt}{
\begin{picture}(30,3)
\put(0,1.5){\line(1,0){30}}
\end{picture}
}
{}=G_0\;.
\end{equation}
What is $G_0$? With the trivial background and the standard
in- and out-vacuum states, it is the Feynman Green's function:
\begin{equation}
G_0=G_{\mbox{\sc\scriptsize feynman}}\;.
\end{equation}
Let us do the same thing for the initial-value problem. The loop
in the expectation-value equations will, in the same way, be
expanded in powers of the perturbation, and the expansion will
have the same form (2.28), but the formfactors will be different
because the variational law for $G$ is different. It is now
(1.65) rather than (1.43). Using this law, one obtains for the
formfactors three diagrams in place of one:
\begin{equation}
F(x|y)={}
\parbox{57pt}{
\begin{picture}(57,30)(-8,5)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(21.5,30){\vector(-1,0){3}}
\put(41,23){$y$}
\put(0,23){\llap{$x$}}
\end{picture}
}
{}+{}
\parbox{57pt}{
\begin{picture}(57,30)(-8,5)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(21.5,10){\vector(-1,0){3}}
\put(41,23){$y$}
\put(0,23){\llap{$x$}}
\end{picture}
}
{}-{}
\parbox{57pt}{
\begin{picture}(57,30)(-8,5)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(21.5,30){\vector(-1,0){3}}
\put(21.5,10){\vector(-1,0){3}}
\put(41,23){$y$}
\put(0,23){\llap{$x$}}
\end{picture}
}\;,
\end{equation}
five diagrams in place of one:
\begin{eqnarray}
F(x|y_1,y_2)={}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(0,20){\line(3,2){30}}
\put(30,0){\vector(-3,2){15}}
\put(0,20){\line(3,-2){15}}
\put(30,40){\vector(0,-1){20}}
\put(30,0){\line(0,1){20}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}
{}+{}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(30,40){\vector(-3,-2){15}}
\put(0,20){\line(3,2){15}}
\put(30,0){\vector(-3,2){15}}
\put(0,20){\line(3,-2){15}}
\put(30,0){\line(0,1){40}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}
{}+{}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(30,40){\vector(-3,-2){15}}
\put(0,20){\line(3,2){15}}
\put(0,20){\line(3,-2){30}}
\put(30,40){\line(0,-1){20}}
\put(30,0){\vector(0,1){20}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}
\nonumber\\
{}-{}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(30,40){\vector(-3,-2){15}}
\put(0,20){\line(3,2){15}}
\put(30,0){\vector(-3,2){15}}
\put(0,20){\line(3,-2){15}}
\put(30,40){\vector(0,-1){20}}
\put(30,0){\line(0,1){20}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}
{}-{}
\parbox{53pt}{
\begin{picture}(53,54)(-8,-7)
\put(30,40){\vector(-3,-2){15}}
\put(0,20){\line(3,2){15}}
\put(30,0){\vector(-3,2){15}}
\put(0,20){\line(3,-2){15}}
\put(30,40){\line(0,-1){20}}
\put(30,0){\vector(0,1){20}}
\put(0,23){\llap{$x$}}
\put(33,39){$y_1$}
\put(33,-2){$y_2$}
\end{picture}
}\;,
\end{eqnarray}
and so on. There are two types of propagators in these diagrams:
the trivial-background $G$, and the trivial-background retarded
or advanced Green's function. Respectively, there are two types
of lines:
\begin{equation}
\parbox{30pt}{
\begin{picture}(30,3)
\put(0,1.5){\line(1,0){30}}
\end{picture}
}
{}=G_0\;,\qquad
\parbox{30pt}{
\begin{picture}(30,3)
\put(30,1.5){\vector(-1,0){30}}
\end{picture}
}
{}=G_0^-\mbox{ or }G_0^+\;.
\end{equation}
In the latter case, the arrow
points in the direction of increasing time. And what is now $G_0$?
In terms of the linear field (2.5) it is
\begin{equation}
\frac{1}{{\rm i}}G_0^{jk}=\langle\mbox{in vac}|
\overleftarrow{T}({\hat\phi}^j{\hat\phi}^k)
|\mbox{in vac}\rangle
\Bigl|_{\mbox{\scriptsize trivial background}}
\end{equation}
and differs from the previous case in that the
``$\langle\mbox{out vac}|$'' is replaced by the
``$\langle\mbox{in vac}|$''. But, with the trivial background, the
vacuum for the linear field is stable. The out-vacuum coincides
with the in-vacuum. Therefore,
\begin{equation}
G_0=G_{\mbox{\sc\scriptsize feynman}}\quad\mbox{(again!)}\;.
\end{equation}
The diagrams above are called Schwinger--Keldysh diagrams.
There is no more than one Feynman propagator in each diagram.
The remaining ones are the retarded and advanced Green's
functions organized in a special way and with special signs of
the diagrams themselves. There is a mystery in this special
arrangement. What do these diagrams want
to tell us? We must uncover their secret because working with
them directly is not to be recommended.
\subsubsection{Mystery of The Schwinger--Keldysh Diagrams}
One thing is obvious right away. In the diagrams above, there
is always a chain of retarded Green's functions connecting
a given point $y$ with the observation point $x$. Therefore,
the formfactor vanishes if at least one of the $y$'s is in
the future of $x$. This is the {\it retardation property}
\begin{equation}
F(x|y_1,\ldots y_n)=0\quad\mbox{ when }\; y_m>x\;\;\mbox{ for at least one }m\;.
\end{equation}
But this is true of every Schwinger--Keldysh diagram, and why do they
appear in the special combinations? What is the role of the
Feynman propagator?
Let us make a Fourier transformation of the formfactor with
respect to the differences $(x-y_m)$ in the Minkowski coordinates:
\begin{equation}
F(x|y_1,\ldots y_n)
=\int dk_1\ldots dk_n\,\exp\Bigl({\rm i}\sum_{m=1}^n k_m(x-y_m)\Bigr)
f(k_1,\ldots k_n)\;.
\end{equation}
How can $F$ possess the retardation property? Only if $f$
admits an analytic continuation
to the upper half-plane in the timelike components of the $k$'s.
Then, for $y_m$ later than $x$, we shall be able to close the
integration contour in the upper half-plane of $k_m^0$,
and the integral will vanish. There should be a function
of complex momenta
$f(z_1,\ldots z_n)$ analytic in the upper
half-planes of $z_m^0$ and such that
$f(k_1,\ldots k_n)$
is its limiting value on the real axes:
\begin{equation}
f(k_1,\ldots k_n)=f(z_1,\ldots z_n)\Bigl|_{\textstyle z_m^0=k_m^0+
{\rm i}\varepsilon}\;.
\end{equation}
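A one-dimensional toy example (mine, not from the text) makes the link between analyticity and one-sidedness concrete: $f(k)=1/(k+{\rm i})^2$ is analytic in the upper half-plane, so $F(t)=\int dk\,{\rm e}^{{\rm i}kt}f(k)$ vanishes for $t>0$, while for $t<0$ the contour closes around the double pole at $k=-{\rm i}$, giving $F(t)=2\pi t\,{\rm e}^{t}$:

```python
import numpy as np

def F(t, K=400.0, dk=0.005):
    # Fourier transform of f(k) = 1/(k + i)^2, which is analytic
    # in the upper half-plane of k (its only pole is at k = -i)
    k = np.arange(-K, K, dk)
    return np.sum(np.exp(1j * k * t) / (k + 1j) ** 2) * dk

# t > 0: the contour closes in the pole-free upper half-plane -> F vanishes
assert abs(F(2.0)) < 1e-2

# t < 0: the contour closes around the double pole, F(t) = 2 pi t exp(t)
exact = 2.0 * np.pi * (-1.0) * np.exp(-1.0)
assert abs(F(-1.0) - exact) < 1e-2
```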
Let us build this function.
All diagrams in a given-order formfactor are similar. They are
all integrals over the momentum circulating in the loop, and
the integrands are identical. The difference is only in the
integration contours. Thus any diagram in the lowest-order
formfactor $f(k)$ is of the form
\begin{equation}
\parbox{57pt}{
\begin{picture}(57,30)(-8,5)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(0,20){\circle*{1.5}}
\put(41,23){$k$}
\end{picture}
}
{}=\int d{\vec p}\int\limits_{\displaystyle {\cal C}}dp^0\,
\frac{\mbox{polynomial in momenta}}{\left(-p^0{}^2+{\vec p}^2\right)
\left(-(p^0-k^0)^2+({\vec p}-{\vec k})^2\right)}\;.
\end{equation}
There are, generally, as many factors in the denominator as
there are propagators in the loop, and each factor contains
two poles. The contour ${\cal C}$ passes round them in
accordance with the type of the propagator. One of the three
rules applies to each pair of poles:
$$
\begin{array}{c}
\parbox{164pt}{
\begin{picture}(164,18)(-28,-12)
\put(-28,0){\line(1,0){10}}
\put(18,0){\line(1,0){10}}
\put(-12,0){\circle*{3}}
\put(12,0){\circle*{3}}
\put(0,0){\oval(36,12)[t]}
\put(50,0){retardation rule,}
\end{picture}
}\\
\parbox{164pt}{
\begin{picture}(164,24)(-28,-12)
\put(-28,0){\line(1,0){10}}
\put(18,0){\line(1,0){10}}
\put(-12,0){\circle*{3}}
\put(12,0){\circle*{3}}
\put(0,0){\oval(36,12)[b]}
\put(50,0){advancement rule,}
\end{picture}
}\\
\parbox{164pt}{
\begin{picture}(164,18)(-28,-6)
\put(-28,0){\line(1,0){10}}
\put(18,0){\line(1,0){10}}
\put(-12,0){\circle*{3}}
\put(12,0){\circle*{3}}
\put(-12,0){\oval(12,12)[b]}
\put(12,0){\oval(12,12)[t]}
\put(-6,0){\line(1,0){12}}
\put(50,0){Feynman rule.}
\end{picture}
}\\
\end{array}
$$
Let us now shift the external momentum $k^0$ to the complex plane.
The poles will then move into the complex plane, but we shall also
deform the contour smoothly so that it does not cross the poles.
In this way one can build a function of complex momenta for
each Schwinger--Keldysh diagram.
Thus the lowest-order formfactor with
complex momentum, $f(z)$, is a sum of three functions:
\begin{equation}
f(z)=
\int d{\vec p}\int\limits_{\displaystyle {\cal C}_1}dp^0\,(\ldots)+
\int d{\vec p}\int\limits_{\displaystyle {\cal C}_2}dp^0\,(\ldots)-
\int d{\vec p}\int\limits_{\displaystyle {\cal C}_3}dp^0\,(\ldots)\;,
\end{equation}
and the contours ${\cal C}_1$, ${\cal C}_2$, ${\cal C}_3$
for $z^0$ in the upper half-plane are shown in Fig. 1.
By considering the pinch
conditions, i.e., the conditions that the poles pinch the
integration contour, one can check in each case that these
functions can have singularities only on the real axis.
Therefore, if we consider them in the upper half-plane, they
are analytic, and their limits on the real axis are our
original diagrams.
It remains to understand what these functions are.
Since the integrands are identical, the sum of the integrals in (2.42)
is the integral over the sum of the contours
\begin{equation}
f(z)=
\int d{\vec p}\int\limits_{\displaystyle {\cal C}_1+{\cal C}_2-{\cal C}_3}
dp^0\,(\ldots)\;.
\end{equation}
Sum up the three contours in Fig. 1. The resultant contour is
such that every pair of poles is passed round by the Feynman rule.
It may be called the Feynman contour.
\input figone.tex
But the Feynman contour also defines the in-out formfactor (2.29),
in which both propagators are Feynman; the difference is that the in-out
formfactor is not the limit of $f(z)$ from the upper half-plane.
It is this limit on only half of the real axis, and on the
other half it is the limit from the lower half-plane.
The \mbox{in-in} and in-out formfactors are different boundary values of
the same complex function having a cut on the real axis:
\begin{equation}
\mbox{in-in}\;:\qquad f(k)=f(z)
\Bigl|_{\textstyle z^0=k^0+{\rm i}\varepsilon}\;,
\hphantom{\;{}}
\end{equation}
\begin{equation}
\mbox{in-out}\;:\qquad f(k)=f(z)
\Bigl|_{\textstyle z^0=(1+{\rm i}\varepsilon)k^0}\;,
\end{equation}
and the function itself is the integral over the Feynman contour
\begin{equation}
f(z)=
\int d{\vec p}\int\limits_{\displaystyle
{\cal C}_{\mbox{\sc\scriptsize feynman}}}
dp^0\,(\ldots)\;.
\end{equation}
The same is true of all $n$-th order formfactors, and this
resolves the mystery. In each case, the set of
Schwinger--Keldysh diagrams is just a splitting of one
Feynman diagram whose purpose is to display the retardation
property and in this way to tell us which boundary value
is to be taken.
\subsubsection{Reduction to The Euclidean Effective Action}
The Feynman contour is famous for the fact that, when the
external momenta are on the imaginary axis, the Feynman contour
is the imaginary axis itself. With all the momenta imaginary,
both the external ones and the one circulating in the loop, this is the
Euclidean formfactor. Then we can {\it start} with the
calculation of the Euclidean
formfactor and next analytically continue it in momenta from the
imaginary axis to the real axis either in the way shown in Fig. 2(a)
or in the way shown in Fig. 2(b). In the first case we shall obtain
the in-out formfactor, and in the second case the in-in formfactor
of Lorentzian theory. It is invaluable that loops can be calculated
Euclidean.
\input figtwo.tex
Then let us make one more step. A formfactor with the Euclidean momentum
can be put in the spectral form
\begin{equation}
f(k)=\int\limits_0^\infty dm^2\,
\frac{\rho(m^2)}{m^2+k^2}+\mbox{ a polynomial in }k^2\;,
\end{equation}
$$
k^2>0
$$
with some spectral weight $\rho(m^2)$, the resolvent $1/(m^2+k^2)$,
and a polynomial accounting for a possible growth of $f(k)$ at
$k^2\to\infty$. There are similar forms for the higher-order formfactors.
If the formfactor is in the spectral form, the procedure of analytic
continuation boils down merely to replacing the Euclidean resolvent
with the retarded or Feynman resolvent:
\begin{equation}
\hphantom{{}\;{}}
\mbox{in-in}\;:\qquad
f(k)=\int\limits_0^\infty dm^2\,
\frac{\rho(m^2)}{m^2-(k^0+{\rm i}\varepsilon)^2+{\vec k}^2}
+\mbox{ a polynomial in }k^2\;,
\end{equation}
\begin{equation}
\mbox{in-out}\;:\qquad
f(k)=\int\limits_0^\infty dm^2\,
\frac{\rho(m^2)}{m^2-k^0{}^2+{\vec k}^2-{\rm i}\varepsilon}+
\mbox{ a polynomial in }k^2\;.
\end{equation}
Note that the spectral weight is the same in all cases: the one
of the Euclidean loop. Thus, the problem boils down to obtaining
the spectral weights of the Euclidean formfactors.
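A standard example, assumed here for illustration, is the logarithmic
formfactor of a massless loop. In the Euclidean region $k^2>0$, with a
subtraction point $\mu^2$,
$$
\ln\frac{k^2}{\mu^2}=\int\limits_0^\infty dm^2
\left(\frac{1}{m^2+\mu^2}-\frac{1}{m^2+k^2}\right)\;,
$$
which is the spectral form with $\rho(m^2)=-1$ and the $\mu^2$-term
playing the role of the subtraction. The replacement rules then give the
retarded logarithm
$\ln\bigl[(-(k^0+{\rm i}\varepsilon)^2+{\vec k}^2)/\mu^2\bigr]$
for the in-in case and the Feynman logarithm
$\ln\bigl[(-k^0{}^2+{\vec k}^2-{\rm i}\varepsilon)/\mu^2\bigr]$
for the in-out case.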
We then go back from the Fourier-transformed formfactors to the
formfactors themselves, and from the formfactors to the
mean-field equations. For the loop in these equations
expanded in powers of the perturbation, we obtain an
expression of the following form:
\begin{eqnarray}
\parbox{30pt}{
\begin{picture}(30,20)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,13){$x$}
\end{picture}
}
&{}=&(c_1+c_2\Box_0+\ldots)P(x)\nonumber\\
&{}+&\int\limits_0^\infty dm^2\,\rho(m^2)
\frac{1}{m^2-\Box_0}P(x)\nonumber\\
&{}+&\int\limits_0^\infty dm_1^2dm_2^2dm_3^2\,\rho(m_1^2,m_2^2,m_3^2)
\nonumber\\
&&{}\times\frac{1}{m_1^2-\Box_0}\left[
\left(\frac{1}{m_2^2-\Box_0}P(x)\right)
\left(\frac{1}{m_3^2-\Box_0}P(x)\right)\right]\nonumber\\
&{}+&\ldots\;.
\end{eqnarray}
Here the first term is local. It comes from the polynomial in
the spectral form. The remaining terms are nonlocal but expressed
through the resolvent which is a Green's function of the massive
operator $\Box_0-m^2$. It is initially the Euclidean Green's function
since we are calculating the Euclidean loop. For the Lorentzian equations,
we arrive at the following rule.
To obtain the expectation-value equations in the in-vacuum state,
replace all the Euclidean resolvents in (2.50) with the retarded
Green's functions.
To obtain the mean-field equations for the
in-out problem, replace all the Euclidean resolvents
with the Feynman Green's functions:
\begin{equation}
\parbox{172.7pt}{
\begin{picture}(172.7,57.6)(-67.1,-24.3)
\put(0,0){\vector(1,0){48.6}}
\put(0,0){\vector(2,1){48.6}}
\put(0,0){\vector(2,-1){48.6}}
\put(55.6,24.3){Euclidean,}
\put(55.6,0){Retarded,}
\put(55.6,-24.3){Feynman.}
\put(-11.4,0){\llap{All ${\displaystyle\frac{1}{m^2-\Box_0}}$}}
\end{picture}
}
\end{equation}
At every level of expectation-value theory, there are proofs
that the expectation-value equations possess two basic properties:
they are real and causal. Causality is the retardation property
discussed above. But it is not enough to have proofs. These
properties should be manifestly built into the working formalism.
Expression (2.50) offers such a formalism. Since the retarded
resolvent secures the causality and is real, this expression
is manifestly real and causal.
But even this is not enough. The theory may possess symmetries,
and one may want these symmetries to be manifest. To this end
it will be noted that, although expansion (2.50) is obtained in
terms of the trivial-background resolvent $1/(m^2-\Box_0)$,
it can be regrouped so as to restore the full-background
resolvent
\begin{equation}
\frac{1}{m^2-S_2}=\frac{1}{m^2-\Box_0-P}
\end{equation}
at each order. It does not matter whether this regrouping will
be made in the expectation-value equations or in the Euclidean
equations because the retarded and Euclidean Green's functions
obey the same variational law (1.43):
\begin{equation}
\frac{1}{m^2-\Box_0}=\frac{1}{m^2-S_2}-
\frac{1}{m^2-S_2}P\frac{1}{m^2-S_2}+\ldots\;.
\end{equation}
This proves that the rule of replacing resolvents applies to
the full-background resolvents as well as to the trivial-background
ones. The latter fact is important because the Euclidean loops
can be calculated covariantly from the outset, and the transition
to the expectation-value equations by replacing
the full-background resolvents does not break
the manifest symmetries. The expectation-value equations are
obtained in as good an approximation as the Euclidean equations
are.
A final observation remains to be made. For the Euclidean
equations, {\it there is} an effective action:
\begin{equation}
\parbox{30pt}{
\begin{picture}(30,20)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){$\scriptstyle i$}
\end{picture}
}
{}=
\frac{\delta}{\delta\varphi^i}
\;
\parbox{20pt}{
\begin{picture}(20,20)
\thicklines
\put(10,10){\circle{20}}
\end{picture}
}
\end{equation}
because the variational law for the Euclidean Green's function is (1.43).
It is invaluable that loops can be calculated without external
lines. This reduces the calculations greatly, helps to control
symmetries, helps to control renormalizations.
Thus, at the end of the day, we conclude that {\it there is} an
action that generates the expectation-value equations but it
does so indirectly, i.e., {\it not} through the least-action
principle. To make this clear, consider (for the illustrative
purposes only) any quadratic action:
$$
\Gamma(\varphi)=\frac{1}{2}\int dx\,\varphi f(\Box_0)\varphi\;.
$$
Whatever the operator $f(\Box_0)$ is, in the variational derivative
it gets symmetrized:
$$
\frac{\delta\Gamma(\varphi)}{\delta\varphi}=\frac{1}{2}\left(f(\Box_0)+
f^{\rm T}(\Box_0)\right)\varphi=f^{\mbox{\small sym}}
(\Box_0)\varphi\;.
$$
Assuming that the function $f(\Box_0)$ is in the spectral form
$$
f(\Box_0)=\int\limits_0^\infty dm^2\,\rho(m^2)
\frac{1}{m^2-\Box_0}\;,
$$
one obtains the variational equations with the symmetrized resolvent:
$$
\int\limits_0^\infty dm^2\,\rho(m^2)
\left(\frac{1}{m^2-\Box_0}\right)^{\mbox{sym}}\varphi=-J\;.
$$
These cannot be the expectation-value equations since they are
not causal. But, through the derivation above, we know how to
correct this: just to replace the symmetrized resolvent with the
retarded resolvent. The corrected equations
$$
\int\limits_0^\infty dm^2\,\rho(m^2)
\left(\frac{1}{m^2-\Box_0}\right)^{\mbox{ret}}\varphi=-J
$$
no longer follow from any action directly, although indirectly they do.
Only if the action $\Gamma(\varphi)$ is local, i.e.,
the function $f(\Box_0)$ is polynomial,
does the least-action principle hold directly.
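The acausality of the symmetrized resolvent can be made explicit. The
kernel transpose interchanges retarded and advanced boundary conditions,
$G_{\rm ret}(x,y)=G_{\rm adv}(y,x)$, so the symmetrization produces the
half-sum
$$
\left(\frac{1}{m^2-\Box_0}\right)^{\mbox{sym}}=\frac{1}{2}
\left[\left(\frac{1}{m^2-\Box_0}\right)^{\mbox{ret}}
+\left(\frac{1}{m^2-\Box_0}\right)^{\mbox{adv}}\right]\;,
$$
which contains the advanced part and is therefore acausal. Replacing
this half-sum by the retarded resolvent alone restores causality but
destroys the symmetry that a variational derivative necessarily has,
which is why the corrected equations cannot follow from any action
directly.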
Two precepts should be kept in mind when using the formalism above.
First, the replacement rule concerns the resolvents of the formfactors
and not the
propagators in the loop. The loop should be calculated Euclidean.
Hence
\subparagraph{First Precept:}
first do the loop, next replace the resolvents.\par
\medskip
\noindent Second, the replacement of resolvents
is to be made in the equations and not in the action. It does not
make sense to make it in the action. Hence
\subparagraph{Second Precept:}
first vary the action, next replace the resolvents.\par
\medskip
We thus go over to the calculation of the Euclidean effective action.
}
\section[Lecture 3]{The Effective Action}
\label{sec:3}
{\renewcommand{\theequation}{3.\arabic{equation}}
\subsubsection{The Operator $S_2$}
Here $\varphi^i$ is a set of fields for which a more explicit notation will
now be used:
\begin{equation}
\varphi^i=\varphi^a(x)\;.
\end{equation}
The operator $S_2$ acts on a small disturbance of $\varphi^i$ and is
a second-order differential operator
\begin{equation}
S_{ij}\delta\varphi^j=\left(X^{\mu\nu}_{ab}\partial_\mu\partial_\nu
+Y^\mu_{ab}\partial_\mu+Z_{ab}\right)\delta\varphi^b(x)\;.
\end{equation}
The generality of this operator will, however, be restricted
by the condition that the coefficient of the senior term factorizes as
\begin{equation}
X^{\mu\nu}_{ab}=\omega_{ab}\,g^{\mu\nu}\;,\qquad
\det\omega_{ab}\ne 0\;,\;\;\det g^{\mu\nu}\ne 0\;.
\end{equation}
In this case, the operator (3.2) is said to be diagonal, or minimal,
or nonexotic. Condition (3.3) is too restrictive and not necessary.
It can be replaced by a more general condition
\begin{equation}
\det\left(X^{\mu\nu}_{ab}n_\mu n_\nu\right)
=C(g^{\mu\nu}n_\mu n_\nu)^d
\quad\forall n_\mu\;,\;\quad d=\dim a\;,\;\;C\ne 0\;,\;\;
\det g^{\mu\nu}\ne 0\;,
\end{equation}
and even this condition can be generalized. Higher-order and
first-order operators can also be considered but, in all of these
cases, the Green's functions of $S_2$ are expressed through the
Green's functions of a diagonal second-order operator. The case
(3.3) is basic.
In the case (3.3), the matrix $\omega_{ab}$ can be factored out:
\begin{equation}
S_{ij}\delta\varphi^j=\omega_{ac} H^c_b\delta\varphi^b(x)\;,
\end{equation}
and a covariant derivative can be introduced:
\begin{equation}
\mathop{\nabla_\mu}\delta\varphi^a=\left(\delta^a_b\partial_\mu
+{\cal A}_\mu{}^a_b\right)\delta\varphi^b
\end{equation}
so as to absorb the first-order term:
\begin{equation}
H^a_b=\delta^a_b g^{\mu\nu}\nabla_\mu\nabla_\nu+P^a_b\;.
\end{equation}
This is the final form of $S_2$. A short notation will
be used:
\begin{equation}
H=\Box{\hat 1}+{\hat P}
\end{equation}
where
\begin{equation}
\Box\equiv g^{\mu\nu}\nabla_\mu\nabla_\nu\;,
\end{equation}
and the hat designates a matrix in $a,b$:
\begin{equation}
{\hat 1}=\delta^a_b\;,\quad
{\hat P}=P^a_b\;,\quad\mathop{\rm tr}{\hat P}=P^a_a\;,\quad
\mbox{etc.}
\end{equation}
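For illustration (a special case assumed here, not used in what follows),
take a single scalar with nonminimal coupling, with a sign convention in
which the Euclidean action reads
$$
S=-\frac{1}{2}\int dx\,g^{1/2}\left(g^{\mu\nu}\nabla_\mu\varphi
\nabla_\nu\varphi+\xi R\,\varphi^2\right)
\quad\Longrightarrow\quad
H=\Box-\xi R\;,
$$
so that ${\hat 1}$ is the $1\times 1$ unit matrix, the connection in
$\nabla_\mu$ is the Levi-Civita one, $\omega$ reduces to unity, and
${\hat P}=-\xi R$.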
The matrix $\omega_{ab}$ may be regarded as a local metric in the space
of fields. The symmetry of $S_2$ implies that this matrix
is symmetric, covariantly constant, and converts ${\hat P}$ into a
symmetric form:
\begin{equation}
\omega_{ab}=\omega_{ba}\;,\quad\;\;\nabla_\mu\omega_{ab}=0\;,
\end{equation}
\begin{equation}
P^c_a\omega_{cb}-P^c_b\omega_{ca}=0\;.
\end{equation}
The dominant energy condition implies that $\omega_{ab}$ is positive
definite. The matrix $g^{\mu\nu}$ is the inverse of the metric on
the base manifold. Since we are considering Euclidean theory,
this metric is positive definite too.
Apart from the algebraic factor $\omega_{ac}$ in (3.5), the operator $S_2$
contains three background fields:
\begin{equation}
g^{\mu\nu}\;,\quad\nabla_\mu\;,\quad{\hat P}
\end{equation}
i.e., the metric, the connection (or covariant derivative), and the
matrix potential. And where is the original background $\varphi$ of
$S_2(\varphi)$? When $S_2$ is calculated from the action $S$, the metric,
connection, and potential are obtained as functions of the
original set of fields $\varphi$, but from now on it does not matter.
The effective action is expressed in a universal manner through
the fields (3.13) only.
The strengths of the fields (3.13) are respectively the Riemann
tensor, the commutator of covariant derivatives, and the potential
which is its own strength:
\begin{equation}
R_{\alpha\beta\mu\nu}\;,\quad
[\nabla_\mu,\nabla_\nu]={\hat{\cal R}}_{\mu\nu}\;,\quad {\hat P}\;.
\end{equation}
I shall call these field strengths curvatures and use for them
the collective notation
\begin{equation}
\left(\,R_{\alpha\beta\mu\nu}\,,\;
{\hat{\cal R}}_{\mu\nu}\,,\; {\hat P}\,\right)=\Re\;.
\end{equation}
The following contractions of the curvatures will be called
currents:
\begin{equation}
{\hat J}_\mu\equiv\nabla^\nu{\hat{\cal R}}_{\mu\nu}\;,
\end{equation}
\begin{equation}
J_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\;,\quad
J\equiv g^{\mu\nu}J_{\mu\nu}\;.
\end{equation}
The currents are conserved:
\begin{equation}
\nabla^\mu{\hat J}_\mu=0\;,\quad\;\;\nabla^\mu J_{\mu\nu}=0\;.
\end{equation}
If all the curvatures vanish, the background is trivial. The
effective action is a functional of the curvatures (3.15).
\subsubsection{Redundancy of The Curvatures}
The effective action is a nonlocal functional of the curvatures,
and this fact conditions a certain simplification.
Since the commutator curvature is a commutator, it satisfies
the Jacobi identity, and so does the Riemann curvature:
\begin{equation}
\nabla_\gamma{\hat{\cal R}}_{\mu\nu}+
\nabla_\nu{\hat{\cal R}}_{\gamma\mu}+
\nabla_\mu{\hat{\cal R}}_{\nu\gamma}=0\;,
\end{equation}
\begin{equation}
\nabla_\gamma R_{\alpha\beta\mu\nu}+
\nabla_\nu R_{\alpha\beta\gamma\mu}+
\nabla_\mu R_{\alpha\beta\nu\gamma}=0\;.
\end{equation}
Act on these identities with $\nabla^\gamma$. In the first term,
the operator $\Box$ forms, and in the remaining terms commute
the covariant derivatives. The commutator brings an extra power
of the curvature. The equations obtained
\begin{equation}
\Box{\hat{\cal R}}_{\mu\nu}+O(\Re^2)=
2\nabla_{[\nu}{\hat J}_{\mu]}\;,
\end{equation}
\begin{equation}
\Box R_{\alpha\beta\mu\nu}+O(\Re^2)=
4\nabla_{[\mu}\nabla_{\langle\alpha}\left(
J_{\nu]\beta\rangle}-\frac{1}{2}g_{\nu]\beta\rangle}J\right)
\end{equation}
hold identically and have the form of inhomogeneous
wave equations, the role of inhomogeneity being played by
the currents. In (3.21), (3.22), the brackets of both types $[\,]$ and
$\langle\,\rangle$ denote the antisymmetrization in the respective
indices.
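In the abelian flat-space case, where ${\hat{\cal R}}_{\mu\nu}$ reduces
to the Maxwell field strength $F_{\mu\nu}$ and the commutator terms
vanish, (3.21) holds exactly:
$$
\partial^\gamma\left(\partial_\gamma F_{\mu\nu}
+\partial_\nu F_{\gamma\mu}+\partial_\mu F_{\nu\gamma}\right)=0
\quad\Longrightarrow\quad
\Box F_{\mu\nu}=\partial_\nu J_\mu-\partial_\mu J_\nu
=2\,\partial_{[\nu}J_{\mu]}\;,
$$
with $J_\mu=\partial^\nu F_{\mu\nu}$ as in (3.16).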
The equations (3.21) and (3.22) are nonlinear but they can be solved
by iteration. The result is that the commutator and Riemann
curvatures get expressed in a nonlocal fashion through their
currents and an arbitrary solution of the homogeneous wave
equation
\begin{equation}
\Box{\hat{\cal R}}_{\mu\nu}^{\mbox{\small wave}}=0\;,\quad\;\;
\Box R_{\alpha\beta\mu\nu}^{\mbox{\small wave}}=0\;.
\end{equation}
If the metric is Lorentzian, this solution is fixed by initial
data which can be given in the remote past. It follows
that the commutator and Riemann curvatures are specified by
giving an incoming wave and the current $J$.
This fact underlies the Maxwell and Einstein equations. They
fix the currents $J$.
Adding initial conditions to these equations specifies the
connection and metric.
In the present case, since the metric is
Euclidean, there are no wave solutions:
\begin{equation}
{\hat{\cal R}}_{\mu\nu}^{\mbox{\small wave}}=0\;,\quad\;\;
R_{\alpha\beta\mu\nu}^{\mbox{\small wave}}=0\;,
\end{equation}
and the Green's function $1/\Box$ is unique. Therefore, the
commutator and Riemann curvatures are expressed entirely
through their currents:
\begin{equation}
{\hat{\cal R}}_{\mu\nu}=
\frac{1}{\Box}2\nabla_{[\nu}{\hat J}_{\mu]}+O(J^2)\;,
\end{equation}
\begin{equation}
R_{\alpha\beta\mu\nu}=
\frac{1}{\Box}4\nabla_{[\mu}\nabla_{\langle\alpha}\left(
J_{\nu]\beta\rangle}-\frac{1}{2}g_{\nu]\beta\rangle}J\right)
+O(J^2)\;.
\end{equation}
Thus, the curvatures are redundant because there are no waves
in Euclidean theory. Owing to this fact, the set of field
strengths (3.15) reduces to
\begin{equation}
\left(\,J_{\mu\nu}\,,\;
{\hat J}_\mu\,,\; {\hat P}\,\right)\;,
\end{equation}
and the effective action is a functional of the reduced set.
\subsubsection{The Axiomatic Effective Action}
To what class of functionals does the effective action belong?
One can say in advance that this should be a functional
analytic in the curvature. Indeed, the first variational
derivative of the effective action
taken at the trivial background should vanish
because, in the absence of an external source, the relative
vacuum becomes the absolute vacuum. The trivial background
should solve the mean-field equations in the absolute vacuum.
Higher-order variational derivatives taken at the trivial
background determine the correlation functions in the
absolute vacuum. They may not vanish but neither should they
blow up.
The analyticity suggests that the effective action can be built
as a sum of nonlocal invariants of $N$-th order in the curvature:
\begin{equation}
\Gamma=\sum_N\Gamma_N\;,\quad\;\;\Gamma_N=O[\Re^N]\;.
\end{equation}
A nonlocal invariant is, however, an ill-defined concept.
Even a local invariant of $N$-th
order in the curvature is a concept that needs to be refined,
but this is easy to do. The most general local monomial that can be built
out of the available quantities yields an invariant of the form
\begin{equation}
\int dx\,g^{1/2}\underbrace{(\nabla_1{\scriptstyle\ldots}\nabla_1)
(\nabla_2{\scriptstyle\ldots}\nabla_2)\ldots}_{\displaystyle k}
\Re_1\Re_2\ldots\Re_N+O[\Re^{N+1}]\;.
\end{equation}
This monomial is a product of $N$ curvatures and $k$ covariant derivatives,
all indices being contracted by the metric. In
(3.29), the labels $1,2,\ldots$ point out which derivative acts
on which curvature but all the curvatures are at the same point,
and the total number of derivatives is finite. Of course, the
curvature sits also in the covariant derivatives and in the metric
that contracts the indices. Therefore, the $N$-th order invariant can
only be defined up to terms $O[\Re^{N+1}]$. In particular, the covariant
derivatives in (3.29) can be commuted freely because the contribution
of a commutator is already $O[\Re^{N+1}]$.
One may now consider a class of nonlocal invariants that can formally
be represented as infinite series of local invariants:
\begin{equation}
\Gamma_N=\int dx\,g^{1/2}\sum^\infty_{k=0}c_k
\underbrace{(\nabla_1{\scriptstyle\ldots}\nabla_1)
(\nabla_2{\scriptstyle\ldots}\nabla_2)\ldots}_{\displaystyle k}
\Re_1\Re_2\ldots\Re_N+O[\Re^{N+1}]\;.
\end{equation}
Here $c_k$ are some dimensional constants. It can be seen that this
is the needed class\footnote{To see it, consider any diagram
with massive propagators and expand it formally in the inverse mass.
The method that accomplishes this expansion is known as the
Schwinger--DeWitt technique.}. The number of curvatures in (3.30)
is $N$ but the number of derivatives is unlimited. Only a finite
number of derivatives can contract with the curvatures. The
remaining ones can only contract among themselves. If two
derivatives acting on the same curvature contract, they make
a $\Box$ operator acting on this curvature:
\begin{equation}
\nabla_1{}^2=\Box_1\;,\quad\nabla_2{}^2=\Box_2\;,\;\;\ldots\;.
\end{equation}
If two derivatives acting on different curvatures contract,
the contraction can again be written in terms of the $\Box$ operators:
\begin{eqnarray}
2\nabla_1\nabla_2&=&(\nabla_1+\nabla_2)^2-\nabla_1{}^2
-\nabla_2{}^2\nonumber\\
&=&\Box_{1+2}-\Box_1-\Box_2
\end{eqnarray}
but there appears a $\Box$ operator acting on
the product of two curvatures:
\begin{equation}
\mathop{\Box_{1+2}}\Re_1\Re_2\Re_3\ldots=
\Box\left(\Re\Re\right)\,\Re_3\ldots\;.
\end{equation}
As a result, (3.30) takes the form
\begin{eqnarray}
\Gamma_N=\int dx\,g^{1/2}\left(\sum^\infty_{k_1,k_2,\cdots=0}c_k
(\Box_1)^{k_1}(\Box_2)^{k_2}(\Box_{1+2})^{k_3}\ldots\right)
\hphantom{a\Gamma+O[\Re^{N+1}]}{}
\nonumber\\
{}\times\underbrace{\Bigl(\nabla{\scriptstyle\ldots}\Re_1
\nabla{\scriptstyle\ldots}\Re_2\ldots
\nabla{\scriptstyle\ldots}\Re_N\Bigr)}_{\mbox{\small contraction}}
+O[\Re^{N+1}]\;.\nonumber\\
\end{eqnarray}
There remains an infinite series in the $\Box$ variables, and these
variables themselves are operators acting on the curvatures in a given
contraction. The remaining series is some function of the $\Box$
variables:
\begin{equation}
\Gamma_N=\int dx\,g^{1/2}
F\left(\Box_1,\Box_2,\Box_{1+2},\ldots\right)
\underbrace{\Bigl(\nabla{\scriptstyle\ldots}\Re_1
\nabla{\scriptstyle\ldots}\Re_2\ldots
\nabla{\scriptstyle\ldots}\Re_N\Bigr)}_{\mbox{\small contraction}}
+O[\Re^{N+1}]\;.
\end{equation}
This is the general form of a nonlocal invariant of $N$-th order
in the curvature. The function $F$ is a formfactor.
There is, in addition, the identity
\begin{equation}
\nabla_1+\nabla_2+\ldots+\nabla_N=0
\end{equation}
which reduces the number of variables in the function $F$. The sum
in (3.36) is a derivative acting on the product of all curvatures,
i.e., a total derivative. Total derivatives vanish because the
curvatures may be considered as having compact support. Thus invariants
of first order in the curvature can only be local because any
derivative is a total derivative. Therefore, the first-order
formfactors are constants:
\begin{equation}
N=1\;:\qquad F={}\mbox{const.}
\end{equation}
At the second order, all formfactors are functions of only
one argument because the remaining arguments can be eliminated by
integration by parts:
\begin{equation}
N=2\;:\qquad F=F(\Box_1)\;,
\end{equation}
$$
\Box_2=\Box_1\;,\quad\Box_{1+2}=0\;.
$$
At the third order, all formfactors are functions of three
individual $\Box$'s because the $\Box$'s acting on pairs
can be eliminated:
\begin{equation}
N=3\;:\qquad F=F(\Box_1,\Box_2,\Box_3)\;,
\end{equation}
$$
\Box_{1+2}=\Box_3\;,\quad\Box_{1+3}=\Box_2\;,\quad
\Box_{2+3}=\Box_1\;.
$$
The $\Box$'s acting on pairs appear beginning with the fourth order
in the curvature and are parameters of the on-shell scattering
amplitudes.
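With identity (3.36), these eliminations are immediate, modulo
integration by parts and commutators that are of higher order in the
curvature:
$$
N=2\;:\quad\nabla_2=-\nabla_1\;\Rightarrow\;\Box_2=\Box_1\;,\quad
\Box_{1+2}=(\nabla_1+\nabla_2)^2=0\;;
$$
$$
N=3\;:\quad\nabla_1+\nabla_2=-\nabla_3\;\Rightarrow\;
\Box_{1+2}=(\nabla_1+\nabla_2)^2=\Box_3\;,
$$
and cyclically for $\Box_{1+3}=\Box_2$, $\Box_{2+3}=\Box_1$. At $N=4$,
a sum of two derivatives out of four can no longer be traded for an
individual $\Box$, which is why the pair boxes first appear there.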
Nonlocal invariants of a given order make a linear space in which
all possible contractions of $N$ curvatures and their derivatives
make a basis, and the formfactors play the role of expansion
coefficients. The basis can be built by listing all
independent contractions. The effective action is an expansion
in this basis with certain coefficients, the formfactors:
\begin{equation}
\Gamma=\Gamma_{\rm I}+\Gamma_{\rm II}+\Gamma_{\rm III}+
\ldots\;,
\end{equation}
\begin{equation}
\Gamma_{\rm I}=\int dx\,g^{1/2}
\Bigl[\,c_1R+c_2\mathop{\rm tr}{\hat P}\,\Bigr]\;,
\end{equation}
\begin{eqnarray}
\Gamma_{\rm II}=\int dx\,g^{1/2}\mathop{\rm tr}\,
\Bigl[\,R_{\mu\nu}&F_1(\Box)&R^{\mu\nu}\nonumber\\
{}+R&F_2(\Box)&R\nonumber\\
{}+{\hat P}&F_3(\Box)&R\nonumber\\
{}+{\hat P}&F_4(\Box)&{\hat P}\nonumber\\
{}+{\hat{\cal R}}_{\mu\nu}&F_5(\Box)&{\hat{\cal R}}^{\mu\nu}\,\Bigr]\;,
\end{eqnarray}
\begin{eqnarray}
\Gamma_{\rm III}=\int dx\,g^{1/2}\mathop{\rm tr}
&\Bigl[&
F_1(\Box_1,\Box_2,\Box_3)\,{\hat P}_1{\hat P}_2{\hat P}_3
\nonumber\\
&+&F_2(\Box_1,\Box_2,\Box_3)\,
{\hat{\cal R}}_1{}^\mu{}_\alpha
{\hat{\cal R}}_2{}^\alpha{}_\beta
{\hat{\cal R}}_3{}^\beta{}_\mu\nonumber\\
&+&\cdots\nonumber\\
&+&F_{29}(\Box_1,\Box_2,\Box_3)\,
\nabla_\lambda\nabla_\sigma R_1^{\alpha\beta}
\nabla_\alpha\nabla_\beta R_2^{\mu\nu}
\nabla_\mu\nabla_\nu R_3^{\lambda\sigma}\,\Bigr]\;.\nonumber\\
\end{eqnarray}
In the first-order action (3.41), there are 2 basis contractions:
the Ricci scalar and the trace of the matrix potential, and
the formfactors are constants. In the second-order action,
there are 5 independent contractions listed in (3.42). In the
third-order action, there are 29 basis contractions, examples
of which are given in (3.43). Here I shall stop because, for
the problems of interest, the third order is sufficient.
The reason for that will be explained in the next lecture.
In the expressions above, the basis invariants are written in
terms of the curvatures but they can be rewritten in terms
of the conserved currents. Note also that the operator arguments
of the third-order formfactors $F$ commute because they act on
different objects. Since the arguments commute, the functions $F$
themselves are ordinary functions of three variables.
Thus, even before any calculation, we have an ansatz for the
effective action, with unknown formfactors.
We need them in the spectral forms
\begin{equation}
F_{\rm k}(\Box)=\int\limits_0^\infty dm^2\,
\frac{\rho_{\rm k}(m^2)}{m^2-\Box}
+\mbox{ a polynomial in }\Box\;,
\end{equation}
\begin{equation}
F_{\rm k}(\Box_1,\Box_2,\Box_3)=\int\limits_0^\infty
dm_1^2dm_2^2dm_3^2\,
\frac{\rho_{\rm k}(m_1^2,m_2^2,m_3^2)}{(m_1^2-\Box_1)
(m_2^2-\Box_2)(m_3^2-\Box_3)}\;,
\end{equation}
and then we can proceed directly to the expectation-value equations.
Unknown are only the spectral weights. These are to be calculated
from the loop diagrams but there is an alternative approach.
One can look for the general limitations on the spectral weights stemming
from axiomatic theory. These limitations may be
sufficient to solve one's expectation-value problem. In this case,
the solution will prove to be independent of the details of the
quantum-field model and the approximations made in it. Moreover,
the effective action above does not refer even to quantum field theory.
It is an action for the observable field, and its implications
may be valid irrespective of the underlying fundamental theory.
Only certain axiomatic properties of the spectral weights may be
important. There is an example in which this approach has been
implemented \cite{53}.
Here, the axiomatic approach will not be considered. Let us see
how the effective action is calculated from loops.
\subsubsection{Heat Kernel}
Consider any diagram in the effective action
\begin{equation}
\parbox{42pt}{
\begin{picture}(42,42)(-21,-21)
\put(0,10.5){\line(-1,0){17.195}}
\put(0,10.5){\line(1,0){17.195}}
\put(0,-10.5){\line(-1,0){17.195}}
\put(0,-10.5){\line(1,0){17.195}}
\put(0,0){\line(1,1){13.85}}
\put(0,0){\line(-1,-1){13.85}}
\put(0,0){\circle{42}}
\end{picture}
}\;\,,
\end{equation}
and, for every propagator, write
\begin{equation}
\parbox{28pt}{
\begin{picture}(28,3)
\put(0,1.5){\line(1,0){28}}
\end{picture}
}
{}=-\frac{1}{H}=\int\limits_0^\infty ds\,{\rm e}^{sH}\;.
\end{equation}
The kernel of the exponential operator
\begin{equation}
{\rm e}^{sH}\delta(x,y)\equiv {\hat K}(x,y|s)
\end{equation}
(and the operator itself) is called the heat kernel, and the
parameter $s$ is often called the proper time. Both names are
matters of history; the physical point is the fact
that $H$ is negative definite. The matrix $P$ in (3.8)
may spoil the negativity but, since it is treated
perturbatively, as one of the curvatures, this does not matter.
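Mode by mode, the proper-time representation of the propagator is
elementary: for an eigenfunction $H\psi=-\lambda\psi$ with $\lambda>0$,
$$
\int\limits_0^\infty ds\,{\rm e}^{-s\lambda}=\frac{1}{\lambda}
=-\,\frac{1}{(-\lambda)}\;,
$$
so the $s$-integral converges at the upper limit precisely because
$H$ is negative definite.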
Upon the insertion of (3.47), the diagram remains the same
as before but with
the heat kernels in place of the propagators, and the integrations
over the proper times will be left for the last:
\begin{equation}
\parbox{42pt}{
\begin{picture}(42,42)(-21,-21)
\put(0,10.5){\line(-1,0){17.195}}
\put(0,10.5){\line(1,0){17.195}}
\put(0,-10.5){\line(-1,0){17.195}}
\put(0,-10.5){\line(1,0){17.195}}
\put(0,0){\line(1,1){13.85}}
\put(0,0){\line(-1,-1){13.85}}
\put(0,0){\circle{42}}
\end{picture}
}
{}=\int\limits_0^\infty ds_1\ldots
\int\limits_0^\infty ds_n
\;\,
\parbox{42pt}{
\begin{picture}(42,42)(-21,-21)
\put(0,10.5){\line(-1,0){17.195}}
\put(0,10.5){\line(1,0){17.195}}
\put(0,-10.5){\line(-1,0){17.195}}
\put(0,-10.5){\line(1,0){17.195}}
\put(0,0){\line(1,1){13.85}}
\put(0,0){\line(-1,-1){13.85}}
\put(0,0){\circle{42}}
\put(-1,12.5){\llap{$s_1$}}
\put(8,-8.5){$s_n$}
\put(3,0){$\scriptstyle\ldots$}
\end{picture}
}\;\,.
\end{equation}
The one-loop effective action is the functional trace
of the heat kernel, integrated over $s$:
\begin{equation}
\parbox{20pt}{
\begin{picture}(20,20)
\put(10,10){\circle{20}}
\end{picture}
}
{}=\frac{1}{2}\ln\det\frac{1}{H}=\frac{1}{2}\int_0^\infty
\frac{ds}{s}\int dx\,\mathop{\rm tr}{\hat K}(x,x|s)\;.
\end{equation}
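The per-mode content of this formula can be checked with the Frullani
integral: for an eigenvalue $-\lambda$ of $H$ and any reference value
$\lambda_0>0$,
$$
\int\limits_0^\infty\frac{ds}{s}\left({\rm e}^{-s\lambda_0}
-{\rm e}^{-s\lambda}\right)=\ln\frac{\lambda}{\lambda_0}\;,
$$
so $\frac{1}{2}\int_0^\infty ds\,s^{-1}{\rm e}^{-s\lambda}$ reproduces
$\frac{1}{2}\ln(1/\lambda)$ up to a field-independent (and divergent)
normalization fixed by $\lambda_0$.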
Thus, one is left with diagrams with the heat kernels.
It will be seen in a moment why this is better.
The expansion rule for the exponential operator has already been
considered in (1.27). There remains to be
presented the lowest-order approximation for the heat kernel:
\begin{equation}
{\hat K}(x,y|s)=\frac{1}{(4\pi s)^{D/2}}
\left({\rm e}^{-\sigma(x,y)/2s}{\hat a}(x,y)+O[\Re]\right)\;,
\end{equation}
\begin{equation}
D={}\mbox{dimension of the base manifold.}
\end{equation}
At the lowest order in the curvature, the potential $P$ does
not affect this expression but the metric and connection do.
As mentioned above, covariant expansions cannot be rigid.
In (3.51):
\begin{equation}
2\sigma(x,y)=(\mbox{geodesic distance between $x$ and $y$})^2
\end{equation}
in the metric entering the operator $H$. The connection entering
the operator $H$ defines a parallel transport along a line.
Parallel transport is a linear mapping, so there exists
a propagator of parallel transport (the matrix that accomplishes
this mapping). In (3.51):
\begin{equation}
{\hat a}(x,y)={}\parbox[t]{8cm}{propagator of the parallel transport
from $y$ to $x$\\along the geodesic connecting $y$ and $x$.}
\end{equation}
The geodesic comes from the metric, and the parallel transport
from the connection.
The two-point functions (3.53) and (3.54) are the main elements
of the Schwinger--DeWitt technique mentioned above and the basic
building blocks for all the Green's functions of the hyperbolic
and elliptic operators $H$, as well as for the heat kernel.
What is special about the heat kernel? Special is the fact that,
as seen from expression (3.51), the heat kernel is finite at the
coincident points. Green's functions of the hyperbolic and
elliptic operators are singular, and this is normal. Abnormal
is the fact that in the loop diagrams they appear at the coincident
points. Finiteness of the heat kernel at the coincident points is
a bonus owing to which all diagrams with the heat kernels are
finite.
The divergences of the loop diagrams reappear in the proper-time
integrals in (3.49). These integrals diverge at the lower limits.
At this stage, one more advantage of the heat kernel comes into
effect. Namely, the manifold dimension $D$ enters only the
overall factor in (3.51). Apart from this factor, the expansion
of the heat kernel in the curvature does not contain $D$ explicitly.
Therefore, loops with the heat kernels are calculated once for
all dimensions, and then the knowledge of the analytic dependence
on $D$ enables one to apply the dimensional regularization to
the proper-time integrals. One integrates by parts in $s$ keeping
$\mathop{\rm Re}D<4$ and next goes over to the limit $D\to 4$.
For example,
\begin{equation}
\int\limits_0^\infty\frac{ds}{s^{D/2-1}}f(s)=\frac{1}{2-D/2}f(0)
-\int\limits_0^\infty ds\,\ln s \frac{df(s)}{ds}
+O\left(2-D/2\right)\;.
\end{equation}
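This prescription can be checked on a concrete test function. With $f(s)={\rm e}^{-s}$ and $D=4-2\epsilon$ (choices made purely for the check), the left-hand side becomes $\Gamma(\epsilon)$, whose Laurent expansion must reproduce the pole plus the finite integral. A sympy sketch:

```python
import sympy as sp

# Test the integration-by-parts formula on f(s) = exp(-s), D = 4 - 2*eps.
eps, s = sp.symbols('epsilon s', positive=True)

# Left-hand side: int_0^oo ds s^{1 - D/2} f(s) = Gamma(eps)
lhs = sp.integrate(s**(eps - 1) * sp.exp(-s), (s, 0, sp.oo))

# Right-hand side: pole f(0)/(2 - D/2) = 1/eps, plus the finite piece
# -int_0^oo ds ln(s) f'(s) = +int_0^oo ds ln(s) exp(-s) = -EulerGamma
finite = sp.integrate(sp.log(s) * sp.exp(-s), (s, 0, sp.oo))

# Laurent expansion of the left-hand side reproduces both terms:
print(sp.simplify(sp.series(lhs, eps, 0, 1).removeO() - (1/eps + finite)))
```

Note that the power divergence is absent from the start; only the $1/\epsilon$ pole, i.e. the logarithmic divergence, survives.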
The dimensional regularization annihilates all power divergences.
Only the logarithmic divergences survive and take the form of
poles in dimension. These poles affect only the polynomial terms
in the spectral representations of the formfactors. They appear in
the coefficients of the polynomials, thereby making these coefficients
indefinite. As a consequence, the local terms of
the effective action will have indefinite coefficients.
I shall come back to this issue.
After the substitution of the heat kernels for the propagators,
the calculation of loops becomes an entertaining geometrical
exercise.
\subsubsection{Loops and Geometry}
The heat kernel involves $\sigma$ and ${\hat a}$.
The derivative of $\sigma$
\begin{equation}
\nabla^\mu \sigma(x,y)\equiv\sigma^\mu(x,y)
\;\;\qquad
\parbox{90pt}{
\begin{picture}(90,25.7)(-31.5,-10.7)
{\thicklines
\qbezier(-30,0)(0,20)(15,10.04)
}
\put(15,10.04){\vector(3,-2){11.8}}
\put(-30,0){\circle*{3}}
\put(15,10.04){\circle*{3}}
\put(-25.5,-5.7){$y$}
\put(15,14.54){\llap{$x$}}
\put(29.8,1.8){$\sigma^\mu(x,y)$}
\end{picture}
}
\end{equation}
is the vector tangent to the geodesic connecting $y$ and $x$,
directed outwards, and normalized to the geodetic distance
between $y$ and $x$:
\begin{equation}
g_{\mu\nu}\sigma^\mu\sigma^\nu=2\sigma\;,\quad\;\;
\sigma^\mu\Bigl|_{x=y}=0\;,\quad\;
\det\nabla^\nu\sigma^\mu\Bigl|_{x=y}\ne 0\;.
\end{equation}
The normalization condition is a closed equation for $\sigma$
which together with the conditions at the coincident points
can serve as the definition of $\sigma$. The defining equation
for ${\hat a}$ together with the condition at the coincident points is
\begin{equation}
\sigma^\mu\nabla_\mu{\hat a}(x,y)=0\;,\quad\;\;
{\hat a}\Bigl|_{x=y}={\hat 1}\;.
\end{equation}
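In flat space the world function is simply half the squared distance, and the normalization condition can be verified directly. A two-dimensional sympy check (the choice of coordinates and dimension is an assumption of the check, not of the text):

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# Flat-space world function: sigma(x, y) = |x - y|^2 / 2
sigma = ((x1 - y1)**2 + (x2 - y2)**2) / 2

# sigma^mu = nabla^mu sigma (flat metric, so indices are trivial)
grad = [sp.diff(sigma, v) for v in (x1, x2)]

# Normalization condition g_{mu nu} sigma^mu sigma^nu = 2 sigma,
# and sigma^mu vanishes at the coincident points:
print(sp.simplify(grad[0]**2 + grad[1]**2 - 2*sigma))   # 0
print([g.subs({x1: y1, x2: y2}) for g in grad])         # [0, 0]
```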
The determinant
\begin{equation}
\det\Bigl(\nabla^x_\mu\nabla^y_\nu\sigma(x,y)\Bigr)=
g^{1/2}(x)g^{1/2}(y)\Delta (x,y)
\end{equation}
is known as the Van Vleck--Morette determinant. It is responsible,
in particular, for a caustic of the geodesics
emanating from $x$ or $y$.
The vector $\sigma^\mu$ can be used to expand any function in
a covariant Taylor series. For a scalar, this series is of the form
\begin{equation}
f(y)=\sum_{n=0}^\infty\frac{(-1)^n}{n!}\sigma^{\mu_1}\ldots
\sigma^{\mu_n}\nabla_{\mu_1}\ldots\nabla_{\mu_n}f(x)\;.
\end{equation}
If $f$ is not a scalar, it should at first be parallel
transported from $y$ to $x$:
\begin{equation}
f(y)={\hat a}(y,x)\sum_{n=0}^\infty\frac{(-1)^n}{n!}
\sigma^{\mu_1}\ldots
\sigma^{\mu_n}\nabla_{\mu_1}\ldots\nabla_{\mu_n}f(x)\;.
\end{equation}
The covariant Taylor expansion is a regrouping of the ordinary
Taylor expansion. Whatever the connection is, it cancels in
this series. The series can formally be written in the
exponential form
\begin{equation}
f(y)={\hat a}(y,x)\exp\left(-\sigma^\mu\nabla_\mu\right)f(x)
\end{equation}
which will be of use below. Two-point functions expanded
in this way get expressed through their covariant derivatives
at the coincident points. Thus
\begin{equation}
\Delta (x,y)=1+\frac{1}{6}R_{\mu\nu}\sigma^\mu\sigma^\nu+\ldots\;.
\end{equation}
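In flat space with a trivial connection, ${\hat a}={\hat 1}$ and $\sigma^\mu=x^\mu-y^\mu$, and the exponential form above collapses to the ordinary shift operator $f(y)={\rm e}^{-\sigma\partial}f(x)$. A one-dimensional sympy sanity check, with $f=\sin$ chosen arbitrarily and the series truncated:

```python
import sympy as sp

x, sigma = sp.symbols('x sigma')

# Truncated exponential shift: sum_n (-sigma)^n / n! d^n f / dx^n
N = 8
f = sp.sin(x)
shifted = sum((-sigma)**n / sp.factorial(n) * sp.diff(f, x, n)
              for n in range(N))

# Compare with the direct Taylor expansion of f(x - sigma):
direct = sp.series(sp.sin(x - sigma), sigma, 0, N).removeO()
print(sp.simplify(shifted - direct))   # 0
```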
A loop always involves the ring of ${\hat a}$'s
\begin{equation}
{\hat a}(x,x_1){\hat a}(x_1,x_2)\ldots{\hat a}(x_n,x)
\;,\;\qquad
\parbox{35pt}{
\begin{picture}(35,28)(-14,-15)
\put(-7,14){\line(1,0){14}}
\put(-7,-14){\line(1,0){14}}
\put(-14,0){\line(1,2){7}}
\put(-14,0){\line(1,-2){7}}
\put(14,0){\line(-1,2){7}}
\put(14,0){\line(-1,-2){7}}
\qbezier(14,14)(24,0)(14,-14)
\put(14,-14){\vector(-4,-3){3}}
\end{picture}
}
\end{equation}
i.e., the parallel transport around a geodetic polygon.
The ring of two ${\hat a}$'s is the parallel transport
there and back along the same path. Therefore,
\begin{equation}
{\hat a}(x,x_1){\hat a}(x_1,x)\equiv {\hat 1}\;.
\end{equation}
The ring of three ${\hat a}$'s is the parallel transport around
the geodetic triangle. It involves the commutator curvature,
and the curvature terms can be calculated:
\begin{equation}
{\hat a}(x,x_1){\hat a}(x_1,x_2){\hat a}(x_2,x)={\hat 1}+
\frac{1}{2}{\hat{\cal R}}_{\alpha\beta}
\sigma_1{}^\alpha\sigma_2{}^\beta+\ldots\;,
\end{equation}
\begin{equation}
\parbox{77pt}{
\begin{picture}(77,54)(-32,-7)
{\thicklines
\put(0,20){\line(3,2){30}}
\put(0,20){\line(3,-2){30}}
\put(30,0){\line(0,1){40}}
}
\put(0,20){\vector(-3,2){12}}
\put(0,20){\vector(-3,-2){12}}
\put(6,18.55){$x$}
\put(33,39){$x_1$}
\put(33,-2){$x_2$}
\put(-12,28){\llap{$\sigma_2{}^\mu$}}
\put(-12,12){\llap{$\sigma_1{}^\mu$}}
\end{picture}
}\;\,.
\end{equation}
This is sufficient
because any polygon can be broken into triangles:
\begin{equation}
\parbox{28pt}{
\begin{picture}(28,28)(-14,-14)
{\thicklines
\put(-7,14){\line(1,0){14}}
\put(-7,-14){\line(1,0){14}}
\put(-14,0){\line(1,2){7}}
\put(-14,0){\line(1,-2){7}}
\put(14,0){\line(-1,2){7}}
\put(14,0){\line(-1,-2){7}}
}
\put(14,0){\line(-3,2){21}}
\put(14,0){\line(-3,-2){21}}
\put(14,0){\line(-1,0){28}}
\put(2.5,3){\vector(-1,0){9.5}}
\put(-7,-3){\vector(1,0){9.5}}
\end{picture}
}\;\,.
\end{equation}
The solution of the geodetic triangle is also involved.
In the notation of (3.67),
\begin{equation}
\Bigl(\sigma^\mu(x_1,x_2)\Bigr)^2=\sigma_1{}^2+\sigma_2{}^2
-2\sigma_1\sigma_2-\frac{1}{3}R_{\mu\alpha\nu\beta}
\sigma_1{}^\mu\sigma_1{}^\nu\sigma_2{}^\alpha\sigma_2{}^\beta
+\ldots\;.
\end{equation}
Here the first two terms make the Pythagorean theorem,
the third term accounts for the angle not being the right
angle, and the terms with the Riemann curvature can be calculated.
The above is to give a flavour of what loops imply.
\subsubsection{Calculation of Loops}
The heat kernel calculates loops with a remarkable elegance.
As an example, consider the contribution of the second order
in the curvature to the effective action. The respective
one-loop diagram contains two curvatures $\Re$ and two
heat kernels with the proper times $s_1$ and $s_2$:
$$
\parbox{80pt}{
\begin{picture}(80,40)(-20,0)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(-7.30,20){\circle*{20}}
\put(47.30,20){\circle*{20}}
\put(-7.30,2.86){\llap{$\Re$}}
\put(47.30,2.86){$\Re$}
\put(20,33.5){$s_1$}
\put(20,3.1){$s_2$}
\end{picture}
}
{}+O[\Re^3]
$$
\begin{equation}
{}=\int dx\,g^{1/2}\int dy\,g^{1/2}\,\Re(x)
{\hat K}(x,y|s_1){\hat K}(x,y|s_2)\Re(y)+O[\Re^3]\;.
\end{equation}
Suppose that the calculation only needs to be done with
accuracy $O[\Re^3]$. Then one can insert in (3.70) the
lowest-order approximation for the heat kernels.
In this approximation, the rings of ${\hat a}$'s collapse
to ${\hat 1}$, and the remaining ${\hat a}$'s always
transport the $\Re$'s to the same point arranging their
complete contraction. With the ${\hat a}$'s and the numerical
coefficients omitted, the diagram (3.70) is of the form
\begin{eqnarray}
\frac{1}{s_1{}^{D/2}}\frac{1}{s_2{}^{D/2}}
\int dx\,g^{1/2}\int dy\,g^{1/2}
\hphantom{
\quad\left(-\frac{\sigma(x,y)}{2s_1}\right)
\exp\left(-\frac{\sigma(x,y)}{2s_2}\right)
}\nonumber\\
{}\times\Re(x)\exp\left(-\frac{\sigma(x,y)}{2s_1}\right)
\exp\left(-\frac{\sigma(x,y)}{2s_2}\right)
\Re(y)\;.
\end{eqnarray}
But the exponents here simply add, and the two heat kernels
turn into one with a complicated proper-time argument:
$$
\frac{1}{(s_1s_2)^{D/2}}\int dx\,g^{1/2}\int dy\,g^{1/2}\,\Re(x)
\exp\left(-\frac{s_1+s_2}{2s_1s_2}\sigma(x,y)\right)
\Re(y)\hphantom{{}={}={}={}}
$$
\begin{equation}
{}=\frac{1}{(s_1+s_2)^{D/2}}\int dx\,g^{1/2}\int dy\,g^{1/2}\,\Re(x)
K\left(x,y\Bigl|\frac{s_1s_2}{s_1+s_2}\right)
\Re(y)\;.
\end{equation}
One only needs to rewrite this heat kernel in the operator form:
\begin{equation}
\frac{1}{(s_1+s_2)^{D/2}}\int dx\,g^{1/2}\,\Re
\exp\left(\frac{s_1s_2}{s_1+s_2}\Box\right)
\Re(y)\;,
\end{equation}
and the loop is done. The proper-time integral
\begin{equation}
\int\limits_0^\infty ds_1
\int\limits_0^\infty ds_2\,
\frac{1}{(s_1+s_2)^{D/2}}
\exp\left(\frac{s_1s_2}{s_1+s_2}\Box\right)
=F(\Box)
\end{equation}
is the formfactor.
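The double proper-time integral can be taken in closed form. Substituting $s_1=s\alpha$, $s_2=s(1-\alpha)$ with $ds_1\,ds_2=s\,ds\,d\alpha$ (a standard step, filled in here rather than taken from the text), one finds, for $\mathop{\rm Re}D<4$ and $\Box<0$,

```latex
F(\Box)=\int\limits_0^1 d\alpha\int\limits_0^\infty ds\,
s^{1-D/2}\exp\Bigl(s\,\alpha(1-\alpha)\Box\Bigr)
=\Gamma\left(2-\frac{D}{2}\right)
\int\limits_0^1 d\alpha\,
\Bigl[-\alpha(1-\alpha)\Box\Bigr]^{D/2-2}\;.
```

At $D\to 4$ the $\Gamma$-function supplies the $1/(2-D/2)$ pole, and the expansion of $[-\alpha(1-\alpha)\Box]^{D/2-2}$ around $D=4$ supplies the $\ln(-\Box)$ behaviour of the second-order formfactors listed below.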
What has happened? The propagators in the loop glued together,
and the loop turned into a tree:
\begin{equation}
\parbox{200pt}{
\begin{picture}(80,40)(-20,0)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\put(-7.30,20){\circle*{20}}
\put(47.30,20){\circle*{20}}
\put(70,20){\vector(1,0){20}}
\put(100,0){\begin{picture}(80,40)(-20,0)
{\linethickness{3pt}
\put(0,20){\line(1,0){40}}
}
\put(-7.30,20){\circle*{20}}
\put(47.30,20){\circle*{20}}
\end{picture}}
\end{picture}
}\;\,.
\end{equation}
This is what it means to do the loop. {\it It means to
turn it into a tree.} The role of the propagator in
the tree is played by the formfactor $F(\Box)$.
Consider now any multi-loop diagram with parallel propagators.
It turns into a tree
\begin{equation}
\parbox{200pt}{
\begin{picture}(80,40)(-20,0)
\qbezier(0,20)(20,40)(40,20)
\qbezier(0,20)(20,0)(40,20)
\qbezier(0,20)(20,25)(40,20)
\qbezier(0,20)(20,15)(40,20)
\qbezier(0,20)(20,50)(40,20)
\qbezier(0,20)(20,-10)(40,20)
\put(-7.30,20){\circle*{20}}
\put(47.30,20){\circle*{20}}
\put(70,20){\vector(1,0){20}}
\put(100,0){\begin{picture}(80,40)(-20,0)
{\linethickness{3pt}
\put(0,20){\line(1,0){40}}
}
\put(-7.30,20){\circle*{20}}
\put(47.30,20){\circle*{20}}
\end{picture}}
\end{picture}
}
\end{equation}
in a completely similar way. The inverse proper times add:
$$
\frac{1}{s_1}+\frac{1}{s_2}+\ldots=
\frac{1}{s_{\mbox{\scriptsize total}}}
$$
(the law of parallel conductors). There is nothing to do.
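The gluing rule is easy to verify on the flat-space kernels themselves. With $D=4$ fixed for definiteness and $2\sigma$ the squared distance, the product of two lowest-order kernels (3.51) is one kernel at the effective proper time, times the $(s_1+s_2)^{-D/2}$ prefactor. A sympy sketch:

```python
import sympy as sp

s1, s2, sigma = sp.symbols('s1 s2 sigma', positive=True)
D = 4   # dimension, fixed for the check

# Lowest-order flat heat kernel, with a-hat = 1:
def K(s):
    return (4*sp.pi*s)**sp.Rational(-D, 2) * sp.exp(-sigma/(2*s))

s_eff = s1*s2/(s1 + s2)   # inverse proper times add: 1/s1 + 1/s2 = 1/s_eff

lhs = K(s1) * K(s2)
rhs = (4*sp.pi)**sp.Rational(-D, 2) * (s1 + s2)**sp.Rational(-D, 2) * K(s_eff)
print(sp.simplify(lhs - rhs))   # 0
```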
For more than two curvatures a more powerful method is used.
Consider the diagram
\begin{equation}
\parbox{80pt}{
\begin{picture}(80,70)(-40,-20)
\put(-20,0){\line(1,0){40}}
\put(0,30){\line(-2,-3){20}}
\put(0,30){\line(2,-3){20}}
\put(-26.53,-3.265){\circle*{20}}
\put(26.53,-3.265){\circle*{20}}
\put(0,37.30){\circle*{20}}
\put(-17,8){\llap{$y_1$}}
\put(17,8){$y_2$}
\put(6,27){$x$}
\end{picture}
}
{}+O[\Re^4]\;,
\end{equation}
and suppose again that it is needed only up to the
next order in the curvature. Then, with the ${\hat a}$'s
and the numerical coefficients omitted, it is of the form
\begin{eqnarray}
\frac{1}{s_1{}^{D/2}}\frac{1}{s_2{}^{D/2}}
\frac{1}{s_3{}^{D/2}}
\int dx\,g^{1/2}\int dy_1\,g^{1/2}\int dy_2\,g^{1/2}
\hphantom{
\Re(y_1)\Re(y_2)\Re\;{}
}\nonumber\\
{}\times\exp\left(-\frac{\sigma(x,y_1)}{2s_1}
-\frac{\sigma(x,y_2)}{2s_2}
-\frac{\sigma(y_1,y_2)}{2s_3}\right)\Re(x)\Re(y_1)\Re(y_2)\;.
\;{}
\end{eqnarray}
Choose one of the vertices, say $x$, to be the observation point
of the effective Lagrangian. One of the curvatures, $\Re(x)$,
is already there. Shift the remaining curvatures to $x$ using
the covariant Taylor series:
\begin{equation}
\Re(y_i)=\exp\left(-\sigma_i{}^\mu\nabla_\mu\right)\Re(x)\;,
\end{equation}
\begin{equation}
\sigma_i{}^\mu=\sigma^\mu(x,y_i)\;,
\quad i=1,2\,.
\end{equation}
Next, consider the geodetic triangle with the same vertices
as in the diagram. For the geodesics connecting $x$ with
$y_i$, write
\begin{equation}
2\sigma(x,y_i)=(\sigma_i)^2\;,
\end{equation}
and, for the geodesic between the $y$'s, use the Pythagorean
theorem:
\begin{equation}
2\sigma(y_1,y_2)=(\sigma_1)^2+(\sigma_2)^2
-2\sigma_1\sigma_2+O[\Re]\;.
\end{equation}
Finally, replace the integration variables:
\begin{equation}
y_1{}^\mu\to\sigma_1{}^\mu\;,\quad
y_2{}^\mu\to\sigma_2{}^\mu\;.
\end{equation}
The Jacobian
\begin{equation}
\left|\frac{\partial\sigma^\mu(x,y_i)}{\partial y_i{}^\nu}
\right|^{-1}=\frac{g^{1/2}(x)}{g^{1/2}(y_i)}\Delta^{-1}(x,y_i)
=\frac{g^{1/2}(x)}{g^{1/2}(y_i)}\left(1+O[\Re]\right)
\end{equation}
removes the measure $g^{1/2}$ from the integral in $y_i$ and
brings an extra $g^{1/2}$ to the integral in $x$. Expression
(3.78) takes the form
\begin{eqnarray}
\frac{1}{(s_1s_2s_3)^{D/2}}\int dx\, g^{1/2}\,
\left(g^{1/2}(x)\right)^2\int d\sigma_1d\sigma_2
\exp\left(-\frac{\sigma_1{}^2}{4s_1}
-\frac{\sigma_2{}^2}{4s_2}
\hphantom{(x)\quad{}}
\vphantom{\frac{\sigma_1{}^2+\sigma_2{}^2-2\sigma_1\sigma_2}{4s_3}}
\right.\nonumber\\
{}-\left.\frac{\sigma_1{}^2+\sigma_2{}^2-2\sigma_1\sigma_2}{4s_3}
-\sigma_1{}^\mu\nabla_\mu{}^1
-\sigma_2{}^\mu\nabla_\mu{}^2\right)
\Re(x)\Re_1(x)\Re_2(x)\;.
\quad{}
\end{eqnarray}
Here the labels $1,2$ on $\nabla_\mu$ and $\Re$ point out which
$\nabla_\mu$ acts on which $\Re$. The operators $\nabla_\mu$
figure as parameters in the integral, and, up to the next order
in $\Re$, they commute. Since the parameters commute, the integral
in $\sigma_1{}^\mu$, $\sigma_2{}^\mu$ is an ordinary Gaussian
integral. Do it. The extra factor $\left(g^{1/2}(x)\right)^2$
cancels, and the result is
\begin{equation}
B(s_1,s_2,s_3)\int dx\,g^{1/2}\exp\left(\sum_{i,k=1}^2
b_{ik}(s_1,s_2,s_3)
\nabla_i\nabla_k\right)\Re(x)\Re_1(x)\Re_2(x)
\end{equation}
where $B(s_1,s_2,s_3)$ is some function of the proper times,
and the exponent is a quadratic form in $\nabla_1$, $\nabla_2$
with $s$-dependent coefficients. The loop is done. The integral
\begin{eqnarray}
\int\limits_0^\infty ds_1ds_2ds_3\,
B(s_1,s_2,s_3)\exp\left(\sum_{i,k=1}^2
b_{ik}(s_1,s_2,s_3)
\nabla_i\nabla_k\right)
\hphantom{\nabla_2)}\nonumber\\
{}=F(\nabla_1{}^2,\nabla_2{}^2,\nabla_1\nabla_2)
\end{eqnarray}
is the formfactor. Integration by parts in $x$ brings it
to the $\Box$ arguments:
\begin{equation}
F(\nabla_1{}^2,\nabla_2{}^2,\nabla_1\nabla_2)\to
F(\nabla_1{}^2,\nabla_2{}^2,\nabla_3{}^2)\;.
\end{equation}
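The Gaussian step can be illustrated in one dimension, treating $\nabla$ as a commuting parameter $a$ (legitimate to the given order in $\Re$, as noted above): the $\sigma$-integral produces the exponential of a quadratic form in $a$, times an $s$-dependent prefactor. A sympy check, with the weight $\sigma^2/4s$ mimicking a single heat-kernel factor:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# One-dimensional model: int dt exp(-t^2/(4s) - a*t)
# equals exp(s*a^2) times a prefactor depending only on s.
I = sp.integrate(sp.exp(-t**2/(4*s) - a*t), (t, -sp.oo, sp.oo))
print(sp.simplify(I - 2*sp.sqrt(sp.pi*s)*sp.exp(s*a**2)))   # 0
```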
The effect of the calculation above is again that the loop
is turned into a tree:
\begin{equation}
\parbox{200pt}{
\begin{picture}(80,70)(-40,-20)
\put(-20,0){\line(1,0){40}}
\put(0,30){\line(-2,-3){20}}
\put(0,30){\line(2,-3){20}}
\put(-26.53,-3.265){\circle*{20}}
\put(26.53,-3.265){\circle*{20}}
\put(0,37.30){\circle*{20}}
\put(50,15){\vector(1,0){20}}
\put(80,-20){\begin{picture}(80,70)(-40,-20)
{\linethickness{3pt}
\put(-20,0){\line(1,0){40}}
\put(0,0){\line(0,1){30}}
}
\put(-27.30,0){\circle*{20}}
\put(27.30,0){\circle*{20}}
\put(0,37.30){\circle*{20}}
\end{picture}}
\end{picture}
}\;\,.
\end{equation}
The vertex of the tree is the formfactor
$F(\nabla_1{}^2,\nabla_2{}^2,\nabla_3{}^2)$.
This method applies to any diagram with the heat kernels. One
only needs to do Gaussian integrals, and the result is always
the exponential of a quadratic combination of $\nabla$'s.
The formfactor is a function of the products $\nabla_i\nabla_k$.
\subsubsection{The One-Loop Formfactors}
The result of the proper-time integrations depends essentially
on the dimension~$D$. For $D=4$, the one-loop formfactors in the
effective action (3.40) are as follows.
With one exception, all second-order formfactors are logs:
\begin{eqnarray}
F_1(\Box)&=&\frac{1}{60}\frac{1}{2(4\pi)^2}\ln(-\Box)
+\mbox{ const.}\;,\\
F_2(\Box)&=&-\frac{1}{180}\frac{1}{2(4\pi)^2}\ln(-\Box)
+\mbox{ const.}\;,\\
F_3(\Box)&=&\frac{1}{18}\frac{1}{2(4\pi)^2}\;,\\
F_4(\Box)&=&\frac{1}{2}\frac{1}{2(4\pi)^2}\ln(-\Box)
+\mbox{ const.}\;,\\
F_5(\Box)&=&\frac{1}{12}\frac{1}{2(4\pi)^2}\ln(-\Box)
+\mbox{ const.}
\end{eqnarray}
Since
\begin{equation}
-\ln(-\Box)=\int\limits_0^\infty dm^2\,\frac{1}{m^2-\Box}
+\mbox{ const.}\;,
\end{equation}
these expressions have the spectral forms (3.44) with definite
spectral weights and indefinite additive constants (polynomials
of degree zero). Accordingly, the effective action contains
a set of local terms with unspecified coefficients:
\begin{eqnarray}
\Gamma&=&\frac{1}{2(4\pi)^2}\int dx\,g^{1/2}\Bigl(
c_1R+c_2\mathop{\rm tr}{\hat P}+c_3R_{\mu\nu}R^{\mu\nu}
+c_4R^2\nonumber\\
&&{}+c_5\mathop{\rm tr}({\hat P}{\hat P})
+c_6\mathop{\rm tr}({\hat{\cal R}}_{\mu\nu}
{\hat{\cal R}}^{\mu\nu})+\frac{1}{18}R\mathop{\rm tr}{\hat P}
+\mbox{ nonlocal terms}\Bigr)\;.\hphantom{\qquad{}}
\end{eqnarray}
The nonlocal terms are specified completely.
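The spectral identity behind these statements is elementary to verify with a cutoff $\Lambda$ on the mass integral and $-\Box\to x>0$ (both introduced only for the check):

```python
import sympy as sp

m2, x, L = sp.symbols('m2 x Lambda', positive=True)

# int_0^Lambda dm^2 / (m^2 + x) = log(Lambda + x) - log(x),
# i.e. -log(x) plus an x-independent (divergent) constant:
I = sp.integrate(1/(m2 + x), (m2, 0, L))
print(sp.simplify(I - (sp.log(L + x) - sp.log(x))))   # 0
```

The $x$-independent constant is exactly the indefinite additive constant of the log formfactors.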
The third-order formfactors have no polynomial terms and
no indefinite coefficients. The simplest third-order formfactor is
${F_1(\Box_1,\Box_2,\Box_3)}$ in (3.43). It has the spectral form
(3.45), and its spectral weight ${\rho_1(m_1^2,m_2^2,m_3^2)}$ is
obtained as follows. Consider a triangle of three spectral masses
\begin{center}
\parbox{190pt}{
\begin{picture}(190,37)(-20,-7)
\put(-20,0){\line(1,0){40}}
\put(0,30){\line(-2,-3){20}}
\put(0,30){\line(2,-3){20}}
\put(-13,13.2){\llap{$m_1$}}
\put(13,13.2){$m_2$}
\put(-4,-6.6){$m_3$}
\put(60,13.2){$A={}$area of the triangle.}
\end{picture}
}
\end{center}
It can be built only if every mass is smaller than the sum
of the two others. The spectral weight $\rho_1$ is zero if
the triangle cannot be built. Otherwise, it is proportional
to the inverse area of this triangle:
\begin{eqnarray}
\rho_1(m_1^2,m_2^2,m_3^2)=-\frac{1}{3}\frac{1}{2(4\pi)^2}
\frac{1}{4\pi A}
\hphantom{\theta(m_1+m_2-m_3)
\theta(m_1+m_3}\nonumber\\
{}\times\theta(m_1+m_2-m_3)
\theta(m_1+m_3-m_2)
\theta(m_2+m_3-m_1)\;.
\end{eqnarray}
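Numerically, this spectral weight is straightforward to implement with Heron's formula for the area; the function below is an illustration only (the name `rho1` is ad hoc, and the normalization follows the formula above):

```python
import math

def rho1(m1, m2, m3):
    """Third-order spectral weight: zero unless m1, m2, m3 can form a
    triangle, otherwise proportional to the inverse triangle area."""
    # The three theta functions are the triangle inequalities:
    if m1 + m2 <= m3 or m1 + m3 <= m2 or m2 + m3 <= m1:
        return 0.0
    p = 0.5 * (m1 + m2 + m3)                            # semi-perimeter
    A = math.sqrt(p * (p - m1) * (p - m2) * (p - m3))   # Heron's formula
    return -(1.0/3.0) / (2.0 * (4.0*math.pi)**2) / (4.0*math.pi * A)

print(rho1(1.0, 1.0, 3.0))   # 0.0: the triangle cannot be built
print(rho1(3.0, 4.0, 5.0))   # negative: a 3-4-5 triangle, area 6
```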
The remaining 28 third-order formfactors are expressed through
$F_1$ and are tabulated in \cite{36}. The tables
contain various integral representations of the formfactors,
and their asymptotics.
The loop of the minimal second-order operator with arbitrary
metric, connection, and potential is called the standard loop
because every calculation with it is done once, and the
results can be tabulated. A calculation in any specific model
boils down to combining the standard loops and using the tables.
A number of recipes for the reduction to minimal operators
can be found in \cite{24}. Doing loops becomes a business similar
to doing integrals.
The fact that some coefficients in the effective action
remain unspecified is no tragedy. The effective
action is a phenomenological object intended for obtaining
the values of observables. The spectral weights are certain
phenomenological characteristics of the vacuum like the
permittivity of a medium. They are to be calculated from
a more fundamental microscopic theory. A microscopic
theory at a given level may be incapable of specifying some of
the coefficients. So what? Classical theory was capable
of even less, and, nevertheless, celestial mechanics has
been successfully worked out\footnote{Remarkably, without
a knowledge of string theory!}. The only important question
is whether the lack of knowledge affects the problems that
we want to solve. This will be cleared up in the next lecture.
}
\section[Lecture 4]{Vacuum Currents and The Effect of Particle Creation}
\label{sec:4}
{\renewcommand{\theequation}{4.\arabic{equation}}
\subsubsection{Vacuum Currents}
Consider quantum electrodynamics. In this case, $\varphi^a(x)$
is a set of the vector connection field and the electron--positron
field
\begin{equation}
\mbox{QED:}\;\;\quad \varphi^a=\Bigl({\cal A}_\mu,\psi\Bigr)\;.
\end{equation}
The commutator curvature is, up to a coefficient,
the Maxwell tensor, and the
operator field equations are of the form
\begin{equation}
\nabla^\nu{\cal R}_{\nu\mu}({\hat{\cal A}})
+J_\mu({\hat\psi})
=-J_\mu^{\mbox{\scriptsize ext}}
\end{equation}
where $J_\mu({\hat\psi})$ is the operator electron--positron
current, and $J_\mu^{\mbox{\scriptsize ext}}$ is an external
source. Averaging these equations over the in-vacuum state,
one obtains, according to the general derivation above,
the same terms but as functions of the mean field plus a set
of loops:
\begin{equation}
\nabla^\nu{\cal R}_{\nu\mu}(\langle{\cal A}\rangle)
+{}
\parbox{31.4pt}{
\begin{picture}(31.4,32)(0,-6)
\put(0,7){$J_\mu(\langle\psi\rangle)$}
\put(3,-6){\line(1,1){33}}
\end{picture}
}
{}+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
{\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){${\scriptstyle{\cal A}}$}
\put(19,22){\llap{${\scriptstyle{\cal A}}$}}
\put(18,-6){\llap{${\scriptstyle{\cal A}}$}}
}
\put(3,-6){\line(1,1){33}}
\end{picture}
}
{}+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
{\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){${\scriptstyle{\cal A}}$}
\put(19,22){\llap{$\psi$}}
\put(18,-6){\llap{${\scriptstyle{\cal A}}$}}
}
\put(3,-6){\line(1,1){33}}
\end{picture}
}
{}+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
{\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){${\scriptstyle{\cal A}}$}
\put(19,22){\llap{${\scriptstyle{\cal A}}$}}
\put(18,-6){\llap{$\psi$}}
}
\put(3,-6){\line(1,1){33}}
\end{picture}
}
{}+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){${\scriptstyle{\cal A}}$}
\put(19,22){\llap{$\psi$}}
\put(18,-6){\llap{$\psi$}}
\end{picture}
}
{}=-J_\mu^{\mbox{\scriptsize ext}}\;.
\end{equation}
There is another such equation, for $\psi$, but, since
$\psi$ has no external source, its solution is
\begin{equation}
\langle\psi\rangle=0\;.
\end{equation}
Then, in (4.3), $J_\mu(\langle\psi\rangle)$ vanishes, and
the loops with the vertices $S_{{\cal A}{\cal A}{\textstyle\psi}}$
vanish. There are no such vertices in QED but, if there were,
as in gravidynamics, they would be proportional to
$\langle\psi\rangle$ and vanish by (4.4). The photon loop
also vanishes because there is no vertex
$S_{{\cal A}{\cal A}{\cal A}}$ either, but this is already a specific
property of QED. Only the electron--positron loop survives.
The surviving loop is a function of $\langle{\cal A}\rangle$,
and, by derivation, is the electron--positron current
averaged over the in-vacuum:
\begin{equation}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,12){${\scriptstyle{\cal A}}$}
\put(19,22){\llap{$\psi$}}
\put(18,-6){\llap{$\psi$}}
\end{picture}
}
{}=J_\mu^{\mbox{\scriptsize vac}}
(\langle{\cal A}\rangle)=
\langle\mbox{in vac}|
J_\mu({\hat\psi})|\mbox{in vac}\rangle\;.
\end{equation}
This is the vacuum current. According to (4.3),
the {\it observable} electromagnetic field satisfies
the Maxwell equations with an addition of the vacuum current:
\begin{equation}
\nabla^\nu{\cal R}_{\nu\mu}({\cal A})
=-J_\mu^{\mbox{\scriptsize vac}}({\cal A})
-J_\mu^{\mbox{\scriptsize ext}}\;.
\end{equation}
We obtain this current by varying the effective action and
next replacing the Euclidean resolvents with the retarded
resolvents:
\begin{equation}
J_\mu^{\mbox{\scriptsize vac}}({\cal A})=\left.
\frac{\delta\Gamma({\cal A})}{\delta{\cal A}^\mu}
\right|_{\Box\to\Box_{\mbox{\scriptsize ret}}}\;,
\end{equation}
\begin{equation}
\Gamma({\cal A})=\int dx\,g^{1/2}\Bigl[{\cal R}F(\Box){\cal R}
+F(\Box_1,\Box_2,\Box_3){\cal R}_1{\cal R}_2{\cal R}_3+\ldots
\Bigr]\;.
\end{equation}
It is completely similar if $\varphi^a(x)$ is a set of the metric field
and any matter fields
\begin{equation}
\mbox{GRAVITY:}\;\;\quad \varphi^a=\Bigl(g_{\mu\nu},\psi\Bigr)\;.
\end{equation}
The only difference is that the vertex $S_{ggg}$ is nonvanishing:
\begin{equation}
R_{\mu\nu}(\langle g\rangle)-\frac{1}{2}\langle
g_{\mu\nu}\rangle R(\langle g\rangle)
+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,13.5){$g$}
\put(19,22){\llap{$\psi$}}
\put(18,-6){\llap{$\psi$}}
\end{picture}
}
{}+{}
\parbox{30pt}{
\begin{picture}(30,32)(0,-6)
\thicklines
\put(20,10){\circle{20}}
\put(0,10){\line(1,0){10}}
\put(0,13.5){$g$}
\put(19,23){\llap{$g$}}
\put(18,-5){\llap{$g$}}
\end{picture}
}
{}=8\pi T_{\mu\nu}^{\mbox{\scriptsize ext}}\;,
\end{equation}
\begin{equation}
\langle\psi\rangle=0\;,
\end{equation}
and it is assumed again that the matter fields have no sources.
Again, by derivation, the matter loop is the energy-momentum
tensor of the field ${\hat\psi}$ averaged over the in-vacuum but
the vacuum current contains, in addition, the graviton loop:
\begin{equation}
T_{\mu\nu}^{\mbox{\scriptsize vac}}
=\langle\mbox{in vac}|
T_{\mu\nu}({\hat\psi})|\mbox{in vac}\rangle
+\mbox{ the graviton loop.}
\end{equation}
The Einstein equations are replaced by the expectation-value
equations in the in-vacuum state:
\begin{equation}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R
=8\pi T_{\mu\nu}^{\mbox{\scriptsize vac}}(g)
+8\pi T_{\mu\nu}^{\mbox{\scriptsize ext}}\;.
\end{equation}
Since the gravitational field couples to everything,
the equation (4.10) should contain loops of all matter fields
in Nature.
The effective actions for all loops including the graviton loop
have the same structure:
\begin{equation}
T_{\mu\nu}^{\mbox{\scriptsize vac}}(g)=-\frac{2}{g^{1/2}}\left.
\frac{\delta\Gamma(g)}{\delta g^{\mu\nu}}
\right|_{\Box\to\Box_{\mbox{\scriptsize ret}}}\;,
\end{equation}
\begin{equation}
\Gamma(g)=\int dx\,g^{1/2}\Bigl[R_{..}F(\Box)R_{..}
+F(\Box_1,\Box_2,\Box_3)R_{1..}R_{2..}R_{3..}+\ldots
\Bigr]\;.
\end{equation}
Only the coefficients of the formfactors are different.
To have the correct coefficients, one would need to know
the full spectrum of particles. Therefore, in the case of
gravity, the axiomatic approach is most suitable.
Now recall that the curvatures are redundant, and the effective
action is in fact a functional of the conserved currents
(3.16) and (3.17). Owing to this fact, the expectation-value
equations (4.6) and (4.13) close with respect to these currents:
\begin{equation}
\Bigl(\nabla^\nu{\cal R}_{\nu\mu}\Bigr)+
f(\Box_{\mbox{\scriptsize ret}})
\Bigl(\nabla^\nu{\cal R}_{\nu\mu}\Bigr)+
O\Bigl(\nabla^\nu{\cal R}_{\nu\mu}\Bigr)^2
=-J_\mu^{\mbox{\scriptsize ext}}\;,
\end{equation}
\begin{eqnarray}
\Bigl(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\Bigr)+
f_1(\Box_{\mbox{\scriptsize ret}})
\Bigl(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\Bigr)
\hphantom{+f_2(\Box_{\mbox{\scriptsize ret}})
(-g_{\mu\nu}\Box)+{}+{}
}\nonumber\\
{}+f_2(\Box_{\mbox{\scriptsize ret}})
(\nabla_\mu\nabla_\nu-g_{\mu\nu}\Box)R
+O\Bigl(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\Bigr)^2
=8\pi T_{\mu\nu}^{\mbox{\scriptsize ext}}\;.\quad{}
\end{eqnarray}
Of course, with respect to the mean fields, these equations
are closed from the outset but, at an intermediate stage,
they are closed with respect to the Maxwell and Einstein
currents. When solved with respect to these currents, they
become literally the Maxwell and Einstein equations with
some external sources but {\it not} the original ones.
To make this clear, use the fact that the vacuum terms
are proportional to the Planck constant and solve the
equations by iteration:
\begin{equation}
\nabla^\nu{\cal R}_{\nu\mu}
=-J_\mu^{\mbox{\scriptsize ext}}
+f(\Box_{\mbox{\scriptsize ret}})
J_\mu^{\mbox{\scriptsize ext}}
+O\left(J_\mu^{\mbox{\scriptsize ext}}\right)^2\;,
\end{equation}
\begin{eqnarray}
R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R
=8\pi T_{\mu\nu}^{\mbox{\scriptsize ext}}
-f_1(\Box_{\mbox{\scriptsize ret}})
8\pi T_{\mu\nu}^{\mbox{\scriptsize ext}}
\hphantom{+f_2(\Box_{\mbox{\scriptsize ret}})+{}
}\nonumber\\
{}+f_2(\Box_{\mbox{\scriptsize ret}})
(\nabla_\mu\nabla_\nu-g_{\mu\nu}\Box)
8\pi T^{\mbox{\scriptsize ext}}
+O\left(T_{\mu\nu}^{\mbox{\scriptsize ext}}\right)^2\;.\quad{}
\end{eqnarray}
These are the Maxwell and Einstein equations with the original
sources propagated in a nonlocal and nonlinear manner.
There is an effect in these equations that drives the entire problem.
\subsubsection{Emission of Charges}
Consider again QED and suppose that the external source
has a compact spatial support. This source is the current of
a set of electrically charged particles moving inside a
spacetime tube but, since the observable electromagnetic
field is the expectation value, only the total current in
(4.6) or (4.18) is observable:
\begin{equation}
J_\mu^{\mbox{\scriptsize tot}}
=J_\mu^{\mbox{\scriptsize ext}}
+J_\mu^{\mbox{\scriptsize vac}}({\cal A})\;.
\end{equation}
And the total current has a noncompact spatial support
because the vacuum contribution is nonlocal. One may
calculate the flux of charge through the support tube of
$J^{\mbox{\scriptsize ext}}$ and even through a wider
tube (see Fig. 3), and it will be nonvanishing:
\begin{equation}
e_{\cal T}(\Sigma_1)-
e_{\cal T}(\Sigma_2)=
\frac{1}{4\pi}\int\limits_{\Sigma_1}^{\Sigma_2}
J_\mu^{\mbox{\scriptsize vac}}\,d{\cal T}^\mu\ne 0\;.
\end{equation}
Here $e_{\cal T}(\Sigma)$ is the amount of the electric charge
contained inside the tube ${\cal T}$ at a given instant $\Sigma$.
The charge inside the tube is not conserved.
\input figthree.tex
If, when moving away from the support of
$J^{\mbox{\scriptsize ext}}$, the flux (4.21) falls off rapidly,
then its nonvanishing only means that the boundary of the
original source gets spread. Because of the creation of
virtual pairs, this boundary can never be located precisely.
The charges of the external source immersed in the quantum
vacuum are always annihilated and created again in a
slightly different place. There is nothing to worry about.
Just step aside a little.
However, one may ask if there is a flux of charge through
an infinitely wide tube:
\begin{equation}
e(\Sigma_1)-
e(\Sigma_2)=
\frac{1}{4\pi}\int\limits_{\Sigma_1}^{\Sigma_2}
J_\mu^{\mbox{\scriptsize vac}}\,d{\cal T}^\mu
\Bigl|_{r\to\infty}\;.
\end{equation}
In this equation, $e(\Sigma)$ is the total amount of the
electric charge in the compact domain of space at a given
instant $\Sigma$. For (4.22) to be nonvanishing,
$J_\mu^{\mbox{\scriptsize vac}}$ should behave as
\begin{equation}
J_\mu^{\mbox{\scriptsize vac}}=O\left(\frac{1}{r^2}\right)\;,
\quad r\to\infty\;,
\end{equation}
\begin{equation}
r\propto\sqrt{\mbox{area of }{\bf{\cal S}}}
\end{equation}
where ${\bf{\cal S}}$ is the intersection of ${\cal T}$
with $\Sigma$ (Fig. 3). In this case, it would turn out
that the charge disappears, i.e., {\it our source is emitting
charge}. But even this may not be a point of concern if the
current in (4.22) oscillates with time, and the oscillations sum
to zero for a sufficiently long period between $\Sigma_1$
and $\Sigma_2$. The expectation values have uncertainties,
and these oscillations are a quantum noise. Just do not
measure (4.22) too often.
However, one may ask if the charge emitted for the entire
history
\begin{equation}
e(-\infty)-
e(+\infty)=
\frac{1}{4\pi}\int\limits_{\Sigma\to -\infty}^{\Sigma\to +\infty}
J_\mu^{\mbox{\scriptsize vac}}\,d{\cal T}^\mu
\Bigl|_{r\to\infty}
\end{equation}
is nonvanishing. There will always be oscillations in the current
but they may not sum to zero. Since, as $r\to\infty$, all fields
fall off, there are, in this limit, the asymptotic Killing vectors
corresponding to all the symmetries of flat and empty spacetime.
Therefore, one may ask the same questions about the emission of
energy and any other charges. Thus the quantity
\begin{equation}
M(-\infty)-
M(+\infty)=
\int\limits_{\Sigma\to -\infty}^{\Sigma\to +\infty}
T_{\mu\nu}^{\mbox{\scriptsize vac}}\xi^\nu\,d{\cal T}^\mu
\Bigl|_{r\to\infty}
\end{equation}
with $\xi^\nu$ the asymptotic timelike Killing vector
is the energy emitted by the source for the entire history.
If the total emitted charges are nonvanishing, then this is
the real effect, and then the question emerges: what are
the carriers of these charges? There should be some real
agents carrying them away. But the particles of the original
source stay in the tube. Besides them, there is only the
electron--positron field but it is in the in-vacuum state.
This means that, at least initially, there are neither
electrons nor positrons. It remains to assume
a miracle: that either the real electrons or the real
positrons -- depending on the sign of the emitted
charge -- get created. Then they are created by pairs,
and, say, the created positron is emitted while the
created electron stays in the compact domain.
This crazy guess can be checked. We have two ways of
calculating the vacuum currents: through the effective
action and by a direct averaging of the operator currents
as in (4.5) and (4.12). Specifically, for the in-vacuum of
electrons and positrons we have
\begin{equation}
T_{\mu\nu}^{\mbox{\scriptsize vac}}=
\langle\mbox{in vac}|
T_{\mu\nu}({\hat\psi})|\mbox{in vac}\rangle
\end{equation}
where $T_{\mu\nu}({\hat\psi})$ is the operator
energy-momentum tensor of the electron--positron field
${\hat\psi}$. The equation for ${\hat\psi}$
\begin{equation}
\left(\lefteqn{\!\not}\partial+\mu-{\rm i}q\langle
\lefteqn{\,\not}{\cal A}\rangle
\right){\hat\psi}=0
\end{equation}
contains the electromagnetic field, which figures in (4.28)
as an external field but is in fact the mean field solving
the expectation-value equations. We know that, in the past,
all mean fields are static. In the future, they become
static again because, if the total emitted charges are finite,
then all the processes should die down. Thus, there are
two asymptotically static regions: in the past and in the
future. The carriers of the emitted charges should be
detectable in the future as particles with definite energies.
But then the state in which they are absent is the
out-vacuum whereas their quantum state is the in-vacuum.
{\it It may be the case that the in-vacuum contains the
out-particles.} This will be the case if, between the
static regions in the past and future, there is a region
where $\langle{\cal A}\rangle$ is nonstatic because then
the basis functions of the Fock modes that are the
eigenfunctions of the energy operator in the future and
the basis functions that are such in the past are different
solutions of the Dirac equation (4.28).
If we expand ${\hat\psi}$ in the basis solutions of the
out-particles, insert this expansion in (4.27), and then
insert (4.27) in (4.26), the result will be
\begin{equation}
M(-\infty)-M(+\infty)=
\Bigl\langle\mbox{in vac}\Bigl|
\sum_A\varepsilon_A\,
{\hat a}^{+}_{\mbox{\scriptsize out}}{}^A
{\hat a}_{\mbox{\scriptsize out}}{}^A
\Bigr|\mbox{in vac}\Bigr\rangle
\end{equation}
where $\varepsilon_A$ is the energy of the out-mode $A$, and
similarly for the other charges. This result needs no comment.
Miracles happen.
\subsubsection{Emission of Charges (Continued)}
An important point concerning miracles is that they do not
always happen. Let us see what is needed for this particular
miracle to happen. For that, it is necessary to introduce
characteristic parameters of the problem. There are two sets
of parameters.
\subparagraph{Parameters of the quantum field:}
$q$, $\mu$.
\subparagraph{Parameters of the external source:}
$e$, $l$, $\nu$.\par
\medskip
\noindent Here, $q$ and $\mu$ are the charge and mass of the
vacuum particles (e.g., of the electrons and positrons),
$e$ is the charge of the external source, $l$ is the
characteristic width of its support tube, and $\nu$ is
the frequency parameter that characterizes the nonstationarity
of the source.
The vacuum current in (4.18) is of the form
\begin{equation}
J^{\mbox{\scriptsize vac}}=
\int\limits_0^\infty dm^2\,\rho(m^2)\frac{1}{m^2-
\Box_{\mbox{\scriptsize ret}}}
J^{\mbox{\scriptsize ext}}
+O\left(J^{\mbox{\scriptsize ext}}\right)^2\;.
\end{equation}
Here and above, the notation
$\Box_{\mbox{\scriptsize ret}}$
records that the resolvent is to be taken retarded.
The structure of the nonlinear terms in (4.30) is similar:
there is an overall resolvent acting on a function quadratic in
$J^{\mbox{\scriptsize ext}}$ (see (2.50)). If the vacuum particles
are massive, the spectral weight will be proportional to
the $\theta$-function:
\begin{equation}
\rho(m^2)\propto\theta(m^2-4\mu^2)
\end{equation}
to tell us that there is a threshold of pair creation. We need
to find the behaviour of
$J^{\mbox{\scriptsize vac}}$
at a large distance from the support of
$J^{\mbox{\scriptsize ext}}$:
\begin{equation}
J^{\mbox{\scriptsize vac}}\Bigl|_{r\gg l}=?
\end{equation}
First we need to calculate the action of the retarded resolvent
on a source $J^{\mbox{\scriptsize ext}}$ having a compact spatial
support. If $J^{\mbox{\scriptsize ext}}$ is static, the result is
\begin{equation}
\left.\frac{1}{m^2-\Box_{\mbox{\scriptsize ret}}}
J^{\mbox{\scriptsize ext}}\right|_{r\gg l}=
\frac{C}{r}\exp(-mr)\;,\;\quad
J^{\mbox{\scriptsize ext}}\mbox{ static.}
\end{equation}
At a large distance from the source, this is the Yukawa
potential. Because the function (4.33) is static, it does not
depend on the spacetime direction in which the limit
$r\gg l$ is taken. If
$J^{\mbox{\scriptsize ext}}$
is nonstatic, this is no longer the case. The limit $r\gg l$
is direction-dependent, and there are directions in which
the decrease is slower. Namely, in the directions of the
outgoing light rays,
\begin{equation}
\left.\frac{1}{m^2-\Box_{\mbox{\scriptsize ret}}}
J^{\mbox{\scriptsize ext}}\right|_{r\gg l}=
\frac{C}{r}\exp\left(-m\sqrt{rU}\right)\;,\;\quad
J^{\mbox{\scriptsize ext}}\mbox{ nonstatic}
\end{equation}
where $U$ is a function of time\footnote{Of the retarded time
since the surfaces $\Sigma$ to which the outgoing light rays
belong are null.} whose order of magnitude is
\begin{equation}
U\sim\frac{1}{\nu}\;.
\end{equation}
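As a consistency check on the static case, the Yukawa falloff (4.33) is just the three-dimensional Fourier transform of the resolvent kernel $(k^2+m^2)^{-1}$. A minimal numerical sketch (the units, the normalization $1/4\pi$, and the values of $m$ and $r$ are conventions assumed here, not taken from the text):

```python
import numpy as np
from scipy.integrate import quad

def yukawa_radial(r, m):
    # G(r) = (1/(2 pi^2 r)) * int_0^inf k sin(kr)/(k^2 + m^2) dk,
    # the 3D Fourier transform of 1/(k^2 + m^2).
    val, _ = quad(lambda k: k / (k**2 + m**2), 0, np.inf,
                  weight='sin', wvar=r)
    return val / (2 * np.pi**2 * r)

m = 1.3
for r in (0.5, 1.0, 2.0, 4.0):
    exact = np.exp(-m * r) / (4 * np.pi * r)   # Yukawa potential, cf. (4.33)
    assert abs(yukawa_radial(r, m) - exact) < 1e-7
```

The oscillatory integral over the half-line is handled by the Fourier-weight routine behind `weight='sin'`.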
Expression (4.34) is to be inserted in the spectral integral
(4.30), and, since the spectrum is cut off from below, we find
that the vacuum current is suppressed by the factor
\begin{equation}
J^{\mbox{\scriptsize vac}}\sim\exp\left(
-\frac{\mu\sqrt{r}}{\sqrt{\nu}}\right)\;,\;\quad r\gg l\;.
\end{equation}
This is what constrains miracles. However, we find also that
the suppressing factor depends on the frequency of the source
and {\it can be removed by raising the frequency}. The farther
from the support of $J^{\mbox{\scriptsize ext}}$, the greater
the frequency should be for the current to be noticeable.
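To appreciate the strength of this suppression, one can restore ordinary units: the exponent in (4.36) is $(\mu c/\hbar)\sqrt{rc/\nu}$. A sketch with assumed, purely illustrative numbers (electron-mass vacuum particles, a laboratory distance, an optical frequency; none of these values come from the text):

```python
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
m_e = 9.1093837015e-31   # kg; the vacuum particles are assumed to be electrons

def suppression_exponent(r, nu):
    # Exponent mu*sqrt(r)/sqrt(nu) of (4.36), restored to SI units:
    # (m c / hbar) * sqrt(r c / nu).
    return (m_e * c / hbar) * math.sqrt(r * c / nu)

# Assumed numbers: observation point at r = 1 m from the tube,
# source frequency nu = 10^15 Hz (optical range).
print(f"exponent ~ {suppression_exponent(1.0, 1.0e15):.2e}")
```

An exponent of order $10^9$ means the vacuum current at laboratory distances is utterly unobservable for ordinary source frequencies, in line with the statement above: the farther from the support, the greater the frequency must be.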
The pair creation starts as soon as the energy $\hbar\nu$
exceeds the threshold
\begin{equation}
\hbar\nu>2\mu c^2
\end{equation}
but, for the source to emit charge, the frequency should be
even greater:
\begin{equation}
\hbar\nu>(\mu c^2)\left(\frac{\mu c}{\hbar}l\right)\;.
\end{equation}
This is easy to understand. The particles start being created
in the support of the source with small momenta and cannot
go far away. The extra factor
$(\mu c/\hbar)l$
in (4.38) may be interpreted as the number of created particles
for which there is room in the support of the source. If the
creation is more violent, the particles get out of the tube.
This is the meaning of condition (4.38). The mechanism of emission
and conservation of charge is illustrated in Fig. 4. There are
initially the charges of the external source in its support tube.
They repel the like particles of the created pairs and,
when the number of the latter exceeds $(\mu c/\hbar)l$, push
them out of the tube. The unlike particles stay in the tube
and diminish its charge.
\input figfour.tex
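For orientation, thresholds (4.37) and (4.38) can be put into numbers for an electron--positron vacuum. The tube width $l$ below is an assumed, purely illustrative value:

```python
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
m_e = 9.1093837015e-31     # kg
MeV = 1.602176634e-13      # J per MeV

l = 1.0e-10                # m, assumed tube width (about one angstrom)

rest = m_e * c**2 / MeV                # mu c^2, about 0.511 MeV
compton = hbar / (m_e * c)             # reduced Compton wavelength, ~3.9e-13 m
n_room = l / compton                   # (mu c / hbar) l: "room" for created pairs
pair_threshold = 2 * rest              # condition (4.37)
emission_threshold = rest * n_room     # condition (4.38)

print(f"{rest:.3f} MeV, factor {n_room:.0f}, "
      f"emission above {emission_threshold:.0f} MeV")
```

With $l$ of atomic size the factor $(\mu c/\hbar)l$ is a few hundred, so the emission of charge requires frequencies more than a hundred times the pair-creation threshold (4.37).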
Since the cause of the vacuum instability is the nonstationarity
of the external source, it is interesting to consider the case where
the energy $\hbar\nu$ exceeds overwhelmingly all the other energy
parameters of the problem. One can then study the strong effect
of particle production. It is assumed, in particular, that
$\hbar\nu$ exceeds both the rest energy of the vacuum particle
and its Coulomb energy in the external field:
\begin{equation}
\hbar\nu\gg\mu c^2\;,
\end{equation}
\begin{equation}
\hbar\nu\gg\frac{qe}{l}\;.
\end{equation}
In the limit (4.39), the flux of charge at a given distance
from the source ceases to depend on the mass $\mu$, and
the vacuum particles can be considered as massless.
Condition (4.40) enables one to disregard the static vacuum
polarization, which is irrelevant to
the problem. The approximation (4.39), (4.40) is called the
high-frequency approximation.
The effective action has been calculated above as an expansion
in powers of the curvature but the conditions of validity
of this expansion have not been discussed. This gap can now be filled.
It is the high-frequency approximation in which this expansion
is valid. Indeed, consider the series (4.8). Each successive term in
this series contains an extra power of ${\cal R}$, and, by dimension,
its formfactor contains an extra power of $\Box^{-1}$. The commutator
curvature is proportional to the charges and to $\hbar^{-1}$:
\begin{equation}
{\cal R}\sim\frac{qe}{\hbar l^2}\;.
\end{equation}
In the limit $r\gg l$ along the outgoing light rays, the
operator $\Box$ contains one time derivative:
\begin{equation}
\Box\sim\frac{\nu}{l}\;.
\end{equation}
As a result, each successive term of the series contains,
compared to the previous one, the extra factor
\begin{equation}
\frac{qe}{\hbar\nu l}\ll 1\;.
\end{equation}
In addition, the formfactors in (4.8) can be calculated
in the massless limit, as has been done above.
However, the inquest into miracles is not yet complete.
Assuming that the vacuum particles are massless or that
the high-frequency regime holds, we get rid of the suppressing
exponential in (4.36) but we still need to check the power
of decrease of the current. The power should be the one in
(4.23) for the emission of charge to occur. We can readily check this
since we know the behaviour of the resolvent. Expression (4.34)
is again to be inserted in the spectral integral (4.30) but this
time assuming that the spectrum begins with zero mass:
\begin{equation}
J^{\mbox{\scriptsize vac}}\Bigl|_{r\gg l}=
\int\limits_{\bf 0}^\infty dm^2\,\rho(m^2)
\frac{C}{r}\exp\left(-m\sqrt{rU}\right)\;.
\end{equation}
We see that, for the current to decrease as $O(1/r^2)$,
the spectral weight should have a finite and nonvanishing
limit at zero mass:
\begin{equation}
\rho(0)={}\mbox{finite}{}\ne 0\;.
\end{equation}
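The required power can be made explicit. With $a=\sqrt{rU}$ and $\rho$ approximately constant near zero mass, the elementary integral $\int_0^\infty dm^2\,e^{-ma}=2/a^2=2/(rU)$ turns (4.44) into $J^{\mbox{\scriptsize vac}}\sim 2\rho(0)C/(r^2U)$, which is exactly the $O(1/r^2)$ behaviour of (4.23). A numeric check of this integral:

```python
import numpy as np
from scipy.integrate import quad

def spectral_tail(a):
    # int_0^inf dm^2 exp(-m a); substituting m = sqrt(s) gives
    # int_0^inf 2 m exp(-m a) dm = 2 / a^2.
    val, _ = quad(lambda s: np.exp(-np.sqrt(s) * a), 0, np.inf)
    return val

for a in (0.7, 1.0, 3.0):
    assert abs(spectral_tail(a) - 2.0 / a**2) < 1e-6
```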
For the respective formfactor, this is a condition on its
behaviour at small $\Box$. The behaviour should be
\begin{equation}
F(\Box)=
\int\limits_0^\infty dm^2\,\frac{\rho(m^2)}{m^2-\Box}
\;\mathop{\longrightarrow}_{\Box\to 0}\;
{}-\rho(0)\ln (-\Box)\;.
\end{equation}
We arrive at the following consistency condition on the
vacuum formfactors. In the limit where one (any) of the
$\Box$ arguments is small and the others are fixed,
the formfactors should not grow faster than $\ln(-\Box)$:
\begin{equation}
F(\Box)\Bigl|_{\Box\to 0}=\mbox{const.}\ln(-\Box)\;,
\end{equation}
\begin{equation}
F(\Box_1,\Box_2,\Box_3)\Bigl|_{\Box_1\to 0}=
f(\Box_2,\Box_3)\ln(-\Box_1)\;,
\end{equation}
$$
\hbox to 190pt{{}\dotfill{}}
$$
If they grow faster, the charges cannot be maintained finite,
i.e., an isolated system cannot exist in such a vacuum.
If they grow as $\ln(-\Box)$, the theory of isolated systems
is consistent but these systems emit charges. If they grow
slower, the charges are conserved.
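The correspondence between a finite $\rho(0)$ and the $\ln(-\Box)$ growth can be seen in a toy model. Take the spectral weight $\rho(m^2)=e^{-m^2}$ (an illustrative assumption, not one of the formfactors of the text). Then $F(\Box)\bigl|_{\Box=-s}=e^{s}E_1(s)$, which for $s\to 0$ grows as $-\rho(0)\ln s$:

```python
import numpy as np
from scipy.special import exp1

def F(s):
    # F(-s) = int_0^inf dm^2 exp(-m^2) / (m^2 + s) = e^s E_1(s),
    # a toy formfactor with model spectral weight rho(m^2) = exp(-m^2).
    return np.exp(s) * exp1(s)

# Small-s behaviour: F(-s) = -ln(s) - euler_gamma + O(s),
# i.e. logarithmic growth with coefficient rho(0) = 1.
for s in (1e-6, 1e-8):
    assert abs(F(s) + np.log(s) + np.euler_gamma) < 1e-3
```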
One can check whether the one-loop formfactors satisfy this
consistency condition. The second-order formfactors
(3.90)--(3.94) do. The third-order formfactors behave
generally as \cite{35}
\begin{equation}
F(\Box_1,\Box_2,\Box_3)\Bigl|_{\Box_1\to 0}=
f(\Box_2,\Box_3)\frac{1}{\Box_1}+
g(\Box_2,\Box_3)\ln(-\Box_1)+\ldots\;.
\end{equation}
The alarming terms $1/\Box$ appear only in the arguments
acting on the gravitational curvatures. Therefore, they
can affect only the vacuum energy-momentum tensor, and it
has been checked that, in the energy-momentum tensor,
these terms coming from different formfactors cancel.
In the currents, the one-loop formfactors strictly satisfy
the consistency condition. Since, in addition, their
asymptotic $\ln(-\Box)$ terms are nonvanishing, the emission
of charges in the high-frequency regime is real. The only
thing that remains to be checked is that this emission is
not a pure quantum noise. It will be checked by a direct
calculation.
Now one can also answer the question about the indefinite
local terms in the effective action. The coefficients of
these terms are the unspecified constants in (3.90)--(3.94).
In the limit $\Box\to 0$, the values of these constants are
immaterial. Only the terms $\ln(-\Box)$, $\Box\to 0$ of the
formfactors work, and, therefore, the incompleteness of
local quantum field theory does not affect the presently
considered problem.
It will be noted that there are now two mechanisms by which
an isolated system can emit energy. One is purely classical:
a nonstationary source can emit the electromagnetic or
gravitational waves. The other is quantum:
immersed in the vacuum,
a nonstationary source can also emit charged particles.
A high-frequency source will generally emit {\it both}.
\subsubsection{Particle Creation by External Fields}
The problem of particle creation by external fields is a part
of the expectation-value problem. In the context of the
foregoing, it can be set as follows. Consider the quantum
field that satisfies a linear second-order equation
\begin{equation}
\left(g^{\mu\nu}\nabla_\mu\nabla_\nu{\hat 1}+{\hat P}\right)
\phi=0
\end{equation}
containing three external fields: the metric, the connection,
and the potential. The external fields are asymptotically
static in the past and future but otherwise arbitrary except
that their currents
\begin{eqnarray}
J_{\alpha\beta}&=&R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R\;,\\
{\hat J}_\alpha&=&\nabla^\beta{\hat{\cal R}}_{\alpha\beta}\;,\\
{\hat Q}&=&{\hat P}+\frac{1}{6}R{\hat 1}
\end{eqnarray}
are confined to a spacetime tube. The quantum field is in the
in-vacuum state. What is the energy of the quanta of the field
$\phi$ created by the external fields for the entire history?
In the high-frequency approximation, we have everything to
answer this question.
To formulate the answer, I need a preliminary construction.
Every current has an associated quantity called its radiation
moment. It will now be defined.
Consider a timelike geodesic in the external metric of
equation (4.50). It enters the domain of nonstationarity of external
fields with a definite energy and goes out of this domain with
a definite energy. Let $E$ be its energy per unit rest mass
on going out. I am only interested in the geodesics that
escape to $r=\infty$. They have $E>1$, and, instead of $E$,
I shall use the parameter $\gamma$ defined as
\begin{equation}
\gamma=\frac{\sqrt{E^2-1}}{E}\;,\quad E>1\;,\quad
0<\gamma<1\;.
\end{equation}
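A quick way to read (4.54): for a geodesic reaching $r=\infty$ in an asymptotically flat metric, $E=1/\sqrt{1-v^2}$ with $v$ the asymptotic velocity in units of $c$, so $\gamma$ is just that velocity. A one-line check:

```python
import math

def gamma_param(E):
    # gamma = sqrt(E^2 - 1)/E of (4.54), for energy E > 1 per unit rest mass
    return math.sqrt(E**2 - 1.0) / E

# With E = 1/sqrt(1 - v^2), gamma reduces to the asymptotic velocity v
# in units of c:
for v in (0.1, 0.5, 0.9, 0.999):
    E = 1.0 / math.sqrt(1.0 - v**2)
    assert abs(gamma_param(E) - v) < 1e-12
```

This is consistent with the later remark that the multipole expansion at $\gamma=0$ is a nonrelativistic expansion, $\gamma$ being proportional to $1/c$.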
At $r=\infty$, the geodesic has a certain spatial direction, or,
equivalently, it comes to a certain point of the celestial
2-sphere. I shall denote this sphere as ${\cal S}$, its points
as $\theta$:
\begin{equation}
\theta=(\theta_1,\theta_2)\;,\;\quad\theta\in{\cal S}\;,
\end{equation}
and the integral over the unit 2-sphere as
\begin{equation}
\int d^2{\cal S}(\theta)\,(\cdots)\;.
\end{equation}
A geodesic with given $\gamma$ and $\theta$ will be called a
$\gamma,\theta$-geodesic (see Fig. 5).
\input figfive.tex
A $\gamma,\theta$-geodesic can be emitted from every point
of a compact domain. Therefore, the $\gamma,\theta$-geodesics
with {\it the same values} of $\gamma$ and $\theta$ form a
congruence, and it can be proven that this congruence is
hypersurface-orthogonal. Let the orthogonal hypersurfaces be
\begin{equation}
T_{\gamma\theta}(x)=\mbox{const.}
\end{equation}
Since the parameters $\gamma,\theta$ fix the congruence,
they also fix the family of the orthogonal hypersurfaces (4.57),
and the ``const.'' in (4.57) fixes a member of the family. The
function $T_{\gamma\theta}$ is determined up to a transformation
$T_{\gamma\theta}\to f\left(T_{\gamma\theta}\right)$. This
arbitrariness will be removed by the normalization condition
\begin{equation}
\left(\nabla T_{\gamma\theta}\right)^2=-\left(1-\gamma^2\right)
\end{equation}
and the condition that the vector $\nabla T_{\gamma\theta}$
is past directed. It is a property of geodesic congruences
that the norm in (4.58) can be chosen constant.
The radiation moment of any scalar current $J$ is the following
hypersurface integral:
\begin{equation}
D=\frac{1}{4\pi}\int dx\,g^{1/2}\delta
\left(T_{\gamma\theta}(x)-\tau\right)J(x)\;.
\end{equation}
If the current is not a scalar, it should first be parallel
transported from the integration point to $r=\infty$ along
the respective $\gamma,\theta$-geodesic. Thus if the current
is a vector, its radiation moment is
\begin{equation}
D^\alpha=\frac{1}{4\pi}\int dx\,g^{1/2}\delta
\left(T_{\gamma\theta}(x)-\tau\right)J^\beta(x)
a_\beta{}^\alpha(x,\infty)
\end{equation}
where $a_\beta{}^\alpha(x,\infty)$ is the propagator of
parallel transport of vectors to infinity along the
$\gamma,\theta$-geodesic emanating from $x$. The radiation
moment $D^\alpha$ is then a vector at infinity. In the same
way, the radiation moment is defined for any current.
For the three currents (4.51)--(4.53), the radiation moments will
be denoted respectively as
\begin{equation}
J_{\alpha\beta},\;{\hat J}_\alpha,\;{\hat Q}
\longrightarrow D_{\alpha\beta},\;{\hat D}_\alpha,\;{\hat D}\;.
\end{equation}
Since the indices of the radiation moments pertain to a point
at infinity, their contractions like
\begin{equation}
{\hat D}_\alpha{\hat D}^\alpha=g_{\alpha\beta}
{\hat D}^\alpha{\hat D}^\beta\;,\quad\mbox{etc.}
\end{equation}
always assume the flat metric $g_{\alpha\beta}$ at infinity.
All radiation moments are functions of four parameters:
\begin{equation}
D=D(\gamma,\theta,\tau)\;.
\end{equation}
In the limit $\gamma=1$, the $\gamma,\theta$-geodesics become null.
The orthogonal hypersurfaces (4.57) also become null, and the geodesics
themselves become their generators. For the radiation moments,
this is a regular limit. Nothing special happens to them in this
limit except that they become very important. The radiation
moments at $\gamma=1$ govern the emission of waves in classical
theory. Thus if $J_\alpha$ in (4.52) is an electric current,
then the following expression:
\begin{eqnarray}
&&\Bigl(M(-\infty)-M(+\infty)\Bigr)_{\mbox{electromagnetic waves}}
\nonumber\\
&&\hphantom{=}
{}=\frac{1}{4\pi}\int\limits_{-\infty}^\infty d\tau
\int d^2{\cal S}(\theta)\,
\left.\left[g_{\alpha\beta}
\left(\frac{d}{d\tau}D^\alpha\right)
\left(\frac{d}{d\tau}D^\beta\right)\right]\right|_{\gamma=1}
\qquad\qquad\qquad\qquad\;
\end{eqnarray}
is the energy of the electromagnetic waves emitted by this
current for the entire history. A similar expression with
the tensor current (4.51):
\begin{eqnarray}
&&\Bigl(M(-\infty)-M(+\infty)\Bigr)_{\mbox{gravitational waves}}
\nonumber\\
&&{}=\frac{1}{4\pi}\int\limits_{-\infty}^\infty d\tau
\int d^2{\cal S}(\theta)\,
\frac{1}{2}(g_{\alpha\mu}g_{\beta\nu}-
\frac{1}{2}g_{\alpha\beta}g_{\mu\nu})
\left(\frac{d}{d\tau}D^{\alpha\beta}\right)
\left.\left(\frac{d}{d\tau}D^{\mu\nu}\right)\right|_{\gamma=1}
\quad\;\nonumber\\
\end{eqnarray}
is the energy of the gravitational waves emitted by the
current $J_{\alpha\beta}$ for the entire history.
The radiation moment is a generating function for the
multipole moments. The multipole expansion is the
expansion of $D$ at $\gamma=0$. It makes sense for
nonrelativistic systems since $\gamma$ is proportional
to $1/c$.
Expressions (4.64) and (4.65) are the solutions of the classical
radiation problem. And here is the solution of the
quantum radiation problem \cite{50}:
\begin{eqnarray}
&&\Bigl(M(-\infty)-M(+\infty)\Bigr)_{\mbox{created particles}}
\nonumber\\
&&\hphantom{=}
{}=\frac{1}{(4\pi)^2}
\int\limits_0^1 d\gamma\,\gamma^2
\int\limits_{-\infty}^\infty d\tau
\int d^2{\cal S}(\theta)\mathop{\rm tr}\left[
\left(\frac{d^2}{d\tau^2}{\hat D}\right)^2\right.\nonumber\\
&&\hphantom{={}=}\qquad\qquad\qquad\quad
{}-\frac{1}{3}\frac{1}{(1-\gamma^2)}g_{\alpha\beta}
\left(\frac{d}{d\tau}{\hat D}^\alpha\right)
\left(\frac{d}{d\tau}{\hat D}^\beta\right)\nonumber\\
&&\hphantom{={}=}\qquad\qquad\qquad\quad
{}+\frac{1}{30}{\hat 1}
(g_{\alpha\mu}g_{\beta\nu}-
\frac{1}{3}g_{\alpha\beta}g_{\mu\nu})
\left.\left(\frac{d^2}{d\tau^2}D^{\alpha\beta}\right)
\left(\frac{d^2}{d\tau^2}D^{\mu\nu}\right)\right]\;.
\nonumber\\
\end{eqnarray}
This is the energy of the quanta of the field $\phi$ created
by the external fields for the entire history. As compared
to the expressions above, there is an extra time
derivative in the case of the tensor and scalar moments. It accounts
for the dimension of the coupling constant. Also, instead
of setting $\gamma=1$, one needs to integrate over $\gamma$.
Otherwise, the similarity is striking. The quantum problem
of particle creation becomes almost the same thing as the
classical problem of emission of waves.
The presence in (4.66) of an integral over $\gamma$ is not just
a technical detail. The radiation moments have both
longitudinal projections, i.e., projections on the
direction of the geodesic at infinity, and transverse
projections. Inspecting the contractions of the moments
in (4.64)--(4.66), one can see that, at $\gamma=1$, the
longitudinal projections drop out of these contractions.
In the integral over $\gamma$, the longitudinal
projections also survive. Owing to this fact, spherically
symmetric sources cannot emit waves but can produce
particles from the vacuum.
Now I can explain why, when expanding the effective action,
I stopped at the terms cubic in the curvature. In the
high-frequency approximation, the expansion (3.40) needs
to be calculated up to the lowest-order terms that give
a nonvanishing effect. The terms of first order in the
curvature are local and give no effect. The terms of
second order in the curvature are nonlocal and contribute
to the energy flux at infinity but it turns out that
{\it their contribution is a pure quantum noise}. The
real effect of particle production begins with the third
order in the curvature. Expression (4.66) results from
the triangular loop diagrams.
Since varying the action destroys one curvature, a cubic
action generates a quadratic current. This gives the radiation
energy a chance to be positive definite. Expression (4.66)
is positive definite indeed:
\begin{equation}
\Bigl(M(-\infty)-M(+\infty)\Bigr)_{\mbox{created particles}}
\ge 0\;.
\end{equation}
In particular, for the matrix contributions, this follows
from relations (3.11), (3.12) and the positive definiteness
of the matrix $\omega_{ab}$:
\begin{equation}
\mathop{\rm tr}
\left(\frac{d^2}{d\tau^2}{\hat D}\right)^2\ge 0\;,\;\quad
\mathop{\rm tr}\left[g_{\alpha\beta}
\left(\frac{d}{d\tau}{\hat D}^\alpha\right)
\left(\frac{d}{d\tau}{\hat D}^\beta\right)\right]\le 0\;.
\end{equation}
The positivity of the gravitational-field contribution can
be proven directly.
\subsubsection{The Backreaction Problem}
The energy emitted by an isolated system (in all forms) should
be bounded both from below and from above: it should be positive
and less than the energy stored in the initial state
\begin{equation}
0\le\Bigl(M(-\infty)-M(+\infty)\Bigr)\le M(-\infty)\;.
\end{equation}
In expression (4.66), the positivity is guaranteed but the
energy conservation is not. The reason is that the setting
of the problem with external fields is physically inconsistent.
The vacuum current determines the solution of the mean-field
equations, and the mean field rather than the external field
determines the vacuum current. If the backreaction of the vacuum
is neglected, the conservation laws need not be observed.
One case in which the vacuum backreaction may not be neglected
is where both mechanisms of the energy emission, classical
and quantum, are engaged simultaneously. This concerns
particularly the vector connection field. In expression (4.66),
the integral over $\gamma$ has a pole ${(1-\gamma)^{-1}}$ in
the term with the vector moment. The residue of the integrand
at this pole is precisely the quantity (4.64), i.e., the energy
of the outgoing waves of the vector connection field. If it is
nonvanishing, e.g., if the external source emits both the
electromagnetic waves and electrically charged particles, the
integral over $\gamma$ diverges. The result is a disaster: the
radiation energy appears to be infinite. In fact it should be
taken into account that the created charge affects the
generation of the electromagnetic waves, and the respective
changes in the electromagnetic field affect the creation of charge.
In the self-consistent solution, the disaster is removed.
Another example concerns the metric field when it has an event
horizon. In this case, the integral in $\tau$ diverges at the
upper limit. By construction, $\tau$ is the time of an external
observer. As $\tau\to\infty$, the source moving in the tube
hits the event horizon. Its proper time remains finite.
The integrand in (4.66) stays finite in this limit,
and the integral in $\tau$ diverges linearly. This is the constant
Hawking flux of radiation from the black hole. If its
backreaction on the metric is neglected, the total emitted
energy is infinite.
But even when the quantity (4.66) is finite, it depends on the
frequency of the source. If the source is external, this
frequency is a free parameter. The energy of created quanta
grows with frequency, and, typically, the ratio
\begin{equation}
\left.\frac{M(-\infty)-M(+\infty)}{M(-\infty)}\right|_{\nu\to\infty}
\sim\ln\nu
\end{equation}
also grows so that, at a sufficiently high frequency, the energy
conservation law will be violated. The backreaction should take
into account that, when the source creates real particles,
it loses energy and slows down. It then creates fewer particles,
and the process dies away. The conservation laws will then be restored.
The backreaction problem has been solved only in a few cases
\cite{51}--\cite{56}. The examples for which it has been solved
show that the
solution can be unexpected and interesting.
}
\begin{abstract}
Theory of expectation values is presented as an alternative to
S-matrix theory for quantum fields. This change of emphasis is
conditioned by a transition from accelerator physics to
astrophysics and cosmology. The issues discussed are the time-loop
formalism, the Schwinger--Keldysh diagrams, the effective action,
the vacuum currents, and the effect of particle creation.
\end{abstract}
\section*{Introduction}
High-energy physics will probably have to undergo major changes.
The accelerators will cease being its experimental base, and it
will become a part of astrophysics. Simultaneously, the S-matrix
will cease being the central object of high-energy theory because
the emphasis on this object is entirely owing to the accelerator
setting of the problem. If there is a background radiation that
originates from some initial state in the past, then where is
the S-matrix here? Astrophysics and cosmology offer the
evolution problems rather than the scattering problems. The
gravitational collapse is a typical initial-value problem.
It is such by its physical setting irrespective of whether
the state of the system is classical or quantum. The nature
of measurement also changes. No final state is prepared.
One measures observables like temperatures or mechanical
deflections and subjects these measurements to a statistical
treatment to obtain the value of the observable. This means
that one measures expectation values in the given
initial state.
S-matrix theory should give way to expectation-value theory.
There is a proof that accelerator physics is dead: Gabriele
Veneziano is leaving CERN for Coll\`ege de France. At this
historic moment, my mission is to convert him to a new
faith. The present preaching consists of four lectures:
\begin{enumerate}
\item Formal aspects of expectation-value theory.
\item The in-vacuum state and Schwinger--Keldysh diagrams.
\item The effective action.
\item Vacuum currents and the effect of particle creation.
\end{enumerate}
Literature for Lectures 1 and 2 is in \cite{1}--\cite{16}.
Additional literature for Lecture~3 is in \cite{17}--\cite{41}
and for Lecture 4 in \cite{42}--\cite{56}.
\input lecture1.tex
\input lecture2.tex
\input lecture3.tex
\input lecture4.tex
\section{Introduction}
\subsection{Induced seismic hazard}
Seismicity caused by human activity, now commonly called induced seismicity, is not a new phenomenon. Over the last several decades, workers have noted that earthquakes are triggered by human activities including nuclear explosions \citep{Boucher1969}, fluid extraction \citep{Segall1989}, fluid injection \citep{Seeber2004, Ellsworth2013}, controlled filling of artificial reservoirs (e.g., Koyna, India) \citep{Gupta2002a}, and mining and excavation \citep{McGarr1976}.
But interest in induced seismicity has recently spiked, as has the rate of induced earthquakes in the central and eastern US \citep{Ellsworth2013,Weingarten2015}. Here, it appears that fluid injections, primarily involving wastewater, are causing extensive seismic activity including events such as the 2011 $m_{w}~4.0$ earthquake in Youngstown, Ohio, \citep{Kim2013}, the 2011 $m_{w}~4.7$ central Arkansas earthquake \citep{Horton2012}, the 2011 $m_{w}~5.7$ central Oklahoma earthquake \citep{Keranen2013}, and the 2012 $m_{w}~4.9$ east Texas earthquake \citep{Frohlich2014}.
For modern deep geothermal energy projects, induced seismicity is a concern because fluids must be injected to stimulate and enhance reservoir permeability, allowing the heat to be extracted. There are two recent examples in Switzerland: the Basel EGS experiment in 2006 \citep{Haring2008} and the St.~Gallen hydrothermal injection in 2013 \citep{Kraft2013,Edwards2015,Obermann2015}. Both projects were canceled: Basel because of widely-felt seismic activity, and St.~Gallen due to gas inflow, the low natural fluid flow rate, and the high level of seismic activity during a short-term stimulation. These experiments demonstrated that project managers and operators have to be able to manage induced seismic hazard and must strike a balance between reservoir creation (i.e., permeability enhancement, which is required for a geothermal system to be profitable) and induced seismicity. Induced seismicity during geothermal projects is a blessing and a curse: the spatial extent of micro-seismicity is a proxy for the size of the stimulated reservoir, but felt and potentially-damaging earthquakes pose seismic risk to people and infrastructure. Induced earthquakes in deep geothermal reservoirs are usually smaller than $m~3$, but larger events ($>m~4$) can occur, the largest so far being an $m~4.6$ earthquake at the Geysers geothermal site in 1982 \citep{Majer2007}. Certainly, induced earthquakes felt by the public may deter future geothermal projects.
Despite the cancellations at Basel and St.~Gallen, several geothermal projects in Switzerland are in development. As part of the Swiss national energy strategy, deep geothermal heat should supply 5--10\% of the national baseload electricity \citep{Giardini2014}. One of the main obstacles to achieving this goal is induced seismic hazard. Minimizing it requires not only monitoring and analyzing induced events, but also a near-real-time tool for making operational decisions. Such a hazard management scheme should be used to plan and operate reservoir stimulation so that large induced earthquakes are avoided \citep[e.g.,][]{Bachmann2011,Mena2013,Goertz-Allmann2013}.
\subsection{Near-real-time forecasting: towards an adaptive traffic light system}
\citet{Bommer2006} introduced a traffic light system to monitor and react to seismic activity during geothermal reservoir stimulation. Like most traffic lights, this system distinguished three hazard levels, which were based on the size of events, observed peak ground velocity, and public response. But the thresholds used to change the light were chosen subjectively, primarily by expert judgment \citep{Hirschberg2015}, and in practice the system has resulted in operators taking action too late to avoid large events or a high seismicity rate. For example, in Basel the early induced earthquakes suggested that felt events were likely, but the traffic light system failed to anticipate them \citep{Haring2008}.
An improved hazard management scheme should be a dynamic, forward-looking system that incorporates real-time data and makes probabilistic forecasts of induced seismicity and its consequences. Such an Adaptive Traffic Light (ATL) system is composed of several modules (Figure \ref{fig0}):
\begin{enumerate}
\item Collecting prior information, e.g., geological setting for hazard assessment and building classifications for risk assessment (yellow in Figure \ref{fig0}). These data are essential to plan a geothermal project and can address questions such as where to drill wells, the orientation of the local stress field, how to design reservoir creation plans, and the maximum possible magnitude \citep{Gischig2015}.
\item Real-time data flow of hydraulic and seismic information (red in Figure \ref{fig0}). These are hydraulic data (e.g., injection flow rate and pressure measurements in the well) and seismic data that allow one to monitor reservoir creation, circulation, or other activities in the reservoir.
\item Modeling and forecasting seismicity (orange in Figure \ref{fig0}). The key element in an ATL system is seismicity forecasting. To forecast, we consider two periods: a learning period and a forecast period. During the learning period, seismic events are observed and analyzed according to their distribution in time and space. Then a calibrated model forecasts the number, magnitude distribution, and spatial distribution of events in the forecast period.
\item Ground motion models (gray in Figure \ref{fig0}). These models estimate the shaking that an earthquake will cause and are based on properties of the earthquake source (e.g., its magnitude, style of faulting, and depth), wave propagation (distance to the earthquake), and site response (type of rock, soil that can attenuate or amplify ground shaking). Ground Motion Prediction Equations \citep{Douglas2013} and the Virtual Earthquake Approach \citep{Denolle2013,Denolle2014} are examples of possible choices to estimate ground motions.
\item Combining models to account for epistemic uncertainties (green in Figure \ref{fig0}). No single model captures all of the important features of seismicity. Model combination using appropriate weights is one way to try to leverage each model's best features.
\item Calculating hazard and risk (brown in Figure \ref{fig0}). One can estimate the seismic hazard --- the probability that some level of shaking will be exceeded --- by combining ground motion models and either synthetic catalogs generated by forecast models or individual scenario earthquakes. One can use this hazard to estimate the seismic risk: the potential economic, social, and environmental consequences of seismicity.
\item Guiding on-site decision-making processes (white in Figure \ref{fig0}). Based on hazard and risk calculations, operators can make decisions concerning future stimulation strategies and adjust flow rate accordingly.
\end{enumerate}
In this paper, we focus on the forecast models and the performance assessment modules of the ATL system (delineated by a dashed gray line in Figure \ref{fig0}).
\subsection{Models to forecast seismicity}
Induced seismicity models can be grouped into three classes \citep[e.g.,][]{Gischig2013a,Gaucher2015}: statistical, physics-based, and hybrid. In general, statistical models for induced seismicity \citep[e.g.,][]{Reasenberg1989,Hainzl2005,Bachmann2011,Mena2013} are conceptually and computationally simple and include aleatory uncertainty. But they do not explicitly account for the physical processes governing induced seismicity (e.g., fluid flow in fractures, permeability changes, and stress interaction) and, until this study, they have not been used to forecast the spatial distribution of earthquakes. It is sometimes thought that statistical models, because they are primarily based on clustering, are limited in their ability to predict large events or make accurate long-term forecasts. In contrast, physics-based models \citep[e.g.,][]{Olivella1994, Bruel2005, Kohl2007, Baisch2010, Rinaldi2015, McClure2012, Wang2012, Karvounis2015, Mignan2015b} do consider underlying physical processes and are expected to perform better when operational conditions change, such as during the shut-in period, and for long-term forecasts. But the high computational expense of most physics-based models precludes their use in near-real-time applications for the moment.
Hybrid models are a compromise between physical models and statistical models. The goal of hybrid model development is to include some physical complexity and replace more complex physical considerations with statistical methods or stochastic processes. \\
\citet{Mena2013} compared forecast models using the Basel dataset and found that Shapiro's model \citep{Shapiro2010} provided a good fit to the rate of induced earthquakes. This model uses the seismogenic index, $\Sigma$, a parameter that describes the expected seismic response of a given site. The seismogenic index is a function of the total injected fluid volume and the $b$-value of the observed seismicity, and it can be estimated from a short injection period or from the entire stimulation period. Using $\Sigma$, one can forecast the number of earthquakes in a given magnitude range and given period. Like most statistical models for induced seismicity \citep[e.g.,][]{Bachmann2011}, Shapiro's model does not make any predictive statements about the size or shape of the seismicity cloud. But it is crucial to monitor and anticipate the shape and size of the seismic cloud during reservoir stimulation for two reasons. First, the extent of the seismicity cloud is used to estimate the volume of the stimulated reservoir, which is crucial for energy production. Second, the spatial distribution of seismicity affects hazard and risk analysis: many geothermal sites are located near settlements, making energy transportation cheap but posing a risk to infrastructure and people \citep{Edwards2015}.
Seismic risk strongly depends on geological settings (e.g., rock type under the settlement), building vulnerability, and the depth of induced events. For instance, if a $m_{w}~4$ event occurs $5~km$ below strong, new homes built on a rock site, almost all buildings would remain intact, with only some slight damage. If an event of the same size occurs $3~km$ below vulnerable houses built on a sedimentary basin, it is more likely that the houses would be slightly damaged, and some houses may be moderately or even heavily damaged \citep{Grunthal1998a}. Because the spatial distribution of induced seismicity is so important, any ATL system should be driven by 3D spatial forecasts.
In this study, first we extend Shapiro's model to produce 3D forecasts (the SaSS model, i.e., Shapiro and Smoothed Seismicity model). Then, we perform systematic statistical tests on this model and on a hybrid model in which seismicity is triggered by a numerically modeled pressure diffusion (the HySei model, i.e., Hydraulics and Seismicity model). To date, these are the only models at our institute that are calibrated against real data and for which systematic re-calibration and testing can be carried out; moreover, they cover a good variety of model features, so their forecasts are worth evaluating and comparing. To do this, we develop an Induced Seismicity Test Bench.
\subsection{Induced Seismicity Test Bench}
Little work has been done on model selection and model comparison in the context of induced seismicity. To validate, compare, and rank models that can be used for ATL systems, we propose a model development test bench that follows the Collaboratory for the Study of Earthquake Predictability (CSEP, http://www.cseptesting.org/) approach for tectonic earthquakes. CSEP supports scientific earthquake prediction experiments in natural laboratories in multiple regions and spanning the globe \citep[e.g.,][]{Gerstenberger2010, Schorlemmer2010, Zechar2010c, Nanjo2011, Eberhard2012, Mignan2013, Taroni2013, Zechar2013}. This support comes in the form of testing centers that CSEP operates; these centers allow modelers to check the consistency of their model with observations and to compare models. We describe these activities in more detail in Subsection 3.2.
The proposed Induced Seismicity Test Bench requires models to be tested, good quality induced seismicity datasets, and a robust statistical testing framework allowing objective model evaluation. To test model consistency with observations and to rank models, we rely on pseudo-prospective forecasting, i.e., data that come from past stimulation experiments. Modelers calibrate their models using data recorded during a learning period and make forecasts for a subsequent forecast period. Since observed data of the forecast periods are already available, we can compare observed and forecast data after each recalibration and test the consistency of the forecast in terms of seismicity rate, spatial distribution, and magnitude distribution. We can use statistical metrics such as the information gain per earthquake to compare model pairs and rank models according to their forecast skill \citep{Rhoades2011}. Modelers should use the results of testing for further development, creating a feedback between testing and modeling. The long-term goal is to develop an operational ATL system to plan and conduct reservoir creation without a high rate of seismicity or large events. A detailed flowchart of the Induced Seismicity Test Bench can be found in the supplement (Figure S1).\\
The Induced Seismicity Test Bench is a diagnostic tool: it can highlight which model elements, be they physical or statistical, are essential for good forecasts, and why. This can in turn improve the models and our understanding of the underlying physical phenomena. In addition to using the test bench as a diagnostic tool, it can also be utilized on the fly to judge the performance of several models since the last forecast. The results can then be used for further improvement of the individual models and/or they can be applied to weight the models for the next forecast.\\
In the next section, we briefly describe the data from two Enhanced Geothermal Systems: the Basel 2006 experiment and the Soultz-sous-For\^ets 2004 stimulation. In Section 3 we present the two models, SaSS (Shapiro and Smoothed Seismicity) and HySei (Hydraulics and Seismicity), which are calibrated on these datasets, and we detail the testing approach. We describe the testing results in Section 4, discuss our findings in Section 5, and conclude in Section 6.
\section{Data}
The data we consider in this study come from the Soultz-sous-For\^ets 2004 and Basel 2006 geothermal stimulations. \\
The Basel geothermal site is located in northwestern Switzerland, at the southeastern part of the Upper Rhine Graben (Figure \ref{fig1}.a). The graben structure is an inactive extensional rift system oriented N-S \citep{Zoback1992}. Here, the crystalline basement is covered by $2.4~km$ of sedimentary rock \citep{Haring2008}. The well BASEL1 was drilled to a depth of $5~km$ between May and October 2006. In December 2006, after several hydraulic tests, the reservoir was hydraulically stimulated to enhance its permeability. The plan was to stimulate for 21 days, but after 6 days the injection was stopped due to intensive seismicity. In the year that followed, 3 additional events of $m_{L}~>~3.0$ occurred \citep{Haring2008}. Based on the results of a subsequent risk study \citep{Baisch2009,Secanell2009}, the project was abandoned. Several years later, the reservoir still produces earthquakes, but the seismicity rate is very low (1-3 earthquakes recorded per year) \citep{Deichmann2014}. In this study, we use about 15 days of hydraulic \citep{Haring2008} and seismic data \citep{Dyer2010} from the beginning of the stimulation (2006-12-02, 18:00), and we also use the pre-stimulation injection test data. \\
The Soultz-sous-For\^ets geothermal site is also located in the Upper Rhine Graben, between Kutzenhausen and Soultz-sous-For\^ets, about 70 km north of Strasbourg (Alsace, France; inset in Figure \ref{fig1}). The geothermal gradient is about $100^{\circ}C/km$ within the $1.5~km$ thick sedimentary cover over a granitic basement \citep{Evans2012}. This abnormally high geothermal gradient is related to deep hydrothermal convection cells in the fractured basement \citep{Gerard2006}. The geothermal project here started in the early 1980s and four wells have been drilled into two reservoirs: one at about $3.5~km$ depth (GPK1, GPK2 wells) and another at about $4.5~km$ (GPK2, GPK3, GPK4 wells). Several stimulations and circulation tests were carried out \citep{Gerard2006, Calo2013, Genter2012}. Energy production started in 2008 \citep{Genter2010}. In this study, we use hydraulic and seismic data of the pre-stimulation and stimulation of September 2004 (Figure \ref{fig1}.b, \citet{Dyer2005}). Local magnitudes were corrected by using the scaling relationship by \citet{Douglas2013}. Note that the seismograms in this data set are clipped, causing saturation of the magnitudes at $1.8$; that is, no event has $m_{w} > 1.8$.
\section{Models and testing}
\subsection{The Shapiro and Smoothed Seismicity (SaSS) model}
The SaSS model is computationally simple and based on the seismogenic index, $\Sigma$ \citep{Shapiro2010}; we distribute the earthquakes expected by $\Sigma$ in 3D by smoothing seismicity in space. Shapiro's model, which describes the rate of induced seismicity during stimulation, is defined as:
\begin{equation}
\log_{10}(N_{m}(t)) = \log_{10}(Q_{c}(t)) - bm + \Sigma
\label{eq1}
\end{equation}
where $N_{m}(t)$ indicates the number of induced events above magnitude $m$ up until time $t$, $Q_{c}(t)$ denotes the cumulative injected volume of fluid at time $t$, $b$ is the Gutenberg-Richter $b$-value of the observed seismicity, and $m$ is the magnitude above which all events are expected to be reliably recorded (often called the magnitude of completeness).
\\
To forecast the number of events in the forecast period, we estimate $\Sigma$ and $b$ from the learning period, and we predict the total volume that will be injected by the end of the forecast period. \citet{Kiraly2014} compared four deep geothermal datasets and found that in some cases $b$ and $\Sigma$ are not constant during and after stimulation; thus, we re-estimate them at the end of each learning period, every six hours. To predict $Q_{c}(t)$ at the end of a forecast period, we assume that the injection flow during the forecast period will follow the previously-planned strategy. Eq. \ref{eq1} describes the rate of induced seismicity only during stimulation \citep{Shapiro2010}.
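The number forecast can be sketched in a few lines; the snippet below evaluates the seismogenic-index relation of \citet{Shapiro2010}, $\log_{10} N_{m} = \log_{10} Q_{c} - bm + \Sigma$, directly. The parameter values are illustrative placeholders, not values calibrated on the Basel or Soultz data.

```python
import numpy as np

def forecast_event_count(q_c, b, sigma, m):
    """Expected number of induced events above magnitude m once a
    cumulative volume q_c has been injected, following the
    seismogenic-index relation log10 N_m = log10 Q_c - b*m + Sigma."""
    return 10.0 ** (np.log10(q_c) - b * m + sigma)

# Illustrative values only (not calibrated on Basel or Soultz data):
n_expected = forecast_event_count(q_c=10_000.0, b=1.5, sigma=0.5, m=1.0)
```

In the SaSS model, $b$ and $\Sigma$ are re-estimated at the end of every learning period, and the cumulative volume is the one predicted to be injected by the end of the forecast period.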
As soon as the stimulation stops (the moment of well shut-in), the rate of induced earthquakes is expected to decay; the SaSS model assumes the decay follows the equation of \citet{Langenbruch2010} (using the original notation for consistency):
\begin{equation}
R_{0b}\bigg(\frac{t}{t_{0}}\bigg) = \frac{R_{0a}}{\bigg(\frac{t}{t_{0}}\bigg)^p}
\label{eq2}
\end{equation}
where $R_{0b}$ is the post-stimulation seismicity rate at time $t$ (since the beginning of the stimulation), $t_{0}$ is the length of the stimulation period before shut-in, $R_{0a}$ denotes the average seismicity rate during stimulation, and $p$ controls how quickly the rate decays. For subsequent forecast time windows (i.e., 6-hour time bins of the forecast period, FTWs), the majority of parameters are calibrated on the corresponding learning period, but $Q_{c}$ and $R_{0b}$ are recalculated for each time window. If the learning period ends within the stimulation period but some FTWs extend into the post-stimulation period, the parameter $p$ cannot be estimated, so we use a generic value, $p = 2$. Likewise, if $p$ is estimated to be smaller than 2, we set it to 2, following the value proposed by \citet{Langenbruch2010} for the early post-injection period. A detailed flowchart of the number component can be found in the supplement (Figure S2).
As in CSEP experiments and suggested by \citet{Shapiro2010}, the number of events in each forecast period is assumed to follow a Poisson distribution and the numbers obtained by using Eq. \ref{eq1} and \ref{eq2} are Poisson expected values; error bars in all subsequent figures indicate the $95\%$ Poisson confidence interval.
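The post-shut-in decay of Eq. \ref{eq2} is equally compact; the sketch below simply evaluates the rate at a given time, with $p$ defaulting to the generic value of $2$ used when $p$ cannot be estimated.

```python
def post_shutin_rate(t, t0, r0a, p=2.0):
    """Post-stimulation seismicity rate at time t (measured from the
    beginning of stimulation), per Eq. (2): R_0b = R_0a / (t/t0)^p."""
    return r0a / (t / t0) ** p

# One stimulation length after shut-in, with p = 2, the rate has
# dropped to a quarter of the average stimulation-period rate:
rate = post_shutin_rate(t=12.0, t0=6.0, r0a=40.0)
```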
\\
To model the 3D spatial distribution of induced earthquakes, we added a spatial component to the model by smoothing the seismicity observed during the learning period (Figure \ref{fig2}.A). Several studies, including the Regional Earthquake Likelihood Models (RELM) experiment \citep{Schorlemmer2010,Zechar2013} have shown that smoothed seismicity models are effective at forecasting the spatial distribution of tectonic earthquakes. To construct a smoothed seismicity model in two dimensions, one applies a two-dimensional smoothing kernel to each past event \citep[e.g.,][]{Helmstetter2007}, calculates the contribution of smoothed earthquakes on a given grid, then sums contributions of all observed earthquakes. To create a probability density function (PDF, i.e., earthquake spatial probability map), one normalizes the smoothed seismicity map so its sum is unity.
We extend the 2D Gaussian smoothed seismicity model of \citet{Zechar2010b} to 3D. For each forecast period, we smooth all prior events, where the contribution of an earthquake to a given voxel (i.e., volume element) is
\begin{eqnarray}
K(x_{e},y_{e},z_{e},x_{1},x_{2},y_{1},y_{2},z_{1},z_{2}) & = &
\frac{1}{8} \Bigg[ erf \bigg( \frac{x_{2}-x_{e}}{\sigma_{1} \sqrt{2}} \bigg) - erf \bigg( \frac{x_{1}-x_{e}}{\sigma_{1} \sqrt{2}} \bigg) \Bigg] \nonumber \\
&& \times \Bigg[ erf \bigg( \frac{y_{2}-y_{e}}{\sigma_{2} \sqrt{2}} \bigg) - erf \bigg( \frac{y_{1}-y_{e}}{\sigma_{2} \sqrt{2}} \bigg) \Bigg] \nonumber \\
&& \times \Bigg[ erf \bigg( \frac{z_{2}-z_{e}}{\sigma_{3} \sqrt{2}} \bigg) - erf \bigg( \frac{z_{1}-z_{e}}{\sigma_{3} \sqrt{2}} \bigg) \Bigg]
\label{eq3}
\end{eqnarray}
where $x_{e}$, $y_{e}$ and $z_{e}$ denote the location of the given earthquake, $x_{1}$, $x_{2}$, $y_{1}$, $y_{2}$, $z_{1}$ and $z_{2}$ are the points that define the edges of the voxel, and $\sigma_{1}$, $\sigma_{2}$ and $\sigma_{3}$ are bandwidths of the 3D Gaussian kernels in EW, NS and vertical directions, respectively.
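Because the kernel is separable, each factor in Eq. \ref{eq3} is a one-dimensional Gaussian mass integrated over the voxel extent along one axis; a direct transcription might look as follows.

```python
import math

def kernel_contribution(xe, ye, ze,
                        x1, x2, y1, y2, z1, z2,
                        s1, s2, s3):
    """Probability mass that an event at (xe, ye, ze) contributes to the
    voxel [x1,x2] x [y1,y2] x [z1,z2] under a separable 3D Gaussian
    kernel with bandwidths (s1, s2, s3), as in Eq. (3)."""
    def axis_mass(lo, hi, e, s):
        # Gaussian mass between lo and hi for a kernel centered at e.
        return (math.erf((hi - e) / (s * math.sqrt(2.0)))
                - math.erf((lo - e) / (s * math.sqrt(2.0))))
    return 0.125 * (axis_mass(x1, x2, xe, s1)
                    * axis_mass(y1, y2, ye, s2)
                    * axis_mass(z1, z2, ze, s3))
```

Summing this contribution over all events and normalizing the grid to unit total mass yields the spatial PDF.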
To make a good smoothed seismicity forecast, we need good bandwidths; we optimize these by dividing data from the current learning period into a training set and a validation set (Figure \ref{fig2}.C). The length of the training and validation sets depend on the length of the forecast period and the learning period. If the length of the forecast period is more than half the length of the learning period, the training and validation sets are each one-half of the learning period. Otherwise, the length of the validation set is equal to the length of the forecast period. We search for the bandwidth combination that, when used to smooth the training set, best forecasts the seismicity of the validation set. To avoid 'surprises,' i.e., events occurring where the model would not expect any events, we distribute a certain fraction of the PDF over all voxels (i.e., surprise factor), following the idea of \citet{Kagan2000}.
We analyze the performance of $1000$ combinations of bandwidths and surprise factors using the training and validation set of the learning period.
The PDF is updated for each new learning/forecast period. Since the PDF is based on the learning period, this model assumes that earthquake locations in the forecast period will not be very different from the seismicity observed so far.
Smoothed induced seismicity models must differ from their tectonic counterparts in at least one aspect: induced models should capture the propagation of the seismicity front after shut-in. In particular, due to pore pressure diffusion, induced seismic activity tends to decrease in the vicinity of the injection well and to concentrate at the boundaries of the reservoir. We attempt to model this time-dependent effect by applying exponential temporal weighting: the most recent event receives a maximum weight (one), and earlier events get smaller weights according to their origin time. This is analogous to the exponential smoothing approach commonly used in time series forecasting \citep{Goodwin2010} and is also connected to the Omori-Utsu relation describing aftershock decay rate \citep{Zhuang2012}.
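A minimal sketch of this temporal weighting, assuming a simple e-folding decay constant $\tau$ (the text states only that the weighting is exponential in origin time, so $\tau$ here is a hypothetical free parameter):

```python
import numpy as np

def temporal_weights(event_times, tau):
    """Exponential temporal weights: the most recent event gets weight
    one, earlier events decay with the (assumed) e-folding time tau."""
    t = np.asarray(event_times, dtype=float)
    return np.exp(-(t.max() - t) / tau)

# The oldest event is weighted least, the newest gets weight one:
weights = temporal_weights([0.0, 3.0, 6.0], tau=3.0)
```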
The forecast magnitude distribution is the Gutenberg-Richter distribution \citep{Gutenberg1944} with the $b$-value estimated from the learning period.
\subsection{The Hydraulics and Seismicity (HySei) model}
The HySei model developed by \citet{Gischig2013a} describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. The biggest advantage of the model is that it quantifies permeability enhancement by calibrating flow rate and wellhead pressure against observations.
The HySei model consists of two main parts: hydraulic inversion and seismicity modeling.
The aim of inverting hydraulic observations is to reconstruct the pressure evolution in the reservoir. We seek the hydraulic parameters that best match the observed well-head pressure with a one-dimensional radial flow model. We use a finite difference method on a circle of $1200~m$ radius discretized with $3000$ nodes and a $1$-minute resolution in time. During the pre-stimulation test injection, we solve the diffusion equation (Eq. \ref{eq1a}) with constant permeability ($\kappa = \kappa_{0}$). During stimulation, the governing equations are the diffusion equation (Eq. \ref{eq1a}) with irreversibly changing permeability (Eq. \ref{eq1b}) due to increasing pressure that exceeds some threshold (Eq. \ref{eq1c}):
\begin{equation}
\rho S \frac{\partial p}{\partial t} = \nabla \Big( \frac{\kappa\rho}{\mu} \nabla p \Big) + q_{m}
\label{eq1a}
\end{equation}
\begin{equation}
\kappa = \kappa_{0} (u + 1)
\label{eq1b}
\end{equation}
\begin{equation}
\frac{\partial u}{\partial t} = C_{u} H_{pt}\Big( \frac{\partial p}{\partial t} \Big) H_{u} (u_{t}-u)H_{p}(p-p_{t})
\label{eq1c}
\end{equation}
where $\rho$ is fluid density, $S$ is the specific storage coefficient, $\kappa$ is the permeability, which varies during the stimulation, $\mu$ is fluid viscosity, and $q_{m}$ is a mass source; $\kappa_{0}$ is the initial permeability before the stimulation, $u$ is the stimulation factor (i.e., the overall permeability enhancement of the reservoir), $C_{u}$ is the stimulation velocity, a constant that scales the rate at which permeability changes, $u_{t}$ is the maximum stimulation factor, and $p_{t}$ is the threshold pressure. $H_{pt}$ is a Heaviside function that is one if pressure increases and zero otherwise; $H_{p}$ and $H_{u}$ are Heaviside functions for pressure and stimulation factor. All three are smoothed to avoid singularities and the resulting numerical instability. Permeability starts to increase once pressure reaches $p_{t}$. If pressure increases further, the permeability of the reservoir increases until the stimulation factor reaches $u_{t}$. Note that a reversible component of permeability change, representing the compliant response of fractures to pressurization \citep[e.g.,][]{Rutqvist2003}, has not been included in this version of the model.
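For the pre-stimulation case (constant permeability $\kappa = \kappa_{0}$), the radial diffusion of Eq. \ref{eq1a} can be sketched with a simple explicit finite-difference scheme. The grid, time step, diffusivity, and boundary conditions below are illustrative placeholders, not the calibrated HySei setup.

```python
import numpy as np

def diffuse_radial(p, r, dt, n_steps, diffusivity, p_well):
    """Advance the radial overpressure field p(r) by n_steps explicit
    finite-difference steps of size dt, with a fixed-pressure inner
    boundary (the injection well) and a no-flow outer boundary."""
    p = p.copy()
    dr = r[1] - r[0]
    for _ in range(n_steps):
        p[0] = p_well                                    # well boundary
        d2p = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dr**2   # second derivative
        dp_dr = (p[2:] - p[:-2]) / (2.0 * dr)            # first derivative
        p[1:-1] += dt * diffusivity * (d2p + dp_dr / r[1:-1])
        p[-1] = p[-2]                                    # no-flow boundary
    return p

r = np.linspace(1.0, 1200.0, 300)   # radial grid in metres
p0 = np.zeros_like(r)               # initial overpressure (normalized)
p = diffuse_radial(p0, r, dt=60.0, n_steps=600, diffusivity=0.1, p_well=1.0)
```

During stimulation, Eqs. \ref{eq1b} and \ref{eq1c} would additionally update $\kappa$ at every node and time step wherever pressure exceeds $p_{t}$.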
In the seismicity model, randomly-placed potential nucleation points are triggered by the radial symmetric pressure evolution following the Mohr-Coulomb failure criterion. They have no spatial extent, but differential stress ($\sigma_{1}-\sigma_{3}$) is defined at the seed point. Local $b$-values are determined at the seed points following a linear relationship between differential stress and $b$-value: $b_{max}$ and $b_{min}$ parameters are $b$-values at minimum and maximum values of differential stress, respectively. When a seed point is triggered, a random magnitude is drawn from the magnitude distribution with the local $b$-value. Additional free parameters are the scaling factor $F_{s}$ (the ratio between the number of synthetic and observed events), the stress drop coefficient $d\tau$ (the change of stress conditions after a seed has been triggered), and a criticality threshold $d\mu$, which accounts for the fact that seed points cannot be too close to the failure limit. \\
For this study, we parallelized parts of the code and extended the model to 3D (Figure \ref{fig2}.B) by adding an off-fault component to the originally 2D seismicity model. Assuming that the seismicity is generated on the current main fault, we determine the principal components of the current seismicity cloud and use the empirical distribution of the seismicity along the smallest axis to define off-fault coordinates of the synthetic events.
A detailed flowchart of the HySei model can be found in the supplement (Figure S3).
\\
To illustrate the spatial differences between the two models, Figure \ref{fig3} shows cross sections of the 3D PDFs of SaSS (top row) and HySei (bottom row) at the moment and location of the biggest event ($m_w~=~3.1$), which occurred about $5$ hours after the shut-in.
\subsection{Testing}
To assess a single model, we check if its forecasts are consistent with the observations \citep{Zechar2010a}, asking the question: might the observations have been generated by this model? One way we do this is to check if the number of observed earthquakes falls within the $95\%$ confidence interval of the forecast. If so, the model passes the Number-test. In a similar way, we examine if the magnitude distribution of all forecasts is consistent with the observations (Magnitude-test). To test the spatial component (Space-test) \citep{Zechar2010a, Rhoades2011}, we use a testing grid of $4~km \times 4~km \times 4~km$ centered on the well tip and divided into $200~m \times 200~m \times 200~m$ voxels. After normalizing the forecasts so that the number of forecast events matches the number of observed events, we calculate the log-likelihood (LL) of the observation in each voxel. Summing these values gives a joint LL for a specific experiment. The higher the joint LL value, the better the forecast \citep{Zechar2010a, Rhoades2011}.
To check if the forecast is consistent with the observed seismicity of the forecast period, we simulate $1000$ catalogs from the forecast, and find the $5^{th}$ percentile of the LL values for the simulated catalogs. If the LL for the current observation is higher than the $5^{th}$ percentile, the forecast passes the Space-test --- the observed seismicity could have been generated by the model. Both models treat the earthquake distribution as Poissonian, so LL values are calculated as follows:
\begin{equation}
L(A) = \sum\limits_{i=1}^{n} \Big[k_{i} \times log \big(\lambda_{A_{i}} \big)-\lambda_{A_{i}}-log \big( k_{i}! \big) \Big]
\label{eq4}
\end{equation}
where $L(A)$ is the Poisson joint LL of forecast A, $n$ is the number of voxels, $k_{i}$ is the number of earthquakes observed in the $i^{th}$ voxel, and $\lambda_{A_{i}}$ is the forecast seismicity rate in the $i^{th}$ voxel of forecast A.
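The joint log-likelihood of Eq. \ref{eq4} is straightforward to evaluate; in the sketch below, $\log(k_{i}!)$ is computed via the log-gamma function to avoid overflow for large counts.

```python
import math

def poisson_joint_ll(observed_counts, forecast_rates):
    """Joint Poisson log-likelihood of an observed catalog given a
    voxel-wise forecast, Eq. (4); natural logs throughout."""
    ll = 0.0
    for k, lam in zip(observed_counts, forecast_rates):
        # lgamma(k + 1) equals log(k!)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll
```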
\\
To compare two models, one can directly compare the individual LL values of the models, either for model components (i.e., event numbers, magnitudes, or the spatial component) separately or for the entire model. These measures assess model performance not only against data but also against other models. We emphasize that LL values consider the whole model space: they reflect the performance not only in the temporal/magnitude/spatial bins that host at least one earthquake but also in the empty ones, answering the question: what is the probability of having zero earthquakes in a given temporal/magnitude/spatial bin?
One can also calculate the information gain of one model with respect to another for model comparisons. This measure emphasizes the non-empty bins by comparing the forecast seismicity rates of model $A$ with that of model $B$ in the voxels where earthquakes occurred. The following formula gives $I_{i}$, the information gain of model $A$ over model $B$ for an earthquake occurring in the $i^{th}$ voxel \citep{Rhoades2011}:
\begin{equation}
I_{i} = \frac{-N_{A}+ N_{B}}{N} + ln \Bigg( \frac{\lambda_{A_{i}}}{\lambda_{B_{i}}} \Bigg)
\label{eq5}
\end{equation}
where $N$ is the number of observed events, $\lambda_{A_{i}}$ and $\lambda_{B_{i}}$ denote the forecast seismicity rate in the $i^{th}$ voxel of model $A$ and $B$, respectively, and $N_{A}$ and $N_{B}$ are the total forecast number of events of model $A$ and $B$, respectively. The first term on the right-hand side is a penalty concerning the number of events under each model. We seek to know if one model is better than the other, in other words, if the expected value of the information gain population differs from zero. One can also estimate how much better or worse model $A$ is relative to model $B$ (i.e., the average information gain) by finding an appropriate estimator. Exponentiating the average information gain yields the average probability gain of model $A$ with respect to model $B$. Additionally, the $95\%$ confidence interval of the estimated expected value can be calculated to determine if model $A$ is significantly better or worse than model $B$: if the confidence interval contains zero, the difference between the models is not statistically significant at the $5\%$ significance level. \\
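Eq. \ref{eq5} in code form; two identical forecasts yield zero gain.

```python
import math

def information_gain(lam_a, lam_b, n_a, n_b, n_obs):
    """Information gain of model A over model B for one earthquake in a
    voxel with forecast rates lam_a and lam_b, per Eq. (5). n_a and n_b
    are the total forecast event numbers; n_obs is the observed number."""
    return (n_b - n_a) / n_obs + math.log(lam_a / lam_b)
```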
Several techniques are possible to compute the average information gain. \citet{Rhoades2011} suggested taking the arithmetic mean of the information gain distribution as the expected value of the population, based on Student's t-distribution \citep{Student1908}. We refer to this method as 'Classical mean'. This estimator is best if the population follows a normal distribution. Plotting the distribution of information gains (that is, for individual earthquakes) for SaSS relative to HySei as a function of time and in a quantile-quantile plot (Figure S4) suggests that the information gains are not normally distributed. One possible way to solve this problem is to seek an estimator that can tackle outliers systematically. This can be done by manual data screening and removal of outliers, but it can be impractical due to the large number of data points and possible masking (i.e., large outliers can hide smaller ones). To overcome these problems, we use robust statistics to automatically detect and downweight outliers \citep{Ruckstuhl2014}. We refer to this method as 'Robust mean'. To calculate the expected value of the information gain distribution, we compute a weighted mean where the influence of the outliers is reduced. In particular, we use the Huber M-estimator, implemented as \textit{mlochuber} in the LIBRA matlab package \citep{Verboven2005}. By using the Huber M-estimator, we avoid the problem that a few earthquakes dominate the estimate of the average information gain.
We also explore a non-parametric method: we generate $1000$ bootstrap samples of the observed information gains (i.e., we sample with replacement) and find the arithmetic average and the $2.5\%$ and $97.5\%$ percentiles, thus obtaining a 'Bootstrap mean' and the corresponding $95\%$ confidence interval. Using the same bootstrap samples, we also find the 'Bootstrap median'. We show a comparison of these methods in the next section.
\section{Results}
\subsection{Consistency tests}
Figure \ref{fig4} shows four snapshots of forecast and observed seismicity rates for both datasets. The top row shows the corresponding hydraulic data (injection rate and well-head pressure) to provide a time reference for the forecasts. Blue, red, green, and purple vertical lines indicate the end of the different learning periods: corresponding shaded areas show forecasts of the SaSS model (middle row) and the HySei model (bottom row) with $95\%$ Poissonian confidence intervals. In the case of Basel 2006, both models seriously overpredict the seismicity rate for LP1 (blue learning period that ends at day $1.25$). This might be due to the short learning period. Given a longer learning period (LP2, red learning period that ends at day $3.25$), the forecast is greatly improved for both models. SaSS struggles to forecast after both LP3 (green learning period that ends at day $5.25$) and LP4 (purple learning period that ends at day $9.5$), while HySei underpredicts after LP3 and gives a perfect forecast after LP4.
In the case of Soultz-sous-For\^ets 2004, SaSS gives good forecasts at first (after LP1, the learning period that ends at day $1.75$), then severely underpredicts (after LP2, the learning period that ends at day $3.5$), and finally significantly overpredicts the seismicity rate (after LP3 and LP4, the learning periods that end at day $5$ and $6.5$, respectively). HySei performs well in most of the cases (after LP2, LP3 and LP4), except after LP1. In this case, the model expects higher pressure in response to the injection peaks between day $2-3$, which results in overprediction of the seismicity rate. This might be due to the fact that a reversible component of permeability change, possibly arising from fracture compliance, is not included in this version of the model.
To show forecasts corresponding to all learning periods, we use a matrix representation where colors indicate the goodness of the forecast (Figure \ref{fig5}): yellow means a perfect forecast; red and blue mean under- or overprediction, respectively. Downward- and upward-pointing triangles denote moments when the observed seismicity rate falls out of the $95\%$ confidence intervals due to serious under- or overprediction, respectively. To avoid overlap of the forecast periods, we represent the $3$-day forecast period vertically: the end of the learning period is indicated on the horizontal axis, time during the $3$-day forecast period is indicated on the vertical axis with subsequent $6$-hour FTWs. Time in the forecast period increases from bottom to top.
The top row of Figure \ref{fig5} shows the observed seismicity rate for both datasets, middle and bottom rows show a comparison of observed seismicity rates with forecasts from SaSS and HySei, respectively.
In Basel, both models mainly overestimate the number of observed earthquakes during the initial stimulation period. When the injection rate was decreased and at shut-in, both models have difficulties forecasting the right number of earthquakes: they severely underpredict the observed seismicity rate. The SaSS model overpredicts for the post-stimulation period, whereas HySei seems to find good estimates most of the time for later periods (with the exception of three time windows). In Soultz-sous-For\^ets 2004, the SaSS model mainly forecasts well or overestimates the number of earthquakes during stimulation. The forecast period corresponding to the learning period of day $3.5$ stands out, when SaSS significantly underpredicted the number of earthquakes. This is because there is not yet enough data from the post-injection period to estimate post-stimulation parameters. During the post-stimulation period, the SaSS model overpredicts in almost all FTWs. On the other hand, the HySei model gives generally good results: there are only a few under- and overpredictions, mainly at the beginning of the injection, around shut-in, and near the end of the investigated period.
Overall, in most of the cases, HySei is better at forecasting the number of induced earthquakes; this is reflected by the number of unmarked FTWs in Figure \ref{fig5}. Moreover, for a small period of re-injection in Soultz-sous-For\^ets (at day $8$), HySei forecasts the number of events well, while the SaSS model significantly overpredicts.
In Figure \ref{fig6} we compare the observed magnitude distribution with forecasts from SaSS and HySei. Magnitude bins are $0.1$ units wide and range from $0.9$ to $4$ for Basel 2006 and from $0$ to $1.9$ for Soultz-sous-For\^ets 2004. We remind the reader that the Soultz-sous-For\^ets 2004 magnitudes are truncated, so the final magnitude bin contains all events that would have $m > 1.8$.
Both models forecast the magnitude distribution of micro-seismic events well, meaning that observed seismicity follows the Gutenberg-Richter relation in almost all cases. Nevertheless, the probability of the biggest event of the Basel 2006 project is very small in both models (insets in Figure \ref{fig6}b-c). The truncated magnitudes in Soultz-sous-For\^ets 2004 preclude us from considering the probability of the largest event in this data set, because we have no good estimate for the magnitude of the largest event.
We investigate the spatial component of the models by dividing the joint LL by the number of observed events (LL/Eqk) in Figure \ref{fig65}. We decided to normalize due to the fact that LL values are correlated with the number of earthquakes in a FTW. We use the same matrix representation as we introduced for the number component: ends of learning periods are indicated on the horizontal axis, FTWs on the vertical axis. Yellow indicates better results than red: the higher the LL value, the better the forecast. Crosses represent moments when the model does not pass the Space-test. Gray squares denote moments when no earthquake occurred. The gray dotted line marks the shut-in moment. It is clear that SaSS passes the Space-test more often than HySei does, especially after shut-in, for both datasets. Additionally, SaSS's LL values are higher than those of HySei, indicating that smoothed seismicity outperforms the simple geometry of HySei's forecasts.
\subsection{Ranking}
To be able to compare the two models we calculate LL from the absolute values of the Number- and Magnitude-test by answering the same question we addressed in case of the spatial component: what is the probability of the observation given the model forecast? We calculate LL values for all FTWs of all model components (Figure S5-S6). Figure \ref{fig7} gives an overview of differences between the model LLs. Green shows when SaSS performs better than HySei, pink shows when HySei is better than SaSS, white indicates that the models forecast similarly.
The magnitude component is exceptional in this figure, because we do not test the consistency of the forecast and observations in incremental FTWs, but rather the cumulative distribution. For instance, in the case of a $3$-day magnitude test we take all events that occurred in the forecast period from the end of the learning period until the end of day $3$. This yields a more stable distribution of the observed events that can be tested against a power law.
These results clearly confirm that the magnitude component is very similar in the two models, which is not surprising since both models use the Gutenberg-Richter relation. The differences lie in the number and spatial components. In terms of number, SaSS performs better at several moments during the stimulation and in the early post-stimulation period in Basel. HySei gives better results close to the shut-in and generally after the stimulation, especially at later moments of the experiment. The green color in most FTWs of the spatial component reveals that SaSS has the better spatial component, which is emphasized towards the end of the experiment.
To compare the entire model performance, we merge all components and calculate LL normalized by the number of earthquakes that occurred in the given FTW. Figure \ref{fig8} details the sum of LL/Eqk values of the individual FTWs for $6$-, $24$-, $48$-, and $72$-hour forecast periods. Three regimes can be observed in the case of Basel 2006:
\begin{itemize}
\item regime $A$: when models perform similarly well
\item regime $B$: when SaSS model is better than HySei
\item regime $C$: when HySei overcomes SaSS, especially for the longer forecast periods.
\end{itemize}
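The merging and normalization step described above can be sketched as follows (the component LL values per FTW are hypothetical inputs; windows without events are skipped, mirroring the gray squares in the figures):

```python
import math

def ll_per_eqk(ll_number, ll_magnitude, ll_space, n_events):
    """Joint log-likelihood of one FTW normalized by its event count."""
    if n_events == 0:
        return math.nan  # no earthquakes: leave the window undefined
    return (ll_number + ll_magnitude + ll_space) / n_events

def summed_ll_per_eqk(windows):
    """Sum of LL/Eqk over the FTWs of a forecast period.

    `windows` is an iterable of (ll_number, ll_magnitude, ll_space, n_events).
    """
    vals = (ll_per_eqk(*w) for w in windows)
    return sum(v for v in vals if not math.isnan(v))
```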
Comparing these results to the performance of individual model components, it is clear that the regimes are determined by the interplay of the number and spatial components. Both components of both models perform similarly in regime $A$, which results in similar overall performance. Around the shut-in, even if HySei gives better number forecasts for a short period, SaSS can compensate with its spatial component, and by the end of regime $B$ it also overcomes HySei with its number component, resulting in a better overall performance of SaSS for this period. As the number of events drastically decreases relative to previous periods in regime $C$, it seems that HySei's more precise number forecasts compensate for SaSS's better spatial forecasts, giving better overall LL values.
In the case of Soultz-sous-For\^ets 2004, only two of the three regimes are present: regime $B$ from the beginning of the experiment until about $1.5$ days after the shut-in (almost at the same moment as in Basel) and regime $C$ for the rest of the experiment. In the first part of regime $B$, the slightly better spatial component of SaSS compensates for the generally better number component of HySei, giving marginally better results for SaSS. From the shut-in until the end of regime $B$, the spatial component of SaSS is clearly better, together with the fact that HySei's number component is less dominant than previously. This results in a drop of the overall LL. The decrease in the number of induced earthquakes (regime $C$) highlights again that HySei's number component overcomes SaSS's better spatial component.
Summarizing the model comparison based on LL: SaSS obtains better results in space generally, and in terms of seismicity rate at some moments of the stimulation, and the entire SaSS model gives better results until a certain point after shut-in (regime $B$) for both datasets; HySei outperforms SaSS in forecasting the seismicity rate in the post-stimulation period, and its overall LL values are better in the late post-stimulation period, especially for longer forecast periods.
Figure \ref{fig9} presents the results of all $6$-hour information gains from the beginning until the end of the experiment for both datasets. Solid black lines indicate the empirical probability densities of the information gains, dotted gray lines denote normal distributions, where the expected values and standard deviations are estimated from the corresponding empirical distributions. To use the classical method to determine the average information gain, the population should be normally distributed. This is not the case, which is why we investigate four methods to calculate the average information gain: classical mean, robust mean, bootstrap mean, and bootstrap median corresponding to red, green, orange, and brown, respectively. Insets show the estimated average values with their uncertainties. \\
For both datasets the medians and robust means are closer to the clear peaks of the populations, whereas the classical and bootstrap mean values are shifted and have wider $95\%$ confidence intervals. In the case of the Basel 2006 data, the interpretation of model performance depends on the choice of the estimator: for the robust mean and bootstrap median HySei performs significantly better than SaSS, while for the classical and bootstrap mean it is exactly the opposite. This emphasizes that we should be cautious about information gain interpretations.
In our opinion, in case of information gain calculations, ($1$) it is necessary to check the distribution of the observed information gains, ($2$) it is recommended to use several estimators to have a clearer view of the possible average information gain values, and ($3$) to interpret the results carefully.
An overview of the average information gain for $6$-, $24$-, $48$-, and $72$-hour forecast periods with all four estimators can be found in the supplement (Figure S7-S10).
\section{Discussion}
Predictive models of induced earthquakes can help reduce seismic hazard and risk during reservoir stimulations. Although many models are being developed, most are presented in a context that is descriptive, not predictive: they are tuned using the entire data set, and so their ability to forecast is not checked. In this study, we propose a test bench to objectively evaluate various induced seismicity models. We bring to the test bench two models used to forecast two datasets. We demonstrate that such a test bench can quantify the forecast skill of different models. The results can give guidance on how to merge models. One possible way to combine models is to weight models by their past performance. The test bench can provide detailed information about the performance of the tested models that can be converted to probabilistic weights. Weighted average models have the potential to merge the best forecasting features of the tested models and can give important input for real-time forecasting and hazard assessment. The test bench can also highlight model features to be improved, e.g., because the model performs badly at forecasting one of the key parameters (i.e., event number, magnitude distribution, or spatial distribution) or during certain moments (e.g., during stimulation, at shut-in, or after shut-in).
Our test bench showed that both tested models have limited ability to accurately forecast the rate of induced earthquakes. The forecasts are particularly bad around shut-in. During stimulation and shortly after shut-in, we observe first a slight overprediction and then a severe underprediction as the injection rate decreases and stops. In the post-injection period, SaSS overpredicts the number of events (except at the moment when model parameters are not well calibrated due to the very short post-injection period).
As suggested by \citet{Langenbruch2010}, we use a generic value of $2$ for parameter $p$ when parameter estimation is not possible, and the same generic value is used if calculated values are lower than $2$. In Basel, we observed that the calculated values of $p$ are always smaller than $2$. This means that we always apply a decay with $p = 2$, which results in a faster decay than the data of the learning period would suggest. Nevertheless, all modeled decays are slower than the observed seismicity decay, as indicated by the massive overpredictions in the post-stimulation periods. In contrast, for Soultz-sous-For\^ets the estimated values of $p$ are always higher than $2$, allowing good forecasts at the beginning of the post-stimulation period, but the decreasing tendency of the values of $p$ results in overpredictions for later forecast periods. These results suggest that forecasting the post-injection seismicity is difficult and that the current post-injection seismicity decay law is not appropriate in an operational forecasting environment.
The spatial forecasts of the SaSS model gave generally good results. But these forecasts are limited by the fact that they are based on the current learning period. The model can give good forecasts when the seismicity is nearly stationary, i.e., new earthquakes occur where previous ones occurred. But this is often not the case in induced seismicity related to geothermal reservoir creation, where seismicity propagates with the pressure front. In future work, to incorporate diffusion-like propagation of the seismicity, we imagine a step-by-step spatial forecast for each FTW of the forecast period. One could simulate thousands of synthetic catalogs for the first FTW based on the learning period. Forecasts of FTWs are based on the PDF calculated from the synthetic catalogs of the previous FTWs. Temporal weighting (exponential or some other temporal weighting) of generated earthquakes can help to simulate the migration of the seismicity cloud.
One might also improve induced seismicity forecasting by considering Coulomb stress changes, which has been shown to be a good descriptive model of tectonic seismicity \citep{Steacy2005} and has been considered in the induced seismicity context: \citet{Orlecka-Sikora2010} suggested that static stress transfer can have an accelerating impact on mining-induced seismicity, and \citet{Schoenball2012} concluded that static stress change does not play an important role during stimulation but might help to trigger after shut-in in the Soultz-sous-For\^ets reservoir. Moreover, \citet{Catalli2013} found that $75\%$ of the analyzed induced earthquakes (based on \citet{Deichmann2009}) in Basel occurred in regions of increased Coulomb stress, where failure is thought to be encouraged. Unfortunately, prospective tests of the Coulomb stress hypothesis are difficult because one needs accurate, real-time estimates of hypocenter, magnitude, and focal mechanism, and one also needs some a priori knowledge of fault orientations in the reservoir.
Additional model improvements may relate to the statistical description of earthquake distributions. In the testing framework, and also in all CSEP experiments, earthquake occurrence is considered as a Poissonian process \citep{Eberhard2012}; LL and confidence interval computations are based on that assumption. The Poissonian assumption is not completely fulfilled, because earthquakes are not independent, neither in time nor in space. \citet{Eberhard2012} reported that the Poissonian distribution was not supported by the seismic data; others \citep[e.g.,][]{Kagan2010,Lombardi2010} have previously made the same observation in different regions and magnitude ranges. Failures of model forecasts might thus stem from the Poissonian assumption, in addition to a model not incorporating the necessary physical processes. Modeling earthquake occurrence as a Poissonian process is therefore not ideal, and improvements are the subject of further investigations.
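For reference, the Poissonian log-likelihood of observing $n$ events given a forecast rate $\lambda$, which underlies the LL and confidence-interval computations, is a standard expression and can be evaluated directly:

```python
from math import lgamma, log

def poisson_log_likelihood(n_obs, rate):
    """log P(n_obs | rate) under the Poissonian assumption:
    n*log(rate) - rate - log(n!)."""
    return n_obs * log(rate) - rate - lgamma(n_obs + 1)
```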
It is necessary to emphasize that all tests are highly dependent on the observed catalog. Thus, it is extremely important to detect events and to determine good origin times, magnitudes and precise locations. For the moment, it is still a challenge, especially in near real-time.
Our analysis further revealed that forecasting the rate and magnitude distributions around shut-in also remains a difficult question: the models often underpredict during this period and do not represent the magnitude distribution well. Presumably, this problem is not specific to the data we considered here, because in several other projects the biggest event occurred after shut-in \citep{Baisch2006,Asanuma2005}. Focusing on shut-in and the events that follow, \citet{Barth2013} showed theoretically, and also confirmed with the analysis of the data from Soultz-sous-For\^ets 2000, that the probability of exceeding a certain magnitude can be higher after shut-in than it would have been for on-going injection. \citet{Segall2015} proposed a descriptive model that includes complete poroelastic coupling --- changes in pore pressure induce stresses, and changes in mean normal stress induce changes in pore pressure --- and concluded that an abrupt shut-in can produce a sharp increase in the seismicity rate. Post shut-in peaks of the seismicity rate result from the rapid change in stress before the pore pressure can be relieved. Concerning post shut-in magnitudes, \citet{Segall2015} claimed that larger events are absent at short injection times but as injection proceeds the probability of larger earthquakes increases, thus larger events occurring post shut-in are not unexpected. Another explanation for large post-stimulation events came from \citet{McClure2015a}: simulation with the three-dimensional version of CFRAC \citep{McClure2012a} revealed that post-stimulation seismic events can be caused by backflow from dead-end fractures into fractures that host the largest event. He proposed that pumping of fluid to the surface immediately after shut-in could mitigate this effect and reduce post-stimulation seismic activity. The inferences made from these descriptive models ought to be used in future work to improve predictive models such as those considered in this study.
\section{Conclusions}
Forward-looking, near-real-time warning systems can help avoid large induced earthquakes and keep micro-seismicity at a tolerable level during and after project operations. The Induced Seismicity Test Bench can be used to test the core of such a warning system, an Adaptive Traffic Light system. Here, we tested, compared and ranked the performance of the SaSS and the HySei models.\\
To say which of these models performs best is not straightforward. In terms of magnitude, both models forecast micro-seismicity fairly well, but neither of them is able to forecast the biggest $m_{w}3.1$ event. In terms of seismicity rate, the HySei model gives good forecasts most of the time, especially for late post-stimulation periods, but it can under- and overpredict at some moments. In the case of the Basel 2006 project, we observe a clear distinction between model performances: SaSS is better at some moments of the stimulation period and shortly after shut-in; HySei outperforms SaSS close to shut-in and for most of the post-stimulation period. In terms of spatial distribution, smoothed seismicity based on learning periods (SaSS model) appears to outperform the radially symmetric geometry (HySei model). If we compare the entire models, SaSS seems to give higher LL/Eqk values from the beginning until a certain moment after shut-in, when HySei takes over, especially for longer forecast periods. \\
Although our analysis is restricted to only two geothermal projects, we can generally conclude that the seismogenic index forecasts the earthquake rate better during stimulation and HySei gives better seismicity rates after shut-in; smoothed seismicity with temporal weighting performs better in forecasting the spatial component. Certainly, it would be beneficial to consider additional models and datasets in future work. In this study we introduced a comprehensive test bench for induced seismicity with the goal to better understand the behavior of injection-related reservoirs and to develop an operational Adaptive Traffic Light system for geothermal projects. With the establishment of this test bench, we challenge modelers to make predictive models, forecast induced seismicity, test their models for consistency, and compare model performance: we believe this is the most efficient way to reduce induced seismic hazard.
\begin{acknowledgments}
We acknowledge the GEOTHERM, GEOTHERM-$2$ and GEISERS projects for financial support to develop fundamental ideas concerning the Adaptive Traffic Light System and providing the stimulation data of Soultz-sous-For\^ets 2004. The authors would like to thank EEIG Heat Mining for permission to publish the data. Acknowledgement is also due to the numerous agencies which have supported the Soultz project over the years including the European Union, ADEME of France, BMU of Germany, SER and SFOE of Switzerland, and the EEIG 'Exploitation Mini\`ere de la Chaleur' consortium. Access to the data is provided by contacting the authors. We thank Arnaud Mignan, Antonio Pio Rinaldi and Eduard Kissling for their valuable comments on an earlier version of the manuscript. We also thank Yehuda Ben-Zion as editor, the associate editor, Carsten Dinske and three anonymous reviewers for their comments and suggestions. E.K.-P. acknowledges the GEOTHERM-$2$ project for financing her PhD. This work has been partially completed within the Swiss Competence Center on Energy Research - Supply of Electricity, with the support of the Swiss Commission for Technology and Innovation.
\end{acknowledgments}
\section{Introduction}
In our recent works we have reported about the optical properties of Cesium
Halide thin films, namely, Cesium Chloride (CsCl), Cesium Bromide (CsBr) and
Cesium Iodide (CsI). The optical properties stood out due to the singular
appearance of Surface Plasmon Resonance (SPR) peaks in the visible region.
While we confirmed that the SPR peaks arise due to the formation of
Cesium metal clusters, two basic questions arise from these observations:
{\sl ``What mechanisms lead to the formation of cesium metal nano-clusters,
especially the observed nano-rods?''} and {\sl ``Why does CsI behave
differently from CsCl and CsBr?''} A direct experimental investigation into
the formation of metal nano-clusters is not possible; however, we may
logically speculate on the sequence of events that led to the formation of
the metal nano-clusters.
\section{Results and discussion}
Thin films of cesium halide were fabricated by thermal evaporation in
vacuums better than ${\rm 10^{-5}}$~Torr. The films were deposited on
microscopy glass slides maintained at room temperature. All the films were
fabricated in identical conditions. An immediate observation from
fig~1 is the striking similarity between the morphologies of CsCl, CsBr
(not shown here) and CsI polycrystalline
thin films. Large grains are tiled and tightly packed with sharp grain
boundaries. The grains, however, do not have regular shapes. These SEM
micrographs were taken within a few hours of sample fabrication. The
morphologies gradually change with time as neighbouring grains recede to
become smaller and spherical in appearance (fig~2).
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.6in]{c61.eps}
\includegraphics[width=2.7in]{c63.eps}
\caption{\sl SEM images show the surface morphology
(polycrystalline nature) of thin films of Cesium Chloride and Cesium Iodide
respectively.}
\end{center}
\label{tiles}
\end{figure}
\begin{figure}[h!!!]
\begin{center}
\includegraphics[width=2.45in]{spherical.eps}
\hfil
\includegraphics[width=3.0in]{shcsi.eps}
\caption{\sl SEM morphologies gradually change with time as neighbouring
grains recede to become smaller and spherical in appearance. (Images are of
the CsCl and CsI samples shown in fig~1 after ageing.)}
\end{center}
\label{fig2}
\end{figure}
\subsection{Grain Boundary Grooving: Surface Diffusion}
Nature tends to force grains to take up a spherical shape:
this is nature's way of minimizing the free energy of the system
(G)~\cite{xin}. The equation used to model this behavior is given
as
\begin{eqnarray}
G=\gamma_sA_s +\gamma_bA_b\label{groove}
\end{eqnarray}
where ${\rm \gamma_s}$ and ${\rm \gamma_b}$ are the surface energy per unit
area and grain boundary energy per unit area respectively. ${\rm A_s}$ and
${\rm A_b}$ being the grain's surface area and grain boundary area
respectively. Grain boundary area is the area of contact between two grains.
For a spherical grain of a given volume, the system's free
energy is the least due to a large decrease in grain boundary area,
${\rm A_b}$ at the cost of a relatively small increase in surface area
(${\rm A_s}$).
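The energy balance in eq~(\ref{groove}) can be illustrated numerically. The numbers below are purely illustrative (unit surface and grain boundary energies and a grain of unit volume, chosen only so the trade-off is visible; they are not measured values for cesium halides):

```python
from math import pi

def free_energy(gamma_s, gamma_b, a_s, a_b):
    """G = gamma_s * A_s + gamma_b * A_b, as in eq. (1)."""
    return gamma_s * a_s + gamma_b * a_b

V = 1.0
a_cube = 6.0 * V ** (2.0 / 3.0)                           # faceted, tiled grain
a_sphere = (36.0 * pi) ** (1.0 / 3.0) * V ** (2.0 / 3.0)  # ~4.84, less than 6

# tiled grain: one face exposed as free surface, five shared with neighbours
g_tiled = free_energy(1.0, 1.0, a_s=a_cube / 6.0, a_b=5.0 * a_cube / 6.0)
# spherical grain: almost all surface free, only tiny contact patches
g_round = free_energy(1.0, 1.0, a_s=0.98 * a_sphere, a_b=0.02 * a_sphere)
# g_round < g_tiled: the sphere trades a small increase in free surface
# area for a large drop in grain boundary area, lowering G
```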
\begin{figure}[h!!!]
\begin{center}
\includegraphics[width=3in]{triplejun.eps}
\caption{\sl Schematic representation of the triple junction, the point
where grain boundary meets the surface. }
\end{center}
\label{triplejun}
\end{figure}
The shaping of the grain, i.e., its change to a spherical shape, is due to grain boundary
grooving. ``Grooving'',\cite{mull,her,tsoga} as the name suggests, is the
phenomenon
by which gaps appear between the grains of polycrystalline samples. The
fissures develop from the film surface towards the substrate. Theoretical
models show that grooving occurs due to atoms moving along the surface away
from the ``triple intersection point'' by surface diffusion~\cite{sun}
(fig~3). Fig~1 clearly shows that, due to the near-hexagonal tiling of the
grains, triple intersection points are formed even within the film. Fig~4
shows a pictorial representation of fig~1 (the initial situation is shown in the
inset) followed by how these triple intersection points evolve, pushing into
the grain and thus giving it a spherical shape.
\begin{figure}[h!!!]
\begin{center}
\includegraphics[width=3in]{t1.eps}
\caption{\sl Pictorial representation of how triple intersection points
evolve, pushing grains to take spherical shapes. Arrows indicate
the direction in which these triple points evolve.}
\end{center}
\label{t1}
\end{figure}
In the above paragraph, we have assumed ${\rm \gamma_b}$ to be a constant. This
is far from the truth, since the grain boundary energy depends on the lattice
defects caused by the mismatch between the lattice at the grain boundary and
its bulk~\cite{frank,bass,maks,stra,maks1,gari}. This mismatch manifests as a
shell region whose thickness is proportional to the grain size. Smaller grains
have
thinner shells with less lattice mismatch and hence smaller ${\rm \gamma_b}$.
Nature therefore breaks the grains into smaller spherical grains with time,
reducing the system's free energy by lowering
${\rm \gamma_b}$.\cite{bouville2} Fig~5 highlights the formation of a shell
around the grain via volume diffusion of defects towards the surface. We shall
discuss volume diffusion further below; however, at this point it is important
to appreciate that if there is a lattice mismatch between the shell and core,
then ${\rm \gamma_b}$ takes values that assist grooving of the grains.
\begin{table}[h]
\begin{center}
\caption{Table for Lattice mismatch in core-shell structure}
\vskip 0.5cm
\begin{tabular}{lllll}
\hline
Halides & Core structure & Lattice constant (\AA) & Shell structure & Lattice
constants (\AA) \\ \hline
Chlorine & Cubic & 4.12 & Cubic & 5.838 \\
Bromine & Cubic & 4.29 & Cubic & 5.984 \\
Iodine & Cubic & 4.568 & Tetragonal & 3.3645,3.4645,12.552 \\ \hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
\begin{figure}[h!!!]
\begin{center}
\includegraphics[width=3in]{t3.eps}
\caption{\sl Pictorial representation of shell formation.}
\end{center}
\label{t3}
\end{figure}
As described above, the lattice mismatch of the two regions contributes to
${\rm \gamma_b}$. In our case, this lattice mismatch is due to a core-shell
structure, with a Cesium shell forming around a core of Cesium halide. The
existence of free metallic Cesium was shown by the just-resolvable XRD
peaks.\cite{kapil1, kapil2}
The XRD results also allowed us to determine the
lattice structure and size for the two regions. Table~1 lists the
results for easy viewing. A larger mismatch between the Cesium and Cesium Halide
structures implies a greater ${\rm \gamma_b}$. Since nature tries to bring down
the system's free energy by reducing ${\rm \gamma_b}$ via grain division, the
shells become thinner, thus reducing the region of lattice mismatch.
Since the cesium shell in CsI takes a tetragonal structure, which is
very different from the cubic-structured core, this necessarily indicates a
faster grain division/breakage in CsI as compared to CsCl and CsBr.
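The degree of mismatch implied by Table~1 can be quantified with a simple relative-difference measure (the percentage metric below is our illustrative choice, not one used in the cited XRD analysis):

```python
def lattice_mismatch_percent(core_a, shell_a):
    """Relative mismatch of shell vs core lattice constants, in percent."""
    return 100.0 * abs(shell_a - core_a) / core_a

# cubic-on-cubic cases from Table 1 (lattice constants in Angstrom)
cscl = lattice_mismatch_percent(4.12, 5.838)   # ~41.7 %
csbr = lattice_mismatch_percent(4.29, 5.984)   # ~39.5 %
# For CsI the shell is tetragonal (a=3.3645, b=3.4645, c=12.552), so no
# single cubic mismatch number applies: the change of crystal structure
# itself is the larger incompatibility discussed in the text.
```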
\subsection{Core-Shell Formation: Volume Diffusion}
As stated above, a thin layer (shell) of Cesium is formed around the Cesium
halide core in our samples. The question is, {\sl ``how do the halide atoms
disappear from the surface, or possibly, how does cesium reach the surface
with no halide atom?''} While iodine might have a tendency to sublimate, leaving
behind free Cesium, the remaining halides do not sublimate.
Hence, the question of importance is {\sl ``how do cesium clusters form at
the grain surface?''} Cesium Halide crystals/films readily form color centers.
These are point defects due to the absence of the massive halide atoms from
the lattice, which results in residual tensile stress within the lattice. The
neighbouring lattice in turn experiences compressive stress. The resulting
stress gradients drive the diffusion of defects~\cite{gari}.
Consider the diffusion process of a vacancy. From its initial position to its
final position, the vacancy moves through intermediate steps, essentially
following the minimum-energy diffusion path through the highest free-energy
intermediate state. The difference in free energy between the
intermediate steps is called the migration energy ($E_{mig}$). The process of
defect diffusion depends on (i) the formation of the defect ($E_{form}$) and
(ii) its migration. Hence we can speak of an activation energy for defect
diffusion, which is the sum of these energies, i.e.
\begin{eqnarray}
A=E_{form}+ E_{mig}\nonumber
\end{eqnarray}
The diffusivity in terms of the activation energy is given by~\cite{zangwill}
\begin{eqnarray}
D=\frac{f\nu d^2}{6}\exp\left(\frac{-A}{k_B T}\right)\label{gana}
\end{eqnarray}
where `$f$' is a correlation factor (a proportionality constant), $\nu$ the
attempt frequency, `$d$' the hop distance, and $k_B$ Boltzmann's constant.
The vacancies migrate outward along the stress
gradient towards the grain surface, leading to an
accumulation of point defects at the surface. These point defects then
combine to give Cs metal clusters.
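For orientation, eqn~(\ref{gana}) can be evaluated numerically. The sketch below uses generic order-of-magnitude inputs: the attempt frequency, activation energy and correlation factor are illustrative assumptions, not values measured in this work; only the hop distance is taken from the CsBr lattice constant of Table~2. It merely illustrates the strong thermal activation of the defect diffusivity.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def diffusivity(f, nu, d, A, T):
    """D = (f * nu * d^2 / 6) * exp(-A / k_B T), as in eqn (gana)."""
    return (f * nu * d**2 / 6.0) * math.exp(-A / (K_B * T))

# Illustrative inputs (assumptions, not fitted to experiment):
f  = 1.0        # correlation factor, order unity
nu = 1.0e13     # attempt frequency ~ typical lattice (Debye) frequency, Hz
d  = 4.29e-10   # hop distance ~ CsBr lattice constant (Table 2), m
A  = 1.0        # activation energy, eV (assumed)

D_room = diffusivity(f, nu, d, A, 300.0)
D_hot  = diffusivity(f, nu, d, A, 600.0)
print(D_room, D_hot)  # thermally activated: D rises steeply with T
```

The exponential factor dominates: doubling the temperature from 300~K to 600~K enhances $D$ by many orders of magnitude for eV-scale activation energies.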
As a starting point of discussion, let us assume in eqn~(\ref{gana}) that the
activation energies of all three cesium halides are comparable. We would
then find the diffusion coefficient to be proportional to
\begin{eqnarray}
D_{halide} \propto \nu d^2\nonumber
\end{eqnarray}
The hopping frequency $\nu$, i.e. the rate at which halide atoms migrate
to produce the vacancy's diffusion, depends on the halide atom's inertia, or
mass. The heavier the atom, the lower its hopping frequency. Hence
$\nu$ is inversely proportional to the halide atom's mass.
Also, `$d$',
the hop distance, is directly proportional to the lattice constant.
A plot of the lattice constant against the halide atomic mass shows a linear
relationship (fig~6). This is expected, since heavier atoms have larger
radii and in turn larger lattice dimensions. The migration of an atom to
fill a vacancy (giving the illusion of vacancy migration) is
discouraged with increasing distance. Hence `$d$' should show an inverse
proportionality with mass; however, it should be noted here that the lattice
size increases more rapidly than the radius of the halide atom as we move
from CsCl to CsI. This means it is easier for an iodine atom to
move from one unit cell to another than for a chlorine atom in its
lattice. Taking these into account, `$d$' is effectively independent of the halide
atom's mass ($d \propto \frac{1}{m}\times m = {\rm constant}$). Hence, the
diffusion coefficient of eqn~(\ref{gana}) is inversely proportional to the
concerned halide atom's mass
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.0in, angle=-90]{mass.eps}
\hfil
\includegraphics[width=2.0in, angle=-90]{rad.eps}
\caption{\sl Variation of the lattice constant of cesium halides with the
halogen atomic mass}
\end{center}
\label{fig11}
\end{figure}
\begin{eqnarray}
D_{halide} &\propto & \left(\frac{1}{m}\right) \left(\frac{1}{m^2}
\right)m^2\nonumber\\
&\propto & \frac{1}{m} \label{diff}
\end{eqnarray}
From the listings in Table~2 it is clear that $D_{Cl}\,>\,D_{Br}\,>\,D_{I}$.
This implies that the cesium shell formation is fastest in CsCl,
followed by CsBr and finally CsI.
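With the $1/m$ scaling of eqn~(\ref{diff}) and the halide atomic masses of Table~2, the relative diffusivities can be tabulated directly (a sketch; only the ordering and the ratios are meaningful, and the normalization $D_I=1$ is arbitrary):

```python
# Halide atomic masses (amu) from Table 2
masses = {"Cl": 35.457, "Br": 79.904, "I": 126.9}

# D_halide proportional to 1/m, eqn (diff); normalize to D_I = 1
D_rel = {x: masses["I"] / m for x, m in masses.items()}

# Print fastest-to-slowest: the ordering D_Cl > D_Br > D_I follows,
# i.e. fastest Cs-shell growth in CsCl, slowest in CsI
for x in sorted(D_rel, key=D_rel.get, reverse=True):
    print(x, round(D_rel[x], 2))
```

The chlorine diffusivity comes out roughly 3.6 times that of iodine on this scaling, consistent with the text's ordering of shell-formation rates.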
\subsection{Formation of Nano-rods: Surface Diffusion/ Necking}
As the grooving takes place and the large grains separate into two
daughter grains, surface diffusion of cesium metal occurs alongside the
volume diffusion of defects. This surface diffusion proceeds towards
faceted sites on the grain boundary. It assists the accumulation of Cesium,
which pushes the daughter grains apart and forms a bridge of Cesium between
the two grains (see fig~7). As the distance between the two grains
increases, this bridge is elongated.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.0in]{t5.eps}
\caption{\sl Pictorial representation of ``neck" formation.}
\end{center}
\label{t5}
\end{figure}
The transmission electron micrograph of fig~8 shows the bridge/nano-rod
formations between two evolving ``daughter grains'' of Cesium Bromide. These
bridges are called ``necks'', marking the
constricted regions joining two grains. As stated, necks usually
form at sites of faceted grain boundaries \cite{faceted1, faceted2}. With
a core-shell structure in all three halides, neck formation is
expected. The necks are marked by a difference in
curvature: while the grain surfaces are convex, necks have a concave
surface~\cite{brown1may}. Matter flows from the grain surface to the region of
maximum variation of curvature, i.e. towards the neck. That is, neck
growth and elongation result from surface diffusion~\cite{julien, maeno}.
Surface diffusion is different from volume diffusion (defects migrating
towards the surface). Since our grains have Cesium shells,
surface diffusion essentially ends up carrying Cesium towards the neck.
Hence, the bridges formed are made up of Cesium, which on breaking free from
the daughter grains
gives the nano-rods that contribute to the Surface Plasmon Resonance seen in
the UV-visible absorption pattern.
\begin{figure}[t!!]
\begin{center}
\includegraphics[width=4.25in, angle=0]{volsur.eps}
\caption{\sl The two diffusions, volume and surface diffusion contribute to
the forming of spherical grains with Cesium shell in Cesium halides.}
\end{center}
\label{thermal}
\end{figure}
Another important observation we reported was the ageing of our samples.
That is, the optical properties of the films were found to change with time.
The absorption peaks not only red-shifted but their
intensities also diminished with time. We also observed that the rate of ageing
was sensitive to the ambient atmosphere in which the samples were
maintained, with ageing being more rapid in samples kept out of the
desiccator. Our Gans model calculations explained the red-shift by the
nanorods becoming smaller, with a lower aspect ratio. We believe the Cesium
nanorods and free Cesium on the grain surfaces are eroded by the reaction
\begin{eqnarray}
{\rm 2\,Cs+2\,H_2O \rightarrow 2\,CsOH+H_2}\nonumber
\end{eqnarray}
\begin{table}
\caption{Physical parameters of the halogen atoms and the cesium halide lattices}
\label{tbl:example}
\begin{tabular}{lllllll}
\hline
Halogen & At. No. & Lattice Size (\AA) & Coval. Dist. (pm) &
At. Mass (amu) & Coval. Radius (pm) & M. Point ($^{\circ}$C) \\
\hline
Chlorine & 17& 4.12 & 198 & 35.457 & 99 &645 \\
Bromine & 35 & 4.29 & 227 & 79.904 & 114 &636 \\
Iodine & 53 & 4.56 & 272 & 126.9 & 133 & 621 \\
\hline
\end{tabular}
\end{table}
K. Arima et al.~\cite{arima} have observed that this reaction proceeds
slowly for atmospheric relative humidity (RH) below 30\% and speeds up
with increasing RH, with a near exponential increase in the reaction rate
for RH above 80\%. The work of Arima et al.~\cite{arima}
would explain the difference in ageing observed by us between the samples
maintained in the desiccator and those left in the open. Also, Antonelli et
al.~\cite{anton} have shown that vacancy generation increases
under high hydrostatic (compressive) pressure. Hence, metallic Cs formation
is encouraged in the desiccator, where the $Cs+H_2O$ reaction rate is slower.
While this reaction involves only Cesium, we do observe a difference in
ageing rate depending on which halide is bonded to Cesium. This is due to
the fact that ageing depends on (i) the rate at which Cesium clusters are
formed at the surface (volume diffusion, inversely dependent on the halide
mass) and (ii) the shell thickness, a thicker shell leading to faster
grooving (surface diffusion due to lattice mismatch).
Comparing the ageing rates of 600~nm thin films of CsCl, CsBr and CsI, we
found the rate of ageing in the following order: $CsCl\,>\,CsBr$, with no
ageing in CsI. The
fastest ageing was seen in CsCl, since the color centers moved towards the
surface fastest, the volume diffusion rate being inversely proportional to
the halide atom's mass. With the lattice mismatch being low, grooving was
slower and the formation of nano-rods retarded. In the case of CsI, the
formation of the Cesium shell would be slower, but grooving faster. The
thinner Cesium shells would not enable the formation of large nanorods,
giving the impression of no ageing in CsI.
\section{Conclusion}
To summarize, thermally evaporated films of the three halides result in
polycrystalline films with large grains and sharp grain boundaries. The
grain boundaries become sites of grooving, with material being pulled
away from the grain boundary to give spherical grains. Along with this,
vacancies in the form of color centers in the
cesium halide lattice move towards the grain surface. The rate at which
this volume diffusion occurs depends on the halide atom's mass (it is faster
in CsCl than in CsBr or CsI). At the surface, these color centers accumulate,
forming metal clusters and giving rise to a core-shell structure. Depending
on the type of lattice mismatch in the core-shell
structure, two alternative sequences occur. While screw
dislocations lead to necking, the mismatch in lattice size leads to
accelerated grooving. Grooving is faster in CsI than in CsCl and CsBr.
Accelerated grooving leads to smaller grains. Smaller grains encourage the
reaction with water vapor, thus reducing
the shell size and making it difficult to sustain the necks, which break
off faster. The ageing observed in our samples is essentially
due to the Cesium nanorods and their decreasing aspect ratio. The decreasing
aspect ratio with time, and in turn the successive grain division, would be
due to the smaller grain size. A smaller grain size would in turn imply
thinner shells and a lesser amount of Cesium, which would not allow the
growth of longer nanorods. Since CsI has a large mismatch in its
core-shell lattices, grooving is faster and the shell thinner compared to
CsCl and CsBr. Hence the nano-rods formed are of smaller length. This makes it
difficult to obtain SPR peaks shifting throughout the full visible range
with ageing in CsI.
\acknowledgement
The authors would like to express their sincere gratitude
to Department of Science and Technology (DST) India (SR/NM.NS-28/2010) and
University Grants Commission (UGC, Delhi) (F.No. 39-531/2010 SR) for
the financial assistance given for carrying out this work.
\subsection{References}
\tableofcontents
\section{Introduction and summary}
Moduli stabilization in string theory is a long-standing crucial
issue if one is to make contact with low energy (accelerator)
physics \cite{Lust:2008qc}. A very promising path is to turn on
fluxes along the internal directions that generate a potential for
the moduli fields \cite{flux}. However quantization of strings in
the presence of geometric fluxes is only possible for very special
choices involving only open string fluxes
\cite{Antoniadis:2004pp}. Many interesting cases, such as
combinations of NS-NS and R-R closed string fluxes, are only
amenable to an analysis in the low-energy supergravity
approximation. At the same time, a ten-dimensional perspective has
a hard time describing non-geometric fluxes and torsions that may
admit perfectly consistent description in four dimensions as
effective (gauged) supergravities \cite{flux}. Quite remarkably,
resorting to non-geometric constructions, yet based on exactly
solvable (rational) CFT, it is possible to stabilize many if not
all the closed string moduli at tree level or in perturbation
theory. One can then turn on allowed fluxes or invoke perturbative
and non perturbative effects (such as D-brane instantons) to
stabilize the remaining moduli.
Aim of this paper is to further explore controllable mechanisms
(in string perturbation theory) of moduli stabilization based on
exactly solvable (rational) CFT's. We will present simple
non-geometric examples of asymmetric orbifolds
\cite{Narain:1986qm} of special tori
or, equivalently, free fermionic constructions
\cite{Kawai:1986va,abk} with few moduli. The strategy we adopt rests
on the simple observation that chiral (and thus non-geometric)
twists tend to freeze out untwisted moduli while shifts tend to
eliminate twisted ones \cite{Tfolds}. Asymmetric orbifolds of Type
IIB involving chiral twists with no shifts have been previously
studied in \cite{Bianchi:1999uq}.
The simplest non-geometric twist one can think of, is a ${\mathbb Z}_{2L,R}$
chiral reflection acting on the Left or Right moving closed string
modes. This is nothing but an element of the T-duality group acting on the
worldsheet fields.
Here we combine T-duality twists of this type with asymmetric shifts
to build ${\cal N}=2$
compactifications of Type IIB with few moduli. More precisely, we consider
${\mathbb Z}_{2L}\sigma_A \times
{\mathbb Z}_{2L}' \sigma_B \times {\mathbb Z}_{2R}\bar{\sigma}_C\times {\mathbb Z}_{2R}'\bar{\sigma}_D$ orbifolds of Type IIB on the maximal $T^6$ torus
of $SO(12)$ with $\sigma$'s some half-shifts.
We obtain several models with low
``effective'' Hodge numbers starting from $(h_{11}, h_{21})=(1,1)$.
The construction admits a simple description in terms of free fermions that allows
a systematic search by computer means.
In view of the possibility
of performing an unoriented projection and including D-branes and
open strings, we mainly focus on non-geometric Type IIB models in
four dimensions with chiral actions on Left-movers mirrored by
identical actions on the Right-movers \cite{BPS, open}.
${\cal N}=1$ vacua, following from non-geometric orbifolds of
Type IIB involving $(-)^{F_R}$ projections breaking all the susy
from the Right-movers will be also considered\footnote{
Being non Left-Right symmetric, these models do
not admit a natural unoriented projection but can be coupled to
generalized D-branes of the kind proposed by one of the authors
in \cite{Bianchi:2008cj}.}.
In both cases we find ${\cal N}=1$
models with vector multiplets and few chiral multiplets.
Some comments on the subtle role played by discrete moduli in
asymmetric orbifolds are in order. Asymmetric orbifolds typically
require specific choices of the internal lattice where
``untwisted'' moduli (metric and B-field) are frozen to specific
values. As one is exploring different branches of the original
moduli space, even geometric projections give rise to peculiar
twisted spectra \cite{CRISTINA}. To be specific, starting with the
maximal torus of $SO(12)$ the number of twisted sectors gets
reduced from 48 (16 per each twist) to 12 with a different
chirality structure. As a result, a ${\mathbb Z}_2\times {\mathbb Z}_2$ orbifold of
the $SO(12)$ torus has ``effective'' Hodge numbers $(h_{11}
,h_{21})= (15,15)$ rather than $(h_{11},h_{21}) = (51,3)$ or
$(h_{11},h_{21}) = (3,51)$ as expected when the off-diagonal
components of G and B are set to zero \cite{Berkooz:1996dw}. The
somewhat analogous peculiarities resulting from turning on a
discrete quantized value for the B-field, originally observed in
\cite{BPStor} and then in \cite{MBtor,EWtor,Angelantonj:1999xf},
has been recently reanalyzed in \cite{CBetal, Pesando:2008xt}.
The plan of the paper is as follows. In Section 2 we sketch the
idea of perturbative moduli stabilization by means of (T-duality)
twists and shifts. In Section 3 we describe the basic ingredients
of the free fermionic construction with particular attention to
the case of chiral ${\mathbb Z}_2$ actions. In Section 4 we present the
results of a systematic search over consistent ${\mathbb Z}_2^4$ orbifolds
of Type IIB models with ${\cal N}=2$ susy that admit natural
projections to unoriented ${\cal N}=1$ theories. In particular, we
analyze in some details the ``minimal'' model with ``effective''
Hodge numbers $(h_{11},h_{21})=(1,1)$, that seems to have escaped
previous scans in the literature \cite{Kiritsis:2008mu,
Donagi:2008xy}. In Section 5 we describe oriented Type II models
with ${\cal N}=1$ susy.
In Section 6, we present an unoriented model without D-branes based
on the Type IIB model with $(h_{11},h_{21})=(1,1)$ and consistent
with the asymmetric nature of the shift-orbifolds presented in
Section 4. We also analyze a simple instance of an unoriented
model with open strings.
Finally, Section 7 contains our conclusions and some perspectives on the
issue of moduli stabilization. Useful formulas are reported in
Appendices A and B.
\section{Twists and shifts}
In view of moduli stabilization, a particularly promising class of
solvable models are asymmetric orbifolds of tori
\cite{Narain:1986qm}. Indeed, chiral twists tend to freeze out
untwisted moduli while
(non-geometric) shifts tend to eliminate twisted
moduli. In Left-Right asymmetric constructions level matching
constraints are very demanding and the prospects for a systematic
analysis are daunting. A very simple class of solvable models
which are equivalent to asymmetric orbifolds of special tori are
free fermionic models \cite{Kawai:1986va,abk}. The rules for
constructing modular invariant partition functions compatibly with
both world-sheet and space-time supersymmetry are well understood
and will be reviewed in the next Section. Here we would like to
offer a geometric interpretation of the free fermion
${\mathbb Z}_2$-reflections in terms of T-duality twists and shifts.
We will denote by $I_{i}$ a ${\mathbb Z}_{2L}$ chiral reflection of the
$i^{\rm th}$ Left-moving internal bosonic and fermionic
coordinates \begin{eqnarray}
I_i: && X_L^i \rightarrow - X^i_L \ ,
\quad \quad X_R^i
\rightarrow X^i_R \ , \quad \quad
\psi^i \rightarrow -
\psi^i \ , \quad \quad \tilde\psi^i \rightarrow \tilde \psi^i \ .
\end{eqnarray}
In a similar way one defines the Right-moving twist as
\begin{eqnarray}
\bar I_i: && X_L^i \rightarrow X^i_L \ , \quad \quad X_R^i
\rightarrow -X^i_R \ , \quad \quad \psi^i \rightarrow
\psi^i \ , \quad \quad \tilde\psi^i \rightarrow - \tilde \psi^i \ .
\end{eqnarray}
In addition, we denote by $I_{i_1 i_2\ldots}=I_{i_1} I_{i_2}\ldots $ the simultaneous
reflections along the
$(i_1 i_2\ldots )$ directions and similarly for the Right moving ones.
We will consider ${\mathbb Z}_2^4$ orbifolds with generators including Left and Right twists
$I_{3456}, I_{1256}$ and $ \bar I_{3456},\bar I_{1256}$ respectively.
Each twist breaks half of the Left or Right moving supersymmetries and
one is left with $1/4$ of the original spacetime susy.
Moreover, all untwisted NS-NS moduli fields
\begin{equation} |i\rangle_L \otimes |j \rangle_R = \psi^i_{-{1\over 2}} |0\rangle_L \otimes \tilde\psi^j_{-{1\over 2}} |0 \rangle_R
\quad\quad i,j=1,\ldots , 6 \quad,
\end{equation}
are projected out by the orbifold group. This implies that both
shape and size deformations of the internal manifold are frozen
out. Similarly, in the untwisted R-R sector one can see that only
the scalar and the axion that together with the dilaton/axion NS-NS
moduli complete the universal hypermultiplet survive the
projection.
Let us now consider moduli coming from the twisted sector.
In order to lift as many massless twisted states as possible one
has to combine chiral twists with chiral (non-geometric) shifts.
We denote the Left moving chiral shift along the $i^{\rm th}$ direction by
\begin{equation}
\sigma_{i}: X^i_L \to X^i_L+\delta \ , \qquad X^i_R \to X^i_R \quad ;
\end{equation}
with $2\delta$ a chiral lattice vector. Similarly we denote by
\begin{equation}
\bar \sigma_{i}: X^i_R \to X^i_R+\bar \delta \ , \qquad X^i_L \to X^i_L \quad;
\end{equation}
the Right moving shifts and by $\sigma_{i_1 i_2\ldots}$, $\bar\sigma_{i_1 i_2\ldots}$ the multiple shifts.
Level matching, {\it i.e.\ } modular invariance, puts severe constraints on the allowed choices of $\sigma$'s.
Another tool one can resort to in order to eliminate twisted
moduli is the judicious choice of discrete torsion
\cite{Vafa:1986wx, Vafa:1994rv}, {\it i.e.\ } of the relative signs (for
${\mathbb Z}_2$) that multiply orbits of amplitudes not connected
by modular transformations. In the simplest case, discrete torsion
relates the diagonal modular invariant to the charge conjugation
one. More generally, exotic modular invariant combinations of the
chiral characters can change and in some cases drastically reduce
the number of massless combinations.
\section{Free fermions versus asymmetric orbifolds}
In order to perform a systematic search for models with few moduli
in a full-fledged string description we resort to the free
fermionic construction pioneered by Kawai, Lewellen and Tye
\cite{Kawai:1986va} and by Antoniadis, Bachas and Kounnas
\cite{abk}.
In this description, one fermionizes the internal Left-moving bosonic
coordinates \begin{equation} \partial X^i = y^i w^i \quad\quad i=1,\ldots , 6 \quad ,\end{equation} and rewrites the
worldsheet supercurrent as\footnote{Other choices are possible.}
\begin{equation}
G = \psi^\mu \partial X_\mu + \psi^i y^i w^i \ , \quad\quad \mu=7,8 \quad.
\end{equation}
All fermions $\{ \psi^\mu,\psi^i, y^i, w^i\}$ are taken to be periodic to start with.
The Right-moving fermions $\{ \tilde \psi^\mu,\tilde \psi^i, \tilde y^i, \tilde w^i\}$ are introduced in a similar way.
Now, let us consider the orbifolding of the free fermion system by
${\mathbb Z}_2$ reflections. A reflection is denoted by a fermion set
$b_\alpha$ that includes all fermions odd under the ${\mathbb Z}_2$.
Spacetime susy and modular invariance put additional constraints
on the allowed fermion sets. Preservation of the worldsheet
supercurrent under parallel transport requires \begin{eqnarray}
\forall i \quad\quad && \# ~\psi^i - \# ~y^i -\# ~w^i=0~{\rm mod }~2 \ ;\nonumber\\
\forall i \quad\quad && \# ~\tilde \psi^i - \# ~\tilde y^i -\# ~\tilde w^i=0~{\rm mod }~2 \ .
\label{condsusy}
\end{eqnarray}
Modular invariance (or level matching) amounts to the following conditions on the basis fermionic sets:
\begin{eqnarray}
n(b_\alpha)&=&0~{\rm mod }~8 \ ;\nonumber\\
n(b_\alpha \cap b_\beta)&=&0~{\rm mod }~4 \ ;\nonumber\\
n(b_\alpha \cap b_\beta \cap b_\gamma)&=&0~{\rm mod }~2 \ ;\nonumber\\
n(b_\alpha \cap b_\beta\cap b_\gamma \cap b_\sigma)&=&0~{\rm mod }~2 \ ; \label{consistency}
\end{eqnarray}
with $n(b)$ denoting the difference between the number of Left- and Right- moving fermions in the set $b$ and the greek indices running
over the generators of the orbifold group.
The free fermion description of Type IIB on the $T^6$ maximal torus of $SO(12)$ is obtained
by including the following fermionic sets
\begin{eqnarray}
F &=& \{ \psi^{1\ldots 8} \, y^{1\ldots 6} \,
w^{1\ldots 6} | \, \tilde\psi^{1\ldots 8}\, \tilde{y}^{1\ldots 6}\,\tilde{w}^{1\ldots 6} \} \ , \nonumber\\
S & =& \{\psi^{1\ldots 8} \}\ , \quad\quad \tilde{S} = \{\tilde
\psi^{1\ldots 8} \} \ . \end{eqnarray} Indeed, the quotient by $F$ results
into a sum over all possible boundary conditions of worldsheet
fermions, while $S$ and $\tilde{S}$ realize the Left and Right
moving GSO projections, ensuring spacetime susy. Omitting the
integral over moduli space, the resulting partition function can
be written as \begin{eqnarray}
{\cal T}_{4_L+4_R} &=& {1\over \eta^2 \bar \eta^2} |V_8-S_8 |^2 \left( |O_{12}|^2 + |V_{12}|^2 + |S_{12}|^2+|C_{12}|^2\right) \nonumber\\
&=& \ft18 \left|{\vartheta_3^4\over \eta^{12}}-{\vartheta_4^4\over \eta^{12}} -{\vartheta_2^4\over \eta^{12}} \right|^2 ( |\vartheta_2|^{12}+ |\vartheta_3|^{12}+
|\vartheta_4 |^{12} ) \ , \label{torus1}
\end{eqnarray}
where the subscript $4_L+4_R$ reminds us that $4_L$ and $4_R$ susy comes from the Left and Right movers, respectively.
We write partition functions both in terms of characters of
$SO(n)$ at level one or in terms of theta functions, as convenient
in the specific context (see Appendix A for definitions and
conventions). Moreover, signs (discrete torsion) will be chosen
judiciously, respecting the spin-statistics relation.
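The combination $\vartheta_3^4-\vartheta_4^4-\vartheta_2^4$ appearing in (\ref{torus1}) vanishes identically by Jacobi's {\it aequatio identica satis abstrusa}, encoding the Bose-Fermi degeneracy $V_8=S_8$ at every mass level. The identity is quickly checked numerically from the series definitions of the theta constants (a sketch for illustration, not part of the construction):

```python
def theta3(q, N=50):
    # theta_3(q) = sum over n in Z of q^(n^2)
    return 1 + 2 * sum(q**(n * n) for n in range(1, N))

def theta4(q, N=50):
    # theta_4(q) = sum over n in Z of (-1)^n q^(n^2)
    return 1 + 2 * sum((-1)**n * q**(n * n) for n in range(1, N))

def theta2(q, N=50):
    # theta_2(q) = sum over n in Z of q^((n+1/2)^2)
    return 2 * sum(q**((n + 0.5)**2) for n in range(N))

q = 0.3  # any nome 0 < q < 1
diff = theta3(q)**4 - theta2(q)**4 - theta4(q)**4
print(diff)  # vanishes up to floating-point truncation
```

The same cancellation guarantees that the one-loop vacuum energy of (\ref{torus1}) vanishes, as expected for a supersymmetric spectrum.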
Another choice consists of keeping only the sets $F$ and $S$, finding the $4_L+0_R$
partition function
\begin{eqnarray}
{\cal T}_{4_L+0_R} &=& {1\over \eta^2 \bar \eta^2} (V_8 - S_8) [O_{12}
\bar V_{20} + V_{12} \bar O_{20} - S_{12} \bar S_{20} - C_{12}
\bar C_{20}] \nonumber\\
&=& {1\over 4 \, \eta^{12}\,\bar \eta^{12}} \left(\vartheta_3^4-\vartheta_4^4 -\vartheta_2^4 \right)
( \vartheta_3^6 \bar \vartheta_3^{10}-\vartheta_4^6 \bar \vartheta_4^{10}
-\vartheta_2^6 \bar \vartheta_2^{10} ) \ . \label{torus2}
\end{eqnarray}
In the following, we will consider asymmetric ${\mathbb Z}_2$ orbifolds of the
${\cal N}=4_L+4_R$ and ${\cal N}=4_L+0_R$ models. The ${\mathbb Z}_2$ elements will be built out
of chiral reflections $I_i,\bar I_i$ and shifts $\sigma_i,\bar \sigma_i$. In the fermionic language twists
and shifts correspond to the following actions on the worldsheet fermions
\begin{eqnarray}
I_i: && \psi^i \to -\psi^i \ , \quad \quad y^i \to -y^i \ ; \nonumber\\
\sigma_i: && y^i \to -y^i \ , \quad \quad w^i \to -w^i \ ;
\end{eqnarray}
with identical expressions for $\bar I_i$ and $\bar \sigma_i$ with fermions replaced by tilde ones.
Alternatively, one can denote reflections and shifts by their associated fermionic sets
\begin{eqnarray}
I_i &=& \{\psi^i \,y^i \} \ , \quad \quad \sigma_i=\{y^i \,w^i \} \ , \nonumber\\
\bar I_i &=& \{\tilde \psi^i\, \tilde y^i \} \ , \quad \quad \bar\sigma_i=\{\tilde y^i \,\tilde w^i \} \ .
\end{eqnarray}
Notice that the non-trivial intersections between the sets are
\begin{eqnarray}
I_i\cap I_j &=& \sigma_i \cap \sigma_j=2 \,\delta_{ij} \ , \quad \quad I_i \cap \sigma_j=\delta_{ij} \ , \nonumber\\
\bar I_i\cap \bar I_j &=& \bar \sigma_i \cap \bar \sigma_j=2 \,\delta_{ij} \ , \quad \quad \bar I_i \cap \bar \sigma_j=\delta_{ij} \ .
\end{eqnarray}
These relations can be used to check the consistency conditions (\ref{consistency}) of an orbifold group generated by
twists and shifts.
\section{ Models with ${\cal N}=1_L + 1_R$}
\label{sect11}
We performed a systematic search of models with basis sets
$F,S,\tilde{S}$ together with four additional sets of the form
\begin{eqnarray} &&
b_1 =(b_{1L},b_{1R})= I_{3456}\, \sigma^{i_1 i_2 \ldots }\,\bar \sigma^{k_1 k_2 \ldots } = \{ (\psi \,y)^{3456} \, (y\, w)^{i_1 i_2 \ldots } | (\tilde y\, \tilde w)^{k_1 k_2 \ldots } \} \ , \nonumber\\
&&b_2 = (b_{2L},b_{2R})=I_{1256}\, \sigma^{j_1 j_2 \ldots }\,\bar \sigma^{l_1 l_2 \ldots } = \{(\psi\, y)^{1256} \, (y\, w)^{j_1 j_2 \ldots } | (\tilde y\, \tilde w)^{ l_1 l_2 \ldots } \} \ , \nonumber\\
&&\bar{b}_1 =(b_{1R},b_{1L})= \bar I_{3456}\, \sigma^{k_1 k_2 \ldots }\,\bar \sigma^{i_1 i_2 \ldots } = \{ ( y\, w)^{k_1 k_2 \ldots } | (\tilde\psi\, \tilde y)^{3456}
(\tilde y\, \tilde w)^{i_1 i_2 \ldots } \} \ , \nonumber\\
&&\bar{b}_2 =(b_{2R},b_{2L})= \bar I_{1256}\, \sigma^{l_1 l_2 \ldots }\,\bar \sigma^{ j_1 j_2 \ldots } = \{ ( y\, w)^{ l_1 l_2 \ldots } | (\tilde\psi\, \tilde y)^{1256}
(\tilde y\, \tilde w)^{j_1 j_2 \ldots } \} \ . \end{eqnarray}
The scan ran over all choices of the sets $ (i_1 i_2 \ldots) $, $(j_1 j_2 \ldots) $, $ (k_1 k_2 \ldots) $, $(l_1 l_2 \ldots) $ compatible with the conditions (\ref{consistency}).
Each set $b_\a$ breaks half of the spacetime susy's arising from
the Left- or Right- moving sector. One is thus left with ${\cal N} =
1_L + 1_R$ susy.
Defining for notational convenience\footnote{The product is defined as $b_i b_j = b_i \cup b_j - b_i \cap b_j$} $b_3=b_1 b_2$, $\bar b_3=\bar b_1\bar b_2$,
the generic orbifold group element can be written as $b_{a} \bar b_{ b}$
with $a,b=0,..,3$ and $b_0=\bar b_0=1$.
We recall that a contribution of a single Left moving fermion among $\{\psi^i,y^i,w^i\}$ is given by
$\left( {\vartheta_s / \eta}\right)^{1\over 2}$ (with
$s=2,3,4$ labelling the spin structure), and similarly for Right moving fermions with $\vartheta_s / \eta$ replaced by $\bar\vartheta_s / \bar\eta$.
The ${\mathbb Z}_2$ actions are thus equivalent to
\begin{eqnarray}
{\mathbb Z}_2:
&& \vartheta^{1\over 2}_2\to \vartheta^{1\over 2}_1 \quad ,\quad \vartheta^{1\over 2}_3\leftrightarrow \vartheta^{1\over 2}_4 \quad ,\quad
\bar \vartheta^{1\over 2}_2\to \bar \vartheta^{1\over 2}_1 \quad ,\quad \bar \vartheta^{1\over 2}_3\leftrightarrow \bar \vartheta^{1\over 2}_4 \ .
\end{eqnarray}
The torus partition function can then be written as
\begin{equation} {\cal T}=\ft{1}{16 \ \eta^8 \bar \eta^8} \sum_{a,b,c,d=0}^3\, \rho_{ac}\, \bar
\rho_{bd}\, {\bf \L}[^{a b}_{cd}] \quad , \end{equation} where
${\bf \L}[^{ab}_{cd}]$ denotes the contribution of the
$(b_c \bar b_{d})$-projection in the $(b_a \bar b_{b})$-twisted sector,
i.e.
\begin{equation}
{\bf \L}[^{ab}_{cd}]=\ft12 \epsilon_{a,b,c,d} \sum_{\alpha,\beta=0,{1\over 2}} \prod_{i=1}^{12}
\ \vartheta[^{\alpha+b_{aL,i}+ b_{bR,i}}_{\beta+b_{cL,i}+ b_{dR,i} }]^{1\over 2}
\ \bar{\vartheta}[^{\alpha+b_{aR,i}+ b_{bL,i}}_{\beta+b_{cR,i}+ b_{dL,i} }]^{1\over 2}
\end{equation}
with $i$ running over the 12 lattice fermions $y^i w^i$ and $b_{La,i}$,$b_{Ra,i}$ being $0$ or $\ft12$ depending on whether the $i^{th}$ fermion is even or odd under $b_{aL}$ and $b_{aR}$ respectively. $\epsilon_{a,b,c,d} $ are signs fixed by modular invariance up to discrete torsions.
The precise form of ${\bf \L}[^{ab}_{cd}]$ depends on the
details of the specific model.
Finally the $\psi$-contribution to the amplitudes can be written as \begin{eqnarray}
\r_{00}&=&- \frac{\vartheta_{2,{\rm st} } \vartheta_2^3-\vartheta_{3,{\rm st} } \vartheta_3^3 +\vartheta_{4,{\rm st} } \vartheta_4^3}{2 \eta ^4} \ , \nonumber\\
\r_{0h}&=&\frac{ \vartheta_{3,{\rm st} } \vartheta_3 \vartheta_4^2-
\vartheta_{4,{\rm st} }\vartheta_4 \vartheta_3^2}{2 \eta
^4} \ , \nonumber\\
\r_{h0}&=&\frac{ \vartheta_{3,{\rm st} } \vartheta_3 \vartheta_2^2-
\vartheta_{2,{\rm st} }\vartheta_2 \vartheta_3^2}{2 \eta
^4} \ , \nonumber\\
\r_{hh}&=&\frac{ \vartheta_{2,{\rm st} } \vartheta_2 \vartheta_4^2-
\vartheta_{4,{\rm st} }\vartheta_4 \vartheta_2^2}{2 \eta
^4} \ , \nonumber\\
\r_{13} &=&\r_{23}=\r_{32}=-\r_{12}=-\r_{21}=-\r_{31}=\frac{i
\vartheta_{1,{\rm st} }\vartheta_2 \vartheta_3 \vartheta_4} {2
\eta ^4} \ , \end{eqnarray} with $h=1,2,3$ and the subscript ``st'' denoting the
contribution coming from the spacetime part that encodes the
helicity of the particle (see Appendix A for the definitions of
the amplitudes given in terms of the $SO(2n)$ characters). The
massless content of each model can be read by plugging in the
partition function
the well-known theta expansions
\begin{eqnarray}
\vartheta_{1,{\rm st}} &=& (S-C) q^{1\over 8}+\ldots \ , \quad\quad
\vartheta_{2,{\rm st}} = (S+C) q^{1\over 8}+\ldots \ , \nonumber\\
\vartheta_{3,{\rm st}} &=& 1+V\, q^{1\over 2} \ldots \ , \quad\quad\quad ~~~
\vartheta_{4,{\rm st}} = 1-V\, q^{1\over 2} \ldots \ , \nonumber\\
\vartheta_2 &=& 2 q^{1\over 8}+\ldots \ , \quad\quad\quad\quad ~~~~
\vartheta_{3,4} = 1+2\, q^{1\over 2} \ldots \ , \, \quad\quad\quad ~~ \eta = q^{1\over 24}+\ldots \ .
\end{eqnarray}
where $O,V,S,C$ denote a four-dimensional scalar, vector, left spinor and right spinor, respectively.
The result can always be written in the form
\begin{eqnarray}
{\cal T}_0 &=& |V-S-C|^2+ n_v\, \left[ |O-S|^2+|O-C|^2\right]\nonumber\\
&&~~~~~~~~~~~~~~~~~+ (n_h - 1) \left[ (O-S)(\bar O - \bar C )+(O-C) (\bar O - \bar S)\right] +\ldots\nonumber\\
&=& {\bf G}_{2} + n_v {\bf V}_{2} + n_h \, {\bf H}_{2} \ ,
\end{eqnarray}
with $n_h$ and $n_v$ the number of hyper- and vector-multiplets
respectively, and
\begin{eqnarray}
{\bf G}_{2} + {\bf H}_{2} &=& |V-S-C|^2 \ ,
\nonumber\\
{\bf V}_{2} &=& |O-S|^2+|O-C|^2 \ ,
\nonumber\\
{\bf H}_{2} &=& (O-S)(\bar O - \bar C )+(O-C) (\bar O -
\bar S)
\end{eqnarray}
the ${\cal N}=2$ supergravity, hyper- and vector-multiplet contents,
each comprising $4_B+4_F$ physical degrees of freedom.
Due to the asymmetric twists and shifts, the resulting vacuum configurations
do not correspond to
compactifications of Type IIB on geometric CY manifolds, yet the theory enjoys ${\cal N} =2$ spacetime susy.
We are thus led to define the ``effective'' Hodge numbers
\begin{equation}
h_{11} = n_h -1 \ , \quad \quad h_{21} = n_v \ ,
\end{equation}
and also define the ``effective'' Euler characteristic $\chi =
2(h_{11} - h_{21}) = 2(n_h-n_v) - 2$.
In the following, we first describe in some details the simplest model with
minimal massless content\footnote{A related but different model with
extended ${\cal N}=2_L + 2_R$ susy and thus larger massless multiplets
has been exhibited in \cite{Kiritsis:2008mu}.}, namely the one with $(h_{11},h_{21})=(1,1)$. Then, we
report the complete list of models resulting from our scan.
\subsection{An example: $(h_{11},h_{12})=(1,1)$ }
One of the possible choices of twists and shifts that give rise to an interesting $(h_{11},h_{12})=(1,1)$ is the following:
\begin{eqnarray}
b_1&=& I _{3456} ~ \sigma_{1} ~ \overline{\sigma}_{5} \ , \nonumber \\
b_2 &=& I _{1256} ~ \sigma_{3} ~ \overline{\sigma}_{12345} \ , \nonumber \\
\bar{b}_1 &=& \bar{I}_{3456} ~ \sigma_{5} ~ \overline{\sigma}_{1} \ , \nonumber \\
\bar{b}_2 &=& \bar{I }_{1256} ~ \sigma_{12345} ~ \overline{\sigma}_{3} \ .
\label{mod11}\end{eqnarray}
Many of the amplitudes vanish due to the presence of $SO(12)$ fermions in the odd spin structure.
The lattice sums of the non-vanishing amplitudes read
\begin{eqnarray}
\begin{array}{llllll}
&&{\bf \L}[^{00}_{00}]=\ft12 \left( |\vartheta _{2}|^{12} + |\vartheta _{3}|^{12}+|\vartheta_{4}|^{12}\right)\nonumber\\
&&{\bf \L}[^{00}_{h0}]=\ft12 \vartheta _{3}^3 \vartheta _{4}^3 \bar{\vartheta }_{3} \bar{\vartheta }_{4} \left(\bar{\vartheta}_{3}^4+\bar{\vartheta }_{4}^4\right) \nonumber\\
&&{\bf \L}[^{00}_{30}]=\ft12 |\vartheta _{3} \vartheta _{4}|^4 \left(\vartheta_{4}^2 \bar{\vartheta }_{3}^2+\vartheta _{3}^2 \bar{\vartheta }_{4}^2\right)\nonumber\\
&&{\bf \L}[^{00}_{h h'}]={\bf \L}[^{00}_{h 3}]= |\vartheta _{3} \vartheta _{4}|^6 \nonumber\\
&&{\bf \L}[^{00}_{33}]=\ft12 |\vartheta _{3} \vartheta _{4}|^4 \left(|\vartheta_{3}|^2+|\vartheta _{4}|^2\right)\nonumber\\
&&{\bf \L}[^{h 0}_{00}]=\ft12 \vartheta _{2}^3 \vartheta _{3}^3 \bar{\vartheta }_{2} \bar{\vartheta }_{3} \left(\bar{\vartheta}_{2}^4+\bar{\vartheta }_{3}^4\right) \nonumber\\
&&{\bf \L}[^{30}_{00}]=\ft12 |\vartheta _{2} \vartheta _{3}|^4 \left(\vartheta_{3}^2 \bar{\vartheta }_{2}^2+\vartheta _{2}^2 \bar{\vartheta }_{3}^2\right) \nonumber\\
&&{\bf \L}[^{h h'}_{00}]={\bf \L}[^{h 3}_{00}]= |\vartheta _{2} \vartheta _{3}|^6 \nonumber\\
&&{\bf \L}[^{33}_{00}]= \ft12 |\vartheta _{2} \vartheta _{3}|^4 \left(
|\vartheta_{2}|^4 + |\vartheta _{3}|^4 \right)\nonumber\\
&&{\bf \L}[^{h0}_{h0}]=\vartheta _{2}^3 \vartheta _{4}^3 \bar{\vartheta }_{2} \bar{\vartheta }_{4} \left(\bar{\vartheta}_{2}^4-\bar{\vartheta}_{4}^4\right) \nonumber\\
&&{\bf \L}[^{30}_{30}]= \ft12 |\vartheta _{2} \vartheta _{4}|^4 \left(\vartheta_{2}^2 \bar{\vartheta }^2_{4}-\vartheta _{4}^2 \bar{\vartheta }_{2}^2\right) \nonumber\\
&&{\bf \L}[^{hh}_{h' h'}]= |\vartheta _{2} \vartheta _{4} |^6 \nonumber\\
&&{\bf \L}[^{33}_{33}]=\ft12 |\vartheta _{2} \vartheta _{4}|^4 \left(| \vartheta_{2} |^4+ | \vartheta _{4}|^4\right)
\end{array}
\end{eqnarray}
with $h,h'=1,2$ and ${\bf \L}[^{ab}_{cd}] = {\bf \L}[^{ba}_{dc}]^*$. Thanks
to the four independent ${\mathbb Z}_2$ chiral twists all massless states in
the untwisted sector, except the ${\cal N} = 2$ supergravity multiplet
and the universal dilaton hypermultiplet, are projected out. In
addition, due to the chiral shifts, most twisted sectors, except
for the $(3,3)$ sector, contribute only massive states. Indeed, a
twisted sector contributes massless states only when the shifts are
along the reflection plane. This condition is satisfied only for
$b_3 \bar b_3=I_{1234}\bar I_{1234} \sigma_{24}\bar
\sigma_{24}$.
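This composition can be verified by simple mod-2 bookkeeping of the twist and shift index sets (a sketch: only the index sets are tracked, not the phases or discrete torsions attached to the generators):

```python
# Verify that b_3 bbar_3 = b_1 b_2 bbar_1 bbar_2 equals
# I_{1234} Ibar_{1234} sigma_{24} sigmabar_{24}, composing order-two
# twists and shifts by symmetric difference of their index sets.
def compose(*gens):
    out = [frozenset()] * 4          # (twist, twistbar, shift, shiftbar)
    for g in gens:
        out = [out[i] ^ g[i] for i in range(4)]
    return out

S = lambda s: frozenset(int(c) for c in s)
# generators of eq. (mod11): I-sets, Ibar-sets, sigma-sets, sigmabar-sets
b1    = (S("3456"), S(""),     S("1"),     S("5"))
b2    = (S("1256"), S(""),     S("3"),     S("12345"))
b1bar = (S(""),     S("3456"), S("5"),     S("1"))
b2bar = (S(""),     S("1256"), S("12345"), S("3"))

b3b3bar = compose(b1, b2, b1bar, b2bar)
assert b3b3bar == [S("1234"), S("1234"), S("24"), S("24")]
print("b3 b3bar = I_1234 Ibar_1234 sigma_24 sigmabar_24")
```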
As a consequence, the only massless contributions are: \begin{eqnarray}
&&\sum_{a,b=0}^3 {\cal T}_0[^{00}_{ab}] = |V-S-C|^2 \ , \nonumber\\
&&{\cal T}_0[^{3 3}_{0 0}] + {\cal T}_0[^{3 3}_{3 3}]=
|2 O-S-C|^2 \ . \end{eqnarray}
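Indeed, expanding the second combination term by term and comparing with the characters ${\bf V}_2$ and ${\bf H}_2$ introduced above, one finds
\begin{eqnarray}
|2O-S-C|^2 &=& 4\, O\bar O - 2\, O\, (\bar S+\bar C) - 2\, (S+C)\, \bar O + S\bar S + C\bar C + S\bar C + C\bar S \nonumber\\
&=& \left[ |O-S|^2+|O-C|^2\right] + \left[ (O-S)(\bar O - \bar C )+(O-C) (\bar O - \bar S)\right] = {\bf V}_{2} + {\bf H}_{2} \ .
\end{eqnarray}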
The latter, as anticipated, gives precisely one hyper- and one
vector-multiplet. This is the minimal massless content among all
known Type II compactifications admitting an isomorphism under
exchange of Left- and Right-movers and thus amenable to a natural
unoriented projection. We remark that several Left-Right
asymmetric models are known with fewer moduli: for instance, a ``minimal''
${\cal N}=2_L + 0_R$ model with only the dilaton vector multiplet
was recently exhibited in \cite{Dolivet:2007sz} as a starting point
for the construction of ``magic'' ${\cal N}=2$ supergravity theories
\cite{MAGIC}. Moreover, there are models with ${\cal N}=3$ susy
constructed in \cite{Ferrara:1989nm} with only one vector
multiplet, comprising only 3 complex massless scalars, including
the dilaton. Other systematic searches of models with low
``effective'' Hodge numbers \cite{Donagi:2008xy} seem to only
focus on Left-Right symmetric twists and shifts that lead at most
(or at least) to $h_{11} = h_{21} =3$, well known from the work of
Vafa and Witten \cite{Vafa:1994rv} and more recently of
\cite{Camara:2007dy} in the realm of Type I/heterotic duality.
\subsection{Various ${\cal N}=1_L+1_R$ Models }
In addition to the above model, we have found many (new) Type IIB
non-geometric yet Left-Right symmetric models with low ``effective''
Hodge numbers, which may still turn out to be interesting starting
points for Type I model building.
We remark that although our models are given in terms of a rational
CFT, a systematic study of open string descendants is complicated
by the high number of characters involved, typically of the order
of one thousand. They can probably
be explored by computer means along the lines of \cite{Kiritsis:2008mu}.
In Table $1$ we report all our consistent models.
We keep track of the pattern of (pseudo)symmetry breaking
$SO(12)\rightarrow \prod_I SO(n_I)$.
Curiously, the
whole list of models can be grouped into the following three finite series for $(h_{11},h_{12})$:
\begin{eqnarray}
(n,n) && \quad\quad n=1,2,3,4,5,9 \nonumber\\
(2n,2n+6), (2n+6,2n) && \quad\quad n=0,1,2,3 \nonumber\\
(2n+3,2n+15), (2n+15,2n+3) && \quad\quad n=0,1 \label{models}
\end{eqnarray} Our systematic search scans over all possible shifts with all
discrete torsion signs taken to be plus. A longer list of
consistent models can be built by playing with more general
discrete torsion choices. In particular, it should be noticed that
the ``effective'' Euler number is always a multiple of 12, as
claimed in the Introduction. Finally, as apparent from Table $1$,
identical patterns of (pseudo)symmetry breaking may lead to rather
different massless spectra. This can be explained, in the cases
under consideration, by noticing that models with the same
breaking differ due to different choices of discrete torsions.
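The grouping into series, as well as the divisibility of the ``effective'' Euler number, can be checked mechanically. In the sketch below the list of Hodge pairs is transcribed by hand from Table 1 and should be treated as an assumption of the check:

```python
# Check that every (h11, h12) pair of Table 1 lies on the diagonal or in
# a (2n, 2n+6)- or (2n+3, 2n+15)-type family (up to transposition), and
# that chi = 2 (h11 - h12) is always a multiple of 12.
models = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (9, 9),
          (6, 0), (0, 6), (2, 8), (8, 2), (4, 10), (10, 4),
          (15, 3), (3, 15), (5, 17), (17, 5), (6, 12), (12, 6)]

series = {(n, n) for n in (1, 2, 3, 4, 5, 9)}
for n in (0, 1, 2, 3):                 # (2n, 2n+6) family and transpose
    series |= {(2 * n, 2 * n + 6), (2 * n + 6, 2 * n)}
for n in (0, 1):                       # (2n+3, 2n+15) family and transpose
    series |= {(2 * n + 3, 2 * n + 15), (2 * n + 15, 2 * n + 3)}

for h11, h12 in models:
    assert (h11, h12) in series
    assert (2 * (h11 - h12)) % 12 == 0
print("all", len(models), "models consistent")
```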
The results of our systematic search partly overlap with the recent
results of a scan over ${\mathbb Z}_2$ orbifolds of the product of 18 Ising
models presented in \cite{Kiritsis:2008mu}. An important
difference between the two searches is the choice of the $T^6$
lattice. We start with the non-factorizable $SO(12)$ maximal torus
$T^6_{SO(12)}$ while the authors of \cite{Kiritsis:2008mu} start
with a factorizable $T^6$.
\begin{center}
{\bf Table 1}
\begin{eqnarray}
\begin{array}{|c|c|c|}
\hline
~~~~~~~~~~ b's ~~~~~~~~~~&
~~~~~~~~~~~~~~SO(12)~~~~~~~~~~~~~~
& ~~~~~~~(h_{11},h_{12})~~~~~~~\\
\hline
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{1} ~ \overline{\sigma}_{5} \\
I _{1256} ~ \sigma_{3} ~ \overline{\sigma}_{12345} \\
\bar{I}_{3456} ~ \sigma_{5} ~ \overline{\sigma}_{1} \\
\bar{I }_{1256} ~ \sigma_{12345} ~ \overline{\sigma}_{3}
\end{array}
&
~~SO(2)^4\times O(1)^4 %
&
(1,1)\\
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{1} ~ \overline{\sigma}_{2} \\
I _{1256} ~ \sigma_{3} ~ \overline{\sigma}_{12345} \\
\bar{I}_{3456} ~ \sigma_{2} ~ \overline{\sigma}_{1} \\
\bar{I }_{1256} ~ \sigma_{12345} ~ \overline{\sigma}_{3}
\end{array}
&
~~SO(3)\times SO(2)^2\times O(1)^5 &
(2,2)\\
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{12} ~ \overline{\sigma}_{123456} \\
I _{1256} ~ \sigma_{236} ~ \overline{\sigma}_{1} \\
\bar{I}_{3456} ~ \sigma_{123456} ~ \overline{\sigma}_{12} \\
\bar{I }_{1256} ~ \sigma_{1} ~ \overline{\sigma}_{236}
\end{array}
&
~~SO(3)^2\times SO(2)^2 \times O(1)^2
&
(3,3)\\
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{1} ~ \overline{\sigma}_{5} \\
I _{1256} ~ \sigma_{3} ~ \overline{\sigma}_{12456} \\
\bar{I}_{3456} ~ \sigma_{5} ~ \overline{\sigma}_{1} \\
\bar{I }_{1256} ~ \sigma_{12456} ~ \overline{\sigma}_{3}
\end{array}
&
~~SO(3)\times SO(2)^2\times O(1)^5 &
(4,4)\\
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{126} ~ \overline{\sigma}_{12} \\
I _{1256} ~ \sigma_{346} ~ \overline{\sigma}_{35} \\
\bar{I }_{3456} ~ \sigma_{12} ~ \overline{\sigma}_{126} \\
\bar{I }_{1256} ~ \sigma_{35} ~ \overline{\sigma}_{346}
\end{array}
&
~~SO(2)^4\times O(1)^4
&
(5,5)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{12} ~\bar{\sigma }_{12} \\
I_{1256} ~ \sigma _{34} ~\bar{\sigma }_{56} \\
\overline{I}_{3456}~ \sigma _{12} ~ \bar{\sigma }_{12} \\
\overline{I}_{1256} ~ \sigma _{56}~ \bar{\sigma }_{34}
\end{array}
&
~~SO(2)^6 &
(9,9)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{13} \\
I_{1256} ~ \sigma_{34} ~
\bar{\sigma}_{25} \\
\overline{I}_{3456} ~ \sigma_{13} ~ \bar{\sigma}_{12} \\
\overline{I}_{1256} ~ \sigma_{25} ~ \bar{\sigma}_{34}
\end{array}
&
~~SO(2)^3\times O(1)^6
&
(6,0)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{15} \\
I_{1256} ~ \sigma_{34} ~
\bar{\sigma}_{36} \\
\overline{I}_{3456} ~\sigma_{15} ~
\bar{\sigma}_{12} \\
\overline{I}_{1256} ~ \sigma_{36} ~
\bar{\sigma }_{34}
\end{array}
&
~~SO(2)^3\times O(1)^6
&
(0,6)\\
\hline
\end{array}
\nonumber
\end{eqnarray}
\begin{eqnarray}
\begin{array}{|c|c|c|}
\hline
~~~~~~~~~~ b's ~~~~~~~~~~&
~~~~~~~~~~~~~~SO(12)~~~~~~~~~~~~~~
& ~~~~~~~(h_{11},h_{12})~~~~~~~\\
\hline
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{4} \\
I_{1256} ~ \sigma_{356} ~
\bar{\sigma}_{2} \\
\overline{I}_{3456} ~ \sigma_{4} ~
\bar{\sigma}_{1} \\
\overline{I}_{1256} ~ \sigma_{2} ~
\bar{\sigma}_{356}
\end{array}
&
~~SO(3)^2 \times SO(2)\times O(1)^4
&
(2,8)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{2} \\
I_{1256} ~ \sigma_{356} ~
\bar{\sigma}_{4} \\
\overline{I}_{3456} ~ \sigma _{2} ~
\bar{\sigma}_{1} \\
\overline{I}_{1256} ~ \sigma_{4} ~
\bar{\sigma}_{356}
\end{array}
&
~~SO(3)^2 \times SO(2)\times O(1)^4
&
(8,2)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{5} \\
I_{1256} ~ \sigma_{346} ~
\bar{\sigma}_{25} \\
\overline{I}_{3456}~ \sigma_{5} ~
\bar{\sigma}_{1} \\
\overline{I}_{1256} ~\sigma_{25} ~
\bar{\sigma}_{346}
\end{array}
&
~~SO(3)^2 \times SO(2)\times O(1)^4
&
(4,10)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{45} \\
I_{1256} ~ \sigma_{36} ~
\bar{\sigma}_{5} \\
\overline{I}_{3456} ~\sigma_{45} ~
\bar{\sigma}_{12} \\
\overline{I}_{1256} ~ \sigma_{5} ~
\bar{\sigma}_{36}
\end{array}
& ~~SO(3)^2 \times SO(2)\times O(1)^4
&
(10,4)\\
\hline
\begin{array}{lllllll}
I _{3456} ~ \sigma_{12} ~ \overline{\sigma}_{12} \\
I _{1256} ~ \sigma_{34} ~ \overline{\sigma}_{34} \\
\bar{I }_{3456} ~ \sigma_{12} ~ \overline{\sigma}_{12} \\
\bar{I }_{1256} ~ \sigma_{34} ~ \overline{\sigma}_{34}
\end{array}
&
~~SO(2)^6 &
(15,3)\\
\hline
\begin{array}{lllllll}
I_{3456}~
\bar{\sigma}_{3456} \\
I_{1256} ~
\bar{\sigma}_{1256} \\
\overline{I}_{3456} ~ \sigma_{3456} \\
\overline{I}_{1256} ~ \sigma_{1256}
\end{array}
&
~~SO(2)^6
&
(3,15)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{34} \\
I_{1256} ~ \sigma_{34} ~
\bar{\sigma}_{123456} \\
\overline{I}_{3456} ~ \sigma_{34} ~
\bar{\sigma}_{12} \\
\overline{I}_{1256} ~ \sigma_{123456} ~
\bar{\sigma}_{34}
\end{array}
&
~~SO(4)\times SO(2)^4
&
(5,17)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{126} ~
\bar{\sigma}_{123456} \\
I_{1256} ~\sigma_{5}~
\bar{\sigma}_{3456} \\
\overline{I}_{3456} ~ \sigma_{123456} ~
\bar{\sigma}_{12} \\
\overline{I}_{1256} ~ \sigma_{3456}~\bar \sigma_{5}
\end{array}
&
~~SO(4)\times SO(2)^4
&
(17,5)\\
\hline
\end{array}
\nonumber
\end{eqnarray}
\begin{eqnarray}
\begin{array}{|c|c|c|}
\hline
~~~~~~~~~~ b's ~~~~~~~~~~&
~~~~~~~~~~~~~~SO(12)~~~~~~~~~~~~~~
& ~~~~~~~(h_{11},h_{12})~~~~~~~\\
\hline
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{12456} \\
I_{1256} ~\sigma_{356}~
\bar{\sigma}_{23456} \\
\overline{I}_{3456} ~ \sigma_{12456} ~
\bar{\sigma}_{1} \\
\overline{I}_{1256} ~ \sigma_{23456}~\bar \sigma_{356}
\end{array}
&
~~SO(3)^2\times SO(2)\times O(1)^4
&
(6,12)\\
\hline
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{23456} \\
I_{1256} ~\sigma_{356}~
\bar{\sigma}_{12456} \\
\overline{I}_{3456} ~ \sigma_{23456} ~
\bar{\sigma}_{1} \\
\overline{I}_{1256} ~ \sigma_{12456}~\bar \sigma_{356}
\end{array}
&
~~SO(3)^2\times SO(2)\times O(1)^4
&
(12,6)\\
\hline
\end{array}
\nonumber\end{eqnarray}
\end{center}
\section{Models with ${\cal N}=1_L $}
Another class of interesting Type II models are the Left-Right
asymmetric orbifolds with ${\cal N}=1_L $ spacetime susy. In the
bosonic description, these models arise from including a
projection $(-)^{F_R}\sigma_R$, thus breaking all supersymmetries
associated to the Right-movers and preventing any of those to
reappear in the twisted sectors by means of the order two
chiral shift $\sigma_R$. In the fermionic description, $\sigma_R$ simply amounts to
a reflection of all the $SO(12)$ fermions. Thus, the projection is equivalent to
choosing a basis of sets consisting only of $F$ and $S$.
This is the starting point of our systematic
search in this largely unexplored class of Type II vacuum
configurations.
The resulting ${\cal N} =4_L + 0_R $ spectrum is
coded in the one-loop torus amplitude (\ref{torus2}).
Supersymmetric massless states only arise from the combination
$(V_8 - S_8) O_{12} \bar V_{20}$, that produces ${\cal N}=4$
supergravity coupled to 18 vector multiplets. A careful look at
the corresponding vertex operators and their OPE's shows that the
gauge group is $SU(2)^6$, as a remnant of the structure of the
internal world-sheet cubic supercurrent \cite{ABKW}. This or an
equivalent model has been found in the seminal paper
\cite{Dixon:1987yp}. The emergence of Right-moving world-sheet
currents, generating a supersymmetric Kac-Moody algebra, has been
analyzed in depth in view of the possibility of producing
non-abelian NS gauge symmetries. The authors of
\cite{Dixon:1987yp}
arrived however at the negative conclusion that (perturbative)
Type II models cannot accommodate the Standard Model with its
matter content.
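In terms of the four-dimensional little-group characters used throughout, the massless counting can be summarized as
\begin{equation}
(V_8 - S_8)\, O_{12}\, \bar V_{20}\Big|_{\rm massless} = \left( V+6O-4S-4C \right) \left( \bar V + 18\, \bar O \right) \ ,
\end{equation}
where $(V+6O-4S-4C)\, \bar V$ comprises the $16_B+16_F$ states of the ${\cal N}=4$ gravity multiplet and each factor $(V+6O-4S-4C)\, \bar O$ the $8_B+8_F$ states of an ${\cal N}=4$ vector multiplet.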
To the sets $F$ and $S$ we have added two more sets $b_1$ and
$b_2$ producing a breaking of spacetime susy down to ${\cal N}=1_L +
0_R $ and, at the same time, a breaking of the internal
(pseudo)symmetry $SO(20)$. Indeed, what we said in the context of
the above ${\cal N} =4_L+ 0_R $ model applies to ${\cal N} =1_L+ 0_R $, too.
The ``true'' gauge symmetry can only be determined after a careful
analysis of the vertex operators for the vector fields and their
OPE's, while taking into account the precise structure of the
cubic supercurrent. Since the only cubic supercurrent we consider
is expressed in terms of the $SU(2)^6$ structure constants, the
resulting gauge symmetry we find is a subgroup of $SU(2)^6$ with
abelian factors. Moreover, there are massless charged chiral
multiplets that can further break the gauge symmetry by a
perturbative Higgs mechanism.
In the following we describe in some detail a specific model with
a minimal number of chiral multiplets and then collect the remaining
models in Table 2. The massless spectrum decomposes according to
\begin{eqnarray} {\cal T}_0={\bf G}_{1} +n_v{\bf V}_{1} +n_{v'}{\bf V}'_{1} +n_c
{\bf C}_{1} + n_{c'} {\bf C}'_{1} \ , \end{eqnarray} with \begin{eqnarray}
{\bf G}_{1} + {\bf C}_{1} &=& (V-S-C) \,\bar V \ , \nonumber\\
{\bf V}_{1} &=& (V-S-C) \,\bar O \nonumber\\
{\bf V}'_{1} &=& S \bar S+C \bar C-S\bar O-C\bar O \ , \nonumber\\
{\bf C}_{1} &=& (2O-S-C) \,\bar O
\nonumber\\
{\bf C}'_{1} &=& C\bar S+S\bar C-O\bar S-O\bar C
\end{eqnarray} the content of
the gravity, vector and chiral multiplets, and $n_v+n_{v'}$,
$n_c+ n_{c'}$ the total numbers of vector and chiral multiplets. Although
primed and unprimed multiplets have identical field content, we
find it convenient to distinguish them in order to stress the
different origin, NS-NS or R-R, of their bosonic degrees of
freedom. It is amusing to note that generalized D-branes
\cite{Bianchi:2008cj} and their exotic open string excitations can
be introduced that couple to the twisted R-R states.
\subsection{An example: $(n_v,n_v';n_c,n_c ')=(14, 0;5, 0)$ }
Let us discuss the model with generators
\begin{eqnarray}
\begin{array}{lllllll}
b_{1}&=& I _{3456} ~ \sigma_{12} ~ \overline{\sigma}_{45} \ , \\
b_{2}&=& I _{1256} ~ \sigma_{36} ~ \overline{\sigma}_{5} \ .
\end{array}
\end{eqnarray}
In addition to breaking spacetime supersymmetry to ${\cal N} =1$, the
two ${\mathbb Z}_2$ actions break the internal (pseudo)symmetry according to
\begin{equation} SO(12)_L\times SO(20)_R\to \left[ SO(4)^2\times
SO(2)^2\right]_L\times \left[ SO(2)^2\times SO(16)\right] _R \ .
\end{equation} Actually, $SO(16)_R \rightarrow SO(2)\times SO(14)$, where the
first factor is the little group for massless particles in $D=4$.
The non-vanishing lattice sums read \begin{eqnarray}
&&{\bf \L}[^0_0]=
\ft 12 \left(\vartheta _{3}^6 \bar{\vartheta }_{3}^{10}-\vartheta _{4}^6
\bar{\vartheta }_{4}^{10}-\vartheta _{2}^6 \bar{\vartheta }_{2}^{10}\right)
\nonumber\\
&&{\bf \L}[^0_1]=\ft 12
\vartheta _{3}^2 \vartheta _{4}^2 \bar{\vartheta }_{3}^2 \bar{\vartheta }_{4}^2 \left(\vartheta
_{4}^2 \bar{\vartheta }_{3}^6-\vartheta _{3}^2 \bar{\vartheta}_{4}^6\right)\nonumber\\
&&{\bf \L}[^1_0]=\ft 12
\vartheta _{2}^2 \vartheta _{3}^2 \bar{\vartheta }_{2}^2 \bar{\vartheta }_{3}^2 \left(\vartheta
_{2}^2 \bar{\vartheta }_{3}^6-\vartheta _{3}^2 \bar{\vartheta }_{2}^6\right)
\nonumber\\
&&{\bf \L}[^1_1]=\ft 12
\vartheta _{2}^2 \vartheta _{4}^2 \bar{\vartheta }_{2}^2 \bar{\vartheta }_{4}^2 \left(\vartheta
_{2}^2 \bar{\vartheta }_{4}^6-\vartheta _{4}^2\bar{\vartheta }_{2}^6\right)
\nonumber\\
&&{\bf \L}[^0_2]={\bf \L}[^0_3]=\ft 12
\vartheta _{3}^3 \vartheta _{4}^3 \bar{\vartheta }_{3} \bar{\vartheta }_{4} \left(\bar{\vartheta }_{3}^8- \bar{\vartheta }_{4}^8\right)
\nonumber\\
&&{\bf \L}[^2_0]={\bf \L}[^3_0]=\ft 12
\vartheta _{2}^3 \vartheta _{3}^3 \bar{\vartheta }_{2} \bar{\vartheta }_{3} \left( \bar{\vartheta }_{3}^8-\bar{\vartheta }_{2}^8\right)
\nonumber\\
&&{\bf \L}[^2_2]={\bf \L}[^3_3]=\ft 12
\vartheta _{2}^3 \vartheta _{4}^3 \bar{\vartheta }_{2} \bar{\vartheta }_{4} \left(\bar{\vartheta }_{4}^8- \bar{\vartheta }_{2}^8\right)
\ .
\end{eqnarray}
Massless states come only from the untwisted sector leading to
\begin{eqnarray} {\cal T}_0=(V-S-C) (\bar V + 14 \bar O) + 4(2O-S-C) \bar O = {\bf
G}_{1} + 14 {\bf V}_{1} + 5 {\bf C}_{1} \ . \end{eqnarray} The resulting
gauge group is $SU(2)^4\times U(1)^2$. The universal chiral
multiplet is neutral, while the additional four chiral multiplets
are charged with respect to the abelian factors. They form two
pairs of charge $(\pm 1, 0)$ and $(0,\pm 1)$. Along the flat
directions of the D-term potential, the $U(1)^2$ gauge symmetry is
generically broken. Since no matter fields are charged with
respect to $SU(2)^4$, the latter remains as an unbroken gauge
symmetry in perturbation theory. It would be very important to
study the possibility of including both physical and Euclidean
Left-Right asymmetric D-branes in the background in order to have
a richer matter spectrum and turn on non-perturbative effects.
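For completeness, the decomposition into multiplets follows directly from the character combinations introduced at the beginning of this section:
\begin{eqnarray}
(V-S-C) (\bar V + 14\, \bar O) + 4(2O-S-C)\, \bar O &=& \left[ {\bf G}_{1} + {\bf C}_{1} \right] + 14\, {\bf V}_{1} + 4\, {\bf C}_{1} \nonumber\\
&=& {\bf G}_{1} + 14\, {\bf V}_{1} + 5\, {\bf C}_{1} \ ,
\end{eqnarray}
the fifth chiral multiplet being the universal one that accompanies the ${\cal N}=1$ gravity multiplet inside $(V-S-C)\, \bar V$.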
\subsection{Various ${\cal N}=1_L$ Models }
Table $2$ summarizes the results of our preliminary search of Type
IIB models with ${\cal N} = 1_L + 0_R$. The basis sets are now, besides
the universal $F$ and $S$, the two additional \begin{eqnarray} &&
b_1 = I_{3456}\, \sigma^{i_1 i_2 \ldots }\,\bar \sigma^{k_1 k_2 \ldots } = \{(\psi\, y)^{3456} \, (y\, w)^{i_1 i_2 \ldots } | (\tilde y\, \tilde w)^{k_1 k_2 \ldots } \} \ , \nonumber\\
&&b_2 = I_{1256}\, \sigma^{j_1 j_2 \ldots }\,\bar \sigma^{l_1 l_2 \ldots } = \{(\psi\, y)^{1256} \, (y\, w)^{j_1 j_2 \ldots } | (\tilde y\, \tilde w)^{ l_1 l_2 \ldots } \} \ , \end{eqnarray}
with the scan running over all choices of the sets $ (i_1 i_2 \ldots) $, $(j_1 j_2 \ldots) $, $ (k_1 k_2 \ldots) $, $(l_1 l_2 \ldots) $, again compatible with the conditions (\ref{consistency}).
Each set $b_\a$ breaks half of the spacetime susy's arising from
the Left-moving sector, while supersymmetry associated to the
Right-moving sectors is completely broken to start with. As
apparent from Table 2, the reduction in the number of moduli in
${\cal N}= 1_L$ models is less significant than in ${\cal N}= 1_L + 1_R$
models. This is due to the presence of the tachyonic vacuum in the
R-moving sector that, combined with internal excitations, can
produce physical ({\it i.e.\ } level-matched) particle states.
\begin{center}
{\bf Table 2}
\begin{eqnarray}
\begin{array}{|c|c|c|}
\hline
~~~~~~~~~~ b's ~~~~~~~~~~&
~~~~~~~~SO(12)_{L}\times SO(20)_R~~~~~~~~
& ~~~~~~~(n_v,n_{v'} ; n_c,n_{c'} )~~~~~~~\\
\hline
\hline
\begin{array}{lllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{45} \\
I_{1256} ~ \sigma_{36} ~
\bar{\sigma}_{5}
\end{array}
& ~~ \left[SO(4)^2\times SO(2)^2\right]_L\times \left[SO(16)\times
SO(2)^2\right]_R
&
(14,0;5,0)\\
\hline
\begin{array}{lllll}
I_{3456} ~ \sigma_{126} ~
\bar{\sigma}_{12} \\
I_{1256} ~ \sigma_{346} ~
\bar{\sigma}_{35}
\end{array}
& ~~ \left[SO(6)\times SO(2)^3\right]_L\times \left[SO(4)^2\times
SO(12)\right]_R
&
(10,0;25,0)\\
\hline
\begin{array}{lllllll}
I_{3456} ~ \sigma_{1} ~
\bar{\sigma}_{5} \\
I_{1256} ~ \sigma_{3} ~
\bar{\sigma}_{12345}
\end{array}
&
~~ \left[SO(4)^2\times SO(2)^2\right]_L\times \left[SO(8)\times SO(2)\times SO(10)\right]_R %
&
(8,0;27,0)\\
\hline
\begin{array}{lllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{123456} \\
I_{1256} ~ \sigma_{236} ~
\bar{\sigma}_{1}
\end{array}
& ~~ \left[SO(4)^2\times SO(2)^2\right]_L\times \left[SO(2)\times
SO(10)\times SO(8)\right]_R
&
(6,8;13,8)\\
\hline
\begin{array}{lllll}
I_{3456} ~ \sigma_{12} ~
\bar{\sigma}_{34} \\
I_{1256} ~\sigma_{34} ~
\bar{\sigma}_{123456}
\end{array}
&
~~ \left[SO(6)\times SO(2)^3\right]_L\times \left[SO(4)\times SO(8)\times SO(8)\right]_R %
&
(6,8;29,8)\\
\hline
\end{array}
\nonumber\end{eqnarray}
\end{center}
\section{Unoriented projections}
The Left-Right symmetric Type IIB string vacua we constructed in
section \ref{sect11} admit a natural $\Omega$ projection. It is an
interesting question whether they can be taken as a starting point
of orientifold constructions with phenomenologically interesting
open string chiral matter.
For unoriented strings \cite{BPS}, several closed string moduli
are odd under
$\Omega$ and are thus projected out. Hypermultiplets of the oriented ${\cal N}=2$ theory
reduce to ${\cal N}=1$ chiral multiplets while vector multiplets lead to vector or chiral multiplets
according to their parity under $\Omega$. In addition,
D-brane sectors should be added
in the presence of non-trivial closed string tadpoles.
Here we discuss the simplest instances
of unoriented projections with and without open strings.
\subsection{The minimal model}
We start by considering the unoriented projection of the ${\cal N}=1_L+1_R$ model
with $(h_{11}, h_{21})=(1,1)$ discussed in Section 4.1,
corresponding to the choice of generators in eq. (\ref{mod11}).
Notice that, since $\Omega$ identifies Left and Right movers, one has
\begin{equation}
{\rm Tr}_{{\cal H}_L\otimes {\cal H}_R } \,
\Omega \,(g^L \otimes g^R)= {\rm Tr}_{{\cal H}_L} \, g_{_\Omega} \ ,
\end{equation}
where $g_{_\Omega}$ is the diagonal action $g^L g^R$ with Left
and Right moving fields identified, i.e. $\bar I_i \to I_i$, $\bar
\sigma_i \to \sigma_i$ . In this way, the $g_{_\Omega}$ amplitudes
are not the naive chiral halves of the amplitudes entering the
torus amplitude. They must be rather written in terms of traces
over the chiral modes of the $g_\Omega$ orbifold group generators
corresponding to the sets \begin{eqnarray} && b_{1\Omega}= I_{3456}
\sigma_{15} =\{ \psi^{3456} \, y^{1346}\, w^{15} \} \ , \nonumber\\
&&b_{2\Omega}= I_{1256} ~\sigma_{1245}=\{ \psi^{1256} \, y^{46}\, w^{1245} \} \ , \nonumber\\
&&b_{3\Omega}= I_{1234} ~\sigma_{24}=\{ \psi^{1234} \, y^{13}\, w^{24} \} \ , \end{eqnarray}
where $b_{3\Omega}=b_{1\Omega} \ b_{2\Omega}$.
In addition, only Left-Right symmetrically twisted states enter the Klein-bottle amplitude.
In the direct channel one then gets
\begin{eqnarray} {\cal K} &=& \frac{1}{16} \sum_{a,b,c,d} {\rm Tr}_{{\cal H}_{Lc}\otimes {\cal H}_{Rd} } \Omega\, b_{a}\, \bar b_b
= \frac{1}{4} \sum_{a,b}{\rm Tr}_{{\cal H}_{La}} b_{b\Omega}= \frac{1}{4 \ \eta^8} \sum_{a,b=0}^3
\epsilon_{a,b} \, \rho_{ab} \, {\Lambda}[^a_{b}] \
\end{eqnarray}
where the unoriented lattice sums read
\begin{eqnarray} &&{\Lambda}[^0_{0}] = \vartheta_3^6 +\epsilon\, \vartheta_2^6 \\
&&{\Lambda}[^0_{h}] = \vartheta_4^3 \vartheta_3^3 +\epsilon\, \vartheta_1^3\vartheta_2^3 \\
&&{\Lambda}[^0_{3}] = \vartheta_4^2 \vartheta_3^4 +\epsilon\, \vartheta_1^2\vartheta_2^4 \\
&&{\Lambda}[^3_{0}] = \vartheta_2^2 \vartheta_3^4 +\epsilon\, \vartheta_3^2\vartheta_2^4 \\
&&{\Lambda}[^3_{3}] = \vartheta_1^2 \vartheta_3^4 +\epsilon\, \vartheta_4^2\vartheta_2^4 \\
&&{\Lambda}[^h_{0}] = \vartheta_2^3 \vartheta_3^3 +\epsilon\, \vartheta_3^3\vartheta_2^3 \\
&&{\Lambda}[^h_{h}] = \vartheta_1^3 \vartheta_3^3 + \epsilon\,\vartheta_4^3\vartheta_2^3 \ .
\end{eqnarray}
$\epsilon_{a,b}$ and $\epsilon$ are signs satisfying the fusion
constraints and $h=1,2$. All the other possible lattice sums
vanish. For instance,
\begin{equation} {\Lambda}[^3_{h}] = \vartheta_3\vartheta_4^3
\vartheta_1^2\vartheta_2^2 +
\vartheta_2\vartheta_1^3\vartheta_3^2\vartheta_4^2 \equiv 0 \ .
\end{equation}
Performing an $S$ modular transformation one can determine the
Klein-bottle amplitude in the transverse channel
\begin{equation} \tilde{{\cal K}} = \frac{2^2}{4 \ \eta^8} \sum_{a,b=0}^3
\epsilon_{b,a} \,\sigma_{b,a} \, \rho_{ab} \, {\tilde\Lambda}[^a_{b}]\label{ktra} \ , \end{equation}
where
\begin{eqnarray} &&{\tilde\Lambda}[^0_{0}] = \vartheta_3^6 +\epsilon\, \vartheta_4^6 \\
&&{\tilde\Lambda}[^0_{h}] = \vartheta_4^3 \vartheta_3^3 + \epsilon\,\vartheta_3^3\vartheta_4^3 \\
&&{\tilde\Lambda}[^0_{3}] = \vartheta_4^2 \vartheta_3^4 + \epsilon\, \vartheta_3^2\vartheta_4^4 \\
&&{\tilde\Lambda}[^3_{0}] = \vartheta_2^2 \vartheta_3^4 + \epsilon\, \vartheta_1^2\vartheta_4^4 \\
&&{\tilde\Lambda}[^3_{3}] = \vartheta_1^2 \vartheta_3^4 + \epsilon\, \vartheta_2^2\vartheta_4^4 \\
&&{\tilde\Lambda}[^h_{0}] = \vartheta_2^3 \vartheta_3^3 + \epsilon\, \vartheta_1^3\vartheta_4^3 \\
&&{\tilde\Lambda}[^h_{h}] = \vartheta_1^3 \vartheta_3^3 + \epsilon\, \vartheta_2^3\vartheta_4^3 \ .
\end{eqnarray}
with $\sigma_{ab}$ some signs given in (\ref{sphases}).
Choosing $\epsilon=-1$ and all the remaining signs $\epsilon_{a,b}=1$,
one finds that no massless untwisted or twisted tadpoles are present.
The unoriented model is then consistent by itself and no D-branes are needed.
At the massless level one finds
\begin{equation}
{\cal K}_{\rm massless}=(V-S-C)+(2O-S-C) \ .
\end{equation}
Together with the torus contribution one is left with the minimal ${\cal N}=1$ content
\begin{equation}
\ft12({\cal T}+{\cal K})_{\rm massless}= {\bf G}_{1}+ 2\, {\bf C}_{1} \ .
\end{equation}
\subsection{Models with open strings}
Here we present the simplest instance
of an unoriented projection with open strings. For simplicity we consider the case
of $T^6/{\mathbb Z}_{2L}\times {\mathbb Z}_{2L}'\times{\mathbb Z}_{2R}\times{\mathbb Z}_{2R}'$ with
no shifts. As before, we take the $T^6$ at the $SO(12)$ point. The
orbifold group generators are \begin{equation} b_1=I_{3456} \ , \quad
b_2=I_{1256} \ , \quad \bar b_1=\bar I_{3456} \ , \quad \bar
b_2=\bar I_{1256} \ .\end{equation} The resulting model can be written in
terms of 64 characters collecting the chiral states in the
$a$-twisted sector ($a=0,1,2,3$) with $ {\mathbb Z}_{2L}\times {\mathbb Z}_{2L}'$
eigenvalues $(\pm,\pm)$ in one of the four O, V, S, C conjugacy
classes of the $SO(12)$ lattice. The complete list of characters
can be found in Appendix B. In particular, orbifold group
invariant states in the untwisted sector are labelled by $\chi
_{1}, \chi_5,\chi_{9}, \chi_{13}$. The untwisted torus is then
given by \begin{equation} {\cal T}_{\rm unt}=|\chi_{1}|^2+|
\chi_5|^2+|\chi_{9}|^2+| \chi_{13}|^2 \ . \label{tunt} \end{equation}
The twisted amplitudes complete (\ref{tunt})
in a modular invariant form with positive integer coefficients.
We discuss the two possibilities
\begin{eqnarray}
{\cal T}_A&=& |\chi_{1}+\chi_{17}+\chi_{35}+\chi_{49}|^2+|\chi_{5}+\chi_{21}+\chi_{39}+\chi_{53}|^2\nonumber\\
&&
+ |\chi_{9}+\chi_{30}+\chi_{45}+\chi_{64}|^2
+|\chi_{13}+\chi_{26}+\chi_{41}+\chi_{60}|^2 \ , \\
{\cal T}_B &=&
{\chi}_{1} \, {\overline{\chi }}_{1}+{\chi }_{18} \, {\overline{\chi }}_{2}+{\chi }_{33} \, {\overline{\chi }}_{3}+{\chi }_{52} \, {\overline{\chi }}_{4}+{\chi }_{5} \, {\overline{\chi }}_{5}+{\chi }_{22} \, {\overline{\chi }}_{6}+{\chi }_{37} \, {\overline{\chi }}_{7}+{\chi }_{56} \, {\overline{\chi }}_{8}+{\chi }_{9} \, {\overline{\chi }}_{9}\nonumber\\
&&
+{\chi }_{29} \, {\overline{\chi }}_{10}+{\chi }_{47} \, {\overline{\chi }}_{11}+{\chi }_{61} \, {\overline{\chi }}_{12}+{\chi }_{13} \, {\overline{\chi }}_{13}+{\chi }_{25} \, {\overline{\chi }}_{14}+{\chi }_{43} \, {\overline{\chi }}_{15}+{\chi }_{57} \, {\overline{\chi }}_{16}+{\chi }_{17} \, {\overline{\chi }}_{17}\nonumber\\
&&+{\chi }_{2} \, {\overline{\chi }}_{18}+{\chi }_{51} \, {\overline{\chi }}_{19}+{\chi }_{34} \, {\overline{\chi }}_{20}+{\chi }_{21} \, {\overline{\chi }}_{21}+{\chi }_{6} \, {\overline{\chi }}_{22}+{\chi }_{55} \, {\overline{\chi }}_{23}+{\chi }_{38} \, {\overline{\chi }}_{24}+{\chi }_{14} \, {\overline{\chi }}_{25}\nonumber\\
&&+{\chi }_{26} \, {\overline{\chi }}_{26}+{\chi }_{44} \, {\overline{\chi }}_{27}+{\chi }_{58} \, {\overline{\chi }}_{28}+{\chi }_{10} \, {\overline{\chi }}_{29}+{\chi }_{30} \, {\overline{\chi }}_{30}+{\chi }_{48} \, {\overline{\chi }}_{31}+{\chi }_{62} \, {\overline{\chi }}_{32}+{\chi }_{3} \, {\overline{\chi }}_{33}\nonumber\\
&&+{\chi }_{20} \, {\overline{\chi }}_{34}+{\chi }_{35} \,
{\overline{\chi }}_{35}+{\chi }_{50} \, {\overline{\chi
}}_{36}+{\chi }_{7} \, {\overline{\chi }}_{37}+{\chi }_{24} \,
{\overline{\chi }}_{38}+{\chi }_{39} \, {\overline{\chi
}}_{39}+{\chi }_{54} \, {\overline{\chi }}_{40}+{\chi }_{41} \,
{\overline{\chi }}_{41} \nonumber\\&&+{\chi }_{59} \, {\overline{\chi
}}_{42}+{\chi }_{15} \, {\overline{\chi }}_{43}+{\chi }_{27} \,
{\overline{\chi }}_{44}+{\chi }_{45} \, {\overline{\chi
}}_{45}+{\chi }_{63} \, {\overline{\chi }}_{46}+{\chi }_{11} \,
{\overline{\chi }}_{47}+{\chi }_{31} \, {\overline{\chi
}}_{48}+{\chi }_{49} \, {\overline{\chi }}_{49} \nonumber\\&&+{\chi
}_{36} \, {\overline{\chi }}_{50}+{\chi }_{19} \,
{\overline{\chi }}_{51}+{\chi }_{4} \, {\overline{\chi
}}_{52}+{\chi }_{53} \, {\overline{\chi }}_{53}+{\chi }_{40} \,
{\overline{\chi }}_{54}+{\chi }_{23} \, {\overline{\chi
}}_{55}+{\chi }_{8} \, {\overline{\chi }}_{56}+{\chi }_{16} \,
{\overline{\chi }}_{57} \nonumber\\&&+{\chi }_{28} \, {\overline{\chi
}}_{58}+{\chi }_{42} \, {\overline{\chi }}_{59}+{\chi }_{60} \,
{\overline{\chi }}_{60}+ {\chi }_{12} \, {\overline{\chi
}}_{61}+{\chi }_{32} \, {\overline{\chi }}_{62}+{\chi }_{46} \,
{\overline{\chi }}_{63}+{\chi }_{64} \, {\overline{\chi }}_{64}
\ .
\end{eqnarray}
They coincide in the untwisted sector and are distinguished by the
pairing of states in the twisted sectors, namely they
correspond to different choices of discrete torsion giving rise to different modular invariants
\cite{Bianchi:1999uq}.
In particular, case ${\bf A}$ corresponds to a modular invariant
with extended symmetry that gives back the
toroidal compactification
of Type IIB on the $T^6$ based on the lattice of $SO(12)$.
On the other hand, case ${\bf B}$ corresponds to a permutation
modular invariant with effective Hodge numbers $(15,15)$. Indeed,
out of the 64 original characters, the set of massless characters
consists of
\begin{equation}
\{ \chi_{1}, {\chi }_{2}, {\chi }_{3},{\chi }_{4},{\chi }_{17},{\chi }_{18},{\chi }_{23},{\chi }_{24},{\chi }_{33},{\chi }_{35},{\chi }_{38},{\chi }_{40},{\chi }_{49},{\chi }_{52},{\chi }_{54},{\chi }_{55} \} \ ,
\end{equation}
with ${\chi }_{1}=V-S-C+\ldots $ and
$\chi_{i}=2O-S-C +\ldots $ for the remaining ones.
Plugging the above expansions into the expressions
for the two torus amplitudes one finds
\begin{eqnarray}
({\cal T}_{A})_{\rm massless}&=& |V+6O-4S-4C|^2 \ , \nonumber\\
({\cal T}_{B})_{\rm massless}&=& |V-S-C|^2+15|2O-S-C|^2 \ .
\end{eqnarray}
The Klein-bottle amplitude
follows from ${\cal T}_A$ and $ {\cal T}_B$ by reducing to
their diagonal components. In both cases
one finds
\begin{eqnarray}
{\cal K}&=& \chi_{1}+\chi_{17}+\chi_{35}+\chi_{49}+\chi_{5}+\chi_{21}+\chi_{39}+\chi_{53}\nonumber\\
&& + \chi_{9}+\chi_{30}+\chi_{45}+\chi_{64}+\chi_{13}
+\chi_{26}+\chi_{41}+\chi_{60} \ ,
\end{eqnarray}
that produces
\begin{eqnarray}
{\cal K}&=& (V-S-C)+3(2O-S-C)
\end{eqnarray}
at the massless level. The unoriented projection yields, in case ${\bf A}$, the ${\cal N}=4$ supergravity multiplet coupled to 6 vector multiplets, while in case ${\bf B}$
it leads to ${\cal N}=1$ supergravity with 6 vector multiplets and
25 chiral multiplets.
Going to the transverse channel one finds
\begin{equation}
\tilde {\cal K}= 2^3(\chi_{1}+\chi_{17}+\chi_{35}+\chi_{49} ) \ .
\end{equation}
The tadpoles can be cancelled by adding
the transverse Annulus and Moebius amplitudes
\begin{eqnarray}
\tilde {\cal A} &=& 2^{-3} (\chi_{1}+\chi_{17}+\chi_{35}+\chi_{49} )(n_1+n_2+\bar n_1+\bar n_2)^2\nonumber\\
&&+ 2^{-3}(\chi_{5}+\chi_{21}+\chi_{39}+\chi_{53})(n_1-n_2+\bar n_1-\bar n_2)^2 \nonumber\\
&& + 2^{-3}(\chi_{9}+\chi_{30}+\chi_{45}+\chi_{64})(n_1+n_2-\bar n_1-\bar n_2)^2 \nonumber\\
&&+2^{-3}(\chi_{13}+\chi_{26}+\chi_{41}+\chi_{60})(n_1-n_2-\bar n_1+\bar n_2)^2 \ , \\
\tilde {\cal M} &=& - (\chi_{1}+\chi_{17}+\chi_{35}+\chi_{49} )(n_1+n_2+\bar n_1+\bar n_2) \ ,
\end{eqnarray}
provided
\begin{equation}
n_1+n_2=4 \ .
\end{equation}
Finally, applying $S$ and $P=T^{1\over 2} S T^2 S T^{1\over 2}$ modular transformations one
finds the direct amplitudes
\begin{eqnarray}
{\cal A} &=& (\chi_{1}+\chi_{17}+\chi_{35}+\chi_{49} )
(2 n_1\bar n_1+ 2 n_2 \bar n_2)\nonumber\\
&& + (\chi_{5}+\chi_{21}+\chi_{39}+\chi_{53})
(n_1^2+n_2^2+\bar n_1^2+\bar n_2^2) \nonumber\\
&& + (\chi_{9}+\chi_{30}+\chi_{45}+\chi_{64})
(2 n_1 n_2+ 2\bar n_1 \bar n_2) \nonumber\\
&&+(\chi_{13}+\chi_{26}+\chi_{41}+\chi_{60})
(2 n_1 \bar n_2+ 2 n_2 \bar n_1) \ , \\
{\cal M} &=& (\chi_{5}+\chi_{21}+\chi_{39}+\chi_{53} )
(n_1+n_2+\bar n_1+\bar n_2) \ .
\end{eqnarray}
The massless open string spectrum, encoded in $( {\cal A} + {\cal M})/2$, is that of
${\cal N}=4$ SYM with gauge group $U(N)\times U(4-N)$. Notice that in case
${\bf B}$ only an ${\cal N}=1$ fraction of the ${\cal N}=4$
brane supersymmetry is preserved by the bulk theory. An analogous
behavior can be observed in other cases, most notably the open
descendants of the
$D_{odd}$ series of $SU(2)$ WZW models \cite{Pradisi:1995pp, Pradisi:1996yd}.
\section{Conclusions and perspectives}
In perturbative string theory, moduli fields are exactly marginal
deformations of the underlying conformal field theory. In the low
energy description, they correspond to perturbatively exact flat
directions of the scalar potential.
In the present paper, we have exploited ${\mathbb Z}_2$ chiral twists and
shifts in the search for calculable Type IIB models with few
moduli. We have explored both Left-Right symmetric, though
non-geometric, models with ${\cal N} = 1_L + 1_R$ spacetime susy and
Left-Right asymmetric models with ${\cal N} = 1_L + 0_R$ spacetime
susy. We have found a finite series of models enjoying ${\cal N} = 1_L +
1_R$ spacetime susy with very low ``effective'' Hodge numbers
$(h_{11},h_{21})$ given by \begin{eqnarray}
(n,n) && \quad\quad n=1,2,3,4,5,9 \nonumber\\
(2n,2n+6), (2n+6,2n) && \quad\quad n=0,1,2 \nonumber\\
(2n+3,2n+15), (2n+15,2n+3) && \quad\quad n=0,1 \end{eqnarray} Most of
these models have no counterpart in previous CY or RCFT scans
\cite{Kiritsis:2008mu, Donagi:2008xy}. We have studied the
``minimal'' model with $h_{11}=h_{21}=1$ in detail and
constructed one of its ${\cal N}=1$ unoriented descendants with no open
strings. This model exhibits the minimal (as far as we know)
${\cal N}=1$ field content found so far in the moduli space of
perturbative string compactifications. We cannot exclude the
possibility that more general chiral twists and shifts could give
rise to perturbative Type IIB models with ${\cal N} =1$ spacetime susy
and only the universal dilaton
chiral multiplet or to an ${\cal N} =2$ model
with $h_{11}=h_{21}=0$\footnote{A Left-Right asymmetric ``minimal'' model with ${\cal N} =2_L + 0_R$
spacetime susy and only the dilaton vector (!) multiplet has been
constructed by similar means in \cite{Dolivet:2007sz}
but does not admit an obvious
unoriented projection.}.
Our main motivation was to identify convenient starting points
for calculable orientifold constructions exhibiting complete
moduli stabilization. We find that asymmetric twists and shifts
can be easily combined in order to freeze out most closed string
moduli. The effect on open string moduli is subtler. The only
model with open unoriented strings, we have analyzed in some
detail, enjoys extended ${\cal N}=4$ susy in the open sector and is
thus non-chiral. Apparently there is some tension between
chirality and moduli stabilization\footnote{P.~Camara and others
share our viewpoint.}. The interesting question of whether
D-branes with phenomenologically viable gauge group and chiral
matter contents can be accommodated in this picture remains open.
\section*{Acknowledgments}
We would like to thank C.~Angelantonj, E.~Dudas,
S.~Ferrara, E.~Kiritsis, C.~Kounnas, K.~Narain, A.~Sagnotti, B.~
Schellekens, Ya.~Stanev, and M.~C.~Timirgaziu for interesting
discussions. Preliminary results were presented by M.~B. at {\it
Vacuum Selection in String Theory} (Liverpool University, March
2008), at {\it String Phenomenology '08} (U. Penn, Philadelphia,
May-June 2008), at {\it Pre-Strings Phenomenology} (CERN, July
2008), at the $4^{th}$ RTN Meeting {\it Symmetries and Structure
of the Universe} (Varna, September 2008), and at {\it Mathematical
Challenges in String Phenomenology} (ESI, Vienna, October 2008).
M.~B. would like to thank the organizers for creating a
stimulating environment and the participants for making useful
comments. This work was supported in part by the MIUR-PRIN
contract 2007-5ATT78, NATO PST.CLG.978785, the RTN grants
MRTN-CT-2004-503369, EU MRTN-CT-2004-512194, and MRTN-CT-2004-005104.
P.~A. would like to thank the Physics Departments at University of
Rome ``Tor Vergata'' and at University of Crete in Heraklion for
hospitality during completion of this work.
\section{Introduction}
The starting point of our study is to consider a system of $N$ agents with wealth denoted by $X_1,\ldots,X_N$. At each iteration, two agents picked randomly (say $i$ and $j$) reshuffle their combined wealth by flipping a sequence of fair coins. Mathematically, this exchange rule can be written as:
\begin{equation} \label{binomial_reshuffling}
(X_i,X_j) \;\leadsto\; (B \circ (X_i + X_j)\;,\; X_i + X_j - B \circ (X_i + X_j)),
\end{equation}
where $B \circ (X_i+X_j)$ is a binomial random variable with parameters $X_i+X_j$ (the combined wealth) and $1/2$ (fair coins). We refer to this dynamics as the {\bf binomial reshuffling model}. Notice that the combined wealth is preserved after the exchange, and hence the model is closed (i.e., the total wealth is conserved). The goal of this manuscript is to study the asymptotic limit of this dynamics as the number of agents and the number of iterations become large. To gain insight into the dynamics, we provide in Figure \ref{fig:simu_agent_based} a numerical simulation with $N=10^4$ agents after $10^7$ iterations. We observe that the wealth distribution is well approximated by a Poisson distribution whose rate parameter $\lambda$ equals the arithmetic mean of the agents' wealth ($\lambda \approx 5$ in the simulation).
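The dynamics \eqref{binomial_reshuffling} is straightforward to simulate. The following Python sketch is our own minimal transcription of the update rule (the population size, number of iterations, and initial condition are illustrative, smaller than those used for Figure \ref{fig:simu_agent_based}):

```python
import numpy as np

def binomial_reshuffle(wealth, n_steps, seed=None):
    """Run the binomial reshuffling dynamics in place and return the array."""
    rng = np.random.default_rng(seed)
    N = len(wealth)
    for _ in range(n_steps):
        i, j = rng.choice(N, size=2, replace=False)  # pick a random pair
        total = wealth[i] + wealth[j]
        wealth[i] = rng.binomial(total, 0.5)         # flip `total` fair coins
        wealth[j] = total - wealth[i]                # the partner keeps the rest
    return wealth

# Start every agent with 5 units; the total wealth is conserved exactly.
w = binomial_reshuffle(np.full(1000, 5), n_steps=50_000, seed=0)
assert int(w.sum()) == 5000 and int(w.min()) >= 0
```

Since each pairwise exchange conserves the combined wealth, the total (and hence the empirical mean) is an exact invariant of the simulation.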
\begin{figure}[pt]
\centering
\includegraphics[width=.97\textwidth]{fig1_simulation_agents.pdf}
\caption{Illustration of the convergence of the wealth distribution to a Poisson distribution in the binomial reshuffling model. In the left figure, we represent the initial distribution used in the simulation. We observe in the right figure that the distribution is getting closer to a Poisson distribution. Parameters used: $10^7$ iterations, $N=10^4$ agents.}
\label{fig:simu_agent_based}
\end{figure}
\begin{figure}[pt]
\centering
\includegraphics[width=.97\textwidth]{scheme_sketch_full.pdf}
\caption{The main result of the manuscript is to show the convergence of the mean-field limit of the binomial reshuffling model toward a Poisson distribution with an explicit convergence rate (Theorem \ref{thm1}). Moreover, we also show the convergence of the original agent-based model toward an equilibrium distribution (Theorem \ref{thm2}) but without an explicit rate.}
\label{fig:schema}
\end{figure}
Our main result, Theorem \ref{thm1}, formalizes the empirical observation illustrated in Figure \ref{fig:simu_agent_based}. We consider the mean-field behavior of the binomial reshuffling dynamics \eqref{binomial_reshuffling} in the large population limit ($N \to \infty$) and prove convergence of the distribution of wealth to a Poisson distribution in the $2$-Wasserstein metric. We also provide in Theorem \ref{thm2} a direct proof of the convergence of the agent-based model toward the Poisson distribution. However, in contrast to Theorem \ref{thm1}, Theorem \ref{thm2} does not provide a convergence rate toward the equilibrium distribution. We summarize our results in Figure \ref{fig:schema}.
\subsection{Related work}
Before starting our investigation of the binomial reshuffling model, we would like to emphasize its link with other models in econophysics. We start by recalling the uniform reshuffling model \cite{cao_entropy_2021}. In this dynamics, a pair of agents $i,j$ is chosen randomly and their combined wealth is redistributed according to a uniform distribution. Thus, the update rule is given as follows:
\begin{equation} \label{uniform_reshuffling}
(X_i,X_j) \leadsto \left(U\circ (X_i + X_j),X_i + X_j - U\circ (X_i +
X_j)\right),
\end{equation}
where $U\circ (X_i + X_j)$ denotes a uniform random variable on $[0, X_i + X_j]$. The uniform distribution has a larger variance than the binomial distribution $B\circ (X_i + X_j)$. As a result, the uniform reshuffling model generates more {\it wealth inequality} (measured by the so-called Gini index) compared to the binomial reshuffling model. The associated equilibrium is an exponential law instead of a Poisson distribution. Notice also that in contrast to the binomial reshuffling model, the wealth of the agents is a real non-negative number and no longer an integer (i.e., $X_i(t) \in \mathbb{R}^+$).
In contrast to the uniform reshuffling model, the repeated average model \cite{cao_explicit_2021,chatterjee_phase_2022} reduces wealth inequality. In this dynamics, the combined wealth of two agents is simply shared equally, leading to the following update rule:
\begin{equation}
\label{Dirac_reshuffling}
\hspace{-0.55in} (X_i,X_j) \leadsto \left(\delta_{1/2}\circ (X_i + X_j),X_i + X_j - \delta_{1/2}\circ (X_i + X_j) \right),
\end{equation}
in which $\delta_{1/2} \circ (X_i + X_j)$ denotes a Dirac delta centered at $(X_i + X_j)/2$. The long time behavior of such dynamics is a Dirac distribution, i.e., the wealth of all agents are equal, and the Gini index converges to zero \cite{cao_explicit_2021}. We illustrate the three different dynamics in Figure \ref{fig:three_shuffling_model}. The binomial reshuffling model could be seen as an intermediate behavior between the uniform reshuffling model and the repeated average dynamics.
\begin{figure}[ht]
\centering
\includegraphics[width=.9\textwidth]{illustration_binomial_reshuffling.pdf}
\caption{Illustration of the different update rules for three shuffling dynamics. In the repeated average model, the rule is deterministic: the updated wealth of agents $X_i$ and $X_j$ is the average of their combined wealth. In contrast, in the uniform reshuffling model, the updated value is taken from a uniform distribution on $[0,X_i+X_j]$. The binomial reshuffling has an intermediate behavior: the updated value is more {\it likely} to be around the average $(X_i+X_j)/2$.}
\label{fig:three_shuffling_model}
\end{figure}
Modifications of these models, which lead to different dynamics, also exist.
For example, the so-called immediate exchange model introduced in
\cite{heinsalu_kinetic_2014} assumes that pairs of agents are randomly and
uniformly picked at each random time, and each of the agents transfers a random
fraction of its money to the other agent, where these fractions are
independent and uniformly distributed in $[0,1]$. The so-called uniform
reshuffling model with saving propensity investigated in
\cite{chakraborti_statistical_2000,lanchier_rigorous_2018} suggests that the
two interacting agents keep a fixed fraction $\lambda$ of their fortune and
only the combined remaining fortune is uniformly reshuffled between the two
agents:
$$
(X_i,X_j) \leadsto \big(\lambda X_i + U \circ ((1-\lambda)(X_i+X_j))\;\;,\;\; \lambda X_j + (1-\lambda)(X_i+X_j) - U \circ ((1-\lambda)(X_i + X_j) )\big).
$$
The uniform reshuffling model arises as a particular case if we set $\lambda =
0$. For other models arising from econophysics (including models with bank and
debt), see
\cite{cao_interacting_2022,cao_uncovering_2022,chakraborti_statistical_2000,
chatterjee_pareto_2004, lanchier_rigorous_2019} and references therein.
\subsection{Main result}
In order to state our main result, we need to formalize a notion of mean-field
behavior as the number of agents becomes large. If we assume that updates occur
at random times generated by a Poisson clock
with rate $1/N$, then \eqref{binomial_reshuffling} defines a continuous-time Markov
process $\{X_1(t),\ldots,X_N(t)\}$ for $t \ge 0$, for any initial distribution of wealth.
Let $\mathbf{p}(t)=\left(p_0(t),p_1(t),\ldots,p_n(t),\ldots\right)$
be the law of the process $X_1(t)$ as $N \to \infty$, that is, $p_n(t) =
\lim_{N \to \infty}
\mathbb{P}(X_1(t) = n)$. Then, using standard techniques, we show in
\S \ref{sec:mean_field} that the time evolution of $\mathbf{p}(t)$ is given by
\begin{equation}\label{eq:ODE}
\frac{\mathrm{d}}{\mathrm{d} t} {\mathbf{p}}(t) = Q[{\mathbf{p}}(t)]
\end{equation}
where
\begin{equation} \label{eq:defQ}
Q[{\mathbf{p}}]_n = \sum_{k= 0}^\infty \sum_{\ell = 0}^\infty
\tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\,p_k\,p_\ell\,\mathbbm{1}_{\{n \leq
k+\ell\}} - p_n,
\end{equation}
for $n \ge 0$, with the usual convention that $\tbinom{0}{0}$ is interpreted as
$1$. The transition between the stochastic $N$-agent dynamics \eqref{binomial_reshuffling}
and the infinite system of ordinary differential equations (ODE)
\eqref{eq:ODE} as $N \to \infty$ is referred to as \emph{propagation of chaos}
\cite{sznitman_topics_1991} and has been rigorously justified in various models
arising from econophysics, see for instance
\cite{cao_derivation_2021,cao_entropy_2021,cao_interacting_2022,cortez_uniform_2022,graham_rate_2009,merle_cutoff_2019}.
Given the transition from the interacting system of agents
\eqref{binomial_reshuffling} to the deterministic system of nonlinear ODE \eqref{eq:ODE}, the natural follow-up step is to
investigate the large time behavior of the system of differential equations and
equilibrium solution.
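To make the definition \eqref{eq:defQ} concrete, the operator $Q$ can be transcribed on a truncated state space and its conservation properties checked numerically. The Python sketch below is our own illustration (the truncation size and the test distribution are arbitrary choices); it verifies that $Q$ preserves the total mass and the first moment, in line with the conservation of wealth in the underlying dynamics.

```python
import math

def Q(p):
    """Transcription of the operator Q of eq. (defQ) on states {0,...,K-1}."""
    K = len(p)
    out = [-pn for pn in p]                     # loss term: -p_n
    for k in range(K):
        for l in range(K):
            w = p[k] * p[l]
            m = k + l
            for n in range(min(m, K - 1) + 1):  # gain: Binomial(k+l, 1/2) law
                out[n] += w * math.comb(m, n) / 2**m
    return out

# Support p well inside the truncation so that no gain mass is lost.
p = [0.2, 0.5, 0.3] + [0.0] * 5
q = Q(p)
assert abs(sum(q)) < 1e-12                                 # mass is conserved
assert abs(sum(n * qn for n, qn in enumerate(q))) < 1e-12  # mean is conserved
```

Both identities are the discrete counterparts of the conservation laws established in \S \ref{sec:3}.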
Finally, we also recall that the $2-$Wasserstein metric between two probability mass
functions $\mathbf{p}$ and $\mathbf{q}$ is defined by
\begin{equation} \label{w2defn}
W_2(\mathbf{p},\mathbf{q}) = \inf\left\{\sqrt{\mathbb{E}[|X-Y|^2]} : \mathrm{Law}(X)=\mathbf{p},~\mathrm{Law}(Y)=\mathbf{q}\right\},
\end{equation}
where the infimum is taken over all pairs of random variables $X$ and $Y$
distributed according to $\mathbf{p}$ and $\mathbf{q}$, respectively. Moreover,
let $\mathbf{p}_\lambda^*$ denote a Poisson distribution with rate $\lambda >
0$, that is,
\begin{equation}
\label{eq:Poisson}
p_{\lambda,k}^* = \frac{\lambda^k e^{-\lambda}}{k!},
\end{equation}
for $k \in \mathbb{N}$. The following Theorem is our main result.
\begin{theorem}\label{thm1}
Let $\mathbf{p}(0)$ be a probability distribution on $\mathbb{N}$ with mean
$\lambda$ and finite variance $\sigma^2$, and suppose that $\mathbf{p}(t)$ is
defined by \eqref{eq:ODE}. Then, \begin{equation}\label{Wasserstein_conv}
W_2(\mathbf{p}(t) ,\mathbf{p}_\lambda^*) \le C t^{-1/2},
\end{equation}
where $C > 0$ is a constant that only depends on the initial variance
$\sigma^2$.
\end{theorem}
The proof of Theorem \ref{thm1} is given in \S \ref{sec:3}.
Informally speaking, this result says that when the number of agents and the
number of iterations is large, the distribution of wealth of the agents under
the binomial reshuffling model converges to a Poisson distribution, see Figures
\ref{fig:simu_agent_based} and \ref{fig:schema}. We note that numerics indicate that it may be possible to
improve the convergence rate of Theorem \ref{thm1}, at least for some initial
probability distributions $\mathbf{p}(0)$, see the discussion in \S
\ref{sec:discuss}.
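For distributions on the (ordered) real line, the infimum in \eqref{w2defn} is attained by the monotone coupling, so $W_2^2(\mathbf{p},\mathbf{q}) = \int_0^1 |F_{\mathbf{p}}^{-1}(u)-F_{\mathbf{q}}^{-1}(u)|^2\,\mathrm{d}u$ in terms of the quantile functions. This makes $W_2$ between two probability mass functions on $\mathbb{N}$ easy to evaluate numerically; the helper below is our own sketch (not part of the proofs):

```python
import numpy as np

def w2_discrete(p, q):
    """2-Wasserstein distance between two pmfs on {0, 1, ..., K-1}.

    On the real line the optimal coupling is monotone, so
    W2^2 = integral over u in (0,1) of |F^{-1}(u) - G^{-1}(u)|^2,
    evaluated here by merging the two step CDFs.
    """
    Fp, Fq = np.cumsum(p), np.cumsum(q)
    # all probability levels where either quantile function can jump
    levels = np.unique(np.concatenate(([0.0], Fp, Fq)))
    total = 0.0
    for lo, hi in zip(levels[:-1], levels[1:]):
        mid = (lo + hi) / 2
        x = np.searchsorted(Fp, mid)   # F^{-1}(mid): first index with F >= mid
        y = np.searchsorted(Fq, mid)   # G^{-1}(mid)
        total += (hi - lo) * (x - y) ** 2
    return np.sqrt(total)

assert w2_discrete([1.0, 0.0], [0.0, 1.0]) == 1.0   # delta_0 vs delta_1
assert w2_discrete([0.5, 0.5], [0.5, 0.5]) == 0.0
```

The quantile functions are piecewise constant between consecutive CDF levels, so the integral reduces to the finite sum computed in the loop.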
\subsection{Organization}
The remainder of the present paper is organized as follows. In \S
\ref{sec:mean_field}, using classical techniques, we show that the system
of nonlinear ODEs \eqref{eq:ODE} is indeed the mean-field limit of the binomial
reshuffling dynamics in the large $N$ limit. In \S \ref{sec:3} we establish several
results about the large time behavior of the nonlinear ODE system
\eqref{eq:ODE}, and ultimately prove Theorem \ref{thm1} using a coupling argument
inspired by recent work on the uniform reshuffling model
\cite{cao_entropy_2021}. In \S \ref{sec:another_route} we take on a different
approach similar to the methods proposed in
\cite{lanchier_rigorous_2017,lanchier_rigorous_2018,lanchier_rigorous_2019},
and show a different way to establish the convergence to the Poisson
distribution. In \S \ref{sec:discuss} we discuss the presented results. The
Appendix \ref{appendix} records a qualitative way of demonstrating the large
time convergence of the solution of \eqref{eq:ODE} to a Poisson distribution.
\section{Mean-field limit}\label{sec:mean_field} \subsection{Notation}
Let $\mathbb{N}$ denote the set of nonnegative integers $\mathbb{N} =
\{0,1,2,\ldots\}$, and bold lower case letters $\mathbf{p} = \{p_n\}_{n \in
\mathbb{N}}$ denote probability distributions on $\mathbb{N}$.
We say that $B$ is a Bernoulli
random variable if $\mathbb{P}(B = 0) = \mathbb{P}(B = 1) = 1/2$. For random
variables $X$ and $Y$ taking values in $\mathbb N$, we write $X \perp Y$ to
mean that $X$ and $Y$ are mutually independent. We say that $X$ is a binomial
random variable with parameters $n$ and $\gamma$ if the distribution
$\mathbf{p}$ of $X$ satisfies
$$
p_k = {n \choose k} \gamma^k (1-\gamma)^{n-k},
$$
for $k=0,\ldots,n$, and $p_k=0$ otherwise.
If $X$ and $Y$ are random
variables taking values in the nonnegative integers, then we write $B \circ
(X+Y)$ to denote a binomial random variable with parameters $X+Y$ and $1/2$,
put differently,
\begin{equation}
\label{eq:B_XY}
B \circ (X + Y) = \sum_{n=1}^{X+Y} B_n,
\end{equation}
where $\{B_n\}_{n \in \mathbb{N}}$ are independent Bernoulli random variables
(which are independent from $X$ and $Y$).
\subsection{Mean-field limit}
In the following, we provide a heuristic derivation of the mean-field ODE
system \eqref{eq:ODE} from the binomial reshuffling dynamics
\eqref{binomial_reshuffling}; the derivation is based on classical techniques,
see for example
\cite{cao_derivation_2021,cao_entropy_2021,cao_uncovering_2022}.
Let $\mathrm{N}^{(i,j)}_t$ be independent Poisson processes with intensity
$1/N$. Then, the dynamics can be written as:
\begin{equation}
\label{eq:SDE_binomial}
\mathrm{d} X_i(t) = \sum_{\substack{j=1 \\ j\neq i}}^N \left(\sum_{k=1}^{X_i(t-)+X_j(t-)} \!\!\!\!\!\!B_k(t) \;\;\;\;-\;\; X_i(t-)\right) \mathrm{d} \mathrm{N}^{(i,j)}_t,
\end{equation}
with $\{B_k(t)\}_{k \in \mathbb{N},t>0}$ being a collection of independent Bernoulli random variables. Using our notation \eqref{eq:B_XY}, one can write:
\begin{equation}
\label{eq:SDE_binomial2}
\mathrm{d} X_i(t) = \sum_{\substack{j=1 \\ j\neq i}}^N \Big(B\circ(X_i(t-)+X_j(t-)) \;-\; X_i(t-)\Big) \mathrm{d} \mathrm{N}^{(i,j)}_t.
\end{equation}
As the number of agents $N$ goes to infinity, we would expect that the processes $\{X_i\}_{1\leq i\leq N}$ become (asymptotically) independent and of the same law. Therefore, the limit dynamics
would be of the form:
\begin{equation}
\label{eq:SDE_binomiallimit}
\mathrm{d} \overline{X}(t) = \left(B\circ \left(\overline{X}(t-)\! + \!\overline{Y}(t-)\right) - \overline{X}(t-)\right) \mathrm{d} \overline{\mathrm{N}}_t
\end{equation}
where $\overline{Y}$ is an independent copy of $\overline{X}$ and $\overline{\mathrm{N}}_t$ is a Poisson process with unit intensity. The proof of such convergence is referred to as {\it propagation of chaos}, and it is out of the scope of the manuscript. We refer to \cite{cao_derivation_2021,cao_interacting_2022,cortez_quantitative_2016,cortez_uniform_2022,merle_cutoff_2019,sznitman_topics_1991} for the readers interested in this topic. The Kolmogorov backward equation associated with the SDE \eqref{eq:SDE_binomiallimit} reads as
\begin{equation}
\label{eq:KBE}
\mathrm{d} \mathbb{E}[\psi(\overline{X}(t))] = \mathbb{E}\left[\psi\left(B\circ \left(\overline{X}(t)\! + \!\overline{Y}(t)\right)\right) - \psi\left(\overline{X}(t)\right)\right]\,\mathrm{d} t
\end{equation}
In other words, the limit dynamics corresponds to the following pure jump process:
\begin{equation}
\label{binomial_limit}
\overline{X} \leadsto B\circ (\overline{X}\!+\!\overline{Y}).
\end{equation}
To write down the evolution equation for the law of the process $\overline{X}(t)$ (denoted by ${\mathbf{p}}(t)$), we need the following elementary observation:
\begin{lemma}\label{lem1}
Suppose $X$ and $Y$ are two i.i.d. random variables with probability mass function
${\mathbf{p}} = \{p_n\}_{n \in \mathbb N}$. Let $Z = \sum_{k=1}^{X+Y} B_k$ where
$\{B_k\}_{k \in \mathbb{N}}$ are a collection of independent Bernoulli random
variables, which are independent of $X$ and $Y$. Then,
$$
\mathbb{P}(Z = n) = \sum_{k=0}^\infty \sum_{\ell = 0}^\infty
\tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\,p_k\,p_\ell\,\mathbbm{1}_{\{n \leq
k+\ell\}},
$$
for $n \in \mathbb{N}$.
\end{lemma}
\begin{proof} By the law of total probability, we have
\begin{align*}
\mathbb{P}(Z = n) &= \sum_{m=n}^\infty \mathbb{P}\left(Z = n \mid
X+Y = m\right)\mathbb{P}(X+Y=m), \\
&= \sum_{m=n}^\infty \tbinom{m}{n}\,\tfrac{1}{2^m}\,\sum_{k \leq m} p_k\,p_{m-k}, \\
&= \sum_{k=0}^\infty \sum_{\ell=0}^\infty \tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\,p_k\,p_\ell\,\mathbbm{1}_{\{n \leq k+\ell\}},
\end{align*}
which completes the proof. \end{proof}
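Lemma \ref{lem1} can also be checked numerically by comparing the double-sum formula with a direct Monte Carlo simulation of $Z = B\circ(X+Y)$. The sketch below is our own illustration; the test distribution, sample size, and tolerance are arbitrary choices.

```python
import math, random

random.seed(0)
p = [0.3, 0.4, 0.2, 0.1]               # an arbitrary pmf on {0, 1, 2, 3}

def law_of_Z(p):
    """P(Z = n) via the double sum of Lemma lem1."""
    K = len(p)
    out = [0.0] * (2 * K - 1)
    for k in range(K):
        for l in range(K):
            m, w = k + l, p[k] * p[l]
            for n in range(m + 1):
                out[n] += w * math.comb(m, n) / 2**m
    return out

def sample_Z():
    """Draw Z = sum of (X + Y) fair coins directly, as in eq. (eq:B_XY)."""
    x = random.choices(range(len(p)), weights=p)[0]
    y = random.choices(range(len(p)), weights=p)[0]
    return sum(random.getrandbits(1) for _ in range(x + y))

exact = law_of_Z(p)
n_samples = 200_000
counts = [0] * len(exact)
for _ in range(n_samples):
    counts[sample_Z()] += 1
empirical = [c / n_samples for c in counts]
assert max(abs(a - b) for a, b in zip(exact, empirical)) < 0.01
```

With $2\times 10^5$ samples the per-bin Monte Carlo error is of order $10^{-3}$, well within the tolerance used above.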
It follows from Lemma \ref{lem1} that the evolution equation for the law
$\mathbf{p}(t)$ of $\overline{X}(t)$ defined in \eqref{eq:SDE_binomiallimit}
satisfies
\begin{equation}\label{eq:ODE_repeat} \frac{\mathrm{d}}{\mathrm{d} t}
{\mathbf{p}}(t) = Q[{\mathbf{p}}(t)],
\end{equation}
where
\begin{equation}\label{eq:Q_repeat}
Q[{\mathbf{p}}]_n = \sum_{k=0}^\infty \sum_{\ell = 0}^\infty \tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\,p_k\,p_\ell\,\mathbbm{1}_{\{n \leq k+\ell\}} - p_n,
\end{equation}
for $n \in \mathbb{N}$. To conclude this section, we also record a simple
observation that provides intuition on the large time behavior. The result characterizes the dynamics in the case that the initial distribution is binomial.
\begin{lemma}
Let $X$ and $Y$ be independent binomial random variables with parameters $n \in
\mathbb{N}$ and $\mu/n \in [0,1]$. Then, $B \circ (X+Y)$ is a binomial random
variable with parameters $2n$ and $\mu/(2n)$.
\end{lemma}
\begin{proof} Let $X$ be a binomial random variable with parameters $n$ and
$\mu/n$, and let $Y$ be an i.i.d. copy of $X$. If we set $Z = X + Y$, then $Z$ is a binomial random variable with parameters $2n$ and $\mu/n$. Thus, for all $k = 0,\ldots,2n$ we have
\begin{align*}
\mathbb{P}\left(B \circ Z = k\right) &= \sum_{\ell=k}^{2n} \binom{2n}{\ell}\left(\frac{\mu}{n}\right)^\ell\left(1-\frac{\mu}{n}\right)^{2n-\ell}\binom{\ell}{k}\frac{1}{2^\ell}, \\
&= \sum_{\ell=k}^{2n} \binom{2n}{k}\binom{2n-k}{\ell-k}\left(\frac{\mu}{2n}\right)^\ell\left(1-\frac{\mu}{n}\right)^{2n-\ell},\\
&=\binom{2n}{k}\left(\frac{\mu}{2n}\right)^k \sum_{\ell=k}^{2n} \binom{2n-k}{\ell-k}\left(\frac{\mu}{2n}\right)^{\ell-k}\left(1-\frac{\mu}{n}\right)^{2n-\ell},\\
&= \binom{2n}{k}\left(\frac{\mu}{2n}\right)^k \sum_{\tilde{\ell}=0}^{2n-k} \binom{2n-k}{\tilde{\ell}}\left(\frac{\mu}{2n}\right)^{\tilde{\ell}}\left(1-\frac{\mu}{n}\right)^{2n-k-\tilde{\ell}} ,\\
&= \binom{2n}{k}\left(\frac{\mu}{2n}\right)^k\left(1-\frac{\mu}{n}+\frac{\mu}{2n}\right)^{2n-k} \\
&= \binom{2n}{k}\left(\frac{\mu}{2n}\right)^k\left(1-\frac{\mu}{2n}\right)^{2n-k},
\end{align*}
whence the proof is finished. \end{proof}
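This stability property can be verified exactly (up to floating-point rounding) by conditioning on $X+Y \sim \mathrm{Binomial}(2n, \mu/n)$ and comparing with the claimed $\mathrm{Binomial}(2n, \mu/(2n))$ law. A small Python check follows (our own; the parameters are illustrative):

```python
import math

def binom_pmf(n, q, k):
    """P(Binomial(n, q) = k)."""
    return math.comb(n, k) * q**k * (1 - q) ** (n - k)

n, mu = 6, 3.0                       # X, Y ~ Binomial(n, mu/n)
q = mu / n
for k in range(2 * n + 1):
    # P(B o (X+Y) = k): condition on X + Y = m ~ Binomial(2n, mu/n)
    lhs = sum(binom_pmf(2 * n, q, m) * math.comb(m, k) / 2**m
              for m in range(k, 2 * n + 1))
    rhs = binom_pmf(2 * n, mu / (2 * n), k)   # claimed Binomial(2n, mu/(2n))
    assert abs(lhs - rhs) < 1e-12
```

The identity holds for every $k$, not merely on average, mirroring the term-by-term computation in the proof.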
\section{Large time behavior}\label{sec:3}
\subsection{Evolution of moments}
\label{subsec:3.1}
We begin by establishing several elementary properties of the nonlinear ODE
system \eqref{eq:ODE}. First, we show, through a straightforward calculation, that
the Poisson distribution is an equilibrium solution of \eqref{eq:ODE}.
At this stage, we do not argue the uniqueness of this equilibrium
solution, but the argument presented in \S \ref{sec:another_route} implies that the
Poisson distribution is indeed the unique equilibrium.
\begin{lemma} Suppose that $Q$ is defined by \eqref{eq:defQ}. Then,
\begin{equation}
\label{eq:equilibrium}
Q[\mathbf{p}_\lambda^*] = 0,
\end{equation}
where $\mathbf{p}_\lambda^*$ is the Poisson distribution defined in \eqref{eq:Poisson}.
\end{lemma}
\begin{proof} By changing variables in the double summation defining $Q$ we
have
\begin{eqnarray*}
Q[\mathbf{p}_\lambda^*]_n &=& \sum_{m=n}^\infty \sum_{j=0}^m {m \choose
n} \frac{1}{2^m} p_{\lambda,m-j}^* p_{\lambda,j}^* - p_{\lambda,n}^*, \\
&=& \sum_{m=n}^\infty \sum_{j =0}^m
\frac{m!}{(m-n)!\, n!} \frac{1}{2^m} \frac{\lambda^{m-j} e^{-\lambda}}{(m-j)!}
\frac{\lambda^j e^{-\lambda}}{j!} - p_{\lambda,n}^*, \\
&=& \sum_{m =n}^\infty \frac{1}{(m-n)!\, n!} \lambda^{m} e^{-2\lambda} -
p_{\lambda,n}^* , \\
&=& \frac{\lambda^n e^{-\lambda}}{n!} \sum_{m =0}^\infty \frac{\lambda^m}{m!}
e^{-\lambda} - p_{\lambda,n}^*, \\
&=& \frac{\lambda^n e^{-\lambda}}{n!} - p_{\lambda,n}^* = 0,
\end{eqnarray*}
which completes the proof.
\end{proof}
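The computation above can also be confirmed numerically: evaluating the gain term of \eqref{eq:defQ} at a (truncated) Poisson probability mass function should reproduce the Poisson weights themselves, so that $Q[\mathbf{p}_\lambda^*]$ vanishes up to truncation error. A short Python check follows (our own; the rate and truncation size are arbitrary choices):

```python
import math

lam, K = 1.5, 50                        # rate; K large enough that the tail is negligible
pstar = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(K)]

# Gain term of Q at the Poisson pmf: sum_{k,l} C(k+l, n) 2^{-(k+l)} p*_k p*_l.
gain = [0.0] * K
for k in range(K):
    for l in range(K):
        m, w = k + l, pstar[k] * pstar[l]
        for n in range(min(m, K - 1) + 1):
            gain[n] += w * math.comb(m, n) / 2**m

# Equilibrium: the gain term reproduces p* itself, so Q[p*] = gain - p* = 0.
assert max(abs(gain[n] - pstar[n]) for n in range(K // 2)) < 1e-10
```

Probabilistically this is binomial thinning: $X+Y \sim \mathrm{Poisson}(2\lambda)$, and thinning by fair coins returns a $\mathrm{Poisson}(\lambda)$ variable.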
\begin{lemma}\label{prop1} Assume that ${\mathbf{p}}(t) = \{p_n(t)\}_{n \in
\mathbb{N}}$ is a classical and global in time solution of the system
\eqref{eq:ODE} whose initial probability mass function ${\mathbf{p}}(0)$ has
mean $\mu$ and finite variance $\sigma^2$. Then
\begin{equation}\label{eq:moments}
\frac{\mathrm{d}}{\mathrm{d} t} \sum_{n = 0}^\infty n\,p_n(t) = 0 \quad \text{and} \quad \frac{\mathrm{d}}{\mathrm{d} t} \sum_{n = 0}^\infty n^2\,p_n(t) = \frac{\mu^2+\mu}{2} - \frac 12\,\sum_{n = 0}^\infty n^2\,p_n(t).
\end{equation}
That is, the mean value of ${\mathbf{p}}(t)$ is preserved for all $t \geq 0$
and its second (non-centered) moment converges exponentially fast to $\mu^2 +
\mu$. \end{lemma}
\begin{proof} Making use of the evolution equation \eqref{eq:ODE} we deduce that
\begin{align*}
\sum_{n = 0}^\infty n\,p'_n &= \sum_{k = 0}^\infty \sum_{\ell = 0}^\infty
\left(\sum_{n =0}^{
k+\ell}n\,\tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\right)\,p_k\,p_\ell -
\sum_{n= 0}^\infty n\,p_n ,\\
&= \sum_{k = 0}^\infty \sum_{\ell =0}^\infty
\tfrac{k+\ell}{2}\,p_k\,p_\ell - \sum_{n =0}^\infty n\,p_n = 0,
\end{align*}
where the last identity follows from the conservation $\sum_{n = 0}^\infty
p_n(t) = 1$ for all $t\geq 0$. A similar computation yields the second identity
provided in \eqref{eq:moments}, whence
$$
\sum_{n =0}^\infty n^2\,p_n(t) = \mu^2+\mu + \big(\sum_{n =0}^\infty n^2\,p_n(0) -
\mu^2 - \mu\big)\mathrm{e}^{-t/2}
$$
converges exponentially fast to $\mu^2 + \mu$. \end{proof}
We end this subsection with a numerical experiment indicating the relaxation of the solution of \eqref{eq:ODE} to its Poisson equilibrium distribution $\mathbf{p}_\lambda^*$, as is shown in Figure \ref{numerics_binomial_reshuffling}.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\textwidth]{Numerics_binomial_reshuffling.pdf}
\caption{Simulation of the Boltzmann-type mean-field ODE system \eqref{eq:ODE} starting with the Dirac initial datum ${\bf p}(0)$, i.e., $p_\lambda(0) = 1$ and $p_n(0) = 0$ for all $n \neq \lambda$, with $\lambda = 5$. The blue and the orange curves represent the numerical solution (at time $t =1.5$) and the equilibrium $\mathbf{p}_\lambda^*$, respectively. We emphasize that in this example ${\bf p}(t=1.5)$ and $\mathbf{p}_\lambda^*$ are almost indistinguishable.}
\label{numerics_binomial_reshuffling}
\end{figure}
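A simple way to reproduce Lemma \ref{prop1} numerically is to integrate a truncated version of \eqref{eq:ODE} by forward Euler and track the first two moments. The sketch below is our own illustration (the truncation size, time step, and horizon are arbitrary choices); it checks that the mean is conserved and that the second moment relaxes at rate $1/2$ toward $\mu^2+\mu$.

```python
import numpy as np
from math import comb, exp

K = 40                                    # truncate wealth at K - 1
W = np.array([[comb(m, n) / 2**m if n <= m else 0.0 for n in range(K)]
              for m in range(2 * K - 1)]) # W[m, n] = P(Binomial(m, 1/2) = n)

def Q(p):
    s = np.convolve(p, p)                 # law of X + Y for X, Y i.i.d. ~ p
    return s @ W - p                      # gain (fair-coin re-split) minus loss

p = np.zeros(K)
p[5] = 1.0                                # Dirac initial datum: mu = 5, m2(0) = 25
mu, dt, steps = 5.0, 0.01, 800            # integrate up to t = 8
for _ in range(steps):                    # forward Euler
    p = p + dt * Q(p)

n = np.arange(K)
mean, m2 = float(n @ p), float((n**2) @ p)
m2_exact = mu**2 + mu + (25.0 - mu**2 - mu) * exp(-0.5 * dt * steps)
assert abs(mean - mu) < 1e-6              # first moment is conserved
assert abs(m2 - m2_exact) < 1e-2          # relaxation at rate 1/2 to mu^2 + mu
```

Because the second-moment equation in \eqref{eq:moments} is linear once the mean is fixed, the Euler iterates track the exact exponential relaxation up to an $O(\mathrm{d}t)$ error.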
\subsection{Convergence towards Poisson equilibrium}
\label{subsec:3.2}
In this section, we will modify a coupling method provided in
\cite{cao_entropy_2021} to justify the convergence of the solution of
\eqref{eq:ODE_repeat} to the Poisson equilibrium distribution in the
$2$-Wasserstein metric. Recall that $W_2(\mathbf{p},\mathbf{q})$ denotes
the $2$-Wasserstein distance between two probability distributions $\mathbf{p}$
and $\mathbf{q}$ on $\mathbb{N}$, see the definition \eqref{w2defn}.
We begin by providing a stochastic representation of the evolution equation \eqref{eq:ODE_repeat}, on which a coupling argument relies.
\begin{proposition}\label{stochastic_representation}
Assume that ${\mathbf{p}}(t)$ is a solution of \eqref{eq:ODE_repeat} with initial condition ${\mathbf{p}}(0)$ being a probability mass function whose support is contained in $\mathbb N$ and whose mean value is $\mu$. Let $(X_t)_{t\geq 0}$ be a $\mathbb N$-valued continuous-time pure jump process with jumps of the form
\begin{equation}\label{coupling_nonlinear}
\begin{array}{ccc}
X_t & \leadsto & B\circ (X_t+Y_t),
\end{array}
\end{equation}
where $Y_t$ is an i.i.d. copy of $X_t$ and the jump occurs according to a Poisson clock running at the unit rate. If $\mathrm{Law}(X_0) = {\mathbf{p}}(0)$, then $\mathrm{Law}(X_t) = {\mathbf{p}}(t)$ for all $t\geq 0$.
\end{proposition}
\begin{proof}
Taking $\varphi$ to be an arbitrary but fixed test function, we have
\begin{equation}\label{testfunc}
\frac{\mathrm{d} }{\mathrm{d} t} \mathbb E[\varphi(X_t)] = \mathbb E[\varphi(B\circ(X_t+Y_t))] - \mathbb E[\varphi(X_t)].
\end{equation}
Letting ${\mathbf{p}}(t)$ be the probability mass function of $X_t$, we can rewrite \eqref{testfunc} as
\begin{align*}
\frac{\mathrm{d} }{\mathrm{d} t} \sum_{n=0}^\infty \varphi(n)\,p_n(t) &= \sum_{k=0}^\infty\sum_{\ell=0}^\infty\sum_{n=0}^{k+\ell} \tbinom{k+\ell}{n}\,\frac{1}{2^{k+\ell}}\,\varphi(n)\,p_k(t)\,p_\ell(t) - \sum_{n=0}^\infty \varphi(n)\,p_n(t),\\
&= \sum_{n=0}^\infty \left(\sum_{k = 0}^\infty \sum_{\ell =0}^\infty \tbinom{k+\ell}{n}\,\tfrac{1}{2^{k+\ell}}\,p_k\,p_\ell\,\mathbbm{1}_{\{n \leq k+\ell\}} - p_n\right)\varphi(n).
\end{align*}
Thus, ${\mathbf{p}}(t)$ satisfies the ODE system \eqref{eq:ODE_repeat} and the proof is completed.
\end{proof}
\begin{remark}\label{rem}
Using a similar reasoning, we can show that if $(\overline{X}_t)_{t\geq 0}$ is a $\mathbb N$-valued continuous-time pure jump process with jumps of the form
\begin{equation}\label{coupling_limit}
\begin{array}{ccc}
\overline{X}_t & \leadsto & B\circ(\overline{X}_t+\overline{Y}_t),
\end{array}
\end{equation}
where $\overline{Y}_t$ is an i.i.d. copy of $\overline{X}_t$ and the jump occurs according to a Poisson clock running at the unit rate. Then $\mathrm{Law}(\overline{X}_0) = {\mathbf{p}}^*_\lambda$ implies $\mathrm{Law}(\overline{X}_t) = {\mathbf{p}}^*_\lambda$ for all $t\geq 0$, where ${\mathbf{p}}^*_\lambda$ is the Poisson distribution.
\end{remark}
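The invariance claimed in this remark admits a quick numerical sanity check: if $X$ and $Y$ are independent Poisson variables with mean $\lambda$, then $B\circ(X+Y)$ is a fair binomial thinning of a Poisson variable with mean $2\lambda$, hence again Poisson with mean $\lambda$. The following sketch verifies this at the level of (truncated) probability mass functions; the mean $\lambda$ and the truncation level are arbitrary illustrative choices.

```python
import math

lam, nmax = 5.0, 60      # mean and truncation level (arbitrary)

def poisson_pmf(mean, n):
    # truncated Poisson(mean) pmf on {0, ..., n}
    return [math.exp(-mean) * mean**k / math.factorial(k) for k in range(n + 1)]

def one_jump(p):
    # law of B o (X + Y) for X, Y i.i.d. with pmf p: convolve p with
    # itself, then keep each dollar independently with a fair coin
    n = len(p)
    conv = [0.0] * (2 * n - 1)
    for k in range(n):
        for l in range(n):
            conv[k + l] += p[k] * p[l]
    q = [0.0] * len(conv)
    for m, cm in enumerate(conv):
        for j in range(m + 1):
            q[j] += math.comb(m, j) * 0.5**m * cm
    return q

p = poisson_pmf(lam, nmax)
q = one_jump(p)
err = max(abs(q[n] - p[n]) for n in range(nmax + 1))
```

Since the computation is exact up to truncation and floating-point error, the discrepancy `err` sits at the level of machine precision.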
\subsection{Proof of Theorem \ref{thm1}} \label{proofthm1}
We are now prepared to prove our main result.
\begin{proof}[Proof of Theorem \ref{thm1}]
The proof strategy is based on coupling the two probability mass functions
${\mathbf{p}}(t)$ and ${\mathbf{p}}^*_\lambda$ for all $t\geq 0$. Assume that
$(X_t)_{t\geq 0}$ and $(\overline{X}_t)_{t\geq 0}$ are $\mathbb N$-valued
continuous-time pure jump processes with jumps of the form
\eqref{coupling_nonlinear} and \eqref{coupling_limit}, respectively. We can
take $(X_t,Y_t)$ and $(\overline{X}_t,\overline{Y}_t)$ as in the statement of
Proposition \ref{stochastic_representation} and Remark \ref{rem}, respectively.
Meanwhile, we require that $X_t \perp \overline{Y}_t$, $\overline{X}_t \perp
Y_t$ and $(X_t,\overline{X}_t) \perp (Y_t,\overline{Y}_t)$, i.e., several
independence assumptions can be imposed along the way when we introduce the
coupling. We emphasize that we can employ the same set of independent fair
coins in the definition of $B\circ (X_t+Y_t)$ and
$B\circ(\overline{X}_t+\overline{Y}_t)$, leading us to the representations
\begin{equation}\label{eq:coupling_coins}
B\circ (X_t+Y_t) = \sum_{k=1}^{X_t+Y_t} B_k \quad \text{and}\quad B\circ(\overline{X}_t+\overline{Y}_t) = \sum_{k=1}^{\overline{X}_t+\overline{Y}_t} B_k,
\end{equation}
in which $\{B_k\}$ is a collection of independent Bernoulli random variables. Due to the coupling we have just constructed, along with the notation $R_t := |X_t+Y_t - \overline{X}_t - \overline{Y}_t|$, we deduce that
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t}\mathbb E[(X_t-\overline{X}_t)^2] &= \mathbb E\left[\left(B\circ (X_t+Y_t)-B\circ(\overline{X}_t+\overline{Y}_t)\right)^2\right] - \mathbb E[(X_t-\overline{X}_t)^2]\\
&= \mathbb{E}\left[\mathbb{E}\left[\left|\sum_{k=1}^{R_t} B_k\right|^2 \mid R_t\right]\right]-\mathbb E[(X_t-\overline{X}_t)^2]\\
&= \mathbb{E}\left[\mathbb{E}\left[\sum_{i,j=1}^{R_t}B_i\,B_j \mid R_t\right]\right]-\mathbb E[(X_t-\overline{X}_t)^2]\\
&= \mathbb{E}\left[\frac 12 \,R_t + \frac 14 \, R_t\,(R_t - 1) \right] -\mathbb E[(X_t-\overline{X}_t)^2] \\
&= \frac{1}{4}\,\mathbb{E}[R^2_t] + \frac{1}{4}\,\mathbb{E}[R_t] - \mathbb E[(X_t-\overline{X}_t)^2] \\
&= \frac{1}{4}\,\mathbb{E}[|X_t+Y_t - \overline{X}_t - \overline{Y}_t|] - \frac{1}{2}\,\mathbb E[(X_t-\overline{X}_t)^2],
\end{align*}
where the last identity follows from the elementary observation that \[\mathbb{E}[R^2_t] = \mathbb{E}\left[(X_t-\overline{X}_t)^2 + (Y_t-\overline{Y}_t)^2\right] = 2\,\mathbb E[(X_t-\overline{X}_t)^2].\] As an immediate by-product of the preceding computations, we obtain
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t}\mathbb E[(X_t-\overline{X}_t)^2] &= \frac{1}{4}\,\mathbb{E}[|X_t-\overline{X}_t+Y_t-\overline{Y}_t|] - \frac{1}{2}\,\mathbb E[(X_t-\overline{X}_t)^2] \\
&\leq \frac{1}{4}\,\mathbb{E}\left[(X_t-\overline{X}_t)^2 + (Y_t-\overline{Y}_t)^2\right] - \frac{1}{2}\,\mathbb E[(X_t-\overline{X}_t)^2] = 0.
\end{align*}
Next, we notice that before we reach the time $T$ for which $\mathbb E[(X_T-\overline{X}_T)^2] \leq 1$, we can further deduce that
\begin{align*}
\mathbb{E}[|X_t+Y_t - \overline{X}_t - \overline{Y}_t|] &\leq \sqrt{\mathbb{E}\left[(X_t-\overline{X}_t+Y_t-\overline{Y}_t)^2\right]} \\
&\leq \sqrt{2}\,\sqrt{\mathbb E[(X_t-\overline{X}_t)^2]} \leq \sqrt{2}\,\mathbb E[(X_t-\overline{X}_t)^2],
\end{align*}
where the last inequality follows from $\mathbb E[(X_t-\overline{X}_t)^2] \geq 1$ for all $t \in [0,T]$. Consequently, we arrive at
\begin{equation}\label{eq:exponential_decay}
\frac{\mathrm{d}}{\mathrm{d} t}\mathbb E[(X_t-\overline{X}_t)^2] \leq -\left(\frac 12 - \frac{\sqrt{2}}{4}\right)\,\mathbb E[(X_t-\overline{X}_t)^2]~~~\textrm{for all $0 \leq t \leq T$}.
\end{equation}
Unfortunately, the aforementioned argument leading to the exponential decay of $\mathbb E[(X_t-\overline{X}_t)^2]$ before a finite time $T$ breaks down when the quantity of interest $\mathbb E[(X_t-\overline{X}_t)^2]$ becomes no larger than $1$ (which is guaranteed when $t$ is sufficiently large). Thus, we have to resort to a different approach in order to have a good enough upper bound for $\mathbb{E}[|X_t-\overline{X}_t+Y_t-\overline{Y}_t|]$. To this end, we will show that
\begin{equation}\label{eq:goal}
\mathbb{E}[|R_t|] \leq 2\,\mathbb E[(X_t-\overline{X}_t)^2] - \left(1-\sqrt{\frac{2}{3}}\right)\min\left\{\mathbb E[(X_t-\overline{X}_t)^2],\left(\mathbb E[(X_t-\overline{X}_t)^2]\right)^2\right\}
\end{equation}
for all $t \in \mathbb{R}_+$, from which we end up with the following differential inequality
\begin{equation}\label{eq:diff_inequ}
\frac{\mathrm{d}}{\mathrm{d} t}\mathbb E[(X_t-\overline{X}_t)^2] \leq -\frac{1-\sqrt{\frac{2}{3}}}{4}\,\min\left\{\mathbb E[(X_t-\overline{X}_t)^2],\left(\mathbb E[(X_t-\overline{X}_t)^2]\right)^2\right\}
\end{equation}
holding for all $t\geq 0$. In particular, for $t \geq T = \min\{t \geq 0 \mid \mathbb E[(X_t-\overline{X}_t)^2] \leq 1\}$ the inequality \eqref{eq:diff_inequ} reads as \[\frac{\mathrm{d}}{\mathrm{d} t}\mathbb E[(X_t-\overline{X}_t)^2] \leq -\frac{1-\sqrt{\frac{2}{3}}}{4}\,\left(\mathbb E[(X_t-\overline{X}_t)^2]\right)^2, \] which leads us to
\begin{equation}\label{eq:large_time_estimate}
\mathbb E[(X_t-\overline{X}_t)^2] \leq \frac{1}{\frac{1 - \sqrt{2\slash 3}}{4}\,(t-T) + 1} ~~~\textrm{for all $t \geq T$}.
\end{equation}
If we combine \eqref{eq:exponential_decay} and \eqref{eq:large_time_estimate}, and pick the joint law of $(X_0,\overline{X}_0)$ so that $W^2_2({\mathbf{p}}(0), {\mathbf{p}}^*_\lambda) = \mathbb E[(X_0-\overline{X}_0)^2]$, we obtain \eqref{Wasserstein_conv} and the proof is finished. Now it remains to justify the validity of the refined estimate \eqref{eq:goal} for $\mathbb{E}[|X_t - \overline{X}_t + Y_t - \overline{Y}_t|]$, and we consider the following two cases:
\begin{itemize}
\item \emph{Case i)}~~ Suppose that $\mathbb{P}(|X_t - \overline{X}_t|=1) \leq c\,\mathbb E[(X_t-\overline{X}_t)^2]$ for some constant $c \in (0,1)$ to be specified later. Then we deduce that
\begin{align*}
\mathbb{E}[|X_t - \overline{X}_t + Y_t - \overline{Y}_t|] &\leq 2\,\mathbb{E}[|X_t - \overline{X}_t|] \\
&= 2\,\mathbb{P}(|X_t - \overline{X}_t|=1) + 2\,\sum_{k=2}^\infty k\,\mathbb{P}(|X_t - \overline{X}_t|=k) \\
&\leq \mathbb{P}(|X_t - \overline{X}_t|=1) + \sum_{k=1}^\infty k^2\,\mathbb{P}(|X_t - \overline{X}_t|=k) \\
&\leq (1+c)\,\mathbb E[(X_t-\overline{X}_t)^2] \\
&= 2\,\mathbb E[(X_t-\overline{X}_t)^2] - (1-c)\,\mathbb E[(X_t-\overline{X}_t)^2].
\end{align*}
\item \emph{Case ii)}~~ Suppose that $\mathbb{P}(|X_t - \overline{X}_t|=1) \geq c\,\mathbb E[(X_t-\overline{X}_t)^2]$, where the constant $c$ is the same one as appeared in \emph{Case i)}. We now proceed as follows:
\begin{align*}
\mathbb{E}[|X_t - \overline{X}_t + Y_t - \overline{Y}_t|] &\leq \mathbb{E}[|X_t-\overline{X}_t| + |Y_t-\overline{Y}_t| \\
&\qquad ~~ - 2\,\mathbbm{1}_{\{X_t-\overline{X}_t = 1\}}\,\mathbbm{1}_{\{Y_t-\overline{Y}_t = -1\}}] \\
&\leq 2\mathbb E[(X_t-\overline{X}_t)^2] - 2\mathbb{P}(X_t - \overline{X}_t=1)\mathbb{P}(Y_t - \overline{Y}_t=-1) \\
&= 2\mathbb E[(X_t-\overline{X}_t)^2] - 2\mathbb{P}(X_t - \overline{X}_t=1)\mathbb{P}(X_t - \overline{X}_t=-1).
\end{align*}
Since we have assumed that $\mathbb{P}(|X_t - \overline{X}_t|=1) \geq c\,\mathbb E[(X_t-\overline{X}_t)^2]$, without any loss of generality we may further assume that
\begin{equation}\label{eq:assumption1}
\mathbb{P}(X_t - \overline{X}_t=1) \geq \frac{c}{2}\,\mathbb E[(X_t-\overline{X}_t)^2].
\end{equation}
As $\mathbb E[X_t - \overline{X}_t] = \mu - \mu = 0$, we also have $\mathbb E\left[|X_t - \overline{X}_t|\,\mathbbm{1}_{\{X_t - \overline{X}_t <0\}}\right] = \mathbb E\left[|X_t - \overline{X}_t|\,\mathbbm{1}_{\{X_t - \overline{X}_t >0\}}\right]$, from which it follows that
\begin{equation}\label{eq:lower_bound_preliminary}
\mathbb{P}(X_t - \overline{X}_t=-1) + \mathbb E\left[|X_t - \overline{X}_t|\,\mathbbm{1}_{\{X_t - \overline{X}_t <-1\}}\right] \geq \mathbb{P}(X_t - \overline{X}_t=1).
\end{equation}
Due to the identity \[\mathbb E[|X_t - \overline{X}_t|^2] = \mathbb{P}(|X_t - \overline{X}_t|=1) + \mathbb E[|X_t - \overline{X}_t|^2\,\mathbbm{1}_{\{|X_t - \overline{X}_t| > 1\}}],\] we have the bound
\begin{equation}\label{eq:lower_bound_2}
\begin{aligned}
\mathbb E\left[|X_t - \overline{X}_t|\,\mathbbm{1}_{\{X_t - \overline{X}_t <-1\}}\right] &\leq \mathbb E\left[|X_t - \overline{X}_t|^2\,\mathbbm{1}_{\{|X_t - \overline{X}_t| > 1\}}\right] \\
&\leq (1-c)\,\mathbb E[|X_t - \overline{X}_t|^2].
\end{aligned}
\end{equation}
Combining \eqref{eq:assumption1}, \eqref{eq:lower_bound_preliminary} and \eqref{eq:lower_bound_2} yields
\begin{equation}\label{eq:lower_bound_3}
\mathbb{P}(X_t - \overline{X}_t=-1) \geq \left(\frac{c}{2}-(1-c)\right)\,\mathbb E[|X_t - \overline{X}_t|^2] = \left(\frac{3c}{2}-1\right)\,\mathbb E[|X_t - \overline{X}_t|^2].
\end{equation}
Combining the two lower bounds \eqref{eq:assumption1} and \eqref{eq:lower_bound_3} leads us to
\begin{equation}\label{eq:final_bound}
\mathbb{P}(X_t - \overline{X}_t=1)\,\mathbb{P}(X_t - \overline{X}_t=-1) \geq \frac{c}{2}\left(\frac{3c}{2}-1\right)\left(\mathbb E[|X_t - \overline{X}_t|^2]\right)^2,
\end{equation}
whence we finally deduce that \[\mathbb{E}[|X_t - \overline{X}_t + Y_t - \overline{Y}_t|] \leq 2\,\mathbb E[(X_t-\overline{X}_t)^2] - c\left(\frac{3c}{2}-1\right)\left(\mathbb E[|X_t - \overline{X}_t|^2]\right)^2.\]
\end{itemize}
Setting $c = \sqrt{2\slash 3}$ and combining the discussions above yield the advertised estimate \eqref{eq:goal}, thereby completing the entire proof of Theorem \ref{thm1}.
\end{proof}
\begin{remark}
One might have noticed that the coupling argument presented here is more sophisticated than the corresponding coupling argument used for the uniform reshuffling model \cite{cao_entropy_2021}. One simple explanation is that the random variable $U\circ (X_i + X_j)$ appearing in the update of the uniform reshuffling dynamics \eqref{uniform_reshuffling} admits a nice ``factorization property'', meaning that we have
\begin{equation}\label{eq:nice_separation}
\mathrm{Uniform}([0,X_i+X_j]) \overset{\mathrm{d}}{=} \mathrm{Uniform}([0,1])\cdot (X_i+X_j),
\end{equation}
where the notation $X \overset{\mathrm{d}}{=} Y$ is used whenever the random variables $X$ and $Y$ share the same distribution. However, it is not possible (in our opinion) to ``decompose'' the random variable $B\circ (X_i + X_j)$ as a product of two independent random variables similar to \eqref{eq:nice_separation}. Loosely speaking, the noise (or randomness) introduced in the binomial reshuffling dynamics is somehow ``intrinsic'' while the noise rendered by the uniform reshuffling mechanism is ``extrinsic''.
\end{remark}
\section{Alternative approach to convergence}\label{sec:another_route}
In this section, we consider the discrete time version of the proposed binomial reshuffling model and we sketch the argument (in the same spirit as those used in \cite{lanchier_rigorous_2017,lanchier_rigorous_2018}), which shows the convergence of the distribution of money to a Poisson distribution. The general strategy is to investigate the limiting behavior for each fixed number $N$ of agents as time becomes large (by focusing on the motion of dollars), and then compute the probability that a typical individual (immersed in an infinite population) has $n$ dollars at equilibrium in the limit as $N \to \infty$.
Let ${\bf X}(t) = \left(X_1(t),\ldots,X_N(t)\right)$ with $t \in \mathbb N$ and denote by
\[\mathcal{A}_{N,\mu} := \big\{{\bf X} \in \mathbb{N}^N \mid \sum_{i=1}^N X_i = N\mu\big\}\]
the configuration (or state) space. We will also denote $[N] = \{1,2,\ldots,N\}$ for notational simplicity. Given ${\bf Y},{\bf Z} \in \mathcal{A}_{N,\mu}$, it is clear that \[\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right) \neq 0\] if and only if
$Y_k = Z_k$ for all $k \in [N] \setminus \{i,j\}$ and $Y_i + Y_j = Z_i + Z_j$ for some $(i,j) \in [N]^2 \setminus \{i=j\}$. By a similar argument as given in \cite{lanchier_rigorous_2017,lanchier_rigorous_2018}, one can show that the discrete time binomial reshuffling dynamics is a finite irreducible and aperiodic Markov chain, whence the process will converge to a unique stationary distribution (as $t \to \infty$) regardless of the choice of initial configuration. We now show that the process is time-reversible with the following multinomial stationary distribution
\begin{equation}\label{eq:multinomial}
\mu_\infty\left({\bf X}\right) := \binom{N\mu}{X_1,X_2,\ldots,X_N}\prod_{i =1}^N \frac{1}{N^{X_i}},
\end{equation}
i.e., each dollar is independently in agent $i$'s pocket with probability $\frac{1}{N}$. Indeed, given ${\bf Y},{\bf Z} \in \mathcal{A}_{N,\mu}$ with $\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right) \neq 0$ as described above, we have that
\begin{equation*}
\begin{aligned}
\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right) &= \frac{2}{N(N-1)}\,\mathbb{P}\left(\textrm{Binomial}\left(Y_i+Y_j,\frac 12\right) = Z_i\right) \\
&= \frac{2}{N(N-1)}\binom{Y_i+Y_j}{Z_i}\left(\frac 12\right)^{Y_i + Y_j} \\
&= \frac{2}{N(N-1)}\binom{Y_i+Y_j}{Z_i}\left(\frac 12\right)^{Z_i + Z_j}.
\end{aligned}
\end{equation*}
Therefore, \[\frac{\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right)}{\mathbb{P}\left(X(t+1) = {\bf Y} \mid X(t) = {\bf Z} \right)} = \frac{\binom{Y_i + Y_j}{Z_i}}{\binom{Z_i + Z_j}{Y_i}} = \frac{(Y_i)!(Y_j)!}{(Z_i)!(Z_j)!} = \frac{\mu_\infty({\bf Z})}{\mu_\infty({\bf Y})} \] or
\begin{equation}\label{eq:detailed_balance}
\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right)\mu_\infty({\bf Y}) = \mathbb{P}\left(X(t+1) = {\bf Y} \mid X(t) = {\bf Z} \right)\mu_\infty({\bf Z}).
\end{equation}
On the other hand, the detailed balance equation \eqref{eq:detailed_balance} holds trivially when ${\bf Y} \in \mathcal{A}_{N,\mu}$ and ${\bf Z} \in \mathcal{A}_{N,\mu}$ are such that $\mathbb{P}\left(X(t+1) = {\bf Z} \mid X(t) = {\bf Y} \right) = 0$. In summary, the discrete time binomial reshuffling process is (time) reversible with respect to the multinomial distribution \eqref{eq:multinomial} and the distribution \eqref{eq:multinomial} is indeed the stationary distribution of the binomial reshuffling model. Now we can prove the following convergence result.
\begin{theorem}\label{thm2}
For the discrete time binomial reshuffling model, for each fixed $n$ we have that
\begin{equation*}
\lim_{t\to \infty} \mathbb{P}\left(X_1(t) = n\right) = \binom{N\mu}{n}\left(\frac{1}{N}\right)^n\left(1 - \frac{1}{N}\right)^{N\mu - n}.
\end{equation*}
Consequently, \[\lim_{N\to \infty}\lim_{t\to \infty} \mathbb{P}\left(X_1(t) = n\right) = \frac{\mu^n\,\mathrm{e}^{-\mu}}{n!}.\]
\end{theorem}
\begin{proof} The proof is similar to the proofs of Theorem 1 and Theorem 2 in \cite{lanchier_rigorous_2017} for other econophysics models. For all ${\bf X} \in \mathcal{A}_{N,\mu}$ such that $X_1 = n$, we have that
\begin{align*}
\mu_\infty({\bf X}) &= \binom{N\mu}{n,X_2,\ldots,X_N} \left(\prod_{i = 2}^N \frac{1}{N^{X_i}}\right)\frac{1}{N^n} \\
&= \binom{N\mu}{n}\binom{N\mu - n}{X_2,\ldots,X_N} \left(\prod_{i = 2}^N \frac{1}{N^{X_i}}\right)\frac{1}{N^n}.
\end{align*}
Therefore, the stationarity of the multinomial distribution $\mu_\infty$ and the multinomial theorem allow us to deduce that
\begin{align*}
\lim_{t\to \infty} \mathbb{P}\left(X_1(t) = n\right) &= \mu_\infty\left(\{{\bf X} \in \mathcal{A}_{N,\mu} \mid X_1 = n\}\right) \\
&= \sum_{{\bf X} \in \mathcal{A}_{N,\mu}} \binom{N\mu}{n}\binom{N\mu - n}{X_2,\ldots,X_N} \left(\prod_{i = 2}^N \frac{1}{N^{X_i}}\right)\frac{1}{N^n}\mathbbm{1}_{\{X_1 = n\}} \\
&= \binom{N\mu}{n}\left(\frac{1}{N}\right)^n\sum_{X_2+\cdots+X_N = N\mu-n} \binom{N\mu - n}{X_2,\ldots,X_N} \left(\prod_{i = 2}^N \frac{1}{N^{X_i}}\right) \\
&= \binom{N\mu}{n}\left(\frac{1}{N}\right)^n\left(\sum_{i=2}^N \frac{1}{N}\right)^{N\mu - n} = \binom{N\mu}{n}\left(\frac{1}{N}\right)^n\left(1-\frac{1}{N}\right)^{N\mu - n}.
\end{align*}
As a consequence, taking the large population limit as $N \to \infty$ and
recalling the classical result on Poisson approximation to binomial
distribution, we finally obtain
$$
\lim_{N\to \infty}\lim_{t\to
\infty} \mathbb{P}\left(X_1(t) = n\right) = \frac{\mu^n\,\mathrm{e}^{-\mu}}{n!}.
$$
This finishes the proof of Theorem \ref{thm2}. \end{proof}
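Theorem \ref{thm2} can also be probed by direct Monte Carlo simulation of the discrete-time chain: the empirical law of $X_1$ should approach the $\textrm{Binomial}(N\mu, 1/N)$ profile. A rough sketch follows; the population size, chain length, and tolerance are arbitrary illustrative choices, not tuned values.

```python
import math
import random

random.seed(0)
N, mu = 5, 3                      # 5 agents holding 3 dollars each on average
X = [mu] * N                      # start from the equi-distributed configuration
counts = [0] * (N * mu + 1)
burn_in, steps = 5_000, 200_000
for t in range(burn_in + steps):
    i, j = random.sample(range(N), 2)                     # pick a pair of agents
    s = X[i] + X[j]
    X[i] = sum(random.random() < 0.5 for _ in range(s))   # Binomial(s, 1/2) draw
    X[j] = s - X[i]                                       # money is conserved
    if t >= burn_in:
        counts[X[0]] += 1
emp = [c / steps for c in counts]
binom_pmf = [math.comb(N * mu, n) * (1 / N)**n * (1 - 1 / N)**(N * mu - n)
             for n in range(N * mu + 1)]
err = max(abs(a - b) for a, b in zip(emp, binom_pmf))
```

For $N = 5$ and $\mu = 3$ the agreement is already close; letting $N$ grow then recovers the Poisson profile, as in the theorem.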
\section{Discussion}
\label{sec:discuss}
In this manuscript, we have introduced the binomial
reshuffling model. We proved that, in the mean-field limit, the distribution of
wealth under this model converges to the Poisson distribution. In the context
of econophysics, this model is particularly natural due to the connection with
coin flipping: agents redistribute their combined wealth by flipping a sequence
of fair coins. We managed to show a quantitative large time convergence result
to a Poisson equilibrium distribution for the solution of \eqref{eq:ODE} thanks
to a coupling argument.
\begin{figure}[ht]
\centering
\includegraphics[width=.97\textwidth]{simu_cv_equlibrium.pdf}
\caption{Starting with the initial probability distribution $\mathbf{p}(0)$ illustrated in Figure \ref{fig:simu_agent_based}, we solve the mean-field limit \eqref{eq:ODE}
numerically and plot the distance to the equilibrium Poisson distribution in
the $W_1$ metric (left) and the $W_2$ metric (right). We observe that the convergence is numerically exponentially fast in both metrics.}
\label{fig:simu_cv_equlibrium}
\end{figure}
In an attempt to determine if the rate established in Theorem \ref{thm1} can be
improved, we can approximate the solution to the
ODE system \eqref{eq:ODE} numerically. We start with the initial probability
distribution $\mathbf{p}(0)$ illustrated in Figure \ref{fig:simu_agent_based}, which has mean
$\lambda = 5.15$. Since this initial distribution is supported on
$\{0,\ldots,10\}$, its behavior over reasonable amounts of time can be
approximated by probability vectors $\mathbf{p}(t) = (p_0,\ldots,p_{55})$
truncated at $n=55$. Indeed, when $\lambda=5.15$, we have $p_\lambda^*(56) =
\lambda^{56} \mathrm{e}^{-\lambda}/(56!) \approx 10^{-34}$ which is approximately equal
to the relative precision of quadruple-precision floating-point numbers (which
is how we represent real numbers for the numerics). The system of ODEs was
solved using a fourth-order Runge-Kutta method. The $W_1$ and $W_2$ metrics are
straightforward to compute for $1$-dimensional probability distributions;
indeed, if $F$ and $G$ denote the cumulative
distribution function of $\mathbf{p}$ and $\mathbf{q}$, respectively, then
$$
W_p(\mathbf{p},\mathbf{q}) = \left( \int_0^1 |F^{-1}(z) - G^{-1}(z)|^p \mathrm{d} z \right)^{1/p},
$$
where the inverse of the cumulative distribution function is defined by
$F^{-1}(z) = \min \{ k \in \mathbb{N} : F(k) \ge z\}$, see for example
\cite[Remark 2.30]{COTFNT}. We plot the results in Figure \ref{fig:simu_cv_equlibrium}. Since
the $W_1$ and $W_2$ metrics decrease linearly on the log scale of the
figure, the numerics suggest that it may be possible to improve the convergence
rate estimate of Theorem \ref{thm1} to exponential convergence, at least for
some initial probability distributions. We leave this as an open problem.
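A bare-bones Python version of the numerical experiment just described might look as follows. The truncation level, time step, initial datum, and quantile grid below are illustrative stand-ins (not the actual choices used for the figure), and the quantile-based formula for $W_p$ is the one quoted from \cite[Remark 2.30]{COTFNT}.

```python
import math

NMAX = 40          # truncation level for the pmf (illustrative)

def rhs(p):
    # right-hand side of the truncated ODE system: the gain term is a
    # fair binomial thinning of the self-convolution of p
    n = len(p)
    conv = [0.0] * (2 * n - 1)
    for k in range(n):
        for l in range(n):
            conv[k + l] += p[k] * p[l]
    gain = [0.0] * n
    for m, cm in enumerate(conv):
        for j in range(min(m, n - 1) + 1):
            gain[j] += math.comb(m, j) * 0.5**m * cm
    return [g - q for g, q in zip(gain, p)]

def rk4_step(p, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = rhs(p)
    k2 = rhs([a + 0.5 * dt * b for a, b in zip(p, k1)])
    k3 = rhs([a + 0.5 * dt * b for a, b in zip(p, k2)])
    k4 = rhs([a + dt * b for a, b in zip(p, k3)])
    return [a + dt / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(p, k1, k2, k3, k4)]

def wasserstein(p, q, power, grid=20_000):
    # W_p via the inverse cumulative distribution functions on a z-grid
    cp = [sum(p[:k + 1]) for k in range(len(p))]
    cq = [sum(q[:k + 1]) for k in range(len(q))]
    def quantile(cdf, z):
        return next(k for k, F in enumerate(cdf) if F >= z)
    total = sum(abs(quantile(cp, (i + 0.5) / grid)
                    - quantile(cq, (i + 0.5) / grid))**power
                for i in range(grid))
    return (total / grid)**(1 / power)

mu = 3
p = [1 / (2 * mu + 1)] * (2 * mu + 1) + [0.0] * (NMAX - 2 * mu)  # uniform on {0,...,6}
poisson = [math.exp(-mu) * mu**n / math.factorial(n) for n in range(NMAX + 1)]
d0 = wasserstein(p, poisson, 2)
for _ in range(20):
    p = rk4_step(p, 0.1)          # integrate the mean-field dynamics to t = 2
d1 = wasserstein(p, poisson, 2)
```

Mass and mean are conserved along the integration (Runge-Kutta methods preserve linear invariants), and the $W_2$ distance to the Poisson equilibrium visibly shrinks after a short time.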
Several other open questions still remain to be solved in future work. For
instance, it seems very hard to find a natural Lyapunov functional associated
with the Boltzmann-type evolution equation \eqref{eq:ODE}, which is rather
surprising since for most of the classical econophysics models (see for instance
those studied in
\cite{cao_derivation_2021,cao_entropy_2021,cao_explicit_2021,cao_uncovering_2022,matthes_steady_2008,naldi_mathematical_2010})
natural Lyapunov functionals do exist.
\subsection*{Acknowledgment} It is a great pleasure to express our gratitude to Sebastien Motsch for many helpful suggestions. We would also like to thank Augusto Santos for his answer to a question of Fei Cao on MathOverflow \cite{santos_convergence_2022}, where a detailed proof of Lemma \ref{eq:preliminary_DS} is provided. Nicholas F. Marshall was supported in part by NSF DMS-1903015. This work was initiated at the AMS MRC conference on Data Science at the Crossroads of Analysis, Geometry, and Topology.
\begin{appendix}
\section{Convergence to Poisson via Laplace transform}\label{appendix}
We include here another (although qualitative) approach for proving the large time convergence of the solution of the ODE system \eqref{eq:ODE} to the Poisson equilibrium, based on the application of the Laplace transform. The primary motivation to present the Laplace transform approach lies in the emergence of a surprising connection between the convergence problem at hand and a closely related dynamical system. Indeed, we will need the following preliminary result on a specific dynamical system, which seems to be interesting in its own right.
\begin{lemma}\label{eq:preliminary_DS}
Assume that the following infinite dimensional ODE system
\begin{equation}\label{eq:ODE2}
a'_n(t) = a^2_{n+1}(t) - a_n(t),~~~{n \in \mathbb N}
\end{equation}
admits a unique (smooth in time) solution, whose initial datum $\{a_n(0)\}_{n\geq 0}$ satisfies $a_n(0) < a^2_{n+1}(0)$ for all $n$ and $\mathrm{e}^{-\mu_1\,2^{-n}} \leq a_n(0) \leq \mathrm{e}^{-\mu_2\,2^{-n}}$ for all large enough $n$, where $\mu_1,\mu_2 \in \mathbb{R}_+$. Then there exists some $\mu \in [\mu_1,\mu_2]$ such that $a_n(t) \xrightarrow{t\to \infty} \mathrm{e}^{-\mu\,2^{-n}}$ for all $n \in \mathbb N$.
\end{lemma}
\begin{proof} We will only provide a sketch of the proof here and refer to \cite{santos_convergence_2022} for a detailed argument. We first notice that the infinite dimensional cube $[0,1]^{\mathbb N}$ is invariant under the evolution of $\{a_n(t)\}_{n\geq 0}$, i.e., if $a_n(0) \in [0,1]$ for all $n \in \mathbb N$, then $a_n(t) \in [0,1]$ for all $n \in \mathbb N$ and all $t\in \mathbb{R}_+$. Moreover, the tail of the initial condition fully determines the asymptotic behavior of the system \eqref{eq:ODE2} since the state variable $a_m$ impacts the evolution of $a_n$ as long as $m > n$ but not vice versa. Furthermore, the solution of \eqref{eq:ODE2} enjoys a nice monotonicity property: If $\{\bar{a}_n\}_{n\geq 0}$ is another solution of \eqref{eq:ODE2} whose initial datum $\{\bar{a}_n(0)\}_{n \geq 0}$ satisfies $\bar{a}_n(0) \geq a_n(0)$ for all $n$, then $\bar{a}_n(t) \geq a_n(t)$ for all $n \in \mathbb N$ and $t\geq 0$. In particular, if there exists some $N \in \mathbb N$ for which $\mathrm{e}^{-\mu_1\,2^{-n}} \leq a_n(0) \leq \mathrm{e}^{-\mu_2\,2^{-n}}$ holds whenever $n \geq N$, then we must have $[\liminf_{t \to \infty} a_n(t), \limsup_{t \to \infty} a_n(t)] \subseteq [\mathrm{e}^{-\mu_1\,2^{-n}},\mathrm{e}^{-\mu_2\,2^{-n}}]$ for all $n$. Lastly, the advertised conclusion follows from another monotonicity property of the solution of \eqref{eq:ODE2}: if $a_n(0) < a^2_{n+1}(0)$ for all $n$, then $a_n(t) \geq a_n(s)$ for all $t \geq s$ and all $n$.
\end{proof}
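Two features of this lemma are easy to check numerically: the profile $a_n = \mathrm{e}^{-\mu\,2^{-n}}$ satisfies $a^2_{n+1} = a_n$ exactly, so it is a fixed point of \eqref{eq:ODE2}, and a truncated version of the system with its tail pinned at this profile relaxes to it from perturbed initial data (the tail drives the dynamics, as noted in the proof sketch). A sketch with arbitrary truncation depth, initial perturbation, and step size:

```python
import math

mu, depth = 2.0, 10
limit = [math.exp(-mu * 2.0**(-n)) for n in range(depth + 2)]

# the claimed limiting profile is an exact fixed point: a_{n+1}^2 - a_n = 0
residual = max(abs(limit[n + 1]**2 - limit[n]) for n in range(depth + 1))

# forward-Euler integration of a'_n = a_{n+1}^2 - a_n on levels 0..depth,
# with the tail value a_{depth+1} frozen at its limiting value
a = [0.5 * limit[n] for n in range(depth + 1)]   # perturbed initial data
dt = 0.05
for _ in range(int(300.0 / dt)):
    up = a[1:] + [limit[depth + 1]]              # a_{n+1}, with the tail pinned
    a = [an + dt * (un**2 - an) for an, un in zip(a, up)]
gap = max(abs(a[n] - limit[n]) for n in range(depth + 1))
```

The relaxation cascades down from the tail: each level converges once the level above it has, and after a long integration time the whole truncated profile matches the limit to machine precision.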
Armed with Lemma \ref{eq:preliminary_DS}, we are able to demonstrate the convergence of the solution of \eqref{eq:ODE} to the Poisson distribution by virtue of the Laplace transform.
\begin{proposition}\label{prop:Appendix}
Assume that ${\mathbf{p}}(t) = \{p_n(t)\}_{n \geq 0}$ is a classical (and global in time) solution of the system \eqref{eq:ODE} with an initial probability mass function ${\mathbf{p}}(0)$ having mean value $\mu$; then ${\mathbf{p}}(t) \xrightarrow{t \to \infty} {\mathbf{p}}^*_\lambda$.
\end{proposition}
\begin{proof} For $x \in [0,1]$, let $\phi(x,t) = \sum_{n=0}^\infty p_n(t)\,x^n$ be the Laplace transform of ${\mathbf{p}}(t)$; it suffices to establish the convergence
\begin{equation}\label{eq:Laplace_conv}
\phi(x,t) \xrightarrow{t\to \infty} \mathrm{e}^{\mu(x-1)},
\end{equation}
since the function $\mathrm{e}^{\mu(x-1)}$ is the Laplace transform of the Poisson distribution. We now show that $\phi(x,t)$ satisfies the following partial differential equation (PDE):
\begin{equation}\label{eq:Laplace_PDE}
\partial_t \phi(x,t) + \phi(x,t) = \left(\phi\left(\tfrac{1+x}{2}, t\right)\right)^2.
\end{equation}
Indeed, we have
\begin{align*}
\partial_t \phi(x,t) + \phi(x,t) &= \sum_{n=0}^\infty \sum_{k=0}^\infty \sum_{\ell=0}^\infty \binom{k+\ell}{n}\,\frac{p_k}{2^k}\,\frac{p_\ell}{2^\ell}\,x^n\, \mathbbm{1}_{\{k+\ell \geq n\}} \\
&= \sum_{N=0}^\infty \sum_{\substack{k,\ell = 0 \\ k+\ell = N}}^\infty \frac{p_k}{2^k}\,\frac{p_\ell}{2^\ell}\,\sum_{n=0}^N \binom{N}{n}\,x^n \\
&= \sum_{k =0}^\infty \sum_{\ell = 0}^\infty \frac{p_k}{2^k}\,\frac{p_\ell}{2^\ell}\,(1+x)^{k+\ell} = \left(\phi\left(\tfrac{1+x}{2}, t\right)\right)^2.
\end{align*}
We remark here that the PDE \eqref{eq:Laplace_PDE} is complemented with an initial datum $\phi(x,0)$ which satisfies $\phi(1,0) = 1$ and $\partial_x \phi(1,0) = \mu$. Moreover, due to the conservation of mass and mean (recall Lemma \ref{prop1}), we also have
\begin{equation}\label{eq:constraints}
\phi(1,t) \equiv 1, ~~~\textrm{and}~~~ \partial_x \phi(1,t) \equiv \mu ~~~ \textrm{for all $t \geq 0$.}
\end{equation}
If we set $a_n(t) = \phi(1-2^{-n},t)$ for all $n \in \mathbb N$ and all $t \in \mathbb{R}_+$, then \eqref{eq:Laplace_PDE} implies that $a'_n(t) = a^2_{n+1}(t) - a_n(t)$. Thanks to the constraint that $\partial_x \phi(1,t) \equiv \mu$, we also have $a_n(t) \approx 1 - \mu\,2^{-n}$ for all large $n$. Finally, the obvious observation that $1-2^{-n} \leq \left(1 - 2^{-(n+1)}\right)^2$ allows us to apply Lemma \ref{eq:preliminary_DS}, and conclude that \[\phi(1-2^{-n},t) \xrightarrow{t\to \infty} \mathrm{e}^{-\mu\,2^{-n}}~~~ \textrm{for all $n \in \mathbb N$}.\] Therefore, by the continuity of $\phi$ (with respect to $x$) we deduce the claimed convergence $\phi(x,t) \xrightarrow{t\to \infty} \mathrm{e}^{\mu(x-1)}$.
\end{proof}
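The fixed-point identity underlying this proof, namely that $\phi^*(x) = \mathrm{e}^{\mu(x-1)}$ satisfies $\phi^*(x) = \left(\phi^*\left(\tfrac{1+x}{2}\right)\right)^2$, can be checked directly from a truncated Poisson probability mass function; the mean and truncation level below are arbitrary.

```python
import math

mu, nmax = 4.0, 120      # mean and truncation level (arbitrary)
p = [math.exp(-mu) * mu**n / math.factorial(n) for n in range(nmax + 1)]

def phi(x):
    # Laplace transform (generating function) of the truncated pmf
    return sum(pn * x**n for n, pn in enumerate(p))

# stationarity in the PDE for phi: phi(x) = phi((1+x)/2)^2 on [0,1]
worst = max(abs(phi((1 + x) / 2)**2 - phi(x)) for x in [i / 20 for i in range(21)])
mean = sum(n * pn for n, pn in enumerate(p))
```

The residual `worst` is at the level of truncation and floating-point error, confirming that the Poisson generating function is a stationary point of the evolution.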
\end{appendix}
Transformation Optics \cite{Kildishev-S-2011pu}
and
Transformation Acoustics \cite{Chen-C-2010jpd}
are powerful new techniques used to design
\emph{transformation devices},
which usually tend to the exotic --
invisibility or event cloaks \cite{McCall-FKB-2011jo},
illusion generators,
and so on.
Unfortunately,
none are useful as demonstration devices accessible
to non-specialists.
Here,
in contrast to this typical situation,
we describe how to make a transformation device (T-device)
that fits on a tabletop and which controls water waves
visible to the naked eye.
Although T-devices such as cloaks cannot be made with simple isotropic
materials,
one important type can:
one whose design principle transforms
from waves travelling on the surface of a uniform sphere,
to waves travelling on a flat disk.
The transformation used preserves the properties
of the original wave propagation --
that of circular trajectories of equal circumference --
by making the disk properties non-uniform:
here,
we make a shallow pond with varying depth.
In optics,
such a device is known as the
Maxwell's fisheye lens \cite{Tyc-HSB-2011njp,Luneberg-MTO}.
It has a long history,
originally being proposed in a problem set
in the 1850's \cite{Maxwell-1853-fisheye,Maxwell-1854-fisheye-soln}.
Until recently an obscure theoretical curiosity,
the device was brought to wider attention by
controversial claims that
such a device could generate perfect optical images \cite{Leonhardt-2009njp}.
Leaving aside that debate
(see \cite{Blaikie-2011njp} for a recent critical summary),
Leonhardt usefully noted that actually building the device
becomes possible if you take
only the central portion and surround it by a mirror
\cite{Leonhardt-2009njp}.
This trick also makes the water wave version easier to build.
But what makes a Maxwell's fisheye lens --
or its Fishpond counterpart --
interesting?
As in an ordinary pond,
a pointlike wave source anywhere on the surface
of a Maxwell's Fishpond
generates an outgoing set of ripples.
However,
in a Maxwell's Fishpond,
the ripples do not just spread out and disperse,
they \emph{also} converge on the opposite side,
before again diverging
and travelling back to the start where they again reconverge,
and repeat this process until they eventually dissipate.
This is just as light does
in the optical Maxwell's fisheye lens \cite{Kinsler-F-2010njp-fisheye},
and is exactly what rays or waves confined
on the surface of a sphere would do.
More generally,
the Maxwell's fisheye
is one of a more general class of classical \emph{transformation optics}
devices \cite{Tyc-HSB-2011njp};
and these others,
such as the Eaton or Luneberg lenses,
will also have accessible transformation aquatics equivalents
based on water waves --
just as they do in the more technologically exotic
field of plasmonics \cite{Kadic-DGE-2011jmo,Zentgraf-LMVZ-2011nn}.
Here we will derive the water depth profile
needed to make a Maxwell's Fishpond in the shallow water limit,
and compare that profile to a simple approximation
using a shallow spherical dome.
Although the approximation can work surprisingly well,
our accurate device does better,
and can hint at --
at least to the eye -- up to \emph{five} successive refocussings!
More rigorously,
we also present simulation results indicating how the device
works in practice,
as well as two experimental schemes set up with relatively
modest demands on equipment.
Contributions to this work were as follows:
PK conceived the Maxwell's Fishpond idea,
and built the first crude prototype.
He also designed the version used here,
but with students NK and TT shadowing that design process.
NK and TT did the first experiments,
and CT and JT followed next;
all four writing reports and giving presentations as part of their coursework.
PK was the primary author of this paper,
assisted by material from the student reports,
and he also did all the computer simulation work.
CT, JT, and TT also assisted in the preparation
of this final manuscript.
\section{Fisheye, Fishpond}\label{S-fish}
The Maxwell's fisheye concept is based on mimicking
the properties of ray trajectories on the surface of a sphere
using a flat surface with spatially modulated properties.
This is interesting,
because on a sphere any set of rays emitted from a point
follow their individual ``great circle'' geodesics,
and so will automatically converge on the
exact opposite side of the sphere.
Thus,
any flat T-device version should also have this property --
rays diverging from \emph{any} point would automatically
focus at the complementary point of the plane.
Thus,
both on the sphere and in the fisheye,
an object at any point is guaranteed to form an image;
this is most certainly not the case in ordinary imaging systems.
Further,
the rays would then re-diverge before converging again;
in an ideal ray device,
these image reformations would continue forever.
To achieve the transformation from a spherical device
with its curved surface,
to a flat one,
we use a stereographic projection.
Imagine a sphere sitting with its south pole on a flat sheet,
as shown on the upper part of fig. \ref{fig-projection2D}.
Then any point (e.g. $A$ or $B$) on the sphere is mapped onto the sheet
by following a straight line from the north pole,
through $A$ (or $B$),
and onward until it intersects the sheet at $A'$ (or $B'$).
In this way the curved southern hemisphere maps onto a disk on the flat sheet
centered on the south pole.
The northern hemisphere is mapped to points further away;
with points very near the north pole being extremely remote,
and the north pole itself having to be omitted.
\begin{figure}
\centering
\includegraphics[angle=-0,width=0.80\columnwidth]{fig-01-project2D}
\caption{
The sphere-to-plane fisheye projection can be imagined by considering
a transparent sphere with a light source placed at the north pole $N$,
objects on the sphere then cast a shadow on the plane matching
the projection.
The southern hemisphere of the sphere of radius $s$
becomes a finite disk of radius $r_0=2s$,
whereas the northern hemisphere becomes the entire plane
that remains \emph{outside} the disk.
Lines from pole-to-pole
(meridians, or lines of longitude)
on the sphere become radial lines
(see e.g. the dot-dashed line),
circles on the sphere parallel to the equator
(parallels, or circles of latitude)
become projected circles whose size depends on how far north or south
of the equator (dashed line) they are.
}
\label{fig-projection2D}
\end{figure}
Of course,
although we would like to make a fisheye (or Fishpond)
based on this projection,
we do not want one that is infinitely big.
We therefore follow Leonhardt \cite{Leonhardt-2009njp}
and place a mirror at the equator,
confining all ray paths to the southern hemisphere,
and so confining all projected rays inside a circle
with twice the sphere's radius.
Since both hemispheres are equivalent,
the ray properties are preserved -- although the
great circles are now folded back on themselves
and have a kink where they are reflected,
they are still guaranteed to form an image of any point.
The process of opening out and flattening the surface
of a sphere into an equivalent sheet,
as if it were a map projection used for an atlas,
has an important feature.
Regions near the equator are stretched and expanded,
while those near the south pole are only slightly changed.
Mathematically we can define
a complex quantity $z = x + \imath y$,
where $x, y$ represent the Cartesian coordinate on the plane.
This means that any point $(X,Y,Z)$
(or at angles $\theta, \phi$)
on the unit sphere
is projected (or ``mapped'') down onto
~
\begin{align}
z
&=
\frac{X+\imath Y}{1-Z}
=
\exp \left( \imath \phi \right)
\cot\left( \theta/2 \right)
.
\end{align}
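The equivalence of the Cartesian and angular forms of this projection is easy to check numerically; the following is a minimal Python sketch (function names are our own, chosen for illustration):

```python
import cmath
import math

def project_xyz(X, Y, Z):
    # Stereographic projection from the north pole of the unit sphere
    # onto the plane: z = (X + iY) / (1 - Z).
    return complex(X, Y) / (1.0 - Z)

def project_angles(theta, phi):
    # Equivalent angular form: z = exp(i*phi) * cot(theta/2),
    # with theta the polar angle measured from the north pole.
    return cmath.exp(1j * phi) / math.tan(theta / 2.0)

# Compare the two forms at an arbitrary point on the sphere.
theta, phi = 2.1, 0.7
X = math.sin(theta) * math.cos(phi)
Y = math.sin(theta) * math.sin(phi)
Z = math.cos(theta)
z1 = project_xyz(X, Y, Z)
z2 = project_angles(theta, phi)
assert abs(z1 - z2) < 1e-12
```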
This means that a given line element $dS^2 = dX^2 + dY^2 + dZ^2$ on the sphere
is transformed into a line element in the plane $dR^2=dx^2 + dy^2$
that progressively lengthens as we move towards the equatorial perimeter
at $r=1$.
~
\begin{align}
\left(\frac{2}{1 + r^2}\right)^2
dR^2
&=
dS^2
,
\end{align}
where $r^2 = x^2 + y^2$,
and we can also note that angles on the sphere
are preserved when projected onto the plane.
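This conformal scale factor can be verified by finite differences: a small step along a meridian of the unit sphere, of arc length $dS = d\theta$, should project to a plane step of length $dR = \tfrac{1}{2}(1+r^2)\,dS$. A minimal sketch (illustrative names):

```python
import math

def project(theta, phi):
    # z = cot(theta/2) e^{i phi}: stereographic projection of the unit sphere.
    return complex(math.cos(phi), math.sin(phi)) / math.tan(theta / 2.0)

# Take a small step d(theta) along a meridian; on the unit sphere this is
# an arc length dS = d(theta). Compare with the projected length dR = |dz|.
theta, phi, dtheta = 2.0, 0.3, 1e-6
dS = dtheta
dR = abs(project(theta + dtheta, phi) - project(theta, phi))
r = abs(project(theta, phi))

# The projection stretches lengths by dR/dS = (1 + r^2)/2.
assert abs(dR / dS - (1.0 + r * r) / 2.0) < 1e-5
```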
This length transformation
means that an object (or ray)
travelling at a fixed speed on the sphere
will have a projection on the plane that travels faster
the closer it gets to the north pole.
Thus a fixed object speed $v_0$ on the sphere
is projected onto the disk as a radially varying
velocity profile $v(r)$.
Because of the way the projection works --
or the line element $dS$ converts to $dR$ --
the velocity profile $v(r)$
for a sphere of radius $r_0/2$
is projected onto its counterpart disk of radius $r_0$
as
~
\begin{align}
v(r)
&=
v_0 \left[1 + \left(\frac{r}{r_0}\right)^2 \right]
.
\label{eqn-fisheye-v}
\end{align}
Any disk which transports objects or waves with this velocity profile
will be a T-device representing a sphere.
Note that
this velocity profile has a counterpart in a
gradient-index version of Snell's law
that steers propagating rays
so that they match the paths that follow from
projections of the great circle paths on the sphere.
Indeed,
from a mathematical perspective we might have expected this,
because the projection preserves angles,
which also means that the device can be made with isotropic materials.
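The perfect-imaging property also follows directly from the projection: a point $P$ on the sphere and its antipode $-P$ project to $z$ and $-1/\bar{z}$ respectively, so every projected geodesic through $z$ must also pass through $-1/\bar{z}$. A short numerical illustration of this identity (helper names are ours):

```python
import math

def project(X, Y, Z):
    # Stereographic projection z = (X + iY)/(1 - Z) of the unit sphere.
    return complex(X, Y) / (1.0 - Z)

# An arbitrary source point P on the sphere (away from the poles).
theta, phi = 2.4, 1.1
P = (math.sin(theta) * math.cos(phi),
     math.sin(theta) * math.sin(phi),
     math.cos(theta))

z_src = project(*P)
z_img = project(-P[0], -P[1], -P[2])   # projection of the antipode

# All great circles through P pass through -P, so all projected rays
# from z_src reconverge at -1/conj(z_src).
assert abs(z_img - (-1.0 / z_src.conjugate())) < 1e-12
```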
\subsection{Optics}\label{S-fish-optics}
In optics,
one can actually make a thin spherical shell that
will guide light inside it
(see e.g. \cite{Righini-RST-1972ao}),
just like the spherical reference device for a planar fisheye lens.
But
to obtain a design for the flat Maxwell's fisheye lens
we need to design an optical device
that has the light speed profile defined in eqn. \eqref{eqn-fisheye-v}
by modulating the refractive index.
If starting with a shell with refractive index $n_0$
(and hence speed of light $c' = c/n_0$),
we can convert this to a disk with a radially varying
refractive index profile
~
\begin{align}
n(r)
&=
\frac{n_0}
{1 + \left(r/r_0\right)^2}
,
\label{eqn-fisheye-n}
\end{align}
where $n_0$ is the maximum refractive index we can achieve,
and $r_0$ is our desired radius scale.
In the original fisheye lens,
$r$ was unbounded,
causing the device to need unrealistically small values of $n$
at large radii $r$.
The introduction of an equatorial mirror,
as discussed above,
circumvents this restriction;
and for $n_0 \ge 2$ the minimum refractive index required
is always $\ge 1$.
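This bound on the index range is simple to confirm: with the mirror at $r = r_0$, the profile of eqn. \eqref{eqn-fisheye-n} only has to span $n_0$ down to $n_0/2$. A minimal check (function names are ours, not from any design file):

```python
def fisheye_index(r, r0=1.0, n0=2.0):
    # Maxwell's fisheye index profile n(r) = n0 / [1 + (r/r0)^2].
    return n0 / (1.0 + (r / r0) ** 2)

r0, n0 = 1.0, 2.0
samples = [fisheye_index(i * r0 / 1000.0, r0, n0) for i in range(1001)]
assert abs(max(samples) - n0) < 1e-12        # maximum n0 at the centre
assert abs(min(samples) - n0 / 2.0) < 1e-12  # minimum n0/2 at the mirror
assert min(samples) >= 1.0                   # physically realisable for n0 >= 2
```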
Electromagnetic Maxwell's fisheye lenses have been made
(e.g. \cite{Ma-SOTL-2011njp,Smolyaninova-SKS-2010ol,Gabrielli-LL-2010arxiv,Gabrielli-L-2011njp}),
but require significant technological skill
to build and investigate.
Hence our interest in water waves,
which gives a much wider audience
access to these interesting devices.
\subsection{Water waves}\label{S-fish-pond}
We want to make a Fishpond,
not a fisheye;
water waves are easily visible,
intuitive,
low-tech,
and are accessible and safe for a wide variety of ordinary people.
Nevertheless,
experimental water wave systems can still be used
as models for
quite a surprising variety of phenomena:
e.g.
event horizons
and Hawking radiation \cite{Jannes-PCMMR-2011jpc,Rousseaux-MMPL-2008njp}
and
neutron star collapse \cite{Foglizzo-MGD-2012prl}.
To obtain a design for such a Maxwell's Fishpond
we need only work out how to design a device
that has the speed profile defined in eqn. \eqref{eqn-fisheye-v}.
In general,
water waves can have a complicated and nonlinear behaviour,
so constructing a general fishpond will be either very difficult
or impossible.
But there is an important subset of water waves
for which we \emph{can} get a simple solution --
those waves that occur in very shallow water.
For water of a constant depth
that is significantly less than a wavelength,
the wave speed for small waves is simply \cite{Mayo-1997tpt}
~
\begin{align}
v_w
&=
\sqrt{gd}
\label{eqn-fishpond-water-c}
,
\end{align}
where $g=9.81$m/s$^2$ is the gravitational acceleration
and $d$ is the water depth.
In this extreme limit,
other factors such as the wave amplitude and wavelength
no longer matter,
and we can control \emph{any} suitable wave with the same
depth modulation.
Further,
as long as $d(r)$ varies slowly over wavelength scales,
we can use the formula
to describe waves travelling across a varying depth profile $d(r)$.
If the centre of the fishpond has depth $d_0$,
the water wave speed there is $v_w(0) = \sqrt{g d_0}$.
Thus the radial wave velocity profile will be
~
\begin{align}
v_w(r)
&=
v_w(0)
\sqrt{\bar{d}}
,
\label{eqn-fishpond-v}
\end{align}
where $\bar{d}(r) = d(r)/d_0$ is the relative depth profile.
Thus,
comparing eqns. \eqref{eqn-fisheye-v} and \eqref{eqn-fishpond-v}
we see that to match the two velocity profiles
we need to have
~
\begin{align}
\sqrt{\bar{d}(r)}
&=
\left[1 + \left(r/r_0\right)^2 \right]
\\
\bar{d}(r)
&=
\left[1 + \left(r/r_0\right)^2 \right]^2
\\
&=
1 + 2 \left(r/r_0\right)^2 + \left(r/r_0\right)^4
\label{eqn-fishpond-vcf}
.
\end{align}
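The key feature of this depth profile is the 1:4 ratio of centre to edge depth, and that its square root reproduces the fisheye velocity profile exactly. A minimal numerical check (assumed helper names):

```python
def fishpond_depth(r, r0=1.0):
    # Relative depth profile d(r)/d0 for the Maxwell's Fishpond.
    return (1.0 + (r / r0) ** 2) ** 2

def fisheye_speed(r, r0=1.0, v0=1.0):
    # Target velocity profile v(r) = v0 [1 + (r/r0)^2].
    return v0 * (1.0 + (r / r0) ** 2)

r0 = 1.0
assert fishpond_depth(0.0, r0) == 1.0   # depth d0 at the centre
assert fishpond_depth(r0, r0) == 4.0    # four times deeper at the mirror wall

# The shallow-water speed sqrt(g d) then reproduces the fisheye profile.
for i in range(11):
    r = i * r0 / 10.0
    assert abs(fishpond_depth(r, r0) ** 0.5 - fisheye_speed(r, r0)) < 1e-12
```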
The parameters we chose for our Fishpond
were based on an assumed water wavelength of about 20mm;
the result can be seen in fig. \ref{fig-fishpond-made}.
\begin{figure}
\includegraphics[angle=-0,width=0.85\columnwidth]{fig-02a-fponddiagram}
\includegraphics[angle=-0,width=0.85\columnwidth]{fig-02b-fpondphoto}
\caption{
The Maxwell's Fishpond.
(a) A cross section
with the vertical scale grossly exaggerated
for clarity.
The indicated dimensions are those used for our actual device,
but heights or widths can be rescaled freely --
subject to the proviso that ripples will have a wavelength
longer than the maximum depth.
In fact,
an even shallower Fishpond would provide a better match to this criterion,
but water's significant surface tension
then makes covering the centre region problematic.
(b) A photograph of our device.
}
\label{fig-fishpond-made}
\end{figure}
One might also construct other types of geodesic lens
\cite{Luneberg-MTO,Sarbort-T-2012jo,Tyc-HSB-2011njp}
using water waves,
or other types of acoustic waves \cite{Bramhavar-PNENM-2011prb},
as discussed in the appendix at \ref{S-fish-EatonEtc}
and \ref{S-fish-other}.
\section{Modelling}\label{S-model}
We tested our design using computer simulations for a variety of cases
ranging from those applicable to the ideal Maxwell's Fishpond
and approximate Fishponds,
to a full finite element simulation for our Fishpond device.
For the idealized comparisons,
we used the fact that the fisheye and Fishpond
behave in an essentially identical manner,
once the distinctions between the polarizable EM field
and scalar water waves have been accounted for.
This means that since an ideal Fishpond has the same properties
as an ideal fisheye lens,
FDTD simulations of Maxwell's equations for the fisheye lens
will indicate the behaviour of an ideal Fishpond.
\begin{figure}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03a-maxfeye-ez.t0378.eps}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03b-maxfeye-ez.t0626.eps}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03c-maxforb-ez.t0370.eps}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03d-maxforb-ez.t0610.eps}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03e-circle-ez.t0321.eps}
\includegraphics[angle=-0,width=0.42\columnwidth]{fig-03f-circle-ez.t0522.eps}
\caption{
Snapshots from simulations representing an ideal Maxwell's Fishpond (a,b),
an approximate Fishpond with a spherical-cap (SC) depth profile (c,d),
and an ordinary flat-bottomed fishpond (e,f).
The upper frames (a, b) show the fisheye/pond wave patterns
near the first and second refocussing times;
the middle ones (c,d) the approximate SC pond,
and the lower two (e,f) show the non-focussing behaviour
of a flat-bottomed pond when the ripples are not started
at the exact centre.
To the eye,
the second Fishpond reformation (b)
is essentially identical to the original source wave.
}
\label{fig-fishpond-snapshots}
\end{figure}
To get an initial estimate of the importance of the correct depth profile,
we used MEEP \cite{Oskooi-RIBJJ-2010cpc}
FDTD simulations of Maxwell's equations.
This approach was taken because we already had such EM simulations running,
and because the MEEP software
is flexible, open source, and freely available.
All that is required is to
convert our chosen depth profiles
back into a refractive index profile
using the reverse of the process that led to eqn. \eqref{eqn-fishpond-vcf}.
Sample MEEP control files are available in appendix \ref{S-meepctl}.
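That depth-to-index conversion amounts to $n(r) = n_0/\sqrt{\bar{d}(r)}$, since the shallow-water speed scales as the square root of depth and $n$ is inversely proportional to speed. A minimal Python sketch (function names are ours, not those used in the MEEP files):

```python
def depth_to_index(dbar, n0=2.0):
    # Shallow-water speed scales as sqrt(depth), and n is inversely
    # proportional to speed, so n = n0 / sqrt(dbar).
    return n0 / dbar ** 0.5

def fishpond_depth(r, r0=1.0):
    # Exact Fishpond relative depth profile d(r)/d0.
    return (1.0 + (r / r0) ** 2) ** 2

# Converting the exact depth profile recovers the fisheye index profile.
r0, n0 = 1.0, 2.0
for i in range(11):
    r = i * r0 / 10.0
    n_from_depth = depth_to_index(fishpond_depth(r, r0), n0)
    n_fisheye = n0 / (1.0 + (r / r0) ** 2)
    assert abs(n_from_depth - n_fisheye) < 1e-12
```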
As well as the exact Maxwell's Fishpond,
and amongst other variations,
we modelled an approximate depth profile
based on a shallow spherical cap (SC).
It turns out that as long as the correct 1:4 ratio
of minimum to maximum depths is maintained,
this worked remarkably well.
In fig. \ref{fig-fishpond-snapshots} we can see the simulation results
equivalent to our shallow water wave model,
showing snapshot pairs that demonstrate the image reformation properties.
We see that the ideal Maxwell's fisheye lens and Fishpond
will give accurate refocussing
(fig. \ref{fig-fishpond-snapshots}(a,b)),
and this is repeated very many times before the performance starts to degrade
due to the dispersion caused by how different wavelengths
interact with the finite-sized geometry
(see e.g. \cite{Kinsler-F-2010njp-fisheye}).
Next,
the simulations matching the approximate
domed pond profile
do quite well
(see fig. \ref{fig-fishpond-snapshots}(c,d)),
but with some distortion clearly evident on the second reformation.
However,
the chosen ``best reformation'' snapshots of the domed pond flatter slightly,
as the frames before and after show a significant ellipticity;
and the third reformation (not shown),
whilst still giving a localised wave bunch,
has lost its concentric-ring character.
Finally,
simulations of a flat-bottomed pond
(see fig. \ref{fig-fishpond-snapshots}(e,f))
show only a poor attempt at a first focus,
followed by a rapid evolution towards an apparently random pattern
with no reformations apparent at all.
Other simulations including a variety of strengths of non-radial distortion
of the Fishpond depth profile
were also performed.
Depth variations of about 10\% away from the exact profile
do give tolerable results
for the first few image reformations,
but the distortion
strongly degrades the beautiful concentric-ring character
of the exact Maxwell's Fishpond.
Unfortunately,
it is hard to build a real water wave device that works
perfectly
in the shallow water limit.
Most notably,
we will expect to see some residual dispersion,
due to the depth-dependent speeds of different wavelength ripples
and the effects of surface tension;
this is discussed later in section \ref{S-discuss}.
Thus the brief reformations expected in the ideal case
will blur out into a longer process
as different wave components refocus at different times,
and for the shorter $\lambda$ waves,
the depth profile will be less perfect.
More realistic finite element simulations
were also done
using the open source simulator OpenFOAM \cite{OpenFOAM}
with the {interMixingFoam} engine
on a fast desktop PC.
Despite computational constraints,
the simulations gave good results,
with the effect of dispersion demonstrated,
as can be seen in the appendix at \ref{S-model-OpenFOAM}.
\section{Experiments}\label{S-expt}
To make the Maxwell's Fishpond,
our departmental Mechanical Workshop
machined us a brass insert with the
necessary depth profile (see fig. \ref{fig-fishpond-made})
and mounted this in a nylon ring,
with a small notch to indicate the preferred water level.
The choice of brass and nylon was based on convenience,
not necessity;
any waterproof material would suffice.
Our brass insert was not perfectly smooth,
but was machined to tolerances of much less than 1mm.
Two schemes for obtaining quantitative data were used:
one by NK and TT,
the other by CT and JT.
Both were influenced by the fact that
although viewing the Fishpond directly
gives a very strong impression of how well it works,
this human perception does not translate easily
into objective experimental data.
Although even tiny ripples were surprisingly visible to the eye
when in motion,
it was less easy to get good experimental images.
\begin{figure}
\centering
\includegraphics[angle=-0,width=0.90\columnwidth]{fig-05-ThioSetup}
\caption{
The experimental setup of NK and TT.
The water dropper was held in place by a retort stand
(not shown).
}
\label{fig-fishpond-expt-TT}
\end{figure}
Being the first attempt,
the NK/TT experimental setup was relatively simple.
In a darkroom,
they reflected lamplight off the water surface onto a screen,
and took video images of the screen
(see fig. \ref{fig-fishpond-expt-TT}).
This screen was shielded from any direct light from the source lamp.
The rippled water surface caused intensity variations
on the screen,
depending on whether the particular perturbation
tended to focus or defocus the light,
making even very shallow ripples visible
(see fig. \ref{fig-fishpond-expt-NKTT}).
The intensity pattern then indicated the progress of the ripples
from source to image,
and back.
They then analysed the video by eye,
frame by frame,
to locate the times and positions of the reformations.
The CT/JT setup used a lightproof box to eliminate stray light.
Inside the box they imaged the reflection of a diffuse light source
directly using a high resolution webcam
(see fig. \ref{fig-fishpond-expt-CTJT}).
This enabled them to electronically process the images.
\begin{figure}
\centering
\includegraphics[angle=-90,width=0.90\columnwidth]{fig-06-fishpond-photo-B}
\caption{
A snapshot of the NK/TT experiment in progress,
with the Fishpond in the foreground,
and the screen above and behind.
We can clearly see the reflected ripple pattern on the right of the screen;
this image was taken just before the second reformation.
}
\label{fig-fishpond-expt-NKTT}
\end{figure}
In both experiments,
ripples were created using a water dropper
to drop a single water droplet into the Fishpond,
the height of which was varied to ensure
that the strongest possible ripples were generated,
but not so strong as to be accompanied by splashing or bubbles.
A full range of starting positions was investigated,
since the Fishpond should create reformations from any point --
although starting positions near the wall suffered due
to edge effects.
The positioning of screens, light sources, cameras,
and so on were systematically varied to achieve the best images.
For NK/TT,
a shallow angle of reflection enhanced the images,
although if too shallow this significantly reduced the fraction
of the water surface that could be seen.
For CT/JT,
the diffuse light source was placed at a low angle,
but with the camera at a 90$^\circ$ reflectance angle;
thus giving a strong contrast from the light reflected
off the ripples.
The diffuse light source avoided problems caused
by reflections off the bottom centre of the Fishpond.
\begin{figure}
\centering
\includegraphics[angle=-0,width=0.60\columnwidth]{fig-07-Tan-Setup}
\caption{
The experimental setup of CT and JT.
In addition the Fishpond was placed in a waterbath
with a heat pump,
to enable the temperature to be changed in a controllable way.
}
\label{fig-fishpond-expt-CTJT}
\end{figure}
\begin{figure}
\centering
\includegraphics[angle=-90,width=0.90\columnwidth]{fig-08-ripplescan-despeck.ps}
\caption{
Typical differential image obtained by CT and JT,
after first subtracting one webcam frame from the next,
and then removing speckle,
in order to enhance ripple visibility.
The purple lines cross the centre of the Fishpond:
the more horizontal one lies along the line linking
the source point to the reformation point,
and the other runs at right angles to it through the centre of the pond.
The image is foreshortened due to the angle of view
of the webcam.
}
\label{fig-fishpond-CTJT-frames}
\end{figure}
\subsection{Setup 1}
The first experiment,
performed by NK and TT,
used the setup of fig. \ref{fig-fishpond-expt-TT},
where videos were taken from each initial droplet
to final dissipation of the ripples.
Each instance was viewed carefully
frame-by-frame to determine the
time taken for the easily detectable 1st and 2nd reformations.
Reformation times were found by first locating the desired reformation
within a short ``target'' sequence of video frames.
With each reformation point taken to be indicated
by the frame with the most localised ripple pattern,
the short target sequence was viewed independently by NK and TT,
both forward and backwards.
Once the best reformation frame(s) were chosen,
the reformation time could be calculated
using the frame rate of the camera.
NK and TT also attempted to increase the longevity of the ripples
in order to detect third reformations.
However,
reducing the water viscosity (and hence dissipation)
by using water at 70C
had negligible effect,
and reducing the surface tension
using either detergent or temperature
slowed the wave speeds,
increasing the effect of dissipation.
The positions of source,
and 1st and 2nd reformation points were also compared and measured.
However,
since the foreshortening of the reflected ripple pattern
means that it appears as an ellipse,
times were calculated from instances where the source and reformation points
appeared on a horizontal line on the screen,
i.e. the long axis of the ellipse.
The average times taken for first and second reformations were
($1.04 \pm 0.04$)s
and ($1.07 \pm 0.12$)s.
The variation between times taken for first reformations
of different ripples at varied start positions
was dominated by video frame rate.
The larger variation for the second reformations
was due to the difficulty in determining the video frame
with the most localized ripple pattern,
since by then the ripples had both dispersed and diminished significantly.
The theoretically expected reformation time can be calculated
most easily by referring to the reference sphere.
The water depth at the centre of the Fishpond matches that
on the imaginary reference sphere,
which in our case is 2.5mm deep,
leading to a wave velocity of $v = \sqrt{gd} \approx 157$mm/s.
Each reformation on the sphere takes place after one half circumnavigation,
and the sphere radius $s$ is half that of the Fishpond radius of $r_0=100$mm;
thus the theoretical reformation time is
~
\begin{align}
T &= \frac{\pi s}{v_0}
\approx
\frac{3.142 \times 50\textrm{mm}}{157 \textrm{mm/s}}
\approx
1.003 \textrm{s}
,
\label{eqn-expt-predictT}
\end{align}
which seems to be in good agreement with the measurements --
but see the discussion in section \ref{S-discuss}.
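This estimate is straightforward to reproduce from the quoted parameters (a minimal illustrative check, variable names ours):

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
d0 = 2.5e-3       # central water depth, m
r0 = 100e-3       # Fishpond radius, m

v0 = math.sqrt(g * d0)   # shallow-water wave speed at the centre
s = r0 / 2.0             # reference sphere radius
T = math.pi * s / v0     # half a circumnavigation of the sphere

assert abs(v0 - 0.157) < 0.001   # approx 157 mm/s
assert abs(T - 1.00) < 0.01      # approx 1.0 s per reformation
```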
\subsection{Setup 2}
In the second experiment,
CT and JT directly imaged the ripples using a webcam
and VirtualDub \cite{VirtualDub},
as shown in fig. \ref{fig-fishpond-expt-CTJT}.
To emphasize the ripple dynamics,
they subtracted each frame from the previous one
using AviSynth \cite{AviSynth}.
The resulting differential image data
as shown in fig. \ref{fig-fishpond-CTJT-frames},
reveals only the wave motion,
which was then analysed using Tracker \cite{Brown-Tracker}.
Each image was scanned along the axis between source and image points,
giving a 2D dataset,
comprising a time series of 1D datasets along this axis.
The result was processed and plotted using a variety of software,
including Microsoft Excel,
Matlab \cite{MatLab},
and Scilab \cite{Scilab}.
In order to optimise the reformation process,
CT and JT systematically analysed results taken for a range
of temperatures and fill volumes,
as shown in tables \ref{table-volume} and \ref{table-temperature}.
The optimum temperature was found to be 15C:
although the water viscosity (and hence loss) increases at lower temperatures,
the surface tension also increases,
leading to faster wave speeds.
Note that the optimum fill volume centred around 190ml;
this can be compared to the design value,
which by radial integration of the design depth profile is found to be 183ml.
This is in good agreement --
a 3ml change in fill volume corresponds to about a 0.1mm depth change,
and velocity shifts of less than 2\%.
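The 183ml design volume follows from integrating the depth profile over the disk, $V = \int_0^{r_0} d(r)\, 2\pi r\, dr = \tfrac{7}{3}\pi d_0 r_0^2$. A numerical check using the design parameters (sketch, with variable names of our own choosing):

```python
import math

d0 = 2.5    # central depth, mm
r0 = 100.0  # Fishpond radius, mm

def depth(r):
    # Design depth profile d(r) = d0 [1 + (r/r0)^2]^2, in mm.
    return d0 * (1.0 + (r / r0) ** 2) ** 2

# Volume by radial (midpoint-rule) integration of d(r) 2 pi r dr over 0..r0.
n = 100000
dr = r0 / n
volume_mm3 = sum(depth((i + 0.5) * dr) * 2.0 * math.pi * (i + 0.5) * dr * dr
                 for i in range(n))
volume_ml = volume_mm3 / 1000.0

# Agrees with the analytic result V = (7/3) pi d0 r0^2, about 183 ml.
assert abs(volume_ml - 7.0 * math.pi * d0 * r0 ** 2 / 3000.0) < 0.01
assert abs(volume_ml - 183.3) < 0.5
```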
The experimental optimum filling volume of 190ml
is higher than the design volume,
perhaps because of the way the Fishpond fills --
e.g.
the design takes no account of surface tension.
\begin{figure}
\centering
\includegraphics[angle=-0,width=0.84\columnwidth]{fig-09a-Tdata15C}\\
~\\
\includegraphics[angle=-0,width=0.84\columnwidth]{fig-09b-Tdata20C}
\caption{
Differential luminance data
indicating the presence of ripples along
the axis between initial disturbance and reformation point.
As time (and frame index) progresses,
we see the ripples travel from source to reformation,
although the pattern is complicated by the two possible paths --
either with an early reflection off the bowl edge,
or with a late reflection.
This is in addition to the spreading out of the ripple pattern
due to dispersion.
Results for two different water temperatures are shown,
at both (a) 15C, and (b) 20C,
with 15C water giving better data,
in agreement with Table \ref{table-temperature}.
These contour plots are made by averaging over
adjacent points
and using a logarithmic scale;
the contours are evenly spaced,
with a minimum level chosen to best display the ripple patterns.
}
\label{fig-fishpond-CTJT-lum}
\end{figure}
In fig. \ref{fig-fishpond-CTJT-lum}(a) we see the first reformation
very clearly,
although dispersion has spread out the initial impulse
introduced by a falling water drop.
On the first traversal,
we can not only see the evidence of several ripple crests,
but also the slight fanning as the ripples disperse.
Further reformations,
although apparent to the eye,
do not show up over the imaging noise.
\begin{table}[h]
\begin{tabular}{|l|c|c|c| c|c|c| c|}
\hline
Volume [ml] & 170 & 175 & 180 & 185 & 190 & 195 & 200 \\
\hline
Reformations & 2 & 2 & 2 & 3 & 3 & 3 & 2 \\
\hline
\end{tabular}
\caption{Relationship between water volume
in the Fishpond
and the number of reformations visible to the eye.}
\label{table-volume}
\end{table}
\begin{table}[h]
\begin{tabular}{|l|c|c|c| c|c|c| c|c|c|}
\hline
Temperature [C] & 0 & 5 & 10 & 15 & 20 & 25 & 30 & 35 & 40 \\
\hline
Reformations & 1 & 2 & 2 & 3 & 2 & 2 & 2 & 2 & 2 \\
\hline
\end{tabular}
\caption{Relationship between temperature
of the Fishpond
and the number of reformations visible to the eye.}
\label{table-temperature}
\end{table}
Reformation times and errors were extracted from data like that shown on
fig. \ref{fig-fishpond-CTJT-lum}
using a curve fitting process.
For example,
at the optimum temperature of 15C,
the first reformation was calculated to have occurred
at the 23rd frame (at $0.72 \pm 0.06$s),
with the second being 30 frames later (+ $1.03 \pm 0.11$s).
The second reformation was notably slower than the first,
but then the error is also much larger;
of course, some slowing might be expected, since the longer wavelengths
both persist longer and travel more slowly.
Finally,
despite the difficulty in extracting second and third reformation times
from the video data,
and in seeing them in plots such as fig. \ref{fig-fishpond-CTJT-lum},
in the original videos themselves the third reformation is clearly visible
to the eye.
This suggests that significant performance improvements
are still possible in the automated processing of the data.
\section{Surface tension}\label{S-discuss}
Since water is the obvious liquid to use in the Fishpond,
and it has a significant surface tension,
we should estimate its effects.
The wave velocity including the effects of surface tension $\sigma$
(in N/m) on waves of wavelength $\lambda = 2\pi/k$,
in a fluid of density $\rho$ and depth $d$
is
~
\begin{align}
v^2
=
\frac{\omega^2}{k^2}
&=
\frac{g}{k}
\left[
1
+
\frac{\sigma k^2}{g \rho}
\right]
\tanh
\left(
k d
\right)
.
\end{align}
Thus the correction to the leading term
which gives us the shallow water wave speed
is a factor $\epsilon = \sigma k^2/g\rho$.
Surface tension in water reduces with temperature,
being about 0.073 N/m at 20C,
but 0.061 N/m at 90C.
At about 20C,
~
\begin{align}
\epsilon
=
\frac{\sigma k^2}
{g \rho}
&\approx
\frac{0.073 k^2}
{9.81 \times 1000}
=
7.44 \times 10^{-6} k^2
.
\end{align}
For water waves of wavelength 20mm,
$k = 2\pi/0.02 \approx 314$m$^{-1}$,
so that
$\epsilon \approx 0.73$;
the wave speed is therefore
a factor of $\sqrt{1.73}$ (or 30\%) higher
than expected based on depth alone,
and is wavelength dependent even in the shallow water limit.
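These numbers are easy to verify (an illustrative check using the values quoted above):

```python
import math

sigma = 0.073   # surface tension of water near 20C, N/m
rho = 1000.0    # water density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
lam = 0.02      # ripple wavelength, m

k = 2.0 * math.pi / lam
eps = sigma * k ** 2 / (g * rho)      # correction factor in the dispersion relation
speed_factor = math.sqrt(1.0 + eps)   # shallow-water speed enhancement

assert abs(eps - 0.73) < 0.01             # epsilon ~ 0.73 at lambda = 20mm
assert abs(speed_factor - 1.32) < 0.01    # ~30% faster than sqrt(g d)
```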
However,
this speed shift is not \emph{depth} dependent,
so the refocussing character of the Fishpond is unaffected --
but different wavelengths reform at different times.
This dispersion means that a determination of the reformation time
becomes harder as time progresses,
and will depend on the specific details of how
a given reformation time is evaluated.
Initially,
the NK/TT measured reformation time of $(1.04 \pm 0.04)$s
seemed in good agreement
with the simple prediction of eqn. \eqref{eqn-expt-predictT},
i.e. 1.00s.
However,
we can now see that
surface tension effects should reduce this prediction by 30\%
to about 0.7s,
which agrees with that measured by CT/JT,
and not that of NK/TT.
But why do the two experiments
give such different outcomes?
Two scenarios,
which are not mutually exclusive,
suggest themselves.
First,
the criteria for choosing the reformation times differed,
and this will affect which frame of video was selected --
NK/TT chose by eye the frame with the smallest region of disturbed water,
whereas CT/JT applied a simple fitting algorithm to digitised data
along one axis.
Second,
NK/TT relied upon the reported frame rate of their camera,
and perhaps this was not reliable;
although with hindsight we realise that
their framerates might have been easily calibrated by videoing a clock
either before or at the same time as each experimental run.
It is gratifying that the more sophisticated setup of CT/JT
gives good agreement with theory,
although it is not clear why the first attempt by NK/TT
did less well.
Nevertheless,
one of the features of this student project
was that it could be implemented in many different ways --
the students were given the Fishpond,
some reading material,
and some suggestions,
and then largely left to get on with it as independently as they wished.
Still other experimental set-ups and measurements are possible,
and so we expect that the Fishpond itself
will be reused many times in the future.
\section{Summary}\label{S-summary}
We have shown how an exotic phenomenon from transformation optics --
the Maxwell's fisheye lens --
can be converted into simple water waves
in a tabletop ``Maxwell's Fishpond''.
This is currently being used successfully as a third year undergraduate
experimental project in the Physics Department of Imperial College London.
While the remarkable series of image reformations
provides the hook which makes the project interesting,
there are many other features that can be investigated as part
of the experiment.
Most straightforwardly,
there is a variety of imaging possibilities to be investigated
(two of which were discussed here),
and various experimental conditions --
lighting, fill depth, etc --
to be determined.
Also,
the effect of viscosity on performance
can be tested by changing the water temperature,
or surface tension can be removed by adding detergent --
or other liquids might be used.
A transparent Fishpond might be made
so as to image the ripples in transmission,
or a vibrating source could be used in an attempt
to generate standing waves.
For the more mathematically inclined,
the nature of stereographic projections can be researched,
other comparable devices --
e.g. the Eaton or Luneburg lenses --
considered,
or numerical simulations attempted.
Alternatively,
rather than only aiming to optimise the number of reformations,
or their visibility to the eye,
it is also possible to consider ease or simplicity of fabrication,
with a view to testing performance as a function of size.
Since our simulations show that the general behaviour persists
even for an approximate depth profile --
such as a shallow dome --
sophisticated or precise manufacturing processes are not needed.
In this way,
this simple,
eye-catching device provides a rich playground
in which a wide variety of students can test their skills
while investigating a novel device that is not only part of contemporary research --
that of transformation optics and acoustics --
but with a history that goes back to Maxwell himself.
\section*{Appendix}\label{S-appendix}
\subsection{Other Lenses}\label{S-fish-EatonEtc}
One might also construct Eaton and Luneburg lenses \cite{Luneberg-MTO},
or even their generalizations \cite{Tyc-HSB-2011njp,Sarbort-T-2012jo},
using water waves.
The refractive index profiles,
which are proportional to
the inverse of the velocity profiles
for the Eaton and Luneburg lenses,
are
$n_{\textrm{Eaton}}(r) = \sqrt{(2r_0-r)/r}$
and
$n_{\textrm{Luneburg}}(r) = \sqrt{(2r_0^2-r^2)/r_0^2}$.
Then,
by comparing velocity profiles between the optical and water wave cases,
and choosing a reference depth $d_0$ and reference radius $r_0$,
we find that the \emph{retro-reflecting} Eaton pond feature
and the
\emph{focussing} Luneburg pond feature
need the depth profiles
~
\begin{align}
d_{\textrm{Eaton}}(r)
&=
\frac{r d_0}{2 r_0 - r}
,\\
d_{\textrm{Luneburg}}(r)
&=
\frac{r_0^2 d_0}{2 r_0^2 - r^2}
.
\end{align}
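A quick check of these profiles at their reference radii (a sketch with helper names of our own choosing): the Eaton pond depth vanishes at the centre, where its index diverges, while the Luneburg pond runs from half depth at the centre to the reference depth at the rim.

```python
def eaton_depth(r, r0=1.0, d0=1.0):
    # Retro-reflecting Eaton pond depth profile.
    return r * d0 / (2.0 * r0 - r)

def luneburg_depth(r, r0=1.0, d0=1.0):
    # Focussing Luneburg pond depth profile.
    return r0 ** 2 * d0 / (2.0 * r0 ** 2 - r ** 2)

r0, d0 = 1.0, 1.0
assert eaton_depth(r0, r0, d0) == d0          # reference depth d0 at r = r0
assert eaton_depth(0.0, r0, d0) == 0.0        # depth vanishes at the centre
assert luneburg_depth(0.0, r0, d0) == d0 / 2  # half depth at the centre
assert luneburg_depth(r0, r0, d0) == d0       # reference depth at r = r0
```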
\subsection{Other waves}\label{S-fish-other}
It is possible to imagine other implementations of the Maxwell fisheye
concept.
For example,
an adaption of the expertise demonstrated
by Bramhavar et al in \cite{Bramhavar-PNENM-2011prb}
might give rise to an appropriately tapered ``Maxwell's Platter''
with the same behaviour for acoustic waves
in a solid.
\subsection{Finite Element simulations}\label{S-model-OpenFOAM}
To estimate the behaviour of the necessarily imperfect Fishpond experiment,
we (PK) also did more realistic finite element simulations
using the open source simulator OpenFOAM \cite{OpenFOAM}
using the {interMixingFoam} engine,
on a fast desktop PC.
Despite computational constraints,
the general character of the idealised process was preserved,
and the effect of dispersion demonstrated.
A typical simulation result is shown in Fig.~\ref{fig-foampond-snapshots};
others indicate that larger fishponds may perform better than ours,
although they will be harder to construct
and will suffer more from dispersion.
\begin{figure}
\includegraphics[angle=-0,width=0.95\columnwidth]{fig-04-FoamPond-20120430-combi}
\caption{
Snapshots from a Maxwell's Fishpond 200\,mm in diameter and 10\,mm deep,
simulated using OpenFOAM,
at the start and each subsequent reformation
up to the fifth.
Although the simulation retains to a reasonable extent the
repeated refocussing,
the wave dispersion continually increases the length of the pulse of ripples,
so that reformations,
while still remaining on the scale of a wavelength,
have an ever increasing duration.
}
\label{fig-foampond-snapshots}
\end{figure}
\subsection{MEEP ctl files}\label{S-meepctl}
\begin{widetext}
\begin{verbatim}
; Simulate the ideal Maxwell Fishpond and approximate circular Ponds
; by converting a depth profile into a refractive index.
;
;
; Dr Paul Kinsler, 2011 & 2012
;
; 1) an exact & idealised Maxwell's Fishpond
; 2) an approximate "SC" Maxwell's Fishpond whose water-depth profile is
; determined by a shallow, convex spherical cap, with a min:max
; depth ratio of 1:4 to match that of an exact Fishpond.
; 3) a Fishpond with a constant depth
;
; The shallow water-wave speed resulting from the depth profile is converted
; into a refractive index.
;
; ======================================================================
; Fisheye sizes and parameters
;
(define-param n 2) ; base index of fisheye
(define-param w 1) ; width of waveguide
(define-param r 10) ; inner radius of ring
; ======================================================================
; Computational resolutions etc
;
(define-param pad 2) ; padding between waveguide and edge of PML
(define-param dpml 2) ; thickness of PML
(define sxy (+ 0.1 (* 2 (+ r w pad dpml)))) ; cell size !odd!
(set! geometry-lattice (make lattice (size sxy sxy no-size)))
; ======================================================================
; Refractive index profile functions
; to calculate the local refractive index as it varies with position.
;
; rr = sqrt(x^2+y^2)
(define (frr p)
(sqrt (+ (* (vector3-y p) (vector3-y p))
(* (vector3-x p) (vector3-x p))
)
)
)
; --------------------------------------------------------------------
; 1) The Maxwell's Fishpond,
; and the variation with position of its local refractive index
;
; 1 + rr^2/r^2
(define (fdivisor rr)
(+ 1 ( / (* rr rr) (* r r) )
) )
(define (refindex rr)
(/ n (fdivisor rr))
)
; --------------------------------------------------------------------
; 2) The approximate spherical-cap "SC" Fishpond,
; and the variation with position of its local refractive index
;
; calculate the sphere-size for an r-radius pond with 1:4 depth ratio
; R=10 => D = R^2/6 + 3/2 = 18 1/6 = 18.166666667
(define-param DD (+ 1.5 (/ (* r r) 6)))
(define-param DDp1 (+ 1 DD))
(define-param DD2 (* DD DD))
; 1+D - sqrt(D^2-rr^2)
(define (fdivisorSC rr)
(- DDp1 (sqrt (- DD2 (* rr rr) )))
)
(define (refindexSC rr)
(/ n (sqrt (fdivisorSC rr)))
)
; --------------------------------------------------------------------
; 3) The constant-depth Fishpond
; and the non-variation with position of its local refractive index
;
(define (refindexCO rr)
(sqrt (sqrt 2))
)
; --------------------------------------------------------------------
; --------------------------------------------------------------------
; convert refractive index to epsilon, and make the correct medium
; by uncommenting ONE of the allowed refractive index profile
; function calls in (define (eps rr) ...)
(define (eps rr)
  (* (refindex rr) (refindex rr))   ; uncomment for exact Fishpond
;  (* (refindexSC rr) (refindexSC rr)) ; uncomment for approx SC Fishpond
;  (* (refindexCO rr) (refindexCO rr)) ; uncomment for constant-depth Fishpond
)
; Definition of the medium f(p)
(define (fmedium p)
(make medium
(epsilon (eps (frr p)))
) )
; ======================================================================
;
; Create the pseudo-Fisheye/Fishpond structure
;
; Create a ring waveguide by two overlapping cylinders - later objects
; take precedence over earlier objects, so we put the outer cylinder first.
; and the inner (air) cylinder second.
(set! geometry
(list
(make cylinder (center 0 0) (height infinity)
(radius (+ r w)) (material metal))
(make cylinder (center 0 0) (height infinity)
(radius r)
(material (make material-function
(material-func fmedium) )))
) )
; ======================================================================
; Set up the PML at the simulation boundaries
;
(set! pml-layers (list (make pml (thickness dpml))))
(set-param! resolution 10)
; ======================================================================
;
; SOURCES: Put a single point source on the y-axis at x=7.20um
;
;
(set! sources
(list
(make source
(src (make gaussian-src (frequency 0.333333) (width 3))) ; was 33
(component Ez) (center 5.00 0.00) ; was 7.2
)
)
)
; ======================================================================
;
; Run the simulation
;
;
(run-until 180
(at-beginning output-epsilon)
(to-appended "I"
(at-every 0.125 (synchronized-magnetic output-tot-pwr))
)
(to-appended "ez"
(at-every 0.125 output-efield-z))
)
\end{verbatim}
\end{widetext}
\section{Introduction}
\label{sec:introduction}
Present cosmological and astrophysical observations clearly depict
a universe dominated by ``dark'' components: dark matter accounts
for $25 \%$ and dark energy for $71 \%$, while ordinary baryonic
matter contributes only the remaining $4 \%$
\citep{Komatsu10}.
Dark matter has a long history: it was ``introduced'' by
\citep{Zwi33} to solve the problems of the high mass-to-light
ratios of galaxy clusters and of the rotation curves of spiral
galaxies.
Later on, several versions of the so-called cold dark matter
(CDM) model have been built, starting from the assumption that a
large amount of non-baryonic matter, which does not interact with
electromagnetic radiation and is detectable only through its
gravitational interaction with visible matter, could account for
the observations in the framework of standard Newtonian
dynamics. But even if its clustering and distribution properties
are fairly well known at every scale (see \citep{NFW96} for the most used model),
its nature is, up to now, unknown at a fundamental level.
Dark energy has a more recent history: it was ``introduced'' about
ten years ago, when the reconstruction of the Hubble diagram of Type Ia
Supernovae (SNeIa) showed that the expansion of the universe is now
accelerating
\citep{Perlmutter99,Riess04,ast05,clo05}. As the quantity of
available cosmological data has grown (measurements of cluster
properties such as the mass, the correlation function and the evolution
with redshift of their abundance \citep{eke98,vnl02,bach03,bb03};
the already mentioned Hubble diagram of SNeIa; optical surveys
of large scale structure \citep{pope04,cole05,eis05}; the
anisotropies in the cosmic microwave background
\citep{Boom,WMAP,Komatsu10}; the cosmic shear measured from weak
lensing surveys \citep{vW01,refr03} and the Lyman\,-\,$\alpha$
forest absorption \citep{chd99,mcd04}), more evidence has
accumulated for a spatially flat universe with a subcritical matter
content that is undergoing a phase of accelerated expansion.\\
Although the existence of the universe's acceleration has been clearly
established, the nature and the fundamental properties of the
underlying physical mechanism remain essentially unknown,
notwithstanding the great theoretical efforts made up to now. By
\textit{simply} adding a constant, the cosmological constant $\Lambda$
\citep{CarLam,Sahni}, to the dynamical equations of the universe
in the context of the CDM model, a new model was defined, the
$\Lambda$CDM, which quickly became the {\it
consensus model} because it provides a good fit to most of the
data \citep{Teg03,Sel04,sanch05}, giving a reliable snapshot of the
universe observed today. Nevertheless, it is affected by serious
theoretical shortcomings that have motivated the search for more
general and alternative candidates, generically referred to as dark
energy. Such models range from scalar fields rolling down
self-interaction potentials to phantom fields, from phenomenological
unified models of dark energy and dark matter to alternative
gravity theories \citep{Cap02,PB03,Pad03,Copeland06,CapFra}.
In the last three decades, scalar fields have played an important
role in both cosmology and particle physics \cite{Linde,Binetruy}.
Scalar fields have been postulated as means to explain the early
and late time acceleration of the universe. However, it is almost
always the case that such fields interact with standard matter:
either due to a direct Lagrangian coupling or indirectly through a
coupling to the Ricci Scalar or as the result of quantum loop
corrections. Both for inflation in the early universe and for dark
energy, such couplings can lead to problems. In inflation, for
example, couplings might destroy the flatness of the potential
needed to drive a period of inflation. If there are scalar fields
which permeate the universe today and have non-zero couplings to
matter, then they would induce an additional force in nature. If
the scalar field self-interactions are negligible, then the
experimental bounds on such a field are very strong: either the
couplings to matter are much smaller than gravity, or the scalar
fields are very heavy, so that they mediate a short-ranged
interaction.
A certain class of theories has been proposed in which the
scalar field properties depend on the environment: its mass
depends on the local environmental density. These are the so-called
{\it Chameleon Field Theories}, proposed by \cite{Khoury04}, which employ a combination of
self-interaction and couplings to matter of the scalar field to
evade the most restrictive of the current gravity bounds. In these
models a scalar field couples to matter with gravitational
strength, in harmony with general expectations from string theory,
whilst, at the same time, remaining relatively light on
cosmological scales. It was found that local gravity constraints
are (roughly) satisfied as long as the mass-scale of the potential
satisfies $M \lesssim (1\,\mathrm{mm})^{-1}$. This coincides with the scale
associated with the late-time acceleration of the universe, and it
is surprising that it should come from local experiments.
Chameleon models have been subject to many studies, from
laboratory experiments up to cosmological probes
\citep{c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,Mota:2008ne,c15}.
Lately, chameleon models were also studied in the context of
Modified Gravity, in particular the so-called $f(R)$-gravity
\citep{fr1,fr2,fr3}. In such a case, good results of general
relativity, according to the standard probes, are reproduced at
local scales while dark energy (accelerating behavior) and dark
matter (dynamical clustering) effects are reproduced at larger
scales. The major issue is to select suitable $f(R)$-models
capable of matching the density profiles at the various
gravitational scales, as discussed e.g. in \citep{fr3}.
Chameleon fields evade tight gravity constraints via the so-called
chameleon mechanism: the field-generated force, a
sort of {\it fifth force}, becomes short-ranged in highly dense
regions and long-ranged in low density regions. This feature
implies that at different astrophysical scales the fifth force
felt by matter is suppressed or enhanced according to the
local astrophysical density.
In this work we investigate whether there is evidence for
a coupling between baryonic matter and a massive scalar field which could mimic and replace
the contributions of a possible dark matter component.
In particular, motivated by chameleon models, we aim to verify if it is
possible to observationally detect a scalar field whose mass (or, equivalently,
interaction length) and coupling may change with scale, matching different astrophysical
observations. In this case, by astrophysical observations, we mean data from SNeIa, low surface
brightness (LSB) dwarf galaxies and, finally, clusters of galaxies. The range of scales
is very wide and we are adopting photometric and spectroscopic data
to probe the mechanism at different redshifts.\\
We want to stress here that we are just interested in investigating whether the data
would favor a model where baryons may be coupled to a scalar field whose mass
and coupling change with scale. If such a change is due to a chameleon
mechanism (associated to a local-density variation) or due to some other mechanism,
that is not the main concern in this article and so we will not compute specific predictions for any
particular model.
Chameleon models are highly non-linear, so that, in principle, no superposition
principle holds for such a field. Because of this, it is extremely difficult to
build the gravitational potential of an extended astrophysical system such as the ones we analyse in this work.
Fortunately, we are not specifically investigating chameleon
models; we are here studying a general coupled scalar field model and asking
the data what are the preferred values of its mass and coupling at different
scales. In order to compute an extended gravitational potential (in a
non-linear case as the chameleon model would be), we would need to properly
study the non-linear regime of structure formation within this type of models.
But this is well beyond the scope of this article. Although it is of
utmost importance to set up N-body simulations with heavy scalar field models,
such a task is extremely complex. In fact, to date most N-body
simulations assume light scalar fields which do not cluster at small scales.
That is obviously not the case of chameleon fields and such investigation is
not considered in the present paper.
The article is organized as follows: in
\S~(\ref{sec:scalar_theory}) we give a brief but exhaustive
summary of the main properties of the scalar field theory. In
\S~(\ref{sec:obs_data}) we describe in detail the
astrophysical and cosmological data used and the theoretical model
defined for each of them. In \S~(\ref{sec:results}) we present our
results, with a discussion of their implications for a more general
and comprehensive theory of gravity. Conclusions are drawn in
\S~(\ref{sec:conclusions}).
\section{The Scalar field theory}
\label{sec:scalar_theory}
\begin{figure*}
\centering
\includegraphics[width=80mm]{cham_pot1fig11.eps}
\includegraphics[width=80mm]{cham_pot2fig12.eps}
\caption{Chameleon potential in low and high local density environment.\label{fig:cham_pot}}
\end{figure*}
A general action governing the dynamics of the chameleon (scalar) field
$\phi$ can be of the form:
\begin{eqnarray}\label{eq:action_cham}
S &=& \int {\mathrm{d}}^{4} x \sqrt{-g} \left\{ \frac{M_{Pl}^{2}}{2} {\mathcal{R}}
- \frac{1}{2} (\partial \phi)^{2} - V(\phi) \right\} \nonumber \\
&-& \int {\mathrm{d}}^{4}x \, {\mathcal{L}}_{m}(\psi_{m}^{(i)},
g_{\mu \nu}^{i}) \; ,
\end{eqnarray}
where $g$ is the determinant of the metric $g_{\mu\nu}$,
$\mathcal{R}$ is the Ricci scalar, $\psi_{m}^{(i)}$ are the
various matter fields and ${\mathcal{L}}_{m}$ is the Lagrangian
density of ordinary matter.\\
In the expression for the reduced Planck mass, $M_{Pl} \equiv (8
\pi G_{\ast})^{-1/2}$, $G_{\ast}$ is the bare gravitational
constant and differs from the usually measured one.\\
This can be better understood if we consider the most general case
of a scalar-tensor model, whose action (Eq.~\ref{eq:action_cham} is a particular case
of this) is \citep{Esposito01}:
\begin{eqnarray}\label{eq:scal-tens-action}
S &=& \frac{1}{16 \pi G_{\ast}} \int {\mathrm{d}}^{4}x \sqrt{-g} \left\{ F(\phi) {\mathcal{R}}
- Z(\phi) (\partial \phi)^{2} \right. \nonumber \\
&-& \left. V(\phi)\right\} - \int d^{4}x \,
{\mathcal{L}}_{m}(\psi_{m}^{(i)}, g_{\mu \nu}^{i}) \; .
\end{eqnarray}
$f(R)$-gravity models too can be enclosed in this general case
since it is straightforward to show that, by suitable
manipulations, they are a subclass of Eq.~(\ref{eq:scal-tens-action});
see for example \citep{CapFra}. From this action one can naively
define Newton's gravitational constant as:
\begin{equation}\label{eq:Geff_GF}
G_{N} \doteq \frac{G_{\ast}}{F} \, ,
\end{equation}
but $G_{N}$ does not have the same physical meaning as Newton's
gravitational constant in general relativity. The actual Newtonian
force measured in a Cavendish-type experiment between two test
masses will experience an effective coupling constant
\begin{equation}\label{eq:G_scal}
G_{eff} = \frac{G_{\ast}}{F} \left\{1+ \alpha(\phi)\right\} \doteq G_{N} \left\{1+ \alpha(\phi)\right\} \, ,
\end{equation}
where the term $G_{\ast}/F$ is due to the (average) exchange of
gravitons between the two bodies, while $G_{\ast}/F \cdot
\alpha(\phi)$ comes from the exchange of a scalar particle between
them, with the analytical expression for $\alpha$ depending on the
particular scalar theory one considers. It is also clear from this
expression, that what in general relativity is a \textit{true}
constant, now becomes a possible function of time and radius, so
the use of the term \textit{``constant''} is quite inappropriate.
When we speak about a \textit{varying gravitational constant},
we always refer to the expression for the effective scalar-field
gravitational constant.
The key ingredient in studying the cosmological dynamics of a
field, when it supports a chameleon mechanism, is that the scalar
field feels both a potential, $V(\phi)$, and a coupling to matter
depending on $\rho$, the local density of matter, and on the
coupling constant, $\beta$. At the end the field dynamics is
governed by an effective potential,
\begin{equation}\label{eq:V_cham}
V_{eff}(\phi) = V(\phi) + \sum_{i} \rho_{i} e^{\beta_{i} \phi /
M_{Pl}} \; .
\end{equation}
If $V(\phi)$ is a run-away potential and $\beta_{i}>0$, the
effective potential has a minimum at $\phi_{min}$ satisfying the
condition:
\begin{equation}
V_{,\phi}(\phi_{min}) + \sum_{i} \frac{\beta_{i}}{M_{Pl}} \rho_{i}
e^{\beta_{i} \phi_{min} / M_{Pl}} = 0 \; ,
\end{equation}
and the effective mass of the field of small perturbations about
the minimum is
\begin{equation}
m^2 = V_{,\phi\phi}^{eff} = V_{,\phi\phi}(\phi_{min}) + \sum_{i}
\frac{\beta_{i}^2}{M_{Pl}^2} \rho_{i} e^{\beta_{i} \phi_{min} / M_{Pl}} \; .
\end{equation}
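A minimal numerical sketch of this density dependence, assuming an illustrative Ratra-Peebles run-away potential $V(\phi) = M^{5}/\phi$, units $M_{Pl} = M = 1$, a single matter component, and the regime $\beta \phi / M_{Pl} \ll 1$ (so the exponential factors are $\approx 1$ and the minimum has a closed form):

```python
import math

# Sketch of the chameleon mechanism for the assumed run-away potential
# V(phi) = 1/phi (Ratra-Peebles with n = 1, in units M_Pl = M = 1).
# With exp(beta*phi/M_Pl) ~ 1, the minimum condition V'(phi) + beta*rho = 0
# gives phi_min = sqrt(1/(beta*rho)), and the effective mass is
# m^2 = V''(phi_min) + beta**2 * rho = 2/phi_min**3 + beta**2 * rho.

beta = 1.0   # illustrative coupling

def phi_min(rho):
    return math.sqrt(1.0 / (beta * rho))

def mass(rho):
    p = phi_min(rho)
    return math.sqrt(2.0 / p**3 + beta**2 * rho)

low, high = 1e-6, 1e+2           # "cosmic" vs "laboratory" density
assert phi_min(high) < phi_min(low)   # minimum shifts to smaller phi
assert mass(high) > mass(low)         # field gets heavier in dense regions
```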
So the effective potential depends on the matter density and both
the minimum in potential and the mass of the scalar field are
function of the local density, as shown in Fig.~(\ref{fig:cham_pot}).
As density increases, the minimum
in potential shifts to smaller values of the field and the mass of
small fluctuations increases. This last property, in particular,
makes the chameleon field able to satisfy the constraints from
laboratory tests of the principle of equivalence, because in high
densities environment, such as terrestrial laboratories, the field
can be heavy enough so to evade them.
It must be stressed, however, that even with such a mechanism, it
is very difficult to build a theory with a late time cosmology
observationally indistinguishable from the standard $\Lambda$CDM
model.
Another interesting consequence of this model comes out when
considering linear perturbations of matter and their related
equation. In the most general case of a scalar-tensor theory \citep{Brax04}
we have:
\begin{equation}\label{eq:pert_eq}
\delta_{c}'' + a H \delta_{c}' = \frac{3}{2} a^2 H^2 \left[ 1 +
\frac{2 \beta^2}{1+a^2 V_{,\phi\phi}/k^2}\right] \delta_{c} \; ,
\end{equation}
where $a$ is the scale factor, $H$ is the Hubble function,
$\delta_{c} = \delta(\rho_{m} e^{\beta \phi/M_{Pl}})$ is the
matter density contrast and, if the field is at the minimum, its
mass is $m^2 = V_{,\phi\phi}$. The quantity in brackets can be
interpreted as the expression for the effective gravitational
constant within the context of massive scalar field models \citep{Gannouji09}:
\begin{equation}\label{eq:G_cham}
G_{eff}(a; \beta, m; k) = G_{N} \left( 1 + 2 \beta^{2}
\frac{\frac{k^{2}}{a^{2} m^{2}}}{1 + \frac{k^{2}}{a^{2} m^{2}}}
\right) \; .
\end{equation}
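The two screening regimes of this expression can be checked directly; in units $G_{N} = 1$ and with illustrative values of $\beta$, $m$ and $a$:

```python
# Limits of the effective gravitational constant of the equation above,
# in units G_N = 1.  For sub-Compton scales (k >> a*m) the full fifth
# force acts and G_eff -> 1 + 2*beta**2; for k << a*m it is screened and
# G_eff -> 1.  The values of beta, m, a are illustrative.

beta, m, a = 0.5, 1.0, 1.0

def g_eff(k):
    x = k**2 / (a**2 * m**2)
    return 1.0 + 2.0 * beta**2 * x / (1.0 + x)

assert abs(g_eff(1e4) - (1.0 + 2.0 * beta**2)) < 1e-6   # unscreened limit
assert abs(g_eff(1e-4) - 1.0) < 1e-6                     # screened limit
```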
In particular the term proportional to $m^2$ results from the
scalar field-mediated force, which is negligible if the physical
length scale of the perturbation is much larger than the range of
the chameleon-mediated force, namely, if $a/k \gg m^{-1}$. In this
case the right hand side of Eq.~(\ref{eq:pert_eq}) is well
approximated by $3 a^{2} H^{2} \delta_{c}/2$ and the matter
fluctuations grow as in general relativity.
Anyway, chameleon theories (i.e. those based on Eq.~\ref{eq:V_cham}) do not behave like linear
theories of massive scalar fields (as anticipated in the introductory section) when massive bodies are involved.
Varying the action Eq.~(\ref{eq:action_cham}) with respect to the chameleon (scalar)
field $\phi$ in a spherically symmetric spacetime gives:
\begin{equation}
\frac{{\mathrm{d}}^2 \phi}{{\mathrm{d}} r^2} + \frac{2}{r}
\frac{{\mathrm{d}} \phi}{{\mathrm{d}} r} = \frac{{\mathrm{d}} V_{eff}}{{\mathrm{d}} \phi} \; ,
\end{equation}
Because the effective potential for $\phi$ changes in different
density environments, this differential equation is highly non-linear.
These non-linear features have been studied in \citep{Khoury04},
where it was found that for a spherically symmetric object of mass
$M_{c}$ and radius $R_{c}$ surrounded by a gas of asymptotic density
$\rho_{\infty}$, the profile of the field is governed by the so-called
``thin-shell'' parameter,
\begin{equation}
\Delta = \frac{\mid \phi_{min}^{\infty} - \phi_{min}^{c} \mid}{6 \beta M_{Pl} \Phi_{c}} \; ,
\end{equation}
where $\phi_{min}^{\infty}$ and $\phi_{min}^{c}$ are the minima of the
effective potential outside and inside the object respectively and
$\Phi_{c} = M_{c}/8 \pi M_{Pl}^2 R_{c}$ is the Newtonian potential
at the surface of the object. Thus $\Delta$ is the ratio of the
difference in $\phi$ potential to the Newtonian potential and
quantifies how perturbing the object is for the $\phi$ field.
If $\Delta$ is large, which happens for small objects, then the
external (to the object) profile of $\phi$ is the usual Yukawa profile,
\begin{equation}
\phi(r) = - \left( \frac{\beta}{4 \pi M_{Pl}} \right) \frac{M_{c} \, e^{-m_{\infty} r}}{r} + \phi_{min}^{\infty} \; ,
\end{equation}
where $m_{\infty}$ is the mass of the scalar field. For large and
compact objects, $\Delta$ is small and the Yukawa profile is
suppressed by a factor of $\Delta$. The term ``thin-shell''
comes from the fact that only a thin outer shell of the object
contributes to the external Yukawa profile. A discussion of this
issue in the framework of $f(R)$-gravity is in \citep{fr3}.\\
In summary the main ingredients of our model are:
\begin{itemize}
\item a massive scalar field coupled with ordinary observable
baryonic mass;
\item its mass $m$, or its interaction length $L \propto m^{-1}$;
\item its coupling constant with baryonic mass, $\beta$.
\end{itemize}
\subsection{Modified gravitational potential}
\label{sec:modified_potential}
Taking the inverse Fourier transform of Eq.~(\ref{eq:G_cham}) it
is straightforward to obtain the corresponding expression of the
gravitational potential for a point mass distribution, $\psi(r)$.
Remembering that a potential $\propto \frac{1}{r}$ in real space
yields a $k^{-2}$ term in Fourier space, we can recognize in
Eq.~(\ref{eq:G_cham}) the point-like gravitational potential per
unit mass:
\begin{eqnarray}\label{eq:grav_pot_point}
\psi (r) &=& -\frac{G}{r} \left( 1 + 2 \beta^2 e^{-m r}\right) \nonumber \\
&=& -\frac{G}{r} \left( 1 + 2 \beta^2 e^{-r/L}\right) \; ,
\end{eqnarray}
\end{eqnarray}
where $m$ is the mass of the scalar field, $L \propto m^{-1}$
is the interaction range of the modified gravitational potential,
i.e. the length where the scalar field is effective, and
$\beta$ still being the coupling constant between matter and the
scalar field. The gravitational potential given in
Eq.~(\ref{eq:grav_pot_point}) is a point-like one, so that we have
to generalize it to extended systems if we want to use it for
clusters of galaxies and LSB galaxies. As we will discuss later,
we are going to model galaxy clusters as spherically symmetric
systems: we simply consider the system composed by many
infinitesimal mass elements each one contributing with a
point-like gravitational potential. Then, summing up all terms,
namely integrating them on a spherical volume, we obtain a
suitable potential. Specifically, we have to solve the integral:
\begin{equation}
\Psi(r) = \int_{0}^{\infty} r'^{2} dr' \int_{0}^{\pi} \sin \theta'
d\theta' \int_{0}^{2\pi} d\omega' \, \rho(r') \, \psi(r') \; .
\end{equation}
We make explicit that, with an abuse of notation, we write
$r'$ inside the point-like potential, while it should be replaced
by $|\vec{\mathbf{x}}-\vec{\mathbf{x}}'| = (r^{2}+r'^{2}-2rr' \cos
\theta')^{1/2}$.
The point-like potential can be split in two terms. The
\textit{Newtonian} component is:
\begin{equation}
\psi_{N}(r) = -\frac{G M}{r} \; ,
\end{equation}
and its extended integral is the well-known expression:
\begin{equation}
\Psi_{N}(r) = -\frac{G M(<r)}{r} \; ,
\end{equation}
where $M(<r)$ is the mass enclosed in a sphere with radius $r$.
The \textit{correction} term coming from the scalar field is:
\begin{equation}
\psi_{C}(r) = -\frac{G M}{r} \left(2 \beta^2 e^{-\frac{r}{L}}
\right) \; ;
\end{equation}
from the integration of the angular part, we have:
\begin{eqnarray}
\Psi_{C}(r) &=& - 2 \pi G \, (2 \beta^2 L) \int_{0}^{\infty}
{\mathrm{d}}r' r' \rho(r') \cdot \nonumber \\
&\cdot& \frac{e^{-\frac{|r-r'|}{L}} -
e^{-\frac{r+r'}{L}}}{r} \; .
\end{eqnarray}
The radial integral is numerically estimated once the mass
density is given. A fundamental difference between such a term and
the Newtonian one is that, while in the latter the matter outside
the spherical shell of radius $r$ does not contribute to the
potential, in the former the external matter takes part in the
integration procedure, even if its contribution is really
negligible in most cases. \\
At the end, the total potential of the spherical mass distribution
will be:
\begin{equation}\label{eq:total corrected potential}
\Psi(r) = \Psi_{N}(r) + \Psi_{C}(r) \; .
\end{equation}
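As an illustration of how the correction term can be evaluated in practice, the sketch below integrates $\Psi_{C}(r)$ for a uniform sphere with a simple trapezoidal rule (units $G = M = 1$; the values of $\beta$, $L$ and $R$ are illustrative) and checks the expected point-mass limit far from the source:

```python
import math

# Numerical sketch of the extended correction term Psi_C(r) for a uniform
# sphere of radius R and total mass 1 (G = 1).  The radial integral is
# evaluated with a trapezoidal rule; beta, L, R are illustrative values.

beta, L, R = 0.5, 5.0, 1.0
rho0 = 1.0 / (4.0 / 3.0 * math.pi * R**3)    # uniform density, total mass 1

def rho(rp):
    return rho0 if rp <= R else 0.0

def psi_c(r, n=4000):
    h = R / n
    total = 0.0
    for i in range(n + 1):
        rp = i * h
        f = rp * rho(rp) * (math.exp(-abs(r - rp) / L)
                            - math.exp(-(r + rp) / L)) / r
        total += f if 0 < i < n else 0.5 * f   # trapezoid end-point weights
    return -2.0 * math.pi * (2.0 * beta**2 * L) * total * h

# Far outside the sphere the correction approaches the point-mass value
# -(2 beta^2) exp(-r/L) / r, up to small finite-size terms:
r = 20.0
point = -(2.0 * beta**2) * math.exp(-r / L) / r
assert abs(psi_c(r) - point) / abs(point) < 0.01
```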
As we will show below, for our purpose we need the gravitational
potential derivative with respect to the variable $r$; this may
not be evaluated analytically so we estimate it
numerically, once we have given an expression for the mass density
$\rho(r)$. While the Newtonian term gives the simple expression:
\begin{equation}
\frac{{\mathrm{d}}\Psi_{N}}{{\mathrm{d}}r}(r) = \frac{G
M(<r)}{r^{2}} \; ,
\end{equation}
the derivative of the corrective potential term is more involved.
We do not give it explicitly for the sake of brevity, but only
note that it is an integral function of the form
\begin{equation}
{\mathcal{F}}(r) = \int_{\alpha(r)}^{\beta(r)} dr' \ f(r,r')
\; ;
\end{equation}
from it one has: {\setlength\arraycolsep{0.2pt}
\begin{eqnarray}
\frac{{\mathrm{d}}{\mathcal{F}}(r)}{\mathrm{d}r} &=&
\int_{\alpha(r)}^{\beta(r)} dr'
\frac{{\mathrm{d}}f(r,r')}{{\mathrm{d}}r} - f(r,\alpha(r))
\frac{{\mathrm{d}}\alpha}{{\mathrm{d}}r}(r)+ \nonumber \\
&+& f(r,\beta(r)) \frac{{\mathrm{d}}\beta}{{\mathrm{d}}r}(r) \; .
\end{eqnarray}}
Such an expression is numerically derived once the integration
extremes are given.\\
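The differentiation-under-the-integral formula above can be verified numerically on a toy integrand; here $f(r,r') = r\,r'^2$ with $\alpha(r) = 0$ and $\beta(r) = r$, a deliberately simple choice for which ${\mathcal{F}}(r) = r^4/3$ exactly:

```python
# Numerical check of the Leibniz rule above, for the toy choice
# f(r, r') = r * r'**2 with alpha(r) = 0 and beta(r) = r, so that
# F(r) = r**4 / 3 and dF/dr = 4 r**3 / 3 exactly.

def dF_leibniz(r, n=2000):
    # integral of df/dr = r'**2 over [0, r], trapezoidal rule
    h = r / n
    integral = sum((i * h) ** 2 * (0.5 if i in (0, n) else 1.0)
                   for i in range(n + 1)) * h
    boundary = r * r**2 * 1.0      # f(r, beta(r)) * dbeta/dr, beta(r) = r
    return integral + boundary     # dalpha/dr = 0, so no lower-limit term

r = 1.7
assert abs(dF_leibniz(r) - 4.0 * r**3 / 3.0) < 1e-4
```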
For spiral galaxies, we have the same theoretical apparatus but a
different geometric configuration. Matter in spiral galaxies is
generally modeled as distributed in a thin axisymmetric disk,
so that the extended gravitational potential is given by:
\begin{equation}
\Psi(r, z) = \int_{0}^{\infty} R' {\mathrm{d}}R' \int_{0}^{2\pi}
{\mathrm{d}}\omega' \hspace{0.1cm} \psi(R',z')\,.
\end{equation}
Even in this case, to be more precise, the pair of variables
$(R',z')$ inside the point-like potential should be replaced by
$|\vec{\mathbf{x}}-\vec{\mathbf{x}}'| = (R^{2}+R'^{2}-2RR' \cos
\omega'+(z-z')^2)^{1/2}$. Once the gravitational potential is
given, the rotation curve for the disk can be easily computed
starting from the relation \citep{Binney}:
\begin{equation}\label{eq:rot_vel}
v_{c}^2 = R \frac{{\mathrm{d}} \Psi(R,z)}{{\mathrm{d}}
R}\bigg{|}_{z=0} \; .
\end{equation}
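For the point-mass version of the modified potential this derivative can be done in closed form, which gives a useful check on any numerical differentiation scheme; a sketch in units $G = M = 1$, with illustrative $\beta$ and $L$:

```python
import math

# Circular speed from the modified point-mass potential (G = M = 1):
# v_c^2 = R dpsi/dR with psi(R) = -(1/R)(1 + 2 beta^2 exp(-R/L)) gives,
# analytically, v_c^2 = (1/R)[1 + 2 beta^2 (1 + R/L) exp(-R/L)].
# A sketch only: a real disc needs the full 2-D integral of the text.

beta, L = 0.5, 10.0   # illustrative values

def vc2_analytic(R):
    return (1.0 / R) * (1.0 + 2.0 * beta**2 * (1.0 + R / L)
                        * math.exp(-R / L))

def vc2_numeric(R, h=1e-6):
    # central finite difference of the potential, as done numerically
    # for the extended systems in the text
    psi = lambda x: -(1.0 / x) * (1.0 + 2.0 * beta**2 * math.exp(-x / L))
    return R * (psi(R + h) - psi(R - h)) / (2.0 * h)

for R in (0.5, 2.0, 8.0):
    assert abs(vc2_numeric(R) - vc2_analytic(R)) < 1e-6
```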
It is important to underline that, both in the case of clusters of
galaxies and in that of LSB galaxies, we have to take derivatives,
with respect to the distance $r$ from the center
of the system, of the numerically derived gravitational
potential. To be completely rigorous we should add to all the previously
written relations a term coming from the derivative of the \textit{function}
$\beta(\rho) \sim \beta(r)$. In our scalar-field approach we have treated the coupling parameter as a
\textit{constant} throughout, while in the definition of the chameleon mechanism it is
also possible, and not trivial, that it may depend on the local density of the gravitational
system one is going to consider. At the same time, we do not know the
possible analytical behavior of this quantity or, better, it is
one of our purposes to try to reconstruct it. It is also evident that,
if we want to use the previous relations in the form shown above,
we are implicitly assuming that $\beta$ satisfies one of two different
scenarios: \textit{a.} it is truly constant, in which case one expects
not to detect any change in it when comparing different gravitational
scales; or (as we will show to be our case) \textit{b.} it is a function
of the gravitational scale, but its derivative is supposed to be
negligible, i.e. ${\mathrm{d}}\beta/{\mathrm{d}}r \approx 0$.
\subsection{Modified distance modulus}
\label{sec:modified_distance}
In \citep{Brax04} the Friedmann equation is derived from the action governing
the dynamics of the chameleon field $\phi$ in the Jordan frame:
\begin{equation}\label{eq:fried_cham}
3 H^{2} M_{Pl}^{2} = \rho_{m} e^{\beta \phi / M_{Pl}} +
\frac{1}{2} \dot{\phi}^{2} + V(\phi) + \rho_{r} \, ,
\end{equation}
with contribution from matter, radiation and the scalar field.
Making explicit the expression for the Planck mass, Eq.~(\ref{eq:fried_cham}) becomes:
{\setlength\arraycolsep{0.2pt}
\begin{equation}\label{eq:fried_gannouji}
3 H^{2} = 8 \pi G_{\ast} \left[ \rho_{m} e^{\beta \phi / M_{Pl}} +
\left( \frac{1}{2} \dot{\phi}^{2} + V(\phi) \right) + \rho_{r} \right] \; .
\end{equation}}
We can show that, in the chameleon case, this equation easily reduces to
the most general expression for a given scalar field $\phi$.\\
If we assume that the chameleon field $\phi$ is in the minimum of the
effective potential from the early stages of the universe on, then
we have $\phi/M_{Pl} \ll 1$ \citep{Gannouji09} during the
subsequent evolution until today. This also means that $e^{\beta
\phi / M_{Pl}} = 1$ to very high accuracy, so this factor drops out of
the equations and does not have to be considered here. Then,
considering that the function $F$ appearing in Eq.~(\ref{eq:Geff_GF})
is equal to unity in the scalar field case, we also have $G_{N} =
G_{\ast}$. At the end the Friedmann equation is completely equal
to the usual expression: {\setlength\arraycolsep{0.2pt}
\begin{equation}
3 H^{2} = 8 \pi G_{N} \left[\rho_{m} + \left( \frac{1}{2}
\dot{\phi}^{2} + V(\phi) \right) + \rho_{r} \right],
\end{equation}}
and finally, neglecting the radiation contribution and denoting the
scalar field with the suffix $Sc$:
\begin{equation}
h^{2}(z) = \Omega_{m,0} (1 + z)^{3} + \Omega_{Sc,0} \epsilon(z) \; ,
\end{equation}
where:
{\setlength\arraycolsep{0.2pt}
\begin{eqnarray}
h(z) & \doteq & \frac{H(z)}{H_{0}} \nonumber \\
\Omega_{m,0} & \doteq & \frac{8 \pi G_{N}}{3 H_{0}^{2}} \rho_{m,0} \\
\Omega_{Sc,0} & \doteq & \frac{8 \pi G_{N}}{3 H_{0}^{2}}
\left(\frac{1}{2} \dot{\phi}^{2} + V(\phi) \right)\Bigg{|}_{z=0} =
\frac{8 \pi G_{N}}{3 H_{0}^{2}} \rho_{Sc,0} \nonumber
\end{eqnarray}}
The function $\epsilon(z)$ is unknown, but one knows (assumes) that the
scalar field works like a cosmological constant on cosmological scales,
so we may choose it to be constant in redshift, or one can use more
general and extended models such as the Chevallier-Polarski-Linder (CPL)
parametrization \citep{Chevallier01,Linder03} usually used to
\textit{phenomenologically} describe dark energy fluids.
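For concreteness, the normalized Hubble rate with a CPL-parametrized $\epsilon(z)$ can be sketched as follows (a minimal Python sketch; the function names and the flatness assumption $\Omega_{m,0} + \Omega_{Sc,0} = 1$ are ours):

```python
import math

def epsilon_cpl(z, w0=-1.0, wa=0.0):
    # CPL dark-energy density ratio rho_Sc(z)/rho_Sc(0) for
    # w(z) = w0 + wa * z/(1+z); w0=-1, wa=0 gives a constant, Lambda-like epsilon = 1
    return (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * z / (1.0 + z))

def hubble_ratio(z, omega_m0=0.3, w0=-1.0, wa=0.0):
    # h(z)^2 = Omega_m0 (1+z)^3 + Omega_Sc0 * epsilon(z), assuming spatial flatness
    omega_sc0 = 1.0 - omega_m0
    return math.sqrt(omega_m0 * (1.0 + z) ** 3 + omega_sc0 * epsilon_cpl(z, w0, wa))
```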
Although we cannot discriminate between $\Lambda$CDM and a scalar
field scenario by using $h(z)$ alone, we do have a discriminating tool
in the distance modulus, the main observable quantity derivable from
SNeIa observations, once its usual expression is modified by assuming
that the gravitational constant can vary with time:
\begin{eqnarray}\label{eq:mod_dist_cham}
\mu(z; \beta, m; k) &=& 5 \log \left( (1+z) \int_{0}^{z}
\frac{{\mathrm{d}}z}{h(z)}\right) + \mu_{0} \nonumber \\
&+& \frac{15}{4} \log \frac{G_{eff}(z; \beta, m; k)}{G_{eff}(0; \beta, m; k)} \, .
\end{eqnarray}
In this expression there is an additional term given by the ratio
between the value of the effective gravitational constant at any
redshift and the same quantity evaluated at present ($z=0$). As
accurately described in \citep{Riazuelo02}, a time-varying
gravitational constant can affect light curves from SNeIa by
changing both the thermonuclear energy release, since the
luminosity at the maximum in the light curve is proportional to
the mass of synthesized nickel, and the time scale of stellar
explosion. This means that by using Eq.~(\ref{eq:mod_dist_cham})
we are going to test the scalar field mechanism on cosmological scales,
in particular considering the role that a possible variation of the
effective gravitational constant plays in the acceleration history of
the Universe.
In the scalar field theory one has an analytical expression for the
effective gravitational constant, i.e. Eq.~(\ref{eq:G_cham}),
which we have recast in the following form to make all the results
uniform:
\begin{equation}\label{eq:G_sn}
G_{eff}(z; \beta, L; \lambda) = G_{N} \left( 1 + 2 \beta^{2}
\frac{(1+z)^2 L^2}{(1+z)^2 L^2 + \lambda^2} \right) \; ,
\end{equation}
which depends on the following variables:
\begin{itemize}
\item the redshift $z$ ($a = 1/(1+z)$);
\item the wavenumber $k$ (or the length $\lambda \propto k^{-1}$), which one can fix or vary on a grid;
\item the intrinsic model parameters, i.e., the coupling constant $\beta$ and the interaction length $L \propto
m^{-1}$, which can be constrained with a fitting procedure.
\end{itemize}
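A hedged sketch of Eq.~(\ref{eq:G_sn}) and of the resulting correction term in Eq.~(\ref{eq:mod_dist_cham}) follows; the function names are ours, and only the two relations written above are assumed:

```python
import math

def g_eff_ratio(z, beta, L, lam):
    # Eq. (G_sn): G_eff/G_N = 1 + 2 beta^2 (1+z)^2 L^2 / ((1+z)^2 L^2 + lambda^2)
    x = (1.0 + z) ** 2 * L ** 2
    return 1.0 + 2.0 * beta ** 2 * x / (x + lam ** 2)

def mu_geff_correction(z, beta, L, lam):
    # (15/4) log10 [G_eff(z)/G_eff(0)], the extra term in the distance modulus
    return 3.75 * math.log10(g_eff_ratio(z, beta, L, lam) / g_eff_ratio(0.0, beta, L, lam))
```

In the limit $\beta \to 0$ (or $L \to 0$) the correction vanishes and the standard distance modulus is recovered.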
\section{Observational data}
\label{sec:obs_data}
We are going to test the scalar field mechanism on three
different scale ranges:
\begin{itemize}
\item on cosmological scales, by means of supernovae luminosity distance;
\item on a Mpc-astrophysical scale, using mass profiles of clusters of galaxies;
\item on a kpc-astrophysical scale, analyzing rotation curves from spiral galaxies.
\end{itemize}
For each of them, we have taken the necessary sample data from the literature.
\subsection{Cosmological scale: Supernovae}
\label{sec:SNdata}
SNeIa are useful because the expression of the distance modulus can
easily be modified for more general theories with a varying
gravitational constant (as in the scalar field case). Moreover, the
distance modulus is the main observable quantity derivable from this
kind of astrophysical objects. In addition, the availability of data
ranging up to redshift values much larger than those from galaxies or
clusters of galaxies ($z \approx 2$) makes it possible to test and
verify a possible temporal variation of the gravitational constant (if
there is any) and thus a possible alternative gravity scenario. It
is interesting to underline that with the modified expression of
the distance modulus we can also verify the coexistence, at the
same time, of both \textit{dark energy} and \textit{dark matter},
both explained as different consequences, on different scales, of
the same unified framework, namely the scalar field. In
fact, the expression for the distance modulus that we are going to
describe in the next sections employs both a term with a
\textit{dark energy-modeled fluid}, arising from nothing else than
the effective behavior of the scalar field on cosmological
scales (which does not differ much from the cosmological constant
behavior), and a term acting as a \textit{dark matter-modeled
component}, arising from the scalar field at work on
gravitational scales smaller than the cosmological one.
In this sense, we have also explored how the scalar field mechanism can
mimic dark matter profiles in clusters and spiral galaxies.
We use the \textit{Constitution} sample described in \citep{Hicken09}, which is a data set obtained by combining
the Union data set of \citep{Kowalski08} with $90$ new nearby
objects from the CfA3 release described in \citep{Hicken09A}.
The Union SNeIa compilation is a data set of low-redshift
nearby-Hubble-flow SNeIa built with new analysis procedures
for working with several heterogeneous SNeIa compilations. It
includes $13$ independent sets with SNe from the SCP, the High-z
Supernovae Search (HZSNS) team, the Supernovae Legacy Survey and
the ESSENCE Survey, the older data sets, as well as the recently
extended data set of distant supernovae observed with HST. After
various selection cuts were applied in order to create a
homogeneous and high-signal-to-noise data set, one is left with $307$
SNeIa events distributed over the redshift interval $0.15 \leq z
\leq 1.55$.
The CfA3 data set is originally made of 185 multi-band optical
SNeIa light curves taken at the F.L. Whipple Observatory of the
Harvard-Smithsonian Center for Astrophysics (CfA); 90 of the
original 185 objects pass the quality cuts of \citep{Kowalski08}
and are added to the Union data set to form the Constitution
one.
The statistical analysis of Constitution SNeIa sample rests on the
definition of the distance modulus given in
Eq.~(\ref{eq:mod_dist_cham}). The best fits were obtained by
minimizing the quantity
\begin{equation}\label{eq: sn_chi}
\chi^{2}_{\mathrm{SN}}(\mu_{0}, \lambda, \{\theta_{i}\}) = \sum^{{\mathcal{N}}}_{j =
1} \frac{(\mu(z_{j}; \mu_{0}, \lambda, \{\theta_{i}\}) -
\mu_{obs}(z_{j}))^{2}}{\sigma^{2}_{\mathrm{\mu}}(z_{j})}
\end{equation}
where ${\mathcal{N}}=397$ is the number of observed SNeIa, $\mu$
is the distance modulus (the observed one, $\mu_{obs}$, and the
theoretical one, $\mu(z_{j}; \mu_{0}, \lambda, \{\theta_{i}\})$),
$\sigma^{2}_{\mathrm{\mu}}$ are the measurement variances and
$\{\theta_{i}\} = \{\beta, L\}$ is the vector of theory parameters.
The nuisance parameter $\mu_{0}$ encodes the Hubble parameter and
the absolute magnitude $M$, and has to be marginalized over.
Given the heterogeneous origin of the Constitution data set, and
the procedures described in \citep{Kowalski08} and \citep{Hicken09}
for reducing the data, we have worked with an alternative version of
Eq.~(\ref{eq: sn_chi}), which consists in minimizing the quantity
\begin{equation}\label{eq: sn_chi_mod}
\tilde{\chi}^{2}_{\mathrm{SN}}(\{\theta_{i}\}) = c_{1} -
\frac{c^{2}_{2}}{c_{3}}
\end{equation}
with respect to the other parameters. Here
\begin{equation}
c_{1} = \sum^{{\mathcal{N}}}_{j = 1} \frac{(\mu(z_{j}; \mu_{0}=0,
\{\theta_{i}\}) -
\mu_{obs}(z_{j}))^{2}}{\sigma^{2}_{\mathrm{\mu}}(z_{j})}\, ,
\end{equation}
\begin{equation}
c_{2} = \sum^{{\mathcal{N}}}_{j = 1} \frac{(\mu(z_{j}; \mu_{0}=0,
\{\theta_{i}\}) -
\mu_{obs}(z_{j}))}{\sigma^{2}_{\mathrm{\mu}}(z_{j})}\, ,
\end{equation}
\begin{equation}
c_{3} = \sum^{{\mathcal{N}}}_{j = 1}
\frac{1}{\sigma^{2}_{\mathrm{\mu}}(z_{j})}\,.
\end{equation}
It is trivial to see that $\tilde{\chi}^{2}_{SN}$ is just a
version of $\chi^{2}_{SN}$, minimized with respect to $\mu_{0}$.
To that end it suffices to notice that
\begin{equation}
\chi^{2}_{\mathrm{SN}}(\mu_{0}, \lambda, \{\theta_{i}\}) = c_{1} - 2 c_{2}
\mu_{0} + c_{3} \mu^{2}_{0} \,
\end{equation}
which clearly becomes minimum for $\mu_{0} = c_{2}/c_{3}$, so that
$\tilde{\chi}^{2}_{\mathrm{SN}} \equiv
\chi^{2}_{\mathrm{SN}}(\mu_{0} = c_{2}/c_{3}, \lambda, \{\theta_{i}\})$. Furthermore,
one can check that the difference between $\chi^{2}_{SN}$ and
$\tilde{\chi}^{2}_{SN}$ is negligible.
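The analytic marginalization through $c_{1}$, $c_{2}$, $c_{3}$ can be checked numerically with the following sketch (NumPy; the array and function names are ours):

```python
import numpy as np

def chi2_sn_marginalized(mu_th, mu_obs, sigma):
    # mu_th must be evaluated with mu_0 = 0; the offset is then marginalized analytically
    d = np.asarray(mu_th) - np.asarray(mu_obs)
    w = 1.0 / np.asarray(sigma) ** 2
    c1 = np.sum(d ** 2 * w)   # chi^2 at mu_0 = 0
    c2 = np.sum(d * w)
    c3 = np.sum(w)
    return c1 - c2 ** 2 / c3  # = min over mu_0 of c1 - 2 c2 mu_0 + c3 mu_0^2
```

A constant offset between model and data is entirely absorbed by $\mu_{0}$, so the marginalized statistic vanishes in that case.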
We minimize the $\chi$-square using the Markov Chain Monte Carlo
(MCMC) method, testing the convergence of the chains with the method
described by \citep{Dunkley05}. The $i\sigma$ confidence levels are
easily estimated from the final samples, using the
$15.87$-th and $84.13$-th percentiles (which define the $68\%$
confidence interval) for $i=1$, the $2.28$-th and $97.72$-th
percentiles (which define the $95\%$ confidence interval) for $i=2$
and the $0.13$-th and $99.87$-th percentiles (which define the
$99\%$ confidence interval) for $i=3$.
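The percentile-based intervals can be extracted from a chain as follows (NumPy; the function name is ours):

```python
import numpy as np

def confidence_levels(chain):
    # 1, 2, 3-sigma two-sided intervals from the percentiles of an MCMC sample
    levels = {1: (15.87, 84.13), 2: (2.28, 97.72), 3: (0.13, 99.87)}
    return {i: (float(np.percentile(chain, lo)), float(np.percentile(chain, hi)))
            for i, (lo, hi) in levels.items()}
```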
\subsection{Galaxy Cluster Sample}
\label{sec:cluster}
Clusters of galaxies are uniquely useful tracers of cosmological
evolution and thus unavoidable tests for any alternative theory of
gravity beyond general relativity \citep{Voit05}. They are
fundamental tracers for two main reasons. First of all, they are
the largest gravitational objects whose masses can be adequately
measured, and the largest objects to have undergone gravitational
relaxation and entered into virial equilibrium. Second, clusters
are essentially ``closed boxes'' that retain all their gaseous
mass content because their gravitational potential wells are very deep. The
most accepted paradigm is that clusters of galaxies are mostly
made of collisionless cold dark matter particles (CDM model) and
are virialized systems formed from scale-free Gaussian initial density
perturbations. The CDM paradigm and the related numerical simulations make
clear predictions for the structure of clusters of galaxies;
comparison of these predictions with the results of high-quality
observations is a necessary consistency check, and any significant
deviation can place important constraints not only on the theoretical
model but also on cosmological models and, as in our case, on the
exploration of different gravity theories.
The formalism described in \S~\ref{sec:modified_potential} can be
applied to a sample of $12$ galaxy clusters. We shall use the
cluster sample studied in \citep{Vik05,Vik06} which consists of
$13$ low-redshift clusters spanning a temperature range $0.7\div
9.0\ {\rm keV}$ derived from high quality {\it Chandra} archival
data. In all these clusters, the surface brightness and the gas
temperature profiles are measured out to large radii, so that mass
estimates can be extended up to $r_{500}$ or beyond.
Clusters of galaxies are generally considered self-bound
gravitational systems with spherical symmetry and, if virialized, in
hydrostatic equilibrium. The last two hypotheses are still
widely used, despite the fact that it has been widely proved
that most clusters show more complex morphologies and/or signs of
strong interactions or dynamical
activity, especially in their innermost regions \citep{Chak08,DeFil05}. \\
Under the hypothesis of spherical symmetry in hydrostatic
equilibrium, the structure equation can be derived from the
collisionless Boltzmann equation
\begin{eqnarray}\label{Boltzmann equation}
\frac{{\mathrm{d}}}{{\mathrm{d}}r}(\rho_{gas}(r)
\sigma^{2}_{r}) &+&
\frac{2\rho_{gas}(r)}{r}(\sigma^{2}_{r}-\sigma^{2}_{\theta,\omega})
= \nonumber \\
&=& -\rho_{gas}(r) \cdot \frac{{\mathrm{d}}\Psi(r)}{{\mathrm{d}}r}
\end{eqnarray}
where $\Psi$ is the gravitational potential of the cluster,
$\sigma_{r}$ and $\sigma_{\theta,\omega}$ are the mass-weighted
velocity dispersions in the radial and tangential directions,
respectively, and $\rho_{gas}$ is the gas mass density. For an isotropic
system, it is
\begin{equation}\label{velocity dispersion}
\sigma_{r} = \sigma_{\theta,\omega} \; ;
\end{equation}
the pressure profile can be related to these quantities by
\begin{equation}\label{pressure}
P(r) = \sigma^{2}_{r} \cdot \rho_{gas}(r) \; .
\end{equation}
Substituting Eqs.~(\ref{velocity dispersion})~-~(\ref{pressure})
into Eq.~(\ref{Boltzmann equation}), we have, for an isotropic
sphere,
\begin{equation}\label{isotropic sphere}
\frac{{\mathrm{d}} P(r)}{{\mathrm{d}}r} = - \rho_{gas}(r)
\frac{{\mathrm{d}}\Psi(r)}{{\mathrm{d}}r} \; .
\end{equation}
For a gas sphere with temperature profile $T(r)$, the velocity
dispersion becomes
\begin{equation}\label{temperature}
\sigma^{2}_{r} = \frac{k T(r)}{\mu m_{p}} \; ,
\end{equation}
where $k$ is the Boltzmann constant, $\mu \approx 0.609$ is the
mean molecular weight and $m_{p}$ is the proton mass. Substituting
Eqs.~(\ref{pressure})~-~(\ref{temperature}) into
Eq.~(\ref{isotropic sphere}), we obtain
\[
\frac{{\mathrm{d}}}{{\mathrm{d}}r} \left( \frac{k T(r)}{\mu m_{p}}
\rho_{gas}(r) \right) = -\rho_{gas}(r) \frac{{\mathrm{d}}
\Psi(r)}{{\mathrm{d}}r} \; ,
\]
or, equivalently,
\begin{equation}\label{eq:Boltzmann potential}
-\frac{{\mathrm{d}}\Psi(r)}{{\mathrm{d}}r} = \frac{k T(r)}{\mu m_{p}
r}\left[\frac{{\mathrm{d}}\ln\rho_{gas}(r)}{{\mathrm{d}}\ln r} +
\frac{{\mathrm{d}}\ln T(r)}{{\mathrm{d}}\ln r}\right] \; .
\end{equation}
Now the total gravitational potential of the cluster is:
\begin{equation}\label{eq:total corrected potential1}
\Psi(r) = \Psi_{N}(r) + \Psi_{C}(r) \; .
\end{equation}
It is worth underlining that if we consider \textit{only} the
standard Newtonian potential and its derivative in
Eq.~(\ref{eq:Boltzmann potential}), the \textit{total} cluster
mass $M_{cl,N}(r)$ (the standard estimate of the cluster mass in a
CDM scenario) is composed of the gas mass $+$ the galaxy mass $+$ the dark
matter mass, and it is given by the expression:
{\setlength\arraycolsep{0.2pt}
\begin{eqnarray}
\label{eq:M_tot} M_{cl,N}(r) &=& M_{gas}(r) + M_{gal}(r) +
M_{DM}(r) = \nonumber
\\
&=& - \frac{k T(r)}{\mu m_{p} G} r
\left[\frac{{\mathrm{d}}\ln\rho_{gas}(r)}{{\mathrm{d}}\ln
r}+\frac{{\mathrm{d}}\ln T(r)}{{\mathrm{d}}\ln r}\right] \; .
\end{eqnarray}}
Generally the galaxy contribution is considered negligible with
respect to the other two components so we have:
\[
M_{cl,N}(r) \approx M_{gas}(r) + M_{DM}(r) \approx
\]
\[
\hspace{1.35cm} \approx - \frac{k T(r)}{\mu m_{p} G} r
\left[\frac{{\mathrm{d}}\ln\rho_{gas}(r)}{{\mathrm{d}}\ln
r}+\frac{{\mathrm{d}}\ln T(r)}{{\mathrm{d}}\ln r}\right] \; .
\]
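For reference, this hydrostatic mass estimate can be evaluated numerically; the sketch below works in SI units and the sample input numbers in the checks are ours, chosen only to be typical of a rich cluster:

```python
KPC = 3.0857e19      # m
KEV = 1.602e-16      # J
M_P = 1.6726e-27     # kg, proton mass
G_SI = 6.674e-11     # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg

def hydrostatic_mass(r_kpc, T_keV, dlnrho_dlnr, dlnT_dlnr, mu=0.609):
    # M_cl,N(r) = -kT(r) r / (mu m_p G) [dln rho_gas/dln r + dln T/dln r], in Msun
    r = r_kpc * KPC
    m = -(T_keV * KEV) * r / (mu * M_P * G_SI) * (dlnrho_dlnr + dlnT_dlnr)
    return m / MSUN
```

Note that rising density and temperature profiles (positive logarithmic slopes) yield a negative, unphysical mass, which is exactly the situation excluded by the cuts described in \S~\ref{sec:uncertainties}.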
Since the gas-mass estimates are provided by X-ray observations, the
equilibrium equation can be used to derive the amount of dark
matter and to reconstruct its spatial distribution in a cluster of galaxies.
Inserting the previously defined \textit{extended-corrected}
potential of Eq.~(\ref{eq:total corrected potential1}) into
Eq.~(\ref{eq:Boltzmann potential}), we obtain:
\begin{equation}
\label{eq:corrected_mass} -\frac{\mathrm{d}\Psi_{N}}{\mathrm{d}r}
-\frac{\mathrm{d}\Psi_{C}}{\mathrm{d}r} =\frac{k T(r)}{\mu m_{p}
r}\left[\frac{\mathrm{d}\ln\rho_{gas}(r)}{\mathrm{d}\ln r} +
\frac{\mathrm{d}\ln T(r)}{\mathrm{d}\ln r}\right] \; ,
\end{equation}
from which the \textit{extended-corrected} mass estimate follows:
{\setlength\arraycolsep{0.2pt}
\begin{eqnarray}\label{eq:fit relation}
M_{cl,EC}(r) &+& \frac{r^{2}}{G}
\frac{\mathrm{d}\Psi_{C}(r)}{\mathrm{d}r} = \\ &=&
- \frac{k T(r)}{\mu m_{p}G} r
\left(\frac{{\mathrm{d}}\ln\rho_{gas}(r)}{{\mathrm{d}}\ln
r}+\frac{{\mathrm{d}}\ln T(r)}{{\mathrm{d}}\ln r}\right) \nonumber \; .
\end{eqnarray}}
Since the use of a corrected potential avoids, in principle, the additional
requirement of dark matter, the total cluster mass, in this case,
is given by the baryonic matter components alone:
\begin{equation}
M_{cl,EC}(r) = M_{gas}(r) + M_{gal}(r) \; ,
\end{equation}
that can be entirely evaluated by observational data. The mass density in the $\Psi_{C}$ term is
\begin{equation}
\rho_{cl,EC}(r) = \rho_{gas}(r) + \rho_{gal}(r) \; ,
\end{equation}
with the density components derived from observations.
Considering that the right term in Eq.~(\ref{eq:fit relation}) is
the total Newtonian mass estimation for a cluster of galaxies, we
easily derive that the corrective term in the gravitational
potential works in mimicking an \textit{effective} dark matter
contribution:
\begin{equation}\label{eq:mass_contr}
\frac{r^{2}}{G} \frac{\mathrm{d}\Psi_{C}}{\mathrm{d}r}(r) =
M_{cl,N}(r) - M_{cl,EC}(r) \; .
\end{equation}
But in our approach, instead of requiring new kinds of particles,
this contribution arises from the interaction of baryonic matter with
the scalar field.
We have hence performed a best-fit analysis of the theoretical
estimation of dark matter, Eq.~(\ref{eq:mass_contr})
{\setlength\arraycolsep{0.2pt}
\begin{equation}\label{eq:theo_dark}
M_{dm,th}(r; \beta, L) \doteq M_{eff}(r; \beta, L) =
\frac{r^{2}}{G} \frac{\mathrm{d}\Psi_{C}}{\mathrm{d}r}(r)
\end{equation}}
which depends on scalar field parameters through the potential
$\Psi_{C}$, versus the same but observationally-derived quantity,
\begin{equation}\label{eq:obs_dark}
M_{dm,obs}(r) = M_{cl,N}(r) - M_{cl,EC}(r) \; .
\end{equation}
We underline here that we could not directly fit the observed
baryonic mass because of the great difference in order of
magnitude between $M_{cl,N}(r)$ and $M_{cl,EC}(r)$, the latter
acting as a \textit{small perturbation} on the total mass
estimate. The term corresponding to our theoretically
derived dark matter quantity is much bigger than the
baryonic contribution, and even a small and acceptable deviation
of only $1 \%$ in the former would translate into a larger
deviation of $10 \%$ in the baryonic one.\\
Since not all the data involved in the above estimations have
measurable errors, we cannot perform an \textit{exact}
$\chi$-square minimization. Actually, we can minimize the
quantity:
\begin{equation}
\chi_{Cl}^{2}(\{\theta_{i}\}) = \frac{1}{{\mathcal{N}}-n_{p}-1} \cdot \sum_{i=1}^{{\mathcal{N}}}
\frac{(M_{dm,obs}^{i}-M_{dm,th}^{i}(\{\theta_{i}\}))^{2}}{M_{dm,th}^{i}(\{\theta_{i}\})}
\end{equation}
where ${\mathcal{N}}$ is the number of data points, $n_{p} = 2$ is the
number of free parameters of the model and $\{\theta_{i}\} = \{\beta,
L\}$. As usual, we find the minimum of the $\chi$-square by running MCMCs.
Even if convergence is achieved after a few thousand steps of
the chain, we have decided to run longer chains of $10^{5}$ points
to reduce the noise in the histograms and avoid under- or
over-estimation of the errors on the parameters.
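A minimal sketch of this figure of merit (the function and variable names are ours):

```python
def chi2_clusters(m_obs, m_th, n_params=2):
    # chi^2_Cl = 1/(N - n_p - 1) * sum_i (M_dm,obs^i - M_dm,th^i)^2 / M_dm,th^i
    n = len(m_obs)
    return sum((o - t) ** 2 / t for o, t in zip(m_obs, m_th)) / (n - n_params - 1)
```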
\subsubsection{Gas Density Model}
\label{sec:gas_model}
The gas density distribution of the clusters in the
sample is described by the analytic model proposed in~\citep{Vik06}. This model
modifies the classical $\beta$-model to reproduce the characteristic
properties of the observed X-ray surface brightness profiles, i.e.
the power-law-type cusps of the gas density in the cluster center,
instead of a flat core, and the steepening of the
brightness profiles at large radii. In addition, a second $\beta$-model
with a small core radius is added to improve the model
close to the cluster cores. The analytic
form adopted for the product $n_{p}n_{e}$ is:
{\setlength\arraycolsep{0.2pt}
\begin{eqnarray}
\label{gas density vik} n_{p}n_{e} &=& n_{0}^{2} \cdot
\frac{(r/r_{c})^{-\alpha}}{(1+r^{2}/r_{c}^{2})^{3\beta-\alpha/2}}
\cdot \frac{1}{(1+r^{\gamma}/r_{s}^{\gamma})^{\epsilon/\gamma}}+
\nonumber \\
&+& \frac{n_{02}^{2}}{(1+r^{2}/r_{c2}^{2})^{3\beta_{2}}} \; ,
\end{eqnarray}}
which can be easily converted to a mass density using the relation:
\begin{equation}
\label{eq:gas_density} \rho_{gas} = n_T \cdot \mu m_p =
\frac{1.4}{1.2} n_e m_p \; ,
\end{equation}
where $n_T$ is the total number density of particles in the gas.
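The two relations above can be sketched as follows. The extraction of $n_{e}$ from the product $n_{p}n_{e}$ assumes $n_{e} = 1.2\, n_{p}$ for a fully ionized H+He plasma, i.e. $n_{e} = \sqrt{1.2\, n_{p}n_{e}}$; this assumption, like the function names, is ours:

```python
import math

def npne(r, n0, rc, alpha, beta, rs, gamma, eps, n02=0.0, rc2=1.0, beta2=1.0):
    # Vikhlinin-type profile: central cusp + beta-model + large-radius steepening,
    # plus an optional second beta-model component
    t1 = n0 ** 2 * (r / rc) ** (-alpha) / (1.0 + r ** 2 / rc ** 2) ** (3.0 * beta - alpha / 2.0)
    t1 /= (1.0 + (r / rs) ** gamma) ** (eps / gamma)
    t2 = n02 ** 2 / (1.0 + r ** 2 / rc2 ** 2) ** (3.0 * beta2)
    return t1 + t2

M_PROTON = 1.6726e-24  # g

def rho_gas(r, *pars):
    # rho_gas = (1.4/1.2) n_e m_p, with n_e = sqrt(1.2 * n_p n_e)  (our assumption)
    n_e = math.sqrt(1.2 * npne(r, *pars))
    return (1.4 / 1.2) * n_e * M_PROTON
```

With $\alpha = \epsilon = n_{02} = 0$ the expression reduces to a single classical $\beta$-model, which is a convenient sanity check.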
The resulting model has a large number of parameters, some of
which do not have a direct physical interpretation. While this can often
be inappropriate and computationally inconvenient, it suits our case well,
since the main requirement is a detailed qualitative
description of the cluster profiles.\\
In \citep{Vik06}, Eq.~(\ref{gas density vik}) is applied to a
restricted range of distance from the cluster center, i.e. between
an inner cutoff $r_{min}$, chosen to exclude the central
temperature bin ($\approx 10\div 20\ {\rm kpc}$) where the ICM is
likely to be multi-phase, and $r_{det}$, where the X-ray surface
brightness is at least $3 \sigma$ significant. We have
extrapolated the above function to values outside this restricted
range using the following criteria:
\begin{itemize}
\item for $r < r_{min}$, we have performed a linear extrapolation
of the first three terms out to $r = 0$ kpc;
\item for $r > r_{det}$, we have performed a linear extrapolation
of the last three terms out to a distance $\bar{r}$ for which
$\rho_{gas}(\bar{r})=\rho_{c}$, $\rho_{c}$ being the critical
density of the Universe at the cluster redshift:
$\rho_{c} = \rho_{c,0} \cdot (1 + z)^{3}$. For radii larger than $\bar{r}$,
the gas density is assumed constant at $\rho_{gas}(\bar{r})$.
\end{itemize}
We point out that, in Table~\ref{tabcluster}, the radius limit $r_{min}$
is almost the same as in the definition above. When the
value given in \citep{Vik06} is smaller than the cD-galaxy radius, which is
defined in the next section, we choose the latter as the lower
limit. On the contrary, $r_{max}$ is quite different from
$r_{det}$: it is fixed by the outermost value of the temperature profile
and not by imaging methods. \\
We then compute the gas mass $M_{gas}(r)$ and the total mass
$M_{cl,N}(r)$ for all clusters in our sample,
substituting Eq.~(\ref{gas density vik}) into
Eqs.~(\ref{eq:gas_density}) and (\ref{eq:M_tot}), respectively;
the gas temperature profile is described in detail in
\S~\ref{sec:T_prof}. The resulting mass values, estimated at
$r=r_{max}$, are listed in Table~\ref{tabcluster}.
\subsubsection{Temperature Profiles}
\label{sec:T_prof}
As stressed in \S~\ref{sec:gas_model}, for the purpose of
this work, we need an accurate qualitative description of the
radial behavior of the gas properties. Standard isothermal or
polytropic models, or even the more complex one proposed in
\citep{Vik06}, do not provide a good description of the data at all
radii and for all clusters in the present sample. We hence describe the
gas temperature profiles using the straightforward X-ray spectral analysis
results, without the introduction of any analytic model.\\
X-ray spectral values have been provided by A. Vikhlinin (private
communication). A detailed description of the relative spectral
analysis can be found in \citep{Vik05}.
\subsubsection{Galaxy Distribution Model}
\label{sec:gal_model}
The galaxy density can be modelled as proposed by \citep{Bah96}.
Although the galaxy distribution is a \textit{point} distribution
rather than a continuous function, assuming that the galaxies are in
equilibrium with the gas, we can use a $\beta$-model, $\propto
r^{-3}$, for $r < R_{c}$ from the cluster center, and a steeper
one, $\propto r^{-2.6}$, for $r > R_{c}$, where $R_{c}$ is the
cluster core radius (its value is taken from \citep{Vik06}). The
final expression is:
\begin{equation}\label{gal density bahcall}
\rho_{gal}(r) = \left\{%
\begin{array}{ll}
\rho_{gal,1} \cdot \left[1+
\left(\frac{r}{R_{c}}\right)^{2} \right]^{-\frac{3}{2}} & \hbox{$r < R_{c}$} \\
\rho_{gal,2} \cdot \left[1+
\left(\frac{r}{R_{c}}\right)^{2} \right]^{-\frac{2.6}{2}} & \hbox{$r > R_{c}$} \\
\end{array}%
\right.
\end{equation}
where the constants $\rho_{gal,1}$ and $\rho_{gal,2}$ are chosen
in the following way:
\begin{itemize}
\item \citep{Bah96} provides the central number density of galaxies in
rich compact clusters for galaxies located within a
$1.5$ h$^{-1}$Mpc radius from the cluster center and brighter than $m_3+2^m$
(where $m_3$ is the magnitude of the third brightest galaxy):
$n_{gal,0} \sim 10^{3} h^{3}$ galaxies Mpc$^{-3}$. Then we fix
$\rho_{gal,1}$ in the range $\sim 10^{34}\div 10^{36}$ kg/kpc$^{3}$.
\item the constant $\rho_{gal,2}$ has been fixed with the sole
requirement that the galaxy density function be continuous at
$R_{c}$.
\end{itemize}
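The continuity requirement at $R_{c}$ fixes $\rho_{gal,2}$ analytically: matching the two branches of Eq.~(\ref{gal density bahcall}) gives $\rho_{gal,1}\, 2^{-3/2} = \rho_{gal,2}\, 2^{-1.3}$, i.e. $\rho_{gal,2} = \rho_{gal,1}\, 2^{-0.2}$. A sketch (the function name is ours):

```python
def rho_gal(r, r_core, rho1):
    # piecewise beta-model; rho2 fixed by continuity at r = R_c:
    # rho1 * 2^{-3/2} = rho2 * 2^{-1.3}  =>  rho2 = rho1 * 2^{-0.2}
    rho2 = rho1 * 2.0 ** (-0.2)
    x = 1.0 + (r / r_core) ** 2
    return rho1 * x ** (-1.5) if r < r_core else rho2 * x ** (-1.3)
```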
For each cluster we assume that the galaxy population also includes
a cD galaxy, a giant elliptical galaxy with a diffuse envelope,
which is generally located at the center of clusters and whose
typical mass is in the range $10^{12}\div 10^{13} M_{\odot}$. The
cD-galaxy density has been modeled as described in \citep{SA06},
who use a Jaffe model of the form:
\begin{equation}\label{jaffe cd galaxy}
\rho_{CDgal} = \frac{\rho_{0,J}}{\left(\frac{r}{r_{c}}\right)^{2}
\left(1+\frac{r}{r_{c}}\right)^{2}} \; ,
\end{equation}
where $r_{c}$ is the core radius, while the central density is
obtained from ${\displaystyle M_{J} = \frac{4}{3} \pi r_{c}^{3}
\rho_{0,J}}$. The mass of the cD galaxy has been fixed at $1.14
\times 10^{12}$ $M_{\odot}$, with $r_{c} = R_{e}/0.76$, where
$R_{e} = 25$ kpc is the effective radius of the galaxy. The
central galaxy for each cluster in the sample is assumed to have
approximately this stellar mass.
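From the numbers quoted above, the central Jaffe density follows directly (a sketch in $M_{\odot}$ and kpc units; the function name is ours):

```python
import math

def jaffe_central_density(m_j=1.14e12, r_e=25.0):
    # rho_0,J = 3 M_J / (4 pi r_c^3), with r_c = R_e / 0.76 (Msun, kpc)
    r_c = r_e / 0.76
    return 3.0 * m_j / (4.0 * math.pi * r_c ** 3)
```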
We have tested the effect of varying the galaxy density through the
central density parameter $\rho_{gal,1}$ over the above range $\sim
10^{34}\div 10^{36}$ kg/kpc$^{3}$ on the cluster with the lowest
mass, namely A262. In this case, we would expect larger
variations than for the other clusters; the result is that the
contribution due to the galaxies and the cD galaxy gives a variation $\leq
1\%$ in the final estimate of the fit parameters.
Finally, we have assumed that the total galaxy-component mass
(galaxies plus cD-galaxy masses) is $\approx 20\div25\%$ of the
gas mass: in \citep{Schindler02}, the mean fraction of gas versus
the total mass (with dark matter) for a cluster is estimated to be
$15\div 20\%$, while the same quantity for galaxies is $3\div
5\%$. This means that the relative mean mass ratio gal-to-gas in a
cluster is $\approx 20\div 25\%$. We have varied the parameters
$\rho_{gal,1}$, $\rho_{gal,2}$ and $M_{J}$ within their previously
defined ranges to obtain a mass ratio between the total galaxy mass
and the total gas mass lying in this range. In the end, the
cD galaxy dominates over the other galaxies only in
the inner region (below $100$ kpc). As already stated in
\S~\ref{sec:gas_model}, the cluster innermost regions have been
excluded from our analysis, so the contribution due to the
cD galaxy is practically negligible. The gas is,
as a consequence, the dominant visible component from the
innermost regions out to large radii, the galaxy mass being only
$20\div 25\%$ of the gas mass.
\subsubsection{Uncertainties on mass profiles}
\label{sec:uncertainties}
Uncertainties on the cluster total mass profiles have been
estimated by performing Monte-Carlo simulations \citep{NeuBoh95}. We
simulate temperature profiles by choosing random
radius-temperature pairs for each bin of our temperature data,
given by \citep{Vik05}. Random temperature
values have been extracted from a Gaussian distribution centered
on the spectral values and with a dispersion fixed to their $68\%$
confidence level. For the radius, we choose a random value inside
each bin. We have performed 2000 simulations for each cluster and
applied two cuts to the simulated profiles. First, we exclude those
profiles that give an unphysical negative estimate of the mass:
this is possible when the simulated pairs of quantities give
rise to an excessively steep temperature gradient. After this cut, we are
left with $\approx 1500$ simulations for each cluster. We have then
ordered the resulting mass values for increasing radius. Extreme
mass estimates (outside the $10\div90\%$ range) are excluded from
the resulting distribution, in order to avoid other steep mass
gradients which give rise to masses too different from the real data.
The resulting limits provide the errors on the total mass.
Uncertainties on the electron-density profiles have not been
included in the simulations, since they are negligible with respect to
those of the gas-temperature profiles.
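The procedure above can be sketched as follows (NumPy; the function names, the identity "mass function" used in the check and the fixed seed are ours):

```python
import numpy as np

def mass_error_envelope(temps, temp_err, mass_fn, n_sim=2000, seed=0):
    # perturb the temperature bins, drop unphysical (negative-mass) realizations,
    # then trim the 10-90% extremes to get the error envelope on the mass profile
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_sim):
        t_sim = rng.normal(temps, temp_err)
        m = np.asarray(mass_fn(t_sim))
        if np.all(m > 0.0):          # cut 1: exclude negative masses
            kept.append(m)
    kept = np.array(kept)
    return (np.percentile(kept, 10, axis=0),   # cut 2: trim extremes
            np.percentile(kept, 90, axis=0))
```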
\subsection{Low surface brightness galaxies}
\label{sec:LSB}
For the analysis on galactic scales we have used a sample of the
so-called low surface brightness (LSB) galaxies and dwarf
galaxies. It is still unclear to what extent rotation curves of
bright spiral galaxies may give clues about the profile of both
visible and dark matter, mainly because they are poor in gas
content, so that rotation curves can hardly be detected out to
sufficiently large radii, where they are supposed to be dark matter
dominated. Moreover, they also show some typically complex features,
such as extended spiral arms or barred structures, that can lead to
significant non-circular motions, thus making the
interpretation of the data difficult. On the other side, LSB galaxies are
supposed to be dark matter dominated at all radii, and therefore
the analysis of their rotation curves can yield important clues
about it. Indeed, LSB galaxies exhibit a large discrepancy
between the detectable mass and the Newtonian dynamical mass within the
optical disk \citep{deBlok97,McGaugh98,Swaters00,bour}. They are
also a challenge in the framework of the CDM model, because
predictions from CDM-based simulations have revealed disagreement
with the observational profiles of several dwarf galaxies
\citep{Moore94}. In particular, the data indicate much less cuspy
distributions of matter than the simulations, and possible
solutions, such as feedback effects due to star formation, have been
excluded \citep{vandenHoek00} by the observed low star formation
rate, at present and in the past, from which it can be deduced
that the star formation rate has never been important enough in these
galaxies to modify their structure. Alternative solutions to
Newtonian dynamics are therefore also possible.
The galaxies chosen for our analysis come from a sample of $15$
objects with high-resolution H$\alpha$/HI rotation curves,
extracted from the larger sample described in \citep{deBlok02}.
This sample was selected by \citep{Capozziello07} using the
criterion of the simultaneous availability of data on rotation
curves, on surface photometry in the R band and on the surface
gas-mass density. The rotation curves were derived from spectrographic
observations performed by the authors of
\citep{deBlok02} themselves, while the photometry of the stellar disk and the
H$\alpha$/HI surface densities are collected from the literature. For
a more complete and detailed discussion one can mainly refer to
\citep{SwatersPhD}. \\
The final sample does not constitute a complete sample of dwarf
and LSB galaxies, but contains galaxies in a wide range in
luminosity and surface brightness for which high-quality rotation
curves are available. Therefore it is well suited for testing
scalar field mechanism in late-type dwarf and LSB galaxies.
Moving to the modeling of our spiral galaxies, we have to
specify the properties of the stellar and gas distributions. Stars are
generally assumed to be distributed in a thin and circularly
symmetric disk, with the surface density $\Sigma(R)$ derived from
the observed surface brightness distribution through the relation:
\begin{equation}
\Sigma(R) = 10^{-0.4 (\mu(R) - \mu_{R,\odot} - C)} \; ,
\end{equation}
where $\mu_{R,\odot}=4.46$ is the solar magnitude in the R band
and $C = 21.572$ is the constant needed to convert the surface
brightness from magnitude units, $\mathrm{mag}/\mathrm{arcsec}^2$,
to linear units, $L_{\odot}/ \, \mathrm{pc}^2$. The luminosity
surface density is commonly fitted with the exponential disk model
\citep{Freeman70}:
\begin{equation}
\Sigma(R) = \Sigma_{0} \exp (-R/R_{d}) \; ,
\end{equation}
where $\Sigma_{0}$ is the central surface brightness and $R_{d}$
is the disk scale length. Of course, in determining our
theoretical rotation curve we need the stellar mass density, which
can be obtained from the luminosity density by simply multiplying it
by the stellar mass-to-light ratio, $Y_{\ast}$; this is,
together with our scalar field parameters, the third free parameter
to be constrained with the fitting procedure.
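The magnitude-to-linear conversion above is a one-line computation (the function name is ours):

```python
def surface_density(mu_r, mu_sun=4.46, c_conv=21.572):
    # Sigma(R) in Lsun/pc^2 from mu(R) in mag/arcsec^2:
    # Sigma = 10^{-0.4 (mu - mu_sun - C)}
    return 10.0 ** (-0.4 * (mu_r - mu_sun - c_conv))
```

By construction, $\mu(R) = \mu_{R,\odot} + C$ corresponds to $1\ L_{\odot}/\mathrm{pc}^{2}$, and each $2.5$ magnitudes of brightening multiplies $\Sigma$ by ten.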
Modeling the gas density is more complicated, because we do not
have an analytical function able to describe its behavior at all
radii and because the profile is very disturbed. Using the plots and
images of \citep{SwatersPhD}, we fit the profile at outer radii
with a linear relation (in magnitude units), while the inner one
is generally fitted by simply interpolating the data with any
analytical expression (generally a polynomial one) which fits the
data points well. We then check that the model works well by comparing
the resulting total gas mass with the same quantity evaluated by extrapolation
from the observational data. We verified that only very small
normalization constants, in the range $(0.95,1.05)$, are needed to fully match
the data values. Finally, we multiply the gas density by a factor $1.4$ to include
the helium contribution.
With these model components, the general expression for the
rotation velocity
\begin{equation}
v_{c}^2(R) = R \frac{{\mathrm{d}} \Psi(R,z)}{{\mathrm{d}}
R}\bigg{|}_{z=0} \; ,
\end{equation}
can be decomposed into the following terms:
\begin{equation}
v_{c}^2(R) = v_{c,N}^2(R) + v_{c,C}^2(R) \; ,
\end{equation}
with
\begin{equation}
v_{c,N}^2(R) = R \frac{{\mathrm{d}} \Psi_{N}(R,z)}{{\mathrm{d}}
R}\bigg{|}_{z=0} \; ,
\end{equation}
and
\begin{equation}
v_{c,C}^2(R) = R \frac{{\mathrm{d}} \Psi_{C}(R,z)}{{\mathrm{d}}
R}\bigg{|}_{z=0} \, ,
\end{equation}
the Newtonian and the corrective contributions to the rotational
velocity from the respective terms in the total gravitational
potential. Each of these terms can in turn be decomposed into two
pieces, one for each mass component, namely stars and gas. For
stars we have:
\begin{equation}
v_{c,N}^{star}\,^2(R; Y_{\ast}) = R \, \frac{{\mathrm{d}}
\Psi_{N}^{star}(R,z; Y_{\ast})}{{\mathrm{d}} R}\bigg{|}_{z=0} \; ,
\end{equation}
where we make explicit that this term depends on the free parameter $Y_{\ast}$,
which enters the stellar density inside the potential expression. When this term
is not available in the data, we use the expression given in \citep{Binney} for
an exponential disk with $Y_{\ast} = 1$, i.e.
\begin{eqnarray}
v_{c,N}^{star}\,^2(R) &=& 4 \pi \, {\mathrm{G}} \Sigma_{0} R_{d} \, y^2 \cdot \\
&\cdot& \left[ I_{0}(y) K_{0}(y) - I_{1}(y) K_{1}(y)\right] \; , \nonumber
\end{eqnarray}
with $y = R/(2 R_{d})$. For gas we have
\begin{equation}
v_{c,N}^{gas}\,^2(R) = R \, \frac{{\mathrm{d}}
\Psi_{N}^{gas}(R,z)}{{\mathrm{d}} R}\bigg{|}_{z=0} \; ,
\end{equation}
and, as in the stellar case, when these data are not
available, we derive it numerically using the modeled gas
density. From the corrective term to the potential we have
\begin{equation}
v_{c,C}^{star}\,^2(R; Y_{\ast}, \beta, L) = R \, \frac{{\mathrm{d}} \Psi_{C}^{star}(R,z; Y_{\ast}, \beta, L)}{{\mathrm{d}} R}\bigg{|}_{z=0}
\end{equation}
and
\begin{equation}
v_{c,C}^{gas}\,^2(R; \beta, L) = R \, \frac{{\mathrm{d}}
\Psi_{C}^{gas}(R,z; \beta, L)}{{\mathrm{d}} R}\bigg{|}_{z=0} \; ;
\end{equation}
both these quantities are derived numerically. Finally, the
total rotation velocity is the sum in quadrature of all these
elements, i.e.
\begin{eqnarray}\label{eq:rot_vel_final}
v_{c}^2(R) &=& v_{c,N}^{star}\,^2(R; Y_{\ast}) + v_{c,C}^{star}\,^2(R; Y_{\ast}, \beta, L) + \nonumber \\
&+& v_{c,N}^{gas}\,^2(R) + v_{c,C}^{gas}\,^2(R; \beta, L) \; ;
\end{eqnarray}
and the chi-square function is:
\begin{equation}
\chi^{2}_{\mathrm{LSB}}(\{\theta_{j}\}) = \sum^{\mathcal{N}}_{i =
1} \frac{\left(v_{c,th}(R_{i}, \{\theta_{j}\}) -
v_{c,obs}(R_{i})\right)^2}{\sigma^{2}_{i}} \; ,
\end{equation}
where $\mathcal{N}$ is the number of data points, $\sigma^{2}_{i}$ are the measurement
variances and $\{\theta_{j}\} = \{\beta, L, Y_{\ast}\}$ is the vector of theory parameters.
In this case too, as for supernovae and clusters, we use an MCMC method to minimize the
chi-square function and derive errors on the fit parameters.
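The Newtonian stellar contribution and the chi-square above can be sketched in Python. This is a minimal illustration, not the actual pipeline: the corrective scalar field terms, which the analysis evaluates numerically, are omitted, and the input values are hypothetical.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / M_sun

def v2_freeman_disk(R, sigma0, Rd, Y=1.0):
    """Squared circular velocity of a thin exponential disk
    (Freeman 1970; Binney & Tremaine), with y = R / (2 Rd).
    R and Rd in kpc, sigma0 in M_sun/pc^2, result in (km/s)^2."""
    y = R / (2.0 * Rd)
    sigma0_kpc = Y * sigma0 * 1.0e6  # M_sun/pc^2 -> M_sun/kpc^2
    return 4.0 * np.pi * G * sigma0_kpc * Rd * y**2 * (
        i0(y) * k0(y) - i1(y) * k1(y))

def chi2_lsb(theta, R, v_obs, sigma_v, sigma0, Rd):
    """Chi-square of the model curve against observed velocities;
    only the Newtonian stellar term enters this sketch."""
    Y = theta[0]
    v_model = np.sqrt(v2_freeman_disk(R, sigma0, Rd, Y))
    return np.sum((v_model - v_obs) ** 2 / sigma_v ** 2)
```

An MCMC sampler would then explore $\{\beta, L, Y_{\ast}\}$ by evaluating such a chi-square, with the corrective terms added, at each step.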
\section{Results and discussion}
\label{sec:results}
In this section we discuss the results obtained, with the goal of
building a comprehensive picture in which the scalar field
works at various scales. In particular, we search for possible
trends and correlations between observable quantities and the
theoretical scalar field parameters in the various classes of
astrophysical objects considered. SNeIa will be discussed
separately because of the difficulties in interpreting their
results in this general context. Clusters and galaxies, on the
other hand, seem to be tied into a common picture by a rescaling
process.
\subsection{Supernovae: results}
\label{sec:SN_results}
The SNeIa analysis presents some difficulties. In this case we have
performed an MCMC analysis leaving the scalar field parameters free,
with the only minimal requirements that $L>0$, since it is a length,
and that $\beta>0$, since only the term $\beta^{2}$ appears in
Eq.~(\ref{eq:G_sn}) and there is no possibility to discriminate
between positive and negative values.
The parameter $\lambda$ may be considered as a length related to
the stellar scale, to the typical supernova explosion, and/or to
star formation. Since its exact value is unknown, we have varied it
on a grid ranging from $\lambda = \, 10^{-3} {\mathrm{h}}^{-1}$ Mpc
as the minimum, corresponding to a length of $\approx 1 \,
{\mathrm{kpc}}$ (assuming $H_{0}= 74.2$ km s$^{-1}$ Mpc$^{-1}$ as in
\citep{Komatsu10}), up to $\lambda = \, 1 \, \mathrm{h}^{-1}$ Mpc as the
maximum, corresponding to a length of $\approx 1 \,
\mathrm{Mpc}$.
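The conversion between grid values in $h^{-1}$ units and physical lengths is a simple division by $h$; a one-line sketch, with $h = 0.742$ as assumed above:

```python
def lam_to_kpc(lam_h_inv_mpc, h=0.742):
    """Convert a scale quoted in h^-1 Mpc into kpc."""
    return lam_h_inv_mpc / h * 1000.0

# Grid end points of the text:
# lam_to_kpc(1e-3) ~ 1.35 kpc ("~1 kpc"),
# lam_to_kpc(1.0)  ~ 1.35e3 kpc ("~1 Mpc").
```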
{\renewcommand{\arraystretch}{1.5}
\begin{table*}
\begin{center}
\caption{\textit{Supernovae.} Column 1: $\lambda$ value. Column 2: chi-square value evaluated at the best fit values of the fitting parameters.
Column 3: present matter content, $\Omega_{m,0}$. Column 4: coupling parameter $\beta$ from scalar field
($1\sigma$ confidence interval). Column 5: gravitational length $L$ from scalar field ($1\sigma$ confidence
interval).\label{tabsn}}
\begin{tabular}{ccccc}
\tableline
$\lambda$ & $\chi^{2}_{best}$ &$\Omega_{m,0}$ & $\beta$ & $L$ \\
$(h^{-1})$ & & & & (Mpc) \\
\tableline
\tableline
$10^{-3} \, (\approx 1 \, kpc)$ & $465.610$ &$0.292^{+0.023}_{-0.023}$ & $0.135^{+0.459}_{-0.099}$ & $0.145^{+0.677}_{-0.110}$ \\
$10^{-2} \, (\approx 10 \, kpc)$ & $465.659$ &$0.295^{+0.027}_{-0.024}$ & $0.120^{+0.326}_{-0.088}$ & $0.158^{+0.743}_{-0.119}$ \\
$10^{-1} \, (\approx 100 \, kpc)$ & $465.633$ &$0.301^{+0.040}_{-0.026}$ & $0.114^{+0.305}_{-0.083}$ & $0.174^{+0.758}_{-0.136}$ \\
$10^0 \, (\approx 1 \, Mpc)$ & $465.707$ &$0.298^{+0.031}_{-0.024}$ & $0.129^{+0.347}_{-0.094}$ & $0.159^{+0.991}_{-0.123}$ \\
\tableline
\end{tabular}
\end{center}
\end{table*}}
In Table~\ref{tabsn} we have also reported the values of the
chi-square function evaluated at the best fit points: they are
practically equal to the same quantity evaluated for a CPL model
with $\Omega_{m,0}, w_{0}$ and $w_{a}$ as fit parameters and a
gravitational constant fixed at its usual Newtonian value
(\textit{constant in time and scale}). For the CPL model we have
obtained the following results:
\begin{eqnarray}
\Omega_{m,0} &=& 0.273^{+0.091}_{-0.133} \nonumber \\
w_{0} &=& -0.954^{+0.210}_{-0.275} \nonumber \\
w_{a} &=& 0.003^{+0.201}_{-0.150} \nonumber \\
\chi^{2}_{best} &=& 465.665
\end{eqnarray}
If we want to compare the reliability of our model against the
CPL one, we can use a Bayesian-type test such as the BIC. It is defined as
${\mathrm{BIC}} = -2 \ln {\mathcal{L}} + k \ln N$
\citep{Schwarz78}, with $-2 \ln \mathcal{L}$ being the chi-square
value and $\mathcal{L}$ the likelihood function, $k$ the number of
parameters of the model and $N$ the number of points in the
dataset. The best model has the lowest BIC; in particular, when
comparing two different models, if the difference between the two
BIC values is $\Delta \mathrm{BIC}<2$, then there is no
significant difference between the models; if $2<\Delta
\mathrm{BIC}<5$, the difference is substantial; if $5<\Delta
\mathrm{BIC}<10$ there is ``strong'' evidence in favor of the
model with the lowest BIC value; while for $\Delta
\mathrm{BIC}>10$ this evidence is ``decisive'' (following
the most widely used scale, the ``Jeffreys scale'', defined and
reported in \cite{Jeffreys}).\\
If we consider that in the CPL case we have three fit parameters,
as in our scalar field model, we can easily deduce that there are no
significant differences between the CPL and the scalar field model.\\
Things change slightly if we consider a $\Lambda$CDM model with a
Newtonian gravitational constant and only one free parameter,
$\Omega_{m,0}$. In this case, even if the chi-square has the same
value as in our scalar field model, there is only one parameter, so that
the $\Lambda$CDM model with a Newtonian gravitational constant is
strongly favored with respect to a scalar field model with a varying
gravitational constant (we may underline that this is an obvious
consequence of the BIC definition, where the $k \ln N$ term tends to
favor models with a smaller number of parameters). We will
return to the chi-square values later for further discussion.
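The BIC comparison above can be reproduced in a few lines. The number of data points is an assumption here (397, the size of the Constitution set, which the text does not quote explicitly), as is the $\Lambda$CDM chi-square, taken as essentially equal to the scalar field one as stated above:

```python
import math

def bic(chi2, k, n):
    """Bayesian Information Criterion (Schwarz 1978):
    BIC = -2 ln L + k ln N = chi2 + k ln N for Gaussian errors."""
    return chi2 + k * math.log(n)

N_SN = 397  # assumed sample size (Constitution set)

# Scalar field vs CPL: equal k, so Delta BIC reduces to Delta chi2 ~ 0.06,
# i.e. no significant difference.
delta_cpl = bic(465.665, 3, N_SN) - bic(465.610, 3, N_SN)

# LCDM (1 parameter) vs scalar field (3 parameters) at equal chi2:
# two fewer parameters save 2 ln N ~ 12, "decisive" on the Jeffreys scale.
delta_lcdm = bic(465.610, 3, N_SN) - bic(465.6, 1, N_SN)
```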
We can now take a look at the best fit values of the model parameters.
Concerning the matter content, the values of $\Omega_{m,0}$ are
slightly higher than the latest
estimate \citep{Komatsu10}, $\Omega_{m,0} \approx 0.24$, obtained by combining
WMAP results with BAO and Hubble constant measurements,
but they are in line with the values derived from SN-only
analyses, which generally lead to higher values for this parameter
\citep{BuenoSanchez09}.\\
We also stress that such a high value for $\Omega_{m,0}$,
although comparable with the contribution generally attributed
to dark matter, is not at all contradictory with our intent of explaining dark matter
on sub-cosmological scales and dark energy on cosmological
scales with the same source, namely the scalar field.\\
In fact, we can easily consider that two elements contribute
to the $\Omega_{m,0}$ value: one coming from ordinary baryonic matter,
$\Omega_{b,0} \approx 0.04$, and one coming from the scalar field,
which acts as a dark matter-type component and scales as $(1+z)^3$.
As the following discussion will show, this is a feasible possibility.
We then have a contribution from the scalar field to the acceleration
of the universe through an effective dark energy component, of the same
amount as the usually derived one ($\Omega_{ch,0} \approx 0.70$) and
acting as a cosmological constant.\\
The discussion of the scalar field parameters is more tricky: they
do not show significant statistical changes as $\lambda$ is varied.
By applying Eq.~(\ref{eq:mod_dist_cham}) to the Constitution data set we are
implicitly assuming that the length $\lambda$ is the same for all
supernovae; if we assume that these objects constitute a
homogeneous astrophysical family, this hypothesis is general
and reasonable.\\
The parameter $\beta$ quantifies the coupling of the
scalar field with ordinary matter, and mainly measures how much
the effective gravitational constant deviates from the usual
Newtonian one. From the cosmological analysis it lies in the interval
$[0.114, 0.135]$.
On the other hand, the length $L$ may be considered in this case
as the \textit{``minimal''} length at which the variation of the gravitational
constant has ``cosmological'' effects (i.e. detectable with the SNeIa Hubble diagram),
with the scalar field mimicking a dark energy-type component. Our analysis
shows that such a scale is $\sim 100$ kpc.
However, we have to combine all these results with the evidence that
the chi-square values are practically equal for any $\lambda$; we
therefore also have to consider the possibility that our analysis is
not well founded.
In the top panel of Fig.~(\ref{fig:sn_results}) we have plotted how
much the modified distance modulus in Eq.~(\ref{eq:mod_dist_cham})
differs from the usual expression,
\begin{equation}\label{eq:normmod}
\mu(z) = 5 \log \left( (1+z) \int_{0}^{z} \frac{{\mathrm{d}}z}{h(z)}\right) + \mu_{0} \, .
\end{equation}
If we consider that the modified distance modulus can be written
as:
\begin{equation}
\mu(z; \beta, L; \lambda) = \mu(z) + \frac{15}{4} \log \frac{G_{eff}(z; \beta, L; \lambda)}{G_{eff}(0; \beta, L; \lambda)}
\end{equation}
the correction coming from the effective scalar field gravitational
constant is negligible when compared to the usual distance modulus
expression, being at most $\approx 0.08\%$. Faced with this, we
have two possibilities:
\begin{enumerate}
\item our results do not come from an effective test of the scalar field mechanism, but are hampered by the
impossibility of detecting changes in the gravitational constant, since the distance modulus is mainly
driven by the $\Omega_{m,0}$ parameter in Eq.~(\ref{eq:normmod});
\item we are effectively testing the scalar field mechanism, but then we have the problem of accommodating the obtained
values of the scalar field length $L$ in a consistent theoretical background.
\end{enumerate}
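To see why the correction is so small, one can evaluate the $(15/4)\log$ term directly; a quick numerical sketch:

```python
import math

def delta_mu(geff_ratio):
    """Distance modulus shift induced by a varying effective
    gravitational constant: (15/4) * log10[G_eff(z) / G_eff(0)]."""
    return 3.75 * math.log10(geff_ratio)

# A 1% change of G_eff over the SNeIa range shifts mu by only
# ~0.016 mag, i.e. ~0.04% of a typical mu ~ 40 mag -- consistent
# with the <~0.08% deviations quoted above.
```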
\begin{figure*}
\centering
\includegraphics[width=100mm]{MuCorrMuL1000fig32.eps}
\includegraphics[width=93mm]{GeffGNfig30.eps}
\includegraphics[width=100mm]{GeffGeff0fig29.eps}
\caption{Continuous line
is for $\lambda = 10^{-3}$ ($\beta = 0.135$; $L=0.145$ Mpc); dashed line is for $\lambda = 10^{-2}$ ($\beta = 0.120$; $L=0.158$ Mpc); dot-dashed line is for $\lambda = 10^{-1}$ ($\beta = 0.114$; $L=0.174$ Mpc);
dotted line is for $\lambda = 10^{0}$ ($\beta = 0.129$; $L=0.159$ Mpc). The vertical dotted line shows the
maximum redshift of the SNeIa sample. \textit{Top panel.} Deviations of the corrected
version of distance modulus from the usual one. \textit{Middle panel.} Ratio between the effective gravitational constant, Eq.~(\ref{eq:G_cham}),
and the Newton gravitational constant value. This ratio quantifies the temporal evolution of the deviation of scalar field mechanism
from Newtonian gravity. \textit{Bottom panel.} Ratio between the effective
gravitational constant, Eq.~(\ref{eq:G_cham}), and the present value of the same quantity. This ratio quantifies the
temporal evolution of the effective gravitational constant.\label{fig:sn_results}}
\end{figure*}
As we can see in the middle and bottom panels of Fig.~(\ref{fig:sn_results}),
we have different scenarios depending on the value of $\lambda$:
\begin{itemize}
\item for $\lambda = 10^{-3}$ the effective gravitational constant is practically constant,
exhibiting a change of only $\sim 0.003 \%$ in the redshift range $[0.005, 1]$ and settling at a value which
differs from the Newtonian gravitational constant by $\sim 3.6 \%$. In this case the relative contribution
of the corrective term with respect to the usual one is $\approx 10^{-5} \%$;
\item for $\lambda = 10^{-2}$ we again have a practically constant effective gravitational constant,
with a change of $\sim 0.7 \%$ in the SNeIa redshift range, settling at a value which differs from the Newtonian gravitational constant by $\sim 2.85 \%$. In this case the relative contribution of the corrective term with respect to the usual
one is really small, $\approx 0.0006 \%$;
\item for $\lambda = 10^{-1}$ there is a more appreciable rise of the effective gravitational constant in the supernova redshift range, reaching an
asymptotic value larger than the Newtonian one by $\sim 2.6 \%$ at $z > 50$. Following the large change in the redshift
dependence of the effective gravitational constant, which can vary by $1 \%$ even within the supernova redshift interval,
the ratio between the corrective and the usual expression of the distance modulus in this case is $\approx 0.028 \%$;
\item for $\lambda = 1$ the rise is less pronounced in the supernova redshift range than in the previous case,
but continues beyond $z \approx 15$ and reaches its asymptotic value at $z \sim 500$,
$\sim 3.32 \%$ larger than the Newtonian gravitational constant. In this case the corrective distance modulus term
reaches a maximum deviation of $\approx 0.08 \%$ around $z \sim 50$; in the same range the effective gravitational constant
can vary by $\approx 3 \%$.
\end{itemize}
\subsection{Clusters of galaxies: results}
\label{sec:Cl results}
When considering clusters of galaxies, we recall that in this case
we left the scalar field parameters $\beta$ and $L$ free, with only
the minimal requirement that they be positive.
As can be seen by simple visual inspection, the only bad
fit corresponds to the cluster RXJ1159: using the modelled matter
densities described in
\S~(\ref{sec:gas_model})~-~(\ref{sec:gal_model}), we obtain a mass
profile that decreases too fast in the inner region, reaching
unphysical negative values. For this reason we will not consider
it any further. \\
On the contrary, for all the other clusters we obtain good results,
with mass estimates agreeing at the $1\sigma$ confidence
level. The error contours reported in
Figs.~(\ref{fig:cham_cl1})~-~(\ref{fig:cham_cl2}) have two
contributions: the main one comes from the statistically derived
errors on the mass observations, described in
\S~(\ref{sec:uncertainties}), which produce the larger and
irregular borders of the $1\sigma$ confidence level; the smaller
one comes from the errors on the fitting parameters.
\begin{figure*}
\centering
\includegraphics[width=80mm]{A133fig1.eps}
\includegraphics[width=80mm]{A262fig2.eps}
\includegraphics[width=80mm]{A383fig3.eps}
\includegraphics[width=80mm]{A478fig4.eps}
\includegraphics[width=80mm]{A907fig5.eps}
\includegraphics[width=80mm]{A1413fig6.eps}
\caption{Dark matter profile vs radii for clusters of galaxies. Dashed line is the observationally derived estimation of dark matter, Eq.~(\ref{eq:obs_dark});
solid line is the theoretical estimation for the effective dark matter component, Eq.~(\ref{eq:theo_dark});
dot-dashed lines are the 1-$\sigma$ confidence levels given by errors on fitting parameters plus statistical errors on mass profiles as
discussed in \S~\ref{sec:uncertainties}.\label{fig:cham_cl1}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=80mm]{A1795fig7.eps}
\includegraphics[width=80mm]{A1991fig8.eps}
\includegraphics[width=80mm]{A2029fig9.eps}
\includegraphics[width=80mm]{A2390fig10.eps}
\includegraphics[width=80mm]{MKW4fig31.eps}
\includegraphics[width=80mm]{RXJ1159fig35.eps}
\caption{Same as Fig.~(\ref{fig:cham_cl1}).\label{fig:cham_cl2}}
\end{figure*}
{\renewcommand{\arraystretch}{1.5}
\begin{table*}
\begin{center}
\caption{\textit{Clusters of galaxies.} Column 1: cluster name. Column 2: cluster total mass.
Column 3: gas mass. Gas and total mass values are estimated at $r=r_{max}$. Column 4: gas mass weighted
average temperature. Column 5: virial radius. Column 6: minimum observational radius. Column 7: maximum radius.
Column 8: coupling parameter $\beta$ from scalar field ($1\sigma$ confidence interval). Column 9: gravitational
length $L$ from scalar field ($1\sigma$ confidence interval).\label{tabcluster}}
\begin{tabular}{ccccccccc}
\tableline
name & $M_{cl,N}$ & $M_{gas}$ & $<T>$ & $r_{vir}$ & $r_{min}$ & $r_{max}$ & $\beta$ & $L$ \\
& $(10^{14} M_{\odot})$ & ($10^{13} M_{\odot}$) & (keV) & (kpc) & (kpc) & (kpc) & & (kpc) \\
\tableline
\tableline
A133 & $4.35874$ & $2.73866$ & $3.68$ & $1694.26$ & $86$ & $1060$ & $2.524^{+0.259}_{-0.228}$ & $888.557^{+463.221}_{-287.415} $ \\
A262 & $0.445081$ & $0.276659$ & $1.92$ & $1199.46$ & $61$ & $ 316$ & $2.786^{+0.397}_{-0.356}$ & $147.977^{+21.798}_{-28.704}$ \\
A383 & $2.79785$ & $2.82467$ & $4.36$ & $1822.12$ & $52$ & $ 751$ & $2.189^{+0.206}_{-0.187}$ & $728.246^{+443.580}_{-194.675}$ \\
A478 & $8.51832$ & $10.5583$ & $7.34$ & $2344.98$ & $59$ & $1580$ & $2.106^{+0.149}_{-0.120}$ & $820.874^{+259.014}_{-194.548}$ \\
A907 & $4.87657$ & $6.38070$ & $5.44$ & $2030.39$ & $563$ & $1226$ & $2.364^{+0.521}_{-0.290}$ & $594.207^{+339.605}_{-183.460}$ \\
A1413 & $10.9598$ & $9.32466$ & $6.76$ & $2259.35$ & $57$ & $1506$ & $2.210^{+0.108}_{-0.105}$ & $1323.890^{+158.186}_{-216.063}$ \\
A1795 & $5.44761$ & $5.56245$ & $5.52$ & $2054.1$ & $79$ & $1151$ & $2.224^{+0.080}_{-0.072}$ & $869.098^{+297.243}_{-145.295}$ \\
A1991 & $1.24313$ & $1.00530$ & $2.23$ & $1338.46$ & $55$ & $ 618$ & $2.439^{+0.693}_{-0.388}$ & $534.918^{+483.400}_{-344.322}$ \\
A2029 & $8.92392$ & $12.4129$ & $7.59$ & $2419.03$ & $62$ & $1771$ & $2.047^{+0.121}_{-0.112}$ & $1073.050^{+237.912}_{-267.762}$ \\
A2390 & $20.9710$ & $21.5726$ & $9.35$ & $2481.14$ & $83$ & $1984$ & $1.888^{+0.067}_{-0.065}$ & $1487.800^{+90.565}_{-107.860}$ \\
MKW4 & $0.469503$ & $0.283207$ & $1.58$ & $1068.31$ & $60$ & $ 434$ & $3.259^{+20.876}_{-0.737}$ & $148.931^{+621.309}_{-141.849}$ \\
RXJ1159 & $0.897997$ & $0.433256$ & $1.40$ & $1115.81$ & $64$ & $ 568$ & $3.412^{+1.702}_{-0.722}$ & $387.568^{+601.661}_{-251.839}$ \\
\tableline
\end{tabular}
\end{center}
\end{table*}}
The first thing to note is that at smaller scales there is a large
deviation between our theoretical estimate and the observed one.
Typically this break-point lies in the range $[100,150]$
kpc. This is not new when describing clusters of galaxies
with modified gravities, and it is not an intrinsic failure of our
theoretical model. Similar issues are present in
\citep{Salzano09}, where $f(R)$-gravity models are applied to
clusters of galaxies. The same situation occurs in \cite
{Brownstein06}: they use the Metric - Skew - Tensor - Gravity
(MSTG) as a generalization of Einstein's General Relativity and
derive the gas mass profile of a sample of clusters with gas as
the only baryonic component of the clusters. They consider some
clusters included in our sample (in particular, A133, A262, A478,
A1413, A1795, A2029, MKW4) and the same failing trend is found
for $r \leq 200$ kpc: they overestimate the gas mass in the inner
regions with respect to the estimate expected from X-ray
observations. In the same work there is also an interesting note
about MOND applied to clusters of galaxies: even if it is
not possible to assert that it really fails, we can certainly see that
MOND in clusters does not solve the dark matter problem, because it
again requires a mass contribution beyond the
observed one.
The reason for this different behavior in the inner regions lies in
the breakdown of the hypothesis of hydrostatic equilibrium. If this
hypothesis is not correct, then we are
in a regime where the fundamental relations, Eqs.~(\ref{Boltzmann
equation})~-~(\ref{eq:Boltzmann potential}), do not hold. As
discussed in \citep{Vik05}, the central ($\sim 70$ kpc) region of most
clusters is strongly affected by radiative cooling, and thus its
physical properties cannot be directly related to the depth of the
cluster potential well. This means that, in this region, the gas
is not in hydrostatic equilibrium but in a multi-phase state. In
this case, the gas temperature cannot be used as a good standard
tracer. Among the main phenomena causing this are cooling
flows, merger effects and asymmetric shapes. In particular,
cooling flows produce a decrease in the temperature profile and
consequently locally higher gas densities which cannot be related
directly to gravitational effects.\\
A coherent behavior is shown in our plots,
Figs.~(\ref{fig:cham_cl1})~-~(\ref{fig:cham_cl2}). We recall that
they represent the distribution of dark matter; the higher
densities from cooling flows produce higher gas mass profiles of
non-gravitational origin, which result in a
decrease of the dark matter profile. In our case, the
theoretical dark matter profile (i.e. the \textit{effective} dark
matter mimicked by the different coupling of the scalar field with
baryonic mass) is higher than the observationally derived one (see
for example Abell 262, Abell 383, Abell 478, Abell 1413, Abell
1991, Abell 2029; for Abell 133 and Abell 1795 one can
perceive the same trend, but unfortunately the data do not
extend to small enough radii).\\
However, a more detailed modelling of the inner regions is beyond the
purpose of this work; here we are interested in showing that
the scalar field mechanism can be a valid alternative to dark matter
in explaining cluster dynamics. In this sense it is very
illuminating that for most objects in our sample there
is very good agreement between the scalar field model and the
observationally derived dark matter profiles over a wide range of
distances from the center, approximately in the
interval $[100, 1000]$ kpc, or at least up to the maximum radii
covered by the observations.
Looking more closely at the absolute values of the scalar field
parameters for clusters, we can see that the coupling constant
$\beta$ is well constrained to a really narrow range,
$[1.888, 3.259]$, while the gravitational length $L$ varies in the interval
$[148.931, 1487.800]$ kpc, which seems consistent with the range
spanned by other characteristic cluster scales, such as the virial radius in a $\Lambda$CDM context.\\
To be more precise, if we consider (and exclude) three peculiar
cases, these intervals can be constrained to even narrower
windows: for $\beta$, the interval becomes $[2.047, 2.786]$, while
for $L$ it becomes $[147.977, 1323.890]$ kpc. \\
In \citep{Vik05}, MKW4 and RXJ1159 are recognized as the
strongest outliers in temperature profile, exhibiting very compact
cooling regions and having a temperature peak smaller than the other
sample objects and located further in, at $r \approx 50$ kpc.
RXJ1159 is better classified as an X-ray Over-Luminous Elliptical
Galaxy; optically, this object appears as a nearly isolated
elliptical galaxy, but its X-ray luminosity and extent are typical
of poor clusters. MKW4 is also considered a group of galaxies or a poor
cluster \citep{OSullivan03}, so we have to regard them as
objects different from the rest of the sample. Moreover, we cannot
draw any conclusion about this gravitational class (groups of
galaxies),
because a single object carries no statistical weight in any analysis.\\
Again in \citep{Vik05}, the cluster Abell 2390 seems to elude
the typical cluster scaling relations; this is due to its unusual
central cool region, which extends up to $r \sim 400$ kpc,
probably because the cold gas is pushed out from the center by
radio lobes.
Given these considerations, it is interesting to note that precisely
these three objects turn out to have the most discrepant values of
the coupling constant $\beta$ with respect to the other clusters,
and even seem to exhibit a peculiar trend. MKW4 (and RXJ1159) have
more compact inner cool regions, and this is associated with: $1.$
higher values of $\beta$, which means a larger coupling of the
scalar field with ordinary matter ($\beta$ acts like a
concentration parameter), and $2.$ a smaller value of the
gravitational length. On the contrary, Abell 2390 shows a larger
cooling region, which corresponds to: $1.$ smaller values of
$\beta$ and $2.$ a larger $L$.
We underline that in our approach it is very important to find
these kinds of correlations: if we want to make the scalar field
mechanism as general and fundamental as possible, we need to find
every possible link between its parameters and the physical
properties of the gravitational systems we analyze, so as to be
able to forecast other scaling properties and to recognize that
a scalar field is acting even when it is not directly possible to
derive simple quantities to be compared with data.
\begin{figure*}
\centering
\includegraphics[width=80mm]{cluster_LTfig17.eps}
\includegraphics[width=80mm]{cluster_LvirialRfig18.eps}
\caption{Scalar field length plotted versus mean (gas density weighted) cluster temperature (\textit{top panel})
and the virial radius (\textit{bottom panel}). Objects in brackets are excluded from fits as described
in \S~(\ref{sec:Cl results}).\label{fig:cluster_LTvirR}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=80mm]{clusterTprofiles1fig25.eps}
\includegraphics[width=80mm]{clusterTprofiles2fig27.eps}
\caption{Temperature profiles for all clusters plotted as a function of distance from the center
and in units of the scalar field length $L$. The temperatures are scaled to the mean (gas density weighted)
cluster temperature. Only the two galaxy groups, MKW4 and RXJ1159, have not been plotted. Colors in the bottom
panel mean: black for clusters with $<T>$ greater than $5$ keV; cyan for $<T>$ between $2.5$ and $5$ keV;
green for $<T>$ less than $2.5$ keV; grey for Abell 2390. \label{fig:cluster_T_profiles}}
\end{figure*}
Pursuing this search, we have plotted the scalar field length $L$
versus the virial radius of each cluster (bottom panel in
Fig.~(\ref{fig:cluster_LTvirR})), calculated using the relation
\cite
{Bryan98,Evrard96},
\begin{equation}
r_{vir} = 1.95 \, {\mathrm{h}}^{-1} \, {\mathrm{Mpc}} \left(
\frac{<T>}{10 {\mathrm{keV}}} \right)^{1/2} \; ,
\end{equation}
where $h = 0.742$ \citep{Komatsu10} and $<T>$ is the average
temperature of each cluster, derived from a gas-weighted fit to the
total cluster spectrum excluding the central region, and more
directly correlated with cluster mass than the X-ray emission
weighted temperature. Even though the two lengths turn out to be very
different, with the scalar field parameter $L$ always smaller than
the virial radius, there is a phenomenological relation between
them (we have performed a weighted linear fit, which works better
than any higher order polynomial),
\begin{equation}
\log L = 3.0082 \cdot \log r_{vir} - 7.0476 \; .
\end{equation}
From it one can see that the scalar field length scales as $L \propto
r_{vir}^3 \sim M_{vir}$, so that it may be possible to find a
relation between the scalar field parameters and the
virial (Navarro - Frenk - White (NFW) model based) ones.\\
We have also checked a possible relation between $L$ and the mean
temperature $<T>$ (top panel in Fig.~(\ref{fig:cluster_LTvirR})),
with the final result:
\begin{equation}
\log L = 1.51085 \cdot \log <T> + 1.79122 \; .
\end{equation}
This last result is again very intriguing, because for clusters
of galaxies we have a mass-temperature relation \citep{Bryan98},
\begin{equation}
M_{\Delta} / T^{3/2} \propto H_{0}/H(z) \; ,
\end{equation}
where $\Delta$ is the overdensity level relative to the critical
density at the cluster redshift, so that $M_{180} = M_{vir}$. We
then have to consider that for most of our clusters the redshift is
very small, so that the ratio $H_{0}/H(z)$ is almost equal to $1$.
We remind that:
\begin{equation}
L \propto <T>^{3/2} \quad {\mathrm{and}} \quad L \propto
r_{vir}^{3} \sim M_{vir} \; ,
\end{equation}
so we can deduce that our scalar field length follows the
mass-temperature relation well, being $L \propto <T>^{3/2} \propto
M_{vir}$, and that any scatter in the previous relation can be
attributed to differences in the cluster redshifts.
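The virial relation and the two phenomenological fits can be combined in a short consistency check. The coefficients are those quoted above; the sketch only verifies the scalings, not the individual cluster values:

```python
import math

H_LITTLE = 0.742  # from Komatsu et al., as used in the text

def r_vir_kpc(T_keV):
    """Virial radius from the mean temperature,
    r_vir = 1.95 h^-1 Mpc (T / 10 keV)^(1/2), in kpc."""
    return 1.95 / H_LITTLE * 1000.0 * math.sqrt(T_keV / 10.0)

def L_from_rvir(rv_kpc):
    """Phenomenological fit log L = 3.0082 log r_vir - 7.0476 (kpc)."""
    return 10.0 ** (3.0082 * math.log10(rv_kpc) - 7.0476)

def L_from_T(T_keV):
    """Phenomenological fit log L = 1.51085 log <T> + 1.79122 (kpc)."""
    return 10.0 ** (1.51085 * math.log10(T_keV) + 1.79122)

# Since r_vir ~ T^(1/2), both fits encode nearly the same scaling:
# L ~ T^(3/2) ~ r_vir^3 ~ M_vir.
```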
Finally, in Fig.~(\ref{fig:cluster_T_profiles}) we plotted the
scaled temperature profiles versus the distance from the center of
each cluster, rescaled by the scalar field length $L$
obtained from the fit. It is extremely interesting, and not obvious,
that the profiles are all rescaled and self-similar, as happens in
the usual approach. We can even say something more: in the
scalar field approach, some properties depicted in
Fig.~(16) of \citep{Vik05}, such as the
different profiles among subgroups of clusters with mean
temperatures in different ranges (less than $2.5$ keV, between
$2.5$ and $5$ keV and greater than $5$ keV) when distances are
rescaled by $r_{500}$, here disappear. In fact, all
the clusters form a homogeneous gravitational family (the only
exception being Abell 2390, as discussed before). We could say
that the scalar field length $L$ contains more fundamental
information than the virial radius.
\subsection{LSB galaxies: results}
\label{sec:LSB_results}
For LSB galaxies we have one more parameter than in
the cluster case: together with the intrinsic scalar field
parameters we have the stellar mass-to-light ratio, $Y_{\ast}$,
needed to convert the stellar surface
photometry into a stellar mass density. In principle this
parameter could be left free, but we have decided to put a prior
on it derived from the literature. \citet{vandenHoek00}
investigate the star formation history and chemical evolution of LSB
galaxies by modeling their observed spectro-photometric and
chemical properties with a galactic chemical and photometric
evolution model incorporating a detailed metallicity-dependent set
of stellar input data. Their results show that $Y_{\ast}$ for this class
of galaxies usually takes values between $\sim 0.5$ and $\sim 2$.
For this reason we have constrained the mass-to-light ratio to the
interval $[0; 5]$ as a conservative choice; only in one
case (UGC3851) did we
have to enlarge the interval up to $10$.\\
An important assumption is that this parameter is constant over
the whole data range; this is an unavoidable and the most general
assumption one can make without detailed knowledge of the stellar
population distribution inside LSB galaxies.
A comment is in order for a correct evaluation of the results:
during the analysis we have compared the theoretical rotation
curves derived from our scalar field model, using only the
observable-matter densities described in \citep{SwatersPhD}, with
data first published in \citet{deBlok02}. These data, available
in the SIMBAD database, consist of the contributions to the total
rotation curve from the stellar component (with an assumed
mass-to-light ratio $Y_{\ast} = 1$) and from the gas (with the gas
density normalized by a factor $1.4$ to account for the helium
contribution), together with the total observed rotation
curve (which can be considered an effect of dark matter in a CDM
scenario, or of the interaction with a scalar field as
in our approach). Both the total and the gas rotation curves have
been subjected to a smoothing procedure to remove
irregularities, which mainly arise from two sources. First,
deriving mass models from rotation curves assumes symmetry,
that all mass is on circular orbits, and continuity with radius. Raw data
show scatter and non-circular motions which can produce spurious or
ambiguous rotation velocities. Second, gas densities often show
small-scale structure and irregularities, and look clumpy at any
distance from the center, which can produce large
fluctuations in the rotation curve. In \citet{deBlok02} the
smoothing procedure is of course tested, and the smoothed rotation
curve is a very good approximation to the raw one. Still, some
discrepancies with our theoretical estimates may be found
because we have used
raw gas density profiles from \citep{SwatersPhD}. \\
If we consider that all gas contributions to the rotation curve are
multiplied by $\beta$ only, while the stellar ones are multiplied by
the combination $\beta \cdot Y_{\ast}$, then in some cases, depending
on the fitted values and on the relative contributions, irregularities
in the gas distribution are emphasized and may affect the total
rotation curve profile.
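The combination just described can be sketched as follows, under the standard convention that the squared velocity contributions add in quadrature, with the gas term scaled by $\beta$ and the stellar term by $\beta \cdot Y_{\ast}$ (the quadrature assumption and the function name are ours, not taken from the paper):

```python
import numpy as np

def total_rotation_velocity(v_gas, v_star, beta, y_star):
    """Total circular velocity from gas and stellar contributions.

    v_gas, v_star: rotation-curve contributions (km/s), computed from the
    observed gas density (already including the 1.4 helium factor) and
    from the stellar photometry with an assumed Y_* = 1, respectively.
    beta: scalar-field coupling; y_star: stellar mass-to-light ratio.
    Squared contributions are assumed to add in quadrature.
    """
    return np.sqrt(beta * v_gas**2 + beta * y_star * v_star**2)

# With beta = 1 and Y_* = 1 this reduces to the usual quadrature sum:
print(total_rotation_velocity(np.array([30.0]), np.array([40.0]), 1.0, 1.0))
# -> [50.]
```

Written this way it is immediate that any clumpiness in $v_{gas}$ propagates into the total curve with weight $\beta$, while the stellar term is additionally damped or amplified by $Y_{\ast}$.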
{\renewcommand{\arraystretch}{1.5}
\begin{table*}
\begin{center}
\caption{\textit{LSB galaxies.} Column 1: UGC number. Column 2: Distance from the source literature.
Column 3: disk central surface brightness in R-band, corrected for galactic extinction and inclination. Column 4:
disk scale length. Column 5: total HI gas mass. Column 6: maximum rotation velocity. Column 7: best-fit stellar mass-to-light ratio ($1\sigma$ confidence interval). Column 8: coupling parameter $\beta$ from the scalar field ($1\sigma$ confidence interval). Column 9: gravitational length $L$ from the scalar field ($1\sigma$ confidence interval). \label{tabspiral}}
\begin{tabular}{ccccccccc}
\tableline
UGC & D & $\mu_{0,R}$ & $R_{d}$ & $M_{HI}$ & $V_{max}$ & $Y_{\ast}$ & $\beta$ & $L$ \\
& (Mpc) & (mag arcsec$^{-2}$) & (kpc) & ($10^{8} M_{\odot}$) & (km/s) &$(Y_{\odot})$ & & (kpc) \\
\tableline
\tableline
U1230 & $51$ & $22.6$ & $4.5$ & $58.0$ & $103$ &$2.09^{+0.96}_{-0.66}$ & $1.328^{+0.281}_{-0.237}$ & $31.011^{+42.784}_{-14.836}$ \\
U1281 & $5.5$ & $22.7$ & $1.7$ & $3.2$ & $57$ &$0.77^{+0.16}_{-0.12}$ & $1.381^{+0.195}_{-0.155}$ & $4.006^{+7.362}_{-1.938}$ \\
U3137 & $18.4$ & $23.2$ & $2.0$ & $43.6$ & $100$ &$1.97^{+0.15}_{-0.15}$ & $1.837^{+0.030}_{-0.028}$ & $75.810^{+117.703}_{-18.384}$ \\
U3371 & $12.8$ & $23.3$ & $3.1$ & $12.2$ & $86$ &$1.70^{+0.58}_{-0.41}$ & $1.444^{+0.206}_{-0.173}$ & $9.864^{+26.568}_{-5.450}$ \\
U3851 & $3.4$ & $22.6$ & $1.5$ & $7.3$ & $55$ &$6.19^{+0.44}_{-0.77}$ & $0.238^{+0.398}_{-0.189}$ & $0.348^{+0.577}_{-0.181}$ \\
U4173 & $16.8$ & $24.3$ & $4.5$ & $21.2$ & $57$ &$1.67^{+0.69}_{-0.64}$ & $0.957^{+0.232}_{-0.375}$ & $4.534^{+12.419}_{-3.061}$ \\
U4278 & $10.5$ & $22.5$ & $2.3$ & $13.6$ & $93$ &$1.23^{+0.16}_{-0.13}$ & $1.299^{+0.074}_{-0.078}$ & $71.714^{+112.411}_{-45.008}$ \\
U4325 & $10.1$ & $21.6$ & $1.6$ & $7.5$ & $123$ &$0.18^{+0.06}_{-0.03}$ & $4.339^{+3.051}_{-1.376}$ & $1.068^{+2.315}_{-0.664}$ \\
U5721 & $6.7$ & $20.2$ & $0.5$ & $6.6$ & $79$ &$0.28^{+0.05}_{-0.04}$ & $2.203^{+0.148}_{-0.114}$ & $7.210^{+12.776}_{-2.429}$ \\
U7524 & $3.5$ & $22.2$ & $2.3$ & $9.7$ & $83$ &$1.52^{+1.09}_{-0.38}$ & $1.579^{+0.444}_{-0.539}$ & $2.212^{+0.964}_{-0.699}$ \\
U7603 & $6.8$ & $20.8$ & $0.7$ & $5.4$ & $64$ &$0.014^{+0.017}_{-0.010}$ & $1.770^{+0.066}_{-0.058}$ & $23.767^{+105.159}_{-15.367}$ \\
U8286 & $4.8$ & $20.9$ & $0.8$ & $3.5$ & $84$ &$0.243^{+0.023}_{-0.021}$ & $2.416^{+0.078}_{-0.075}$ & $29.784^{+77.252}_{-18.995}$ \\
U8837 & $5.1$ & $23.2$ & $1.2$ & $1.6$ & $50$ &$0.888^{+0.409}_{-0.256}$ & $1.940^{+0.689}_{-0.485}$ & $0.349^{+0.436}_{-0.154}$ \\
U9211 & $12.6$ & $22.6$ & $1.2$ & $10.5$ & $64$ &$1.274^{+0.677}_{-0.458}$ & $1.525^{+0.527}_{-0.281}$ & $3.484^{+8.479}_{-1.898}$ \\
U10310 & $15.6$ & $22.0$ & $1.9$ & $12.6$ & $75$ &$0.489^{+0.191}_{-0.123}$ & $1.570^{+0.516}_{-0.294}$ & $3.684^{+6.726}_{-1.930}$ \\
\tableline
\end{tabular}
\end{center}
\end{table*}}
We can now discuss the obtained values of the
stellar mass-to-light ratio. Ten galaxies
have values compatible with the range prescribed by
\citet{vandenHoek00}; one (UGC3851) has a higher value which is
hard to explain in terms of reasonable population synthesis; and
four (UGC4325, UGC5721, UGC7603, UGC8286) have values smaller than
$0.5$. Among these, UGC7603 is particularly problematic,
because its mass-to-light ratio is far too small, $Y_{\ast} = 0.017$,
while the others can still be considered acceptable.\\
UGC3851 is a real challenge for both our approach and the more
traditional one \citep{deBlok02}. The very linear rise in the
inner region changes rapidly into a flat part at larger radii, and
this sharp change of slope is very hard to reproduce.
There are two possible explanations for this behavior: the HI curve at the
outer points is underestimated, or there are non-circular motions
related to a bar-like structure and a star-forming region in the
center which affect the H$\alpha$ observations. It is interesting
to note the many similarities between our analysis and that of
\citet{deBlok02}: both their NFW and isothermal halo models
overestimate the velocity in the inner region, with a larger deviation
for the NFW model than for the isothermal one, and larger in the inner
region than in the outer one, as in our case. Moreover, their derived
maximum value for the stellar mass-to-light ratio is $Y_{\ast} =
5.4$, very close to our value. In any case, all the following relations
will be derived without this galaxy, because no
definitive conclusion can be drawn about it.
Among the four galaxies with a low mass-to-light ratio, one of the
most problematic cases is UGC5721, which shows a very narrow
bi-modality in the parameter distribution that cannot be removed
even by changing the parameters of the trial distribution in
the MCMC. The convergence test inevitably fails in this case, but
we have taken as best-fit results the values associated with the
highest peak of the likelihood distribution shown in
Fig.~(\ref{fig:cham_gal4}), which produce a very good fit to the
observed rotation curve. A discrepancy can be found around
$2$ kpc, but as the large error bar shows, in this region there are
observational difficulties related to the gas observations which
produce the detected faster rise in velocity.
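When a chain is bi-modal, as for UGC5721, one simple (if crude) way to extract a best-fit value of the kind quoted above is to take the center of the most populated bin of the marginalized distribution. A minimal sketch (the binning choice and the toy chain are ours):

```python
import numpy as np

def highest_peak_estimate(samples, bins=50):
    """Center of the most populated histogram bin, i.e. the mode of the
    marginalized distribution, used as the best-fit value when the
    chain is multi-modal and the mean would fall between the peaks."""
    counts, edges = np.histogram(samples, bins=bins)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])

# A toy bi-modal chain: a dominant peak near 2.2 and a minor one near 1.0.
rng = np.random.default_rng(0)
chain = np.concatenate([rng.normal(2.2, 0.05, 9000),
                        rng.normal(1.0, 0.05, 1000)])
print(highest_peak_estimate(chain))  # close to 2.2, not the sample mean
```

In practice one would also report the width of the dominant peak, since the global $1\sigma$ interval is not meaningful when the distribution is multi-modal.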
\begin{figure*}
\centering
\includegraphics[width=80mm]{UGC1230fig43.eps}
\includegraphics[width=80mm]{UGC1281fig44.eps}
\includegraphics[width=80mm]{UGC3137fig45.eps}
\includegraphics[width=80mm]{UGC3371fig46.eps}
\includegraphics[width=80mm]{UGC3851fig47.eps}
\includegraphics[width=80mm]{UGC4173fig48.eps}
\caption{Rotation curves of LSB galaxies. Dots are velocities from data; solid line is the
theoretical model, Eq.~(\ref{eq:rot_vel_final}); dashed line is the stellar contribution to
the rotation curve assuming $Y_{\ast} = 1$; dotted line is the gas contribution. \label{fig:cham_gal1}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=80mm]{UGC4278fig49.eps}
\includegraphics[width=80mm]{UGC4325fig50.eps}
\includegraphics[width=80mm]{UGC7524fig52.eps}
\includegraphics[width=80mm]{UGC7603fig53.eps}
\includegraphics[width=80mm]{UGC8286fig54.eps}
\includegraphics[width=80mm]{UGC8837fig55.eps}
\includegraphics[width=80mm]{UGC9211fig56.eps}
\includegraphics[width=80mm]{UGC10310fig57.eps}
\caption{Same as Fig.~(\ref{fig:cham_gal1}). \label{fig:cham_gal2}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=80mm]{NGC3274histo1fig33.eps}
\includegraphics[width=80mm]{NGC3274histo2fig34.eps}
\includegraphics[width=80mm]{UGC5721fig51.eps}
\caption{Same as Fig.~(\ref{fig:cham_gal1}), for UGC5721. The upper plots show the marginalized probability
distributions for $\beta$ and $Y_{\ast}$ obtained from the MCMCs, with the unresolved bi-modality discussed in
\S~(\ref{sec:LSB_results}).\label{fig:cham_gal4}}
\end{figure*}
As for the scalar field parameters, the coupling constant
$\beta$ appears very well constrained in the range $[0.957; 2.416]$, if
we exclude the previously discussed case of UGC3851, whose value is very low,
$\beta = 0.238$, probably because of the degeneracy with $Y_{\ast}$,
and the opposite case of UGC4325, which exhibits a large value, $\beta = 4.339$,
with larger errors on this parameter than the other galaxies. It is no
coincidence that this is also one of the four galaxies with a stellar mass-to-light
ratio lower than the simulation limit. \\
As in the cluster case, the scalar field length shows a larger spread, mainly
scaling with the size of each LSB galaxy. One problematic case is UGC8837,
which clearly deviates from the general trend
in the value of the scalar field length $L$, which turns out to be very
small. Even though the fit of our model to the data is very good, we do not
consider this galaxy in the following discussion, because its results
are strongly affected by the low quality of the data. As discussed in
\citet{deBlok02}, the H$\alpha$ data are of good quality, but do not correspond
very well to the HI profile, with a large difference in
systemic velocity, probably due to the inner
star-forming region and the non-circular motions located there. A further
error source may be the inclination: this galaxy is
almost edge-on, the most problematic configuration in this respect.
Just by visual inspection, one can see that the general
shapes of the rotation curves are quite well predicted by our model within the error
confidence region, even in some singular cases where irregularities
from the gas distribution are present (clumpy
peaks, changes in the profile convexity). In evaluating the
goodness of our fitted parameters, we have to note that all the
rotation curves are limited to a certain distance from the center
of each galaxy. It is well known that spiral galaxies show a great
variety of rotation curve profiles, with flat velocity plateaux and
different slopes (both increasing and decreasing) in the outer regions;
in our case, however, the need to match the stellar photometry with the
more extended gas observations results in a limited distance range
which may affect our results. As pointed out in
\citet{Capozziello07}, this may fail to resolve the previously described
degeneracies between the fitting parameters, or may induce wrong estimates
of them, thus requiring more extended data for a more exact analysis.\\
To better appreciate the statistical validity of our analysis,
we can compare it with a recent work \citep{Swaters10}
on the same class of galaxies, but based on MOND as an alternative
theoretical model to dark matter. MOND is well known to be the main
and most successful alternative scenario for explaining the rotation curves
of spiral galaxies, even though it is not satisfactory (and sometimes
completely unable) in describing mass profiles in clusters of galaxies.
In \citet{Swaters10}, MOND is found to predict LSB rotation curves quite well;
still, a look at their figures shows that even MOND cannot explain
all the features appearing in such profiles, and some open questions remain
which seem intrinsic to the model, while only weakly depending
on possible observational sources of uncertainty. \\
A positive feature of our model is that we obtain
sufficiently good fits for two very different gravitational
systems, clusters of galaxies and LSB galaxies, opening at the same
time the possibility of a unique theoretical background
underlying both. \\
Moreover, some of the correlations they find between the MOND
acceleration and the physical properties of LSB galaxies can be
found in our approach too. For example, we find a correlation
between the parameter $\beta$ and the extrapolated central disk
surface brightness $\mu_{0,R}$, even more general than theirs,
because in our case it fits all the LSB galaxies well,
without any cut-off dependence (their correlation strongly depends
on galaxies with higher surface densities). Two possible fits are:
\begin{equation}
\beta = -0.277167 \cdot \mu_{0,R} + 7.78297
\end{equation}
and
\begin{equation}
\log \beta = -3.80827 \cdot \log \mu_{0,R} + 5.32519 \; ,
\end{equation}
and, as shown in the top panel of Fig.~(\ref{fig:LSB_bLMuVmax}),
they are almost indistinguishable.
We also find a correlation of both scalar field parameters
with the maximum rotation velocity, whereas in MOND no such correlation
has been found. In the middle and bottom panels of Fig.~(\ref{fig:LSB_bLMuVmax})
we have:
\begin{equation}
\log \beta = 0.354674 \cdot \log V_{max} -0.475189 \quad \mathrm{and}
\end{equation}
\begin{equation}
\log L = 3.81376 \cdot \log V_{max} -6.14695 \; .
\end{equation}
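Correlations of this kind are obtained from weighted least-squares fits in log-log space, with weights derived from the MCMC parameter errors. A minimal sketch of such a fit (the error-propagation convention and function name are ours; numpy's `polyfit` takes weights proportional to $1/\sigma$):

```python
import numpy as np

def loglog_fit(x, y, sigma_y):
    """Weighted linear fit of log10(y) against log10(x).

    sigma_y: 1-sigma errors on y (e.g. half-widths of the MCMC
    confidence intervals); they propagate to log space as
    sigma_log = sigma_y / (y * ln 10).
    """
    lx, ly = np.log10(x), np.log10(y)
    sigma_log = sigma_y / (y * np.log(10.0))
    slope, intercept = np.polyfit(lx, ly, 1, w=1.0 / sigma_log)
    return slope, intercept

# An exact power law log L = 3.8 log V - 6.1 is recovered by the fit:
v = np.array([55.0, 64.0, 75.0, 86.0, 100.0, 123.0])
L = 10.0**(3.8 * np.log10(v) - 6.1)
print(loglog_fit(v, L, 0.1 * L))  # ~ (3.8, -6.1)
```

The same routine applied to $(\mu_{0,R}, \beta)$, $(\rho_{gas}, L)$, etc., with the tabulated $1\sigma$ intervals as errors, reproduces the type of relations quoted in this section.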
Some questions still have to be studied in more detail, because for
LSB galaxies we could not perform as detailed an analysis of scaling
or structural parameters as we did for clusters of galaxies.
\begin{figure*}
\centering
\includegraphics[width=120mm]{spiral_bMu0fig38.eps}
\includegraphics[width=120mm]{spiral_bVmaxfig39.eps}
\includegraphics[width=120mm]{spiral_LVmaxfig42.eps}
\caption{Correlations among scalar field parameters and structural properties of LSB galaxies. (\textit{Top panel.})
Coupling constant $\beta$ versus the central disk surface brightness. The dotted line is the linear
fit; the dashed line is the log-log fit. (\textit{Middle panel.}) Coupling constant $\beta$ versus
the maximum rotation velocity. (\textit{Bottom panel.}) Scalar field length $L$ versus the maximum rotation velocity.
Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).\label{fig:LSB_bLMuVmax}}
\end{figure*}
In any case, the selected sample has given important indications and
perspectives on the possibility that the scalar field works well even
at these scales. But the large dispersion in the scalar field parameters
shows that something more accurate should be done. The sample
of LSB galaxies used here is limited by the fact that many rotation curves
are neither smooth nor extended to large radii, so that they
constitute a limited sample of the more complex and extended class
of gravitational systems that galaxies are. In a forthcoming paper we
are going to revisit all these questions, enlarging the galaxy
sample from dwarf and irregular galaxies to high surface
brightness (HSB) spiral galaxies and to elliptical galaxies.
\subsection{Unified picture}
\label{sec:unified}
Even with all the previous caveats in mind, the results appear
consistent when we compare the cluster and LSB analyses. We can
say more: even if we only derive phenomenological and
\textit{visual} relations, whose physical meaning has to be
studied in more detail, they can help us identify problematic
cases and thus verify \textit{a posteriori} whether they are such
because of intrinsic problems of our model or for some other reason.
Figs.~(\ref{fig:cham_csLd})~-~(\ref{fig:cham_csbM}) show that a general trend including both
clusters and LSB galaxies is feasible.\\
Fig.~(\ref{fig:cham_csLd}) suggests a correlation between the scalar field
length $L$ and the gas density of each object. We have
considered only the gas contribution, because we do not want to
involve dark matter and wish to use only visible matter: in
clusters of galaxies, gas is by far the main contribution to the
visible mass; and for a self-consistent discussion, we have
considered the analogous quantity for LSB galaxies too, also
considering that calculating the stellar mass contribution would
require the mass-to-light ratio $Y_{\ast}$, which is itself a
fit parameter.
\begin{figure*}
\centering
\includegraphics[width=150mm]{cluster+spiral_Ldensityfig23.eps}
\caption{$L$ vs $\rho_{gas}$. The dotted lines are the separate fits to the cluster and LSB samples.
The dashed line is the fit to the total (clusters + LSB) sample. The fits are weighted with errors
on parameters derived from MCMCs.
Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).}
\label{fig:cham_csLd}
\end{figure*}
For clusters (without the problematic cases: Abell 2390, MKW4, RXJ1159) we have:
\begin{equation}
\log L = -1.56605 \cdot \log \rho_{gas} + 8.99653 \; ;
\end{equation}
for LSB galaxies (without UGC3851, UGC4325, UGC8837):
\begin{equation}
\log L = -1.8617 \cdot \log \rho_{gas} + 9.95973 \; ;
\end{equation}
while for the total sample (without the previous exceptions):
\begin{equation}
\log L = -1.85764 \cdot \log \rho_{gas} + 10.1326 \; .
\end{equation}
We note a small difference in slope between the total and the
cluster-only samples, while the LSB-only slope is
practically equivalent to the total one. This suggests that a
general trend may be present and that it could likely be made
clearer by adding further intermediate data, such as groups of galaxies
and elliptical galaxies, or systems at smaller scales.
The same, but for the coupling constant $\beta$, is shown in Fig.~(\ref{fig:cham_csbd}).
\begin{figure*}
\centering
\includegraphics[width=150mm]{cluster+spiral_bdensityfig19.eps}
\caption{$\beta$ vs $\rho_{gas}$. The dotted lines are the separate fits to the cluster and LSB samples.
The dashed line is the fit to the total (clusters + LSB) sample. The fits are weighted with errors
on parameters derived from MCMCs. Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).}
\label{fig:cham_csbd}
\end{figure*}
For clusters (without Abell 2390, MKW4, RXJ1159) it is:
\begin{equation}
\log \beta = 0.0946259 \cdot \log \rho_{gas} - 0.0237813 \; ;
\end{equation}
for LSB galaxies (without UGC3851, UGC4325, UGC8837):
\begin{equation}
\log \beta = 0.446629 \cdot \log \rho_{gas} -1.92872 \; ;
\end{equation}
and for the total sample (without the previous exceptions):
\begin{equation}
\log \beta = -0.0290609 \cdot \log \rho_{gas} + 0.444376 \; .
\end{equation}
In this case the total-sample slope differs from the other two,
even though the cluster-only slope is itself very small; visual
inspection also shows that the clusters are less spread
around the general relation than the LSB galaxies. The latter,
on the contrary, seem to show an intrinsic slope of their own, although
this may depend on the previously described problems we face when
working with a restricted galaxy sample, or with insufficiently extended
rotation curves.
We note that it is important to verify that this slope is very low
($\approx 0$), because in \S~\ref{sec:modified_potential} we assumed
that $\beta$ is constant, or at least depends only weakly on scale. This
hypothesis is partially confirmed by the clusters and by the total
sample fit.
The same conclusion can be drawn from Fig.~(\ref{fig:cham_csbL2}), where
we plot $\beta$ versus $L$: it is evident that $\beta$ has almost no
dependence on the gravitational scale.
\begin{figure*}
\centering
\includegraphics[width=150mm]{cluster+spiral_bLfig20.eps}
\caption{$\beta$ vs $L$. The dotted lines are the separate fits to the cluster and LSB samples.
The dashed line is the fit to the total (clusters + LSB) sample. The fits are weighted with errors
on parameters derived from MCMCs. Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).}
\label{fig:cham_csbL2}
\end{figure*}
In this case we also note
that the global fit is driven more by the clusters than by the LSB
galaxies, since the fit is weighted and the clusters provide the
tightest constraints on the scalar field parameters.
For clusters (without Abell 2390, MKW4, RXJ1159) we have:
\begin{equation}
\log \beta = -0.11781 \cdot \log L + 0.697316 \; ;
\end{equation}
for LSB galaxies (without UGC3851, UGC4325, UGC8837):
\begin{equation}
\log \beta = -0.00355508 \cdot \log L + 0.197588 \; ;
\end{equation}
and for the total sample (without the previous exceptions):
\begin{equation}
\log \beta = 0.0176545 \cdot \log L + 0.306458 \; .
\end{equation}
When plotting the scalar field parameters versus the total gas mass enclosed
in the considered gravitational structures, the need for more objects
to obtain more detailed and better constrained results becomes even more
evident. In Figs.~(\ref{fig:cham_csLM})~-~(\ref{fig:cham_csbM}) the large
gap between the cluster and LSB regions is apparent.
\begin{figure*}
\centering
\includegraphics[width=150mm]{cluster+spiral_LMgasfig24.eps}
\caption{$L$ vs $M_{gas}$. The dotted lines are the separate fits to the cluster and LSB samples.
The dashed line is the linear fit to the total (clusters + LSB) sample. The dot-dashed line is
the logarithmic fit to the total (clusters + LSB) sample. The fits are weighted with errors
on parameters derived from MCMCs. Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).}
\label{fig:cham_csLM}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=150mm]{cluster+spiral_bMgasfig22.eps}
\caption{$\beta$ vs $M_{gas}$. The dotted lines are the separate fits to the cluster and LSB samples.
The dashed line is the fit to the total (clusters + LSB) sample. The fits are weighted with errors
on parameters derived from MCMCs. Objects in brackets are excluded from fits as described in \S~(\ref{sec:results}).}
\label{fig:cham_csbM}
\end{figure*}
We have provided possible fits in this case as well;
for clusters (without Abell 2390, MKW4, RXJ1159) we have:
\begin{equation}
\log L = 0.558047 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} + 0.273985 \; ,
\end{equation}
\begin{equation}
\log \beta = -0.0745605 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} + 0.702031 \; ;
\end{equation}
while for LSB galaxies (without UGC3851, UGC4325, UGC8837):
\begin{equation}
\log L = 1.141 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} +0.592915 \; ,
\end{equation}
\begin{equation}
\log \beta = -0.355447 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} + 0.193349 \; .
\end{equation}
For the total sample (without the previous exceptions) we have tried two different
fits for the length $L$:
\begin{equation}
\log L = 1.141 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} +0.592915 \; ,
\end{equation}
\begin{equation}
\log L = 3.67182 \cdot \log \left( 1.38315 \log \frac{M_{gas}}{10^9 M_{\odot}} \right) + 1.59465 \; ,
\end{equation}
but without intermediate data we cannot draw any conclusion; for $\beta$
we have:
\begin{equation}
\log \beta = 0.0335265 \cdot \log \frac{M_{gas}}{10^9 M_{\odot}} + 0.198459 \; .
\end{equation}
\section{Conclusions}
\label{sec:conclusions}
In this work we have studied the dynamical properties of several
astrophysical systems within the theoretical framework of
scalar field theories.
We have investigated whether there is evidence for a scalar field in the
considered astrophysical systems and whether it is possible to
detect it observationally. We have taken into account three different classes of
objects: supernovae, low surface brightness spiral galaxies and
clusters of galaxies. The results show that: \textit{$i)$} there is an
intrinsic difficulty in extracting information about the scalar field
mechanism (or, more generally, about a varying gravitational
constant) from supernovae; \textit{$ii)$} a scalar field can
reproduce fairly well the matter profiles of clusters of galaxies
estimated from X-ray observations, without the need for any
additional dark matter; \textit{$iii)$} good fits to the rotation
curves of low surface brightness galaxies are obtained using only
the visible stellar and gas mass components.
These results show that different astrophysical systems can be
used as different tracers of the same physical mechanism.
Moreover, they point towards the possibility of a unified view
of dark matter and dark energy via a scalar field, at least at
galactic and cluster scales \citep{Cardone}. The main criticism of
this approach is that the very different
physical properties and evolution of the considered astrophysical
systems could introduce unwanted biases and priors, leading to a wrong
overall picture of the underlying cosmological model. This
shortcoming could be partially overcome if homogeneous and well
calibrated samples of data at low, medium and high redshifts
are obtained in the future.
\section{Acknowledgments}
DFM thanks the Research Council of Norway FRINAT grant 197251/V30
and the Abel extraordinary chair UCM-EEA-ABEL-03-2010. DFM is also
partially supported by the projects CERN/FP/109381/2009 and
PTDC/FIS/102742/2008. VS has been partially funded by the Research
Council of Norway with a fellowship under the YGGDRASIL programme
$2009$-$2010$ and is now working at UPV/EHU under the project
``Convocatoria para la concesi\'on de Ayuda a la
Especializaci\'on para Investigadores Doctores en
la UPV/EHU-2009''. SC acknowledges the support of INFN (Sez. di
Napoli) and the ERASMUS/SOCRATES European program. VS acknowledges
V. F. Cardone for helpful comments and suggestions.
\section{Introduction and Background}
Let $G$ be a simple graph on $n$ vertices, and $I(G)$ its edge ideal,
i.e., a squarefree monomial ideal in $R=\Bbbk[x_1,\ldots,x_n]$ with monomial
generators $x_ix_j$ corresponding to each edge $\{i,j\}\in G$.
Such ideals have been extensively studied in such papers as
\cite{MR2301246}, \cite{Ha:2008:MIE:1344768.1344814}, \cite{MR2739498}, \cite{MR1031197},
and more recently, \cite{2010arXiv1012.5329M}.
A goal of much recent research has been to classify behavior of the resolutions
of such ideals $I(G)$ and that of their powers in terms of combinatorial data of $G$.
We provide here an explicit proof that the second power of the edge ideal of the
anticycle has not just a linear resolution, but also linear quotients.
In the course of the proof, we additionally demonstrate that all powers $I(P_n^c)^k$ of the edge ideal of the antipath have linear quotients.
\begin{defn}
Let $G$ be a simple graph on $n$ vertices. Then the \emph{edge ideal of G} is the squarefree monomial ideal $I(G)$ given by
\[ I(G) =(x_ix_j:\{i,j\}\in G).
\]
\end{defn}
We say that a graph $G$ has property $P$ if its edge ideal $I(G)$ has such a property; e.g., $G$ is Gorenstein if $I(G)$ is Gorenstein, $G$ is linear if $I(G)$ has a linear resolution, etc. In particular, we will say a graph $G$ has linear quotients if its edge ideal $I(G)$ has linear quotients:
\begin{defn}
Let $I$ be a homogeneous ideal. We say that \emph{$I$ has linear quotients} if there exists some ordering of the generators of $I=(m_1,m_2,\ldots,m_r)$ such that for all $i>1$,
\[ ((m_1,\ldots,m_{i-1}):(m_i))=(x_{k_1},\ldots,x_{k_s})
\]
for some variables $x_{k_1},\ldots,x_{k_s}$. We say that such an ordering $(m_1,m_2,\ldots,m_r)$ is a
\emph{linear quotients ordering of $I$}.
\end{defn}
For two monomials $m$ and $m'$ we define $m' : m$ to be the monomial $\frac{m'}{\gcd(m,m')}$.
Given monomials $m_1, \ldots, m_i$, the colon ideal $(m_1, \ldots, m_{i-1}):(m_i)$ can be computed as
\[ (m_1,\ldots, m_{i-1}):(m_i) = (m_1:m_i, \ldots, m_{i-1}:m_i).
\]
Thus, in order to show that a monomial ideal $I= (m_1, \ldots, m_r)$ has linear quotients, it suffices to show that for each pair of monomials $m_i$ and $m_j$ with $j<i$ that there exists another monomial $m_k$ with $k < i$ with
\[ m_k : m_i = x_l \text{ for some $l$} \qquad \text{and} \qquad x_l \text{ divides } m_j : m_i.
\]
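The pairwise criterion just stated is easy to check mechanically for a given ordering. A small Python sketch (the exponent-tuple encoding is ours; note this tests one fixed ordering, not all orderings):

```python
def colon(mp, m):
    """Exponent vector of m' : m = m' / gcd(m', m)."""
    return tuple(max(a - b, 0) for a, b in zip(mp, m))

def is_linear_quotients_order(monomials):
    """Check the pairwise criterion for an ordered list of monomials,
    each encoded as a tuple of exponents: for every i and every j < i
    there must exist k < i with m_k : m_i equal to a single variable
    x_l that divides m_j : m_i."""
    for i in range(1, len(monomials)):
        for j in range(i):
            cj = colon(monomials[j], monomials[i])
            if sum(cj) <= 1:
                continue  # m_j : m_i is already a variable; take k = j
            if not any(
                sum(ck) == 1 and cj[ck.index(1)] > 0
                for ck in (colon(monomials[k], monomials[i]) for k in range(i))
            ):
                return False
    return True

# Edge ideal of the path 1-2-3: (x1 x2, x2 x3) -- this order works.
print(is_linear_quotients_order([(1, 1, 0), (0, 1, 1)]))        # True
# Two disjoint edges: (x1 x2, x3 x4) -- the criterion fails.
print(is_linear_quotients_order([(1, 1, 0, 0), (0, 0, 1, 1)]))  # False
```

Checks of this kind are useful for experimenting with candidate orderings on small powers $I(G)^k$ before attempting a general proof.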
The \emph{graded Betti numbers} of a homogeneous ideal $I$ are given by $\beta_{i,j}(I) = \dim_{\Bbbk}\tor_i(I, \Bbbk)_j$. The graded Betti numbers also correspond to the ranks of the free modules in a minimal free resolution of $I$. We say an ideal $I$ which is generated in degree $d$ has a \emph{linear resolution} if $\beta_{i,j}(I) = 0$ for $j \neq i +d$. Ideals with linear quotients also have linear resolutions.
Providing a linear quotients ordering is one technique for proving that an ideal has a linear resolution, often with combinatorial significance in the case of monomial ideals. In the case of squarefree monomial ideals, an ideal $I$ having linear quotients is equivalent to its Alexander dual $I^{\vee}$ having a shelling order on its facets. For non-squarefree monomial ideals, a linear quotients ordering can be viewed as giving a shelling order on the Alexander dual of the polarization.
Interest in powers of the anticycle partially draws from a result of Herzog, Hibi and Zheng \cite{MR2091479} which states the following:
\begin{thm}[Herzog, Hibi, Zheng]\label{thm:HHZ}
Let $I$ be a quadratic monomial ideal of the polynomial ring. The following are equivalent:
\begin{enumerate}
\item $I$ has a linear resolution,
\item $I$ has linear quotients,
\item $I^k$ has a linear resolution for all $k \geq 1$.
\end{enumerate}
\end{thm}
For edge ideals, Fr\"oberg showed that $I(G)$ has a linear resolution if and only if the complement of $G$ is chordal \cite{MR1171260}.
Conspicuously missing from the above theorem is the statement that all powers of a quadratic monomial ideal $I$ with a linear resolution must have linear quotients. In fact, this is not known. There are numerous examples of non-quadratic monomial ideals possessing a linear resolution, or even linear quotients, whose powers do not. In \cite{MR2184787}, Conca provides an example generated in degree 3 which does not depend on the characteristic of the field $\Bbbk$.
It would be of interest to construct linear quotients of powers of quadratic monomial ideals with the aim of extending Herzog, Hibi and Zheng's theorem. Alternately, as no counterexamples are known, the construction of a quadratic monomial ideal $I$ with a linear resolution but some power $k$ with no linear quotients ordering on the generators of $I^k$ would be of combinatorial interest.
Our work on the second power of the anticycle was also inspired by a second thread of research. Francisco, H\`a and Van Tuyl first investigated graphs $G$ where $I(G)^k$ has a linear resolution for each $k \geq 2$.
From Fr\"oberg and Herzog, Hibi and Zheng's results, we see that chordal graphs have this property.
More generally, it has been shown by Francisco, H\`a and Van Tuyl that if some power of $I(G)$ has a linear resolution, then the complement of $G$ cannot contain any induced four cycles. Their proof was recorded in \cite{NP2009}.
Inspired by these results, Peeva and Nevo constructed an example of a graph $G$ with no four cycle in its complement and where $I(G)^2$ does not have a linear resolution. Peeva and Nevo have conjectured that their example works only because $I(G)$ has Castelnuovo-Mumford regularity four and that every successive power of an edge ideal should get strictly closer to a linear resolution. See \cite{NP2009} for a more precise statement.
Nevo has also shown that claw-free graphs with no four cycles in their complements have regularity at most three and their second powers have linear resolutions \cite{MR2739498}. Anticycles on more than four vertices meet these criteria and so, it follows that their second powers have linear resolutions. Here we demonstrate that the square of the edge ideal of the anticycle has linear quotients, recovering this result.
\section{Cycles, Anticycles, and Antipaths}
We first describe the edge ideal of the anticycle and partition pairs of its edges into several natural classes. Next, we provide a linear quotients ordering on these classes relative to the previous generators.
The \emph{complement} of a graph $G$ is the graph on the vertices of $G$ containing all edges that are not in $G$. We use $G^c$ to denote the complement graph.
\begin{defn} Let $C_n$ be the cycle graph on $n$ vertices, i.e. the graph consisting of one cycle of length $n$ on these vertices with no chords. The \emph{anticycle graph} $A_n$ is the complement graph of $C_n$, i.e., $A_n = C_n^c$.
\end{defn}
\begin{defn} The \emph{antipath} $P_n^c$ is the graph on $n$ vertices consisting of all edges in the complement of a path $P_n$ of length $n-1$. We depict the antipath in the figure below.
\end{defn}
\begin{center}
\begin{tikzpicture}
[scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\useasboundingbox (0,-.75) rectangle (12,.75);
\node [anchor=east] at (-.5,0) {$P_n$:};
\node at (0,0) [vertices, label=below:{$x_1$}]{};
\foreach \to/\from in {1/2,2/3,3/4}
\draw (-2+2*\to,0)--(-2+2*\from,0)
node[vertices, label=below:{$x_{\from}$}]{};
\draw [dashed] (2*3,0)--(2*5,0);
\node [vertices, label=below:{$x_{n-1}$}] (5) at (2*5,0) {};
\draw (2*5,0)--(2*6,0)
node[vertices, label=below:{$x_{n}$}]{};
\end{tikzpicture}
\end{center}
\begin{center}
\begin{tikzpicture}
[scale=.8, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\useasboundingbox (-5,-.75) rectangle (5,4.75);
\node [anchor=east] at (-5,2.05) {$P_n^c$:};
\node [vertices, label=right:{$x_n$}] (0) at (0:4) {};
\node [vertices, label=right:{$x_{n-1}$}] (30) at (30:4) {};
\node [vertices] (60) at (60:4) {};
\node [vertices, label=above:{$x_4$}] (90) at (90:4) {};
\node [vertices, label=above left:{$x_3$}] (120) at (120:4) {};
\node [vertices, label=left:{$x_2$}] (150) at (150:4) {};
\node [vertices, label=left:{$x_1$}] (180) at (180:4) {};
\foreach \to/\from in {0/30,90/120,120/150,150/180,30/60,60/90}
\draw [-, black!20] (\to)--(\from);
\foreach \to/\from in {0/90, 0/120,0/150,0/180,30/90,30/120,30/150,30/180,90/150,90/180,120/180}
\draw [-] (\to)--(\from);
\foreach \to/\from in {0/60,60/120,60/150,60/180}
\draw [dashed] (\to)--(\from);
\end{tikzpicture}
\end{center}
Producing a linear quotients ordering for the edge ideal of a graph with chordal complement is always possible, and all powers of such an ideal have linear resolutions, as given in Theorem 3.2 of \cite{MR2091479}. However, most naive orderings on the generators of higher powers of $I(G)$ fail to produce linear quotients, even when the complement of $G$ is chordal.
\begin{example} Let $R=\Bbbk[x_1,\ldots,x_6]$ and let $I=I(A_6)^2$ be the square of the edge ideal of the anticycle on 6 vertices in $R$. Its generators, written in lex order, are given by:
\begin{align*}
&{x}_{1}^{2} {x}_{3}^{2},{x}_{1}^{2} {x}_{3} {x}_{4},{x}_{1}^{2} {x}_{3}
{x}_{5},{x}_{1}^{2} {x}_{4}^{2},{x}_{1}^{2} {x}_{4} {x}_{5},{x}_{1}^{2}
{x}_{5}^{2},{x}_{1} {x}_{2} {x}_{3} {x}_{4},{x}_{1} {x}_{2} {x}_{3} {x}_{5},
{x}_{1}{x}_{2} {x}_{3} {x}_{6},\\
&{x}_{1} {x}_{2} {x}_{4}^{2}, {x}_{1} {x}_{2} {x}_{4}{x}_{5},{x}_{1} {x}_{2} {x}_{4}
{x}_{6},{x}_{1} {x}_{2} {x}_{5}^{2}, {x}_{1} {x}_{2}{x}_{5} {x}_{6},{x}_{1} {x}_{3}^{2}
{x}_{5},{x}_{1} {x}_{3}^{2} {x}_{6},{x}_{1} {x}_{3}{x}_{4} {x}_{5},\\
&{x}_{1} {x}_{3} {x}_{4} {x}_{6},
{x}_{1} {x}_{3} {x}_{5}^{2},{x}_{1}{x}_{3} {x}_{5} {x}_{6}, {x}_{1} {x}_{4}^{2} {x}_{6},
{x}_{1} {x}_{4} {x}_{5}{x}_{6},{x}_{2}^{2} {x}_{4}^{2},{x}_{2}^{2}
{x}_{4}{x}_{5},{x}_{2}^{2} {x}_{4} {x}_{6},{x}_{2}^{2} {x}_{5}^{2},\\
&{x}_{2}^{2} {x}_{5} {x}_{6},
{x}_{2}^{2}{x}_{6}^{2},
{x}_{2} {x}_{3} {x}_{4} {x}_{5},{x}_{2} {x}_{3} {x}_{4} {x}_{6},{x}_{2}
{x}_{3} {x}_{5}^{2},{x}_{2} {x}_{3} {x}_{5} {x}_{6},{x}_{2} {x}_{3} {x}_{6}^{2},
{x}_{2}{x}_{4}^{2} {x}_{6},\\
&{x}_{2} {x}_{4} {x}_{5} {x}_{6},{x}_{2} {x}_{4}{x}_{6}^{2},
{x}_{3}^{2} {x}_{5}^{2},{x}_{3}^{2} {x}_{5} {x}_{6},{x}_{3}^{2} {x}_{6}^{2},{x}_{3} {x}_{4} {x}_{5} {x}_{6},{x}_{3} {x}_{4} {x}_{6}^{2},
{x}_{4}^{2}{x}_{6}^{2}.
\end{align*}
This ordering \emph{fails} to be a linear quotients ordering. Let $m_i$ be the $i^{\text{th}}$ monomial in the ordering above, and let $I_i$ denote the ideal generated by the first $i-1$ monomials in the ordering. Setting $Q_i=I_i:(m_i)$, we see that
\begin{align*}
Q_9&=({x}_{1}^{2} {x}_{3}^{2},{x}_{1}^{2} {x}_{3} {x}_{4},{x}_{1}^{2} {x}_{3}
{x}_{5},{x}_{1}^{2} {x}_{4}^{2},{x}_{1}^{2} {x}_{4} {x}_{5},{x}_{1}^{2}
{x}_{5}^{2},{x}_{1} {x}_{2} {x}_{3} {x}_{4},{x}_{1} {x}_{2} {x}_{3} {x}_{5}):(x_1x_2x_3x_6)\\
&=(x_4,x_5,x_1x_3)
\end{align*}
is not generated by variables, hence the lex ordering fails to give us linear quotients. Similarly, with reverse lex, we have the following ordered generating set:
\begin{align*}
&{x}_{1}^{2} {x}_{3}^{2},{x}_{1}^{2} {x}_{3} {x}_{4},{x}_{1} {x}_{2} {x}_{3}
{x}_{4},{x}_{1}^{2} {x}_{4}^{2},{x}_{1} {x}_{2} {x}_{4}^{2},{x}_{2}^{2}
{x}_{4}^{2},{x}_{1}^{2} {x}_{3} {x}_{5},{x}_{1} {x}_{2} {x}_{3} {x}_{5},{x}_{1}
{x}_{3}^{2} {x}_{5},\\
&{x}_{1}^{2} {x}_{4} {x}_{5},{x}_{1} {x}_{2} {x}_{4} {x}_{5},{x}_{2}^{2} {x}_{4} {x}_{5},
{x}_{1} {x}_{3} {x}_{4} {x}_{5},{x}_{2} {x}_{3}{x}_{4} {x}_{5},{x}_{1}^{2} {x}_{5}^{2},
{x}_{1} {x}_{2} {x}_{5}^{2},{x}_{2}^{2}{x}_{5}^{2},{x}_{1} {x}_{3} {x}_{5}^{2},\\
& {x}_{2} {x}_{3} {x}_{5}^{2},{x}_{3}^{2}{x}_{5}^{2}, {x}_{1} {x}_{2} {x}_{3} {x}_{6},
{x}_{1} {x}_{3}^{2} {x}_{6},{x}_{1} {x}_{2}{x}_{4} {x}_{6},{x}_{2}^{2} {x}_{4} {x}_{6},
{x}_{1} {x}_{3} {x}_{4} {x}_{6}, {x}_{2} {x}_{3} {x}_{4} {x}_{6},\\
& {x}_{1} {x}_{4}^{2} {x}_{6},{x}_{2} {x}_{4}^{2} {x}_{6},
{x}_{1}{x}_{2} {x}_{5} {x}_{6},{x}_{2}^{2} {x}_{5} {x}_{6},{x}_{1} {x}_{3} {x}_{5}
{x}_{6},{x}_{2} {x}_{3} {x}_{5} {x}_{6},{x}_{3}^{2} {x}_{5} {x}_{6},{x}_{1} {x}_{4} {x}_{5} {x}_{6},\\
& {x}_{2} {x}_{4} {x}_{5} {x}_{6},{x}_{3} {x}_{4} {x}_{5} {x}_{6}, {x}_{2}^{2} {x}_{6}^{2},
{x}_{2} {x}_{3} {x}_{6}^{2},{x}_{3}^{2}{x}_{6}^{2},{x}_{2} {x}_{4} {x}_{6}^{2},
{x}_{3} {x}_{4} {x}_{6}^{2},{x}_{4}^{2} {x}_{6}^{2}.
\end{align*}
This fails to have linear quotients at $Q_{21}=I_{21}:(x_1x_2x_3x_6)=(x_4,x_5,x_1x_3)$. Indeed, no monomial term ordering appears to produce a linear quotients ordering on the generators of $I(A_n)^2$.
\end{example}
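The failure above can be replayed mechanically. The following Python sketch (our own encoding of monomials as exponent vectors; the helper names are not from any computer algebra system) builds the 42 generators of $I(A_6)^2$, sorts them lex with $x_1 > x_2 > \cdots > x_6$, and recovers the failing colon ideal $Q_9 = (x_4, x_5, x_1x_3)$.

```python
from itertools import combinations_with_replacement as cwr

n = 6  # A_6: complement of the 6-cycle x1-x2-...-x6-x1

def edges():
    # edges of A_6: all pairs {xi, xj} that are NOT edges of the 6-cycle
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if j - i != 1 and (i, j) != (0, n - 1)]

def emono(i, j):
    # exponent vector of the edge monomial x_{i+1} x_{j+1}
    v = [0] * n
    v[i] += 1; v[j] += 1
    return tuple(v)

def mul(a, b):
    return tuple(p + q for p, q in zip(a, b))

def min_gens(ms):
    # minimal generators: drop any monomial divisible by another
    ms = set(ms)
    return {m for m in ms
            if not any(d != m and all(a <= b for a, b in zip(d, m)) for d in ms)}

def colon(gens, m):
    # (gens) : (m) is generated by the monomials g / gcd(g, m)
    return min_gens(tuple(max(g[i] - m[i], 0) for i in range(n)) for g in gens)

# minimal generators of I(A_6)^2, in lex order with x1 > x2 > ... > x6
gens = sorted({mul(emono(*e), emono(*f)) for e, f in cwr(edges(), 2)}, reverse=True)

Q9 = colon(gens[:8], gens[8])  # I_9 : (m_9) with m_9 = x1*x2*x3*x6
```

Here `Q9` contains the quadratic generator $x_1x_3$ alongside $x_4$ and $x_5$, so the colon ideal is not generated by variables.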
This appears to hold more generally: while all higher powers of edge ideals with linear quotients appear to have linear quotients as well, these linear quotients orderings almost never arise from a monomial term ordering.
\section{Antipath Linear Quotients}
Throughout this section we will use $H=P_n^c$ to denote the antipath on $n$ vertices.
The first stage in our linear quotients ordering is to show that every power of the edge ideal of the antipath has linear quotients with respect to the lex order. As the complement of the antipath is a chordal graph, $I(H)$ has a linear resolution by Fr\"{o}berg's Theorem \cite{MR1171260}. Furthermore, as $I(H)$ has a linear resolution and is generated in degree 2, it is known to have a linear quotients ordering, and all of its powers have linear resolutions \cite{MR2091479}. However, a linear resolution of a power $I(H)^k$ does not come with an explicit linear quotients ordering of its generators, which we provide here.
\begin{proposition}\label{prop:kantipath}
The $k^{\text{th}}$ power $I(H)^k$ of the edge ideal of the antipath $H$ has linear quotients,
under the lex ordering of the generators.
\end{proposition}
We begin with some notation and a lemma.
Given any $k$ edges $e_1, \ldots, e_k$ in a graph $G$, we will often abuse notation and
write $m=e_1e_2\cdots e_k$ for the monomial
\[ m=\prod_{r=1}^k x_{i_r} x_{j_r}
\]
where $e_r=\{x_{i_r},x_{j_r}\}$. When a monomial $m$ is of this form, we say $m$ is the \emph{product of $k$ edges} of $G$.
\begin{example}Let $G$ be the complete graph on six vertices $\{x,y,z,w,s,t\}$ seen below.
\begin{center}
\begin{tikzpicture}
[scale=.8, vertices/.style={draw, fill=black, circle, inner sep=0.5pt}]
\useasboundingbox (-2.5,-2.4) rectangle (2.5,2.4);
\node [anchor=base] at (-3.3,-.1){$G$:};
\node [vertices] (1) at (0:2) {};
\node [anchor=base] at (2.4,-.1) {$w$};
\node [vertices] (2) at (60:2) {};
\node [anchor=base] at (60:2.3) {$z$};
\node [vertices] (3) at (120:2) {};
\node [anchor=base] at (120:2.3) {$y$};
\node [vertices] (4) at (180:2) {};
\node [anchor=base] at (-2.4, -.1) {$x$};
\node [vertices] (5) at (240:2) {};
\node [anchor=base] at (240:2.5) {$t$};
\node [vertices] (6) at (300:2) {};
\node [anchor=base] at (300:2.5) {$s$};
\foreach \to/\from in {2/1, 3/1, 3/2, 4/1, 4/2, 4/3, 5/1, 5/2, 5/3, 5/4, 6/1, 6/2, 6/3, 6/4, 6/5}
\draw [-] (\to)--(\from);
\end{tikzpicture}
\end{center}
Then the monomial $m=xyzwst\in I(G)^3$ is the product of any three edges that partition the vertex set, each vertex appearing in exactly one edge.
\begin{center}
\begin{tikzpicture}
[scale=.6, vertices/.style={draw, fill=black, circle, inner sep=0.5pt}]
\useasboundingbox (-2.5,-2.5) rectangle (2.5,2.5);
\node [vertices] (1) at (0:2) {};
\node [anchor=base] at (2.4,-.1) {$w$};
\node [vertices] (2) at (60:2) {};
\node [anchor=base] at (60:2.3) {$z$};
\node [vertices] (3) at (120:2) {};
\node [anchor=base] at (120:2.3) {$y$};
\node [vertices] (4) at (180:2) {};
\node [anchor=base] at (-2.4, -.1) {$x$};
\node [vertices] (5) at (240:2) {};
\node [anchor=base] at (240:2.5) {$t$};
\node [vertices] (6) at (300:2) {};
\node [anchor=base] at (300:2.5) {$s$};
\foreach \to/\from in {3/1, 3/2, 4/1, 4/2, 4/3, 5/1, 5/2, 5/3, 6/1, 6/2, 6/4, 6/5}
\draw [black!50,-] (\to)--(\from);
\foreach \to/\from in {5/4}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {6/3}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {1/2}
\draw [very thick, -] (\to)--(\from);
\node [anchor=center] at (210:2.15) {$e_1$};
\node [anchor=center] at (210:.4) {$e_2$};
\node [anchor=center] at (30:2.15) {$e_3$};
\end{tikzpicture}\hspace{2em}
\begin{tikzpicture}
[scale=.6, vertices/.style={draw, fill=black, circle, inner sep=0.5pt}]
\useasboundingbox (-2.5,-2.5) rectangle (2.5,2.5);
\node [vertices] (1) at (0:2) {};
\node [anchor=base] at (2.4,-.1) {$w$};
\node [vertices] (2) at (60:2) {};
\node [anchor=base] at (60:2.3) {$z$};
\node [vertices] (3) at (120:2) {};
\node [anchor=base] at (120:2.3) {$y$};
\node [vertices] (4) at (180:2) {};
\node [anchor=base] at (-2.4, -.1) {$x$};
\node [vertices] (5) at (240:2) {};
\node [anchor=base] at (240:2.5) {$t$};
\node [vertices] (6) at (300:2) {};
\node [anchor=base] at (300:2.5) {$s$};
\foreach \to/\from in {2/1, 3/2, 4/1, 4/3, 5/1, 5/2, 5/3, 5/4, 6/1, 6/2, 6/3, 6/4}
\draw [black!50,-] (\to)--(\from);
\foreach \to/\from in {1/3}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {4/2}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {5/6}
\draw [very thick, -] (\to)--(\from);
\node [anchor=center] at (150:.75) {$e_1$};
\node [anchor=center] at (30:.75) {$e_2$};
\node [anchor=center] at (270:2.1) {$e_3$};
\end{tikzpicture}\hspace{2em}
\begin{tikzpicture}
[scale=.6, vertices/.style={draw, fill=black, circle, inner sep=0.5pt}]
\useasboundingbox (-2.5,-2.5) rectangle (2.5,2.5);
\node [vertices] (1) at (0:2) {};
\node [anchor=base] at (2.4,-.1) {$w$};
\node [vertices] (2) at (60:2) {};
\node [anchor=base] at (60:2.3) {$z$};
\node [vertices] (3) at (120:2) {};
\node [anchor=base] at (120:2.3) {$y$};
\node [vertices] (4) at (180:2) {};
\node [anchor=base] at (-2.4, -.1) {$x$};
\node [vertices] (5) at (240:2) {};
\node [anchor=base] at (240:2.5) {$t$};
\node [vertices] (6) at (300:2) {};
\node [anchor=base] at (300:2.5) {$s$};
\foreach \to/\from in {2/1, 3/1, 3/2, 4/2, 4/3, 5/1, 5/3, 5/4, 6/1, 6/2, 6/4, 6/5}
\draw [black!50,-] (\to)--(\from);
\foreach \to/\from in {4/1}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {2/5}
\draw [very thick, -] (\to)--(\from);
\foreach \to/\from in {3/6}
\draw [very thick, -] (\to)--(\from);
\node [anchor=north] at (180:1.4) {$e_1$};
\node [anchor=west] at (60:1.4) {$e_2$};
\node [anchor=west] at (300:1.4) {$e_3$};
\end{tikzpicture}
\end{center}
So $m=e_1e_2e_3$ for the labeled edge sets in any of the diagrams above.
\end{example}
\begin{lemma}\label{lem:kpowers} The ideal $I(H)^k$ is given by all monomials of degree $2k$ of the form
\begin{multline*}
I(H)^k=(x_{i_1}x_{i_2}\cdots x_{i_k}x_{j_1}x_{j_2}\cdots x_{j_k}: \\
i_1\leq i_2\leq\cdots\leq i_k\leq j_1\leq j_2\leq \cdots \leq j_k \text{ and }i_r+2\leq j_r \text{ for all }r).
\end{multline*}
\end{lemma}
Equivalently, every minimal monomial generator $m \in I(H)^k$ can be written as
a product of $k$ edges $m = e_1\cdots e_k$ where $e_r=\{x_{i_r},x_{j_r}\}$
and
\[ i_1\leq i_2\leq\cdots \leq i_k\leq j_1\leq j_2\leq \cdots \leq j_k.
\]
\begin{proof}
Any monomial $m$ of degree $2k$ can be written as
\[ m = x_{i_1} \cdots x_{i_k} x_{j_1} \cdots x_{j_k}
\]
with $i_1 \leq \cdots \leq i_k \leq j_1 \leq \cdots \leq j_k$. Let $m$ be a minimal generator of $I(H)^k$ and
write $m$ as above. Assume for a contradiction that there is an index $r$ with $i_r + 2 > j_r$. Since the
indices of $m$ have been written in ascending order, we know that
\[ \{ i_r, i_{r+1}, \ldots, i_k, j_1, \ldots, j_r \} \subseteq \{ i_r, i_r +1\}.
\]
Let $m'$ be the degree $k+1$ monomial $m' = x_{i_r}\cdots x_{i_k}x_{j_1} \cdots x_{j_r}$ which
divides $m$. The support of $m'$ is contained in $\{x_{i_r}, x_{i_r +1}\}$, but there are no edges in the antipath between $x_{i_r}$ and $x_{i_r +1}$. Thus, $m'$ contains no edge as a factor.
However, as $m$ is a product of $k$ edges, every degree $k+1$ factor of $m$ must contain at least one edge.
This is contradicted by our construction of $m'$, and so we must have $i_r + 2 \leq j_r$ for each $r$.
\end{proof}
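As a sanity check on Lemma~\ref{lem:kpowers}, both descriptions of the generating set can be enumerated and compared for small antipaths. The Python sketch below (a brute-force verification under our own encoding, not part of the proof) builds the generators once as products of $k$ edges and once from the interleaving inequalities, and the two sets agree.

```python
from itertools import combinations_with_replacement as cwr

def antipath_edges(n):
    # edges of the antipath P_n^c: all pairs {x_i, x_j} with j >= i + 2
    return [(i, j) for i in range(1, n + 1) for j in range(i + 2, n + 1)]

def expvec(indices, n):
    # exponent vector of the product of the listed variables
    v = [0] * n
    for i in indices:
        v[i - 1] += 1
    return tuple(v)

def gens_from_edges(n, k):
    # minimal generators of I(P_n^c)^k as products of k edges
    return {expvec([v for e in es for v in e], n)
            for es in cwr(antipath_edges(n), k)}

def gens_from_lemma(n, k):
    # the lemma's description: i_1<=...<=i_k<=j_1<=...<=j_k with i_r + 2 <= j_r
    out = set()
    for iis in cwr(range(1, n + 1), k):
        for jjs in cwr(range(iis[-1], n + 1), k):
            if all(iis[r] + 2 <= jjs[r] for r in range(k)):
                out.add(expvec(list(iis) + list(jjs), n))
    return out
```

For instance, `gens_from_edges(6, 2)` and `gens_from_lemma(6, 2)` return the same set of exponent vectors.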
We now return to the proof of Proposition~\ref{prop:kantipath}.
\begin{proof}[Proof of Proposition~\ref{prop:kantipath}] From Lemma~\ref{lem:kpowers}, we have that
\begin{multline*}
I(H)^k=(x_{i_1}x_{i_2}\cdots x_{i_k}x_{j_1}x_{j_2}\cdots x_{j_k}: \\
i_1\leq i_2\leq\cdots\leq i_k\leq j_1\leq j_2\leq \cdots \leq j_k \text{ and }i_r+2\leq j_r \text{ for all }r).
\end{multline*}
Any pair of monomial generators $m$ and $m'$ of $I(H)^k$ will be of the forms:
\begin{align*}
m=x_{i_1}x_{i_2}\cdots x_{i_k}x_{j_1}x_{j_2}\cdots x_{j_k}=e_{1}e_{2}\cdots e_k\\
m'=x_{i_1'}x_{i_2'}\cdots x_{i_k'}x_{j_1'}x_{j_2'}\cdots x_{j_k'}=e_1'e_2'\cdots e_k'
\end{align*}
with indices $i_r,i_r',j_r,j_r'$ all satisfying the inequalities above
and for edges $e_r = \{ x_{i_r}, x_{j_r}\}$ and $e'_r = \{x_{i'_r}, x_{j'_r}\}$ of $H$. We show that for every such pair of monomials with $m' \lexgt m$, the quotient $m' : m$ is divisible by a variable of the form $x_i=m'':m$ for some generator $m''\lexgt m$.
{\bf Case 1: Monomials $m$ and $m'$ differ first at some $x_{i_r}$.}
Assume $i_r$ is the first index at which $m$ and $m'$ differ; i.e., $i_s=i_s'$ for all $s<r$ and $i_r'<i_r$.
Let $m''=\frac{\displaystyle x_{i_r'}}{\displaystyle x_{i_r}}m.$ This is certainly a monomial of the appropriate degree which is lex earlier than $m$. To show that $m''\in I(H)^k$, we note that as $i_r'<i_r\leq j_r-2$, we have an edge $\varepsilon_r=\{x_{i_r'},x_{j_r}\}\in H$. Thus
$$m''=e_1\cdots e_{r-1}\varepsilon_r e_{r+1}\cdots e_k\in I(H)^k.$$
As $m'':m=x_{i_r'}$ and $x_{i_r'}$ divides $m':m$, we either had $m''=m'$ (in which case we satisfy the first condition above) or $m''\neq m'$ and this colon satisfies the second condition above.
{\bf Case 2: Monomials $m$ and $m'$ differ first at some $x_{j_r}$.}
Assume that $m$ and $m'$ do not differ in the $x_{i_s}$; i.e., $i_s=i_s'$ for all $s=1,\ldots,k$.
Let $j_r$ be the first index where $m$ and $m'$ differ. That is, $j_s = j_s'$ for all $s<r$ and $j_r'<j_r$. So
\begin{align*}
m=x_{i_1}\cdots x_{i_k}x_{j_1}\cdots x_{j_{r-1}}x_{j_r}x_{j_{r+1}}\cdots x_{j_k}=e_1e_2\cdots e_{r-1}e_r e_{r+1}\cdots e_k\\
m'=x_{i_1}\cdots x_{i_k}x_{j_1}\cdots x_{j_{r-1}}x_{j_r'}x_{j_{r+1}'}\cdots x_{j_k'}=e_1e_2\cdots e_{r-1}e_r' e_{r+1}'\cdots e_k'.
\end{align*}
Choosing
\begin{align*}
m''&=x_{i_1}\cdots x_{i_k}x_{j_1}\cdots x_{j_{r-1}}x_{j_r'}x_{j_{r+1}}\cdots x_{j_k}\\
&=e_1e_2\cdots e_{r-1}e_r' e_{r+1}\cdots e_k,
\end{align*}
we note that as $e_r'=\{x_{i_r},x_{j_r'}\}\in H$, we have $m''\in I(H)^k$, and $m''$ is lex earlier than $m$. Then $m'':m=x_{j_r'}$, which divides $m':m$.
\end{proof}
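Proposition~\ref{prop:kantipath} can likewise be confirmed by brute force for small cases. The Python sketch below (our own encoding; not part of the proof) sorts the generators of $I(H)^k$ in descending lex order with $x_1$ the largest variable and checks that every successive colon ideal is generated by variables.

```python
from itertools import combinations_with_replacement as cwr

def antipath_edges(n):
    # edges of the antipath P_n^c: all pairs {x_i, x_j} with j >= i + 2
    return [(i, j) for i in range(1, n + 1) for j in range(i + 2, n + 1)]

def power_gens(n, k):
    # minimal generators of I(P_n^c)^k, in lex order with x1 > x2 > ... > xn
    gens = set()
    for es in cwr(antipath_edges(n), k):
        v = [0] * n
        for (i, j) in es:
            v[i - 1] += 1; v[j - 1] += 1
        gens.add(tuple(v))
    return sorted(gens, reverse=True)

def is_linear_quotients(ordered):
    # each colon (m_1,...,m_{t-1}) : (m_t) must be generated by variables
    for t in range(1, len(ordered)):
        m = ordered[t]
        quots = {tuple(max(g[i] - m[i], 0) for i in range(len(m)))
                 for g in ordered[:t]}
        mins = [q for q in quots
                if not any(d != q and all(a <= b for a, b in zip(d, q)) for d in quots)]
        if any(sum(q) != 1 for q in mins):
            return False
    return True
```

For example, `is_linear_quotients(power_gens(6, 2))` confirms the proposition for the square of the edge ideal of the antipath on 6 vertices.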
\section{Linear Quotient Ordering of Anticycle}
The proof that the powers of the edge ideal of the antipath have linear quotients is the first step in constructing a linear quotients ordering for the square of the edge ideal of the anticycle. With this in hand, we now show that the following ordering on the generators of the square of the edge ideal of the anticycle gives us linear quotients. For the remainder of this note, we let $G$ be the anticycle graph and let $H$ be the antipath obtained by deleting some vertex of $G$.
\begin{remark}\label{anticyclelabels}
We will label the vertices in $G$ as follows. Let $x$ be the vertex we delete to obtain $H$, and let $z_1$ and $z_2$ be the two vertices non-adjacent to $x$ in $G$ (so the two neighbors of $x$ in the cycle itself). Finally, let $y_1,\ldots,y_n$ be all the remaining vertices in order, so that $y_1$ is not adjacent to $z_1$ and $y_n$ is not adjacent to $z_2$. Note that each $y_i$ is adjacent to $x$.
Thus, for this section, we assume that $G$ has $n+3$ vertices. See the figure below.
\end{remark}
\begin{center}
\begin{tikzpicture}
[scale=.50, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\node [vertices, label=right:{$y_i$}] (0) at (0:4) {};
\node [vertices, label=right:{$y_{i-1}$}] (30) at (30:4) {};
\node [vertices, label=above right:{}] (60) at (60:4) {};
\node [vertices, label=above:{$y_2$}] (90) at (90:4) {};
\node [vertices, label=above left:{$y_1$}] (120) at (120:4) {};
\node [vertices, label=left:{$z_1$}] (150) at (150:4) {};
\node [vertices, label=left:{$x$}] (180) at (180:4) {};
\node [vertices, label=left:{$z_2$}] (210) at (210:4){};
\node [vertices, label=below left:{$y_n$}] (240) at (240:4){};
\node [vertices, label=below:{$y_{n-1}$}] (270) at (270:4){};
\node [vertices, label=left:{}] (300) at (300:4){};
\node [vertices, label=right:{$y_{i+1}$}] (330) at (330:4){};
\foreach \to/\from in {90/150,90/180,120/180,180/240,180/270,90/210,90/240,90/270,120/210,120/240,120/270, 150/210,150/240,150/270,210/270}
\draw [-] (\to)--(\from);
\foreach \to/\from in {0/60,30/90,30/120,30/150,30/180,0/90, 0/120,0/150,0/180, 0/210,0/240,0/270,0/300,30/210,30/240,30/270,30/300,30/330,60/150,60/180,60/210,60/240,60/270,60/300,60/330,60/120,270/330,90/330,90/300,120/330,120/300,150/300,150/330,180/330,180/300,210/330,210/300,240/300,240/330}
\draw [densely dashed, very thin] (\to)--(\from);
\draw [line width=1pt](-5,0) arc (180:-180:.7 and .6);
\end{tikzpicture}
\end{center}
\begin{thm}\label{thm:mainanticycletheorem}
Let $G$ be the $(n+3)$-anticycle graph, labeled as in the picture above, with $n \geq 2$.
Let $H=G\setminus\{x\}$ be the induced graph away from $x$.
Let $J=I(H)$ be the edge ideal of $H$ and let $K=I(G\setminus H)=(xy_i: i=1,\ldots,n)$ be the edge ideal
on the edges not in $H$.
Then the square $I(G)^2$ of the edge ideal has linear quotients given by the following ordering of its minimal monomial generators (monomials occurring earlier in this list appear earlier in the order):
\begin{enumerate}
\item\label{Jsquared} $m\in J^2$ ordered via the lex ordering with $z_1> y_1>y_2>\cdots >y_n>z_2$
\item\label{JK} $m\in J\cdot K$
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{(\ref{JK}\theenumi)}
\item\label{zees}$m = xy_iz_1z_2$, $i=1,\ldots,n$,
\item\label{z2}$m = xy_iy_jz_2$, $i\leq j$, ordered via lex with $y_1>y_2>\cdots>y_n$, excluding nongenerator $xy_n^2z_2$,
\item\label{z1}$m = xy_iy_jz_1$, $i\leq j$, ordered via lex with $y_1<y_2<\cdots<y_n$, excluding nongenerator $xy_1^2z_1$, and
\item\label{whys} $m = xy_iy_jy_k$, $i\leq j\leq k$, ordered via lex with $y_1>y_2>\cdots>y_n$.
\end{enumerate}
\item\label{Ksquared} $m\in K^2$.
\begin{enumerate}
\item\label{not1}$m=x^2y_iy_j$, $i\leq j$, ordered via lex with $y_1>y_2>\cdots >y_n$, excluding $x^2y_1^2$
\item\label{last1} $m=x^2 y_1^2$.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{center}
\begin{tikzpicture}
[scale=.45, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\node [anchor=east] at (-4.9,0) {$H$:};
\node [vertices, label=right:{$y_i$}] (0) at (0:4) {};
\node [vertices, label=right:{$y_{i-1}$}] (30) at (30:4) {};
\node [vertices, label=above right:{}] (60) at (60:4) {};
\node [vertices, label=above:{$y_2$}] (90) at (90:4) {};
\node [vertices, label=above left:{$y_1$}] (120) at (120:4) {};
\node [vertices, label=left:{$z_1$}] (150) at (150:4) {};
\node [vertices, label=left:{$z_2$}] (210) at (210:4){};
\node [vertices, label=below left:{$y_n$}] (240) at (240:4){};
\node [vertices, label=below:{$y_{n-1}$}] (270) at (270:4){};
\node [vertices, label=left:{}] (300) at (300:4){};
\node [vertices, label=right:{$y_{i+1}$}] (330) at (330:4){};
\foreach \to/\from in {90/150,90/210,90/240,90/270,120/210,120/240,120/270, 150/210,150/240,150/270,210/270}
\draw [-] (\to)--(\from);
\foreach \to/\from in {0/60,30/90,30/120,30/150,0/90, 0/120,0/150, 0/210,0/240,0/270,0/300,30/210,30/240,30/270,30/300,30/330,60/150,60/210,60/240,60/270,60/300,60/330,60/120,270/330,90/330,90/300,120/330,120/300,150/300,150/330,210/330,210/300,240/300,240/330}
\draw [densely dashed, very thin] (\to)--(\from);
\end{tikzpicture}\hfill
\begin{tikzpicture}
[scale=.45, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\node [anchor=east] at (-5.1,0) {$G \setminus H$:};
\node [vertices, label=right:{$y_i$}] (0) at (0:4) {};
\node [vertices, label=right:{$y_{i-1}$}] (30) at (30:4) {};
\node [vertices, label=above right:{}] (60) at (60:4) {};
\node [vertices, label=above:{$y_2$}] (90) at (90:4) {};
\node [vertices, label=above left:{$y_1$}] (120) at (120:4) {};
\node [vertices, label=left:{$z_1$}] (150) at (150:4) {};
\node [vertices, label=left:{$x$}] (180) at (180:4) {};
\node [vertices, label=left:{$z_2$}] (210) at (210:4){};
\node [vertices, label=below left:{$y_n$}] (240) at (240:4){};
\node [vertices, label=below:{$y_{n-1}$}] (270) at (270:4){};
\node [vertices, label=left:{}] (300) at (300:4){};
\node [vertices, label=right:{$y_{i+1}$}] (330) at (330:4){};
\foreach \to/\from in {90/180,120/180,180/240,180/270}
\draw [-] (\to)--(\from);
\foreach \to/\from in {0/180,30/180,60/180,300/180,330/180}
\draw [densely dashed, very thin] (\to)--(\from);
\end{tikzpicture}
\end{center}
\noindent Before giving the proof, we provide a specific example of the ordering of $I(G)^2$ for the anticycle $G$ on $6$ vertices.
\begin{example}
Let $n=3$ so we have the anticycle graph $G$ on vertices $\{x,z_1,y_1,y_2,y_3,z_2\}$.
\begin{center}
\begin{tikzpicture}
[scale=.35, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\useasboundingbox (-6,-5) rectangle (6,5);
\node [anchor=east] at (-5.7,0) {$G$:};
\node [vertices, label=right:{$y_2$}] (0) at (0:4) {};
\node [vertices, label=above right:{$y_1$}] (60) at (60:4) {};
\node [vertices, label=above left:{$z_1$}] (120) at (120:4) {};
\node [vertices, label=left:{$x$}] (180) at (180:4) {};
\node [vertices, label=below left:{$z_2$}] (240) at (240:4) {};
\node [vertices, label=below right:{$y_3$}] (300) at (300:4) {};
\foreach \to/\from in {0/120,0/180,0/240,60/180,60/240,60/300,120/240,120/300,180/300}
\draw [-] (\to)--(\from);
\end{tikzpicture}
\end{center}
Our two subgraphs $H$ and $G\setminus H$ will be as below.
\begin{center}
\begin{tikzpicture}
[scale=.35, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\useasboundingbox (-8,-5) rectangle (8,5);
\node [anchor=east] at (-4,0) {$H$:};
\node [vertices, label=right:{$y_2$}] (0) at (0:4) {};
\node [vertices, label=above right:{$y_1$}] (60) at (60:4) {};
\node [vertices, label=above left:{$z_1$}] (120) at (120:4) {};
\node [vertices, label=below left:{$z_2$}] (240) at (240:4) {};
\node [vertices, label=below right:{$y_3$}] (300) at (300:4) {};
\foreach \to/\from in {0/120,0/240,60/240,60/300,120/240,120/300}
\draw [-] (\to)--(\from);
\end{tikzpicture}\hspace{3em}
\begin{tikzpicture}
[scale=.35, vertices/.style={draw, fill=black, circle, inner sep=1pt}]
\useasboundingbox (-8,-5) rectangle (8,5);
\node [anchor=east] at (-5.7,0) {$G \setminus H$:};
\node [vertices, label=right:{$y_2$}] (0) at (0:4) {};
\node [vertices, label=above right:{$y_1$}] (60) at (60:4) {};
\node [vertices, label=left:{$x$}] (180) at (180:4) {};
\node [vertices, label=below right:{$y_3$}] (300) at (300:4) {};
\foreach \to/\from in {0/180,60/180,180/300}
\draw [-] (\to)--(\from);
\end{tikzpicture}
\end{center}
The linear quotients ordering from Theorem~\ref{thm:mainanticycletheorem} on the generators of $I(G)^2$ is given here by
\begin{align*}
I(G)^2=&\,\phantom{+}\,
( {z}_{1}^{2} {y}_{2}^{2},{z}_{1}^{2} {y}_{2} {y}_{3},{z}_{1}^{2} {y}_{2}
{z}_{2},{z}_{1}^{2} {y}_{3}^{2}, {z}_{1}^{2} {y}_{3} {z}_{2},{z}_{1}^{2}
{z}_{2}^{2},{z}_{1} {y}_{1} {y}_{2} {y}_{3},{z}_{1} {y}_{1} {y}_{2} {z}_{2},\\
&\,\phantom{+}\phantom{(}\,
{z}_{1} {y}_{1} {y}_{3}^{2},{z}_{1} {y}_{1} {y}_{3} {z}_{2},
{z}_{1} {y}_{1} {z}_{2}^{2},{z}_{1}{y}_{2}^{2} {z}_{2},
{z}_{1} {y}_{2} {y}_{3} {z}_{2},{z}_{1} {y}_{2} {z}_{2}^{2},{y}_{1}^{2} {y}_{3}^{2},\\
&\,\phantom{+}\phantom{(}\,
{y}_{1}^{2} {y}_{3} {z}_{2},{y}_{1}^{2} {z}_{2}^{2},
{y}_{1} {y}_{2} {y}_{3} {z}_{2}, {y}_{1} {y}_{2} {z}_{2}^{2},{y}_{2}^{2} {z}_{2}^{2})^{(\ref{Jsquared})}\\
&+(x {z}_{1} {y}_{1} {z}_{2},x {z}_{1} {y}_{2} {z}_{2},x {z}_{1} {y}_{3} {z}_{2})^{(2a)}\\
&+(x {y}_{1}^{2} {z}_{2},x {y}_{1} {y}_{2} {z}_{2},x {y}_{1} {y}_{3} {z}_{2},x
{y}_{2}^{2} {z}_{2},x {y}_{2} {y}_{3} {z}_{2})^{(2b)}\\
&+(x {z}_{1} {y}_{3}^{2},x {z}_{1} {y}_{2} {y}_{3}, x {z}_{1} {y}_{1} {y}_{3},
x {z}_{1} {y}_{2}^{2}, x {z}_{1} {y}_{1} {y}_{2})^{(2c)}\\
&+(x {y}_{1}^{2} {y}_{3},x {y}_{1} {y}_{2} {y}_{3},x {y}_{1} {y}_{3}^{2})^{(2d)}\\
&+(x^{2} {y}_{1} {y}_{2},x^{2} {y}_{1} {y}_{3},x^{2} {y}_{2}^{2},x^{2} {y}_{2}
{y}_{3},x^{2} {y}_{3}^{2})^{(\ref{not1})}\\
&+(x^{2} {y}_{1}^{2})^{(\ref{last1})}.
\end{align*}
\end{example}
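The ordering of this example can be checked by machine. The Python sketch below (our own encoding; the variable order and helper names are ours, not from the paper) parses the 42 generators in the stated order, confirms that they are exactly the products of two edges of $G$, and verifies that every successive colon ideal is generated by variables.

```python
from itertools import combinations_with_replacement as cwr

# vertex encoding for the 6-vertex anticycle G of the example
VARS = ["x", "z1", "y1", "y2", "y3", "z2"]
IDX = {v: i for i, v in enumerate(VARS)}

# the 42 generators of I(G)^2 in the order of the theorem (stages 1, 2a-2d, 3a, 3b)
ORDER = """
z1 z1 y2 y2; z1 z1 y2 y3; z1 z1 y2 z2; z1 z1 y3 y3; z1 z1 y3 z2; z1 z1 z2 z2;
z1 y1 y2 y3; z1 y1 y2 z2; z1 y1 y3 y3; z1 y1 y3 z2; z1 y1 z2 z2; z1 y2 y2 z2;
z1 y2 y3 z2; z1 y2 z2 z2; y1 y1 y3 y3; y1 y1 y3 z2; y1 y1 z2 z2; y1 y2 y3 z2;
y1 y2 z2 z2; y2 y2 z2 z2; x z1 y1 z2; x z1 y2 z2; x z1 y3 z2; x y1 y1 z2;
x y1 y2 z2; x y1 y3 z2; x y2 y2 z2; x y2 y3 z2; x z1 y3 y3; x z1 y2 y3;
x z1 y1 y3; x z1 y2 y2; x z1 y1 y2; x y1 y1 y3; x y1 y2 y3; x y1 y3 y3;
x x y1 y2; x x y1 y3; x x y2 y2; x x y2 y3; x x y3 y3; x x y1 y1
"""

def mono(words):
    # exponent vector of a product of four variables
    v = [0] * 6
    for w in words:
        v[IDX[w]] += 1
    return tuple(v)

ordering = [mono(t.split()) for t in ORDER.replace("\n", " ").split(";")]

# sanity check: these should be exactly the products of two edges of G
G_EDGES = [("x", "y1"), ("x", "y2"), ("x", "y3"), ("z1", "y2"), ("z1", "y3"),
           ("z1", "z2"), ("y1", "y3"), ("y1", "z2"), ("y2", "z2")]
products = {mono([a, b, c, d]) for (a, b), (c, d) in cwr(G_EDGES, 2)}

def is_linear_quotients(ordered):
    # each colon (m_1,...,m_{t-1}) : (m_t) must be generated by variables
    for t in range(1, len(ordered)):
        m = ordered[t]
        quots = {tuple(max(g[i] - m[i], 0) for i in range(6)) for g in ordered[:t]}
        mins = [q for q in quots
                if not any(d != q and all(a <= b for a, b in zip(d, q)) for d in quots)]
        if any(sum(q) != 1 for q in mins):
            return False
    return True
```

Running `is_linear_quotients(ordering)` checks all 41 colon ideals of the displayed ordering at once.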
\subsection{Proof of Theorem~\ref{thm:mainanticycletheorem}}
\begin{proof}[Proof of Theorem~\ref{thm:mainanticycletheorem}]The generators of $I(G)^2$ fall into three main cases, with the second case split up into four subcases and the third case placing the first lex ordered generator at the very end. We will address each case separately.
\begin{notation}Let $I_{M} = \left(I(G)^2\right)_M$ denote the ideal generated by all monomials in the linear quotients ordering before adding $M$, a minimal generator of $I(G)^2$. In general, we will use $Q_{M}$ to denote the colon ideal
$$Q_{M}=I_{M}:(M),$$
though we will often omit the subscript if the stage in the ordering is clear. We show here for all monomial generators $M$ in the quotients ordering that
$$Q_{M}=(x_{i_1},x_{i_2},\ldots,x_{i_k})$$
for some variables $x_{i_1},x_{i_2},\ldots,x_{i_k}\in\{x,z_1,z_2,y_1,y_2,\ldots,y_n\}=V$.
Let $V_{M}$ denote the variables generating $Q_{M}$, or as above, $V_{M}=\{x_{i_1},x_{i_2},\ldots,x_{i_k}\}$ and let $W_{M}=V\setminus V_{M}$.
The general technique begins by exhibiting, for each expected $x_i\in V_{M}$, a monomial generator $m'\in I_{M}$ such that
$$m':M=x_i.$$
After finding our expected $V_{M}$, we note that any remaining minimal monomial generators $m$ of $Q_{M}$ which are not variables, i.e., not contained in the ideal $(V_{M})$, must have support $\supp(m)\subseteq W_{M}$.
We then show that any generators $m'\in I(G)^2$ which would give us
$$m':M=m\in(W_{M})$$
must either have $m\in(V_M)$ (and hence a contradiction, as such a generator cannot be minimal in $Q_M$) or could only come from a monomial $m'$ occurring after $M$ in the linear quotients ordering (and hence another contradiction, as $m\not\in Q_M$). For consistency, we will always use $M$, $m$ and $m'$ in the same roles throughout the proof.
\end{notation}
\subsubsection{Stage (1):} Note that $H$ is the antipath of the path $z_1\sim y_1\sim y_2\sim\cdots \sim y_n\sim z_2$, so the ordering of $J^2$ given in (\ref{Jsquared}) is a linear quotients ordering by Proposition~\ref{prop:kantipath}.
\subsubsection{Stage (2a):} We now move on to generators in $(2a)$ and show that after adding through the $(i-1)^{\text{st}}$ term in $(2a)$, we have linear quotients when we colon this ideal against the $i^{\text{th}}$ term, $M=z_1z_2x y_i$. Let $Q$ be this colon ideal,
\begin{align*}
Q &= I_{z_1z_2xy_i}:(z_1z_2xy_i)\\
&=(J^2 + (z_1z_2 xy_j \mid 1 \leq j\leq i-1) ) : (z_1 z_2 x y_i).
\end{align*}
Note that the following inclusions hold, via the elements noted on the right.
\begin{itemize}
\item $Q \supseteq (y_j \mid j \neq i)$ as $y_j = z_1 z_2 y_j y_i : z_1 z_2 x y_i$.
\item $Q \supseteq (z_1)$ when $i \neq 1$ as $z_1 = z_1 z_2 z_1 y_i : z_1 z_2 x y_i$.
\item $Q \supseteq (z_2)$ when $i \neq n$ as $z_2 = z_1 z_2 z_2 y_i : z_1 z_2 x y_i$.
\item $Q \supseteq (y_i)$ when $i \not\in \{1, n\}$ as $y_i = y_i^2 z_1 z_2 : z_1 z_2 x y_i$.
\end{itemize}
\noindent Assume $m \in Q$ is a minimal monomial generator of $Q$ that is not linear, i.e. $m = m' : z_1z_2 x y_i$ for some $m'$ appearing in the ordering earlier than $z_1z_2 x y_i$. As $m$ is minimal, its support cannot contain any of the variables in $Q$ and therefore
\[ \supp(m) \subseteq \begin{cases}
\{x\} & i = 2,\ldots,n-1 , \\
\{x,z_1,y_1\} & i = 1,\\
\{x,z_2,y_n\} & i = n.
\end{cases}
\]
\noindent In the first of these cases, we note that if $x | m$ then $x^2 | m'$. As this does not happen for any $m'$ before $z_1z_2x y_i$, the only cases we need to consider are $i = 1$ and $i = n$. In both of these cases we can assume that $x$ does not divide $m$.
\begin{description}
\item[Case ($i=1$)] In this case, we are adding the generator $z_1z_2xy_1$ to $J^2$, our edge ideal of the antipath, i.e. $Q = J^2 : z_1z_2 xy_1$. Note that $Q\supseteq (y_2, \ldots, y_n, z_2)$. Hence, if we have a minimal monomial generator $m\in Q$ which is not linear, its support must be contained in $\{z_1, y_1\}$.
If $z_1 |m$ then $z_1^2 |m'$ so $m'$ must be of the form $z_1^2 y_jy_k$ with $j,k > 1$. However, we then have $m':z_1z_2xy_1 = z_1 y_j y_k$ which cannot be a minimal generator of $Q$, as both $y_j,y_k \in Q$.
If $y_1 | m$ then $y_1^2 | m'$ so $m'$ must be of the form $y_1^2 y_j z_2$ (for $j > 2$) or $y_1^2 y_j y_k$ (for $j,k > 2$) or $y_1^2 z_2^2$. In these three cases the colons $m':z_1z_2xy_1$ are $y_1 y_j$, $y_1 y_j y_k$, and $y_1 z_2$ respectively. However, none of these is minimal, as $y_j,z_2 \in Q$ for $j > 2$.
\item[Case ($i=n$)]
Now we are adding the final generator $z_1z_2xy_n$ to the ideal
$$I_{z_1z_2xy_n}=J^2+(z_1z_2xy_i:1\leq i\leq n-1).$$
For this, we have $Q = (J^2 + (z_1z_2 xy_j \mid 1 \leq j \leq n-1) ) : (z_1 z_2 x y_n)$ which satisfies $Q\supseteq (y_1, \ldots, y_{n-1}, z_1).$ In this case, if we have a minimal monomial generator $m \in Q$ which is not linear, its support must be contained in $\{z_2, y_n\}$.
If $z_2 | m$ then $z_2^2 | m'$. The only such $m'\in I_{z_1z_2xy_n}$ must be of the form $z_2^2 y_jy_k$ with $j,k < n$. However, we then have $m':z_1z_2xy_n = z_2 y_j y_k$ which is not a minimal generator as $y_j,y_k \in Q$.
Similarly, if $y_n |m$ then $y_n^2 |m'$. All such $m'\in I_{z_1z_2xy_n}$ are of one of the following three forms:
\begin{enumerate}[{\bf (i)}]
\item $y_n^2 y_j z_1$ (for some $j < n-1$)
\item $y_n^2 y_j y_k$ (for some $j,k < n-1$)
\item $y_n^2 z_1^2$.
\end{enumerate}
In these three cases the colons $m=m':M$ are
\begin{enumerate}[{\bf (i)}]
\item $m=y_n^2 y_j z_1:z_1z_2xy_n=y_jy_n$,
\item $m=y_n^2 y_j y_k:z_1z_2xy_n=y_j y_ky_n$, and
\item $m= y_n^2 z_1^2:z_1z_2xy_n=y_n z_1$ respectively.
\end{enumerate}
However each of these are not minimal as $y_j,z_1 \in Q$ for $j < n-1$.
\end{description}
So our ordering of our generators is a linear quotients ordering through the end of stage (2a).
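The Stage (2a) case analysis can be machine-checked for small $n$. The Python sketch below is ours, and rests on our reading of the conventions fixed earlier in the paper (the anticycle as the complement of the cycle on $x, z_1, y_1, \ldots, y_n, z_2$ in that cyclic order, with $J$ the edge ideal of the antipath avoiding $x$); it verifies that each Stage (2a) colon ideal is generated by variables, and that for $i=1$ and $i=n$ those variables are exactly the ones identified above.

```python
from itertools import combinations

# Our reconstruction of the setup (an assumption, matching the conventions
# fixed earlier in the paper): the anticycle is the complement of the cycle
# on x, z1, y1, ..., yn, z2 in that cyclic order, and J is generated by the
# edges avoiding x.
def stage2a_check(n):
    V = n + 3                      # 0 = x, 1 = z1, 2..n+1 = y1..yn, n+2 = z2
    X, Z1, Z2 = 0, 1, n + 2
    y = lambda i: i + 1            # vertex label of y_i
    cyc = {frozenset((v, (v + 1) % V)) for v in range(V)}
    J = [e for e in combinations(range(1, V), 2) if frozenset(e) not in cyc]

    def mono(*vs):                 # exponent vector of a product of vertices
        m = [0] * V
        for v in vs:
            m[v] += 1
        return tuple(m)

    def quot(a, b):                # exponent vector of the quotient a : b
        return tuple(max(p - q, 0) for p, q in zip(a, b))

    Jsq = sorted({mono(*e, *f) for e in J for f in J})
    for i in range(1, n + 1):
        M = mono(X, Z1, Z2, y(i))
        earlier = Jsq + [mono(X, Z1, Z2, y(t)) for t in range(1, i)]
        qs = [quot(m, M) for m in earlier]
        lin = {q.index(1) for q in qs if sum(q) == 1}
        # every colon generator is divisible by a variable in the colon ideal
        assert all(any(q[v] for v in lin) for q in qs)
        if i == 1:                 # Q = (y_2, ..., y_n, z_2)
            assert lin == {y(t) for t in range(2, n + 1)} | {Z2}
        if i == n:                 # Q = (y_1, ..., y_{n-1}, z_1)
            assert lin == {y(t) for t in range(1, n)} | {Z1}

stage2a_check(6)
```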
\subsubsection{Stage (2b):} The second part of the second stage involves adding monomials $M = xy_iy_jz_2$ to our ideals $I_{M}$ according to the lex order on $(i,j)$.
\begin{align*}
Q &= I_{x y_iy_j z_2}:(x y_i y_j z_2)\\
&=\bigl(J^2 + (z_1z_2 xy_t \mid 1 \leq t \leq n) + (x y_{i'} y_{j'} z_2 : (i',j') >_\text{lex} (i,j) )\bigr) : (x y_i y_j z_2).
\end{align*}
Note the following inclusions hold, via the elements noted.
\begin{itemize}
\item $Q \supseteq (y_k \mid k < j)$ as $y_k = x y_i y_k z_2 : x y_i y_j z_2$
\item $Q \supseteq (z_1)$ as $z_1 = x y_i z_1 z_2 : x y_i y_j z_2$
\item $Q \supseteq (z_2)$ when $j \neq n$ as $z_2 = y_i y_j z_2^2 : x y_i y_j z_2$
\item $Q \supseteq (y_k \mid k > j+ 1)$ as $y_k = y_i y_j y_k z_2 : x y_i y_j z_2$
\item $Q \supseteq (y_{j+1})$ when $i \neq j$ as $y_{j+1} = y_i y_j y_{j+1} z_2 : x y_i y_j z_2$
\item $Q \supseteq (y_j)$ when $i \leq j-2$ and $j \neq n$ as $y_j = y_i y_j^2 z_2 : x y_i y_j z_2$
\end{itemize}
\noindent Taken together for $M=xy_iy_jz_2$ this gives
\[ Q \supseteq
\begin{cases}
(y_1, \ldots, y_n, z_1, z_2) & j \neq n, i < j-1 \\
(y_1, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n, z_1, z_2) & j \neq n, i+1 = j \\
(y_1, \ldots, y_{j-1}, y_{j+2}, \ldots, y_n, z_1, z_2) & j \neq n, i = j \\
(y_1, \ldots, y_{n-1}, z_1) & j = n.
\end{cases}
\]
Assume $m\in Q$ is a minimal monomial generator that is not linear. That is, $m = m' : x y_i y_j z_2$ for some $m'$ before $x y_i y_j z_2$. As $m$ is minimal, its support cannot contain any of the variables in $Q$. Also, if $x$ were in $\supp(m)$ then $x^2$ would divide $m'$. As there is no such $m'\in I_{M}$ before $x y_iy_jz_2$, we have $x\not|m$. Thus the support of $m$ satisfies
\[ \supp(m) \subseteq \begin{cases}
\emptyset & j \neq n, i < j-1 \\
\{y_j \} & j \neq n, i+1 =j \\
\{y_j, y_{j+1} \} & j \neq n, i = j \\
\{y_n, z_2 \} & j = n
\end{cases}
\]
\begin{description}
\item[Case ($j \neq n, i < j-1$)] There is nothing to check as $x$ does not divide $m$ and all other variables are in $Q$.
\item[Case ($j \neq n, i + 1 = j$)] In this case $m$ must be a power of $y_j$. As $m$ is not linear, $y_j^2 | m$ and hence $y_j^3 | m'$. However none of the generators of $I(G)^2$ are divisible by $y_j^3$.
\item[Case ($j \neq n, i = j$)] In this case $\supp(m)\subseteq \{ y_j, y_{j+1} \}$. As $m$ is not linear, we have one of the following must hold:
\begin{enumerate}[{\bf (i)}]
\item $y_j^2 |m$
\item $y_j y_{j+1} |m$
\item $y_{j+1}^2 |m$.
\end{enumerate}
In these three cases respectively we must then have
\begin{enumerate}[{\bf (i)}]
\item $y_j^3 |m'$
\item $y_j^2 y_{j+1}|m'$
\item $m'\in\bigl\{y_j^2 y_{j+1}^2, y_jy_{j+1}^3, y_{j+1}^4, xy_j y_{j+1}^2, xy_{j+1}^3, z_2y_jy_{j+1}^2, z_2y_{j+1}^3, x z_2 y_{j+1}^2\bigr\}.$
\end{enumerate}
Case {\bf (i)} cannot happen, as $y_j^3$ does not divide any generator of $I(G)^2$. Similarly, in case {\bf (ii)}, $y_j^2 y_{j+1} | m'$ would require $y_j y_{j+1} \in I(G)$; but $y_jy_{j+1}$ is not a generator of the edge ideal of the anticycle.
Finally, in case {\bf (iii)} all degree 4 monomials divisible by $y_{j+1}^2$ have been enumerated as possible $m'$. None of these are generators of $I(G)^2$ except for $m'=xz_2 y_{j+1}^2$. This however occurs later in our order.
\item[Case ($j = n$)]
In this case $\supp(m) \subseteq \{ y_n, z_2 \}$. As $m$ is not linear, one of $y_n^2$, $y_n z_2$, and $z_2^2$ divides $m$. If $y_n^2$ or $z_2^2$ divides $m$ then $y_n^3$ or $z_2^3$ divides $m'$. However, no generator of $I(G)^2$ is divisible by the cube of a variable. If $y_n z_2 | m$ then $m' = y_n^2 z_2^2$, which is not a generator of $I(G)^2$.
\end{description}
\subsubsection{Stage (2c):} Showing that this part of the ordering is a linear quotients ordering can be done using its symmetry with Stage (2b). We wish to show that all $Q$ such that
\begin{align*}
Q
&= I_{x y_i y_j z_1}:(x y_iy_j z_1)\\
&=\biggl(J^2
+ \bigl(z_1z_2 xy_j \mid 1 \leq j \leq n\bigr)
+ \bigl(x y_k y_l z_2 \mid 1 \leq k \leq l \leq n, k < n\bigr)\\
&\;\;\;\;\;+ \bigl( x y_k y_l z_1 \mid (k,l) <_{\text{lex}'}(i,j)\bigr) \biggr) : (x y_iy_j z_1)
\end{align*}
are again generated by variables. We first show that $Q'$ is generated by variables, for
\begin{equation*}
Q'=\biggl(J^2
+ \bigl(z_1z_2 xy_j \mid 1 \leq j \leq n\bigr)+ \bigl( x y_k y_l z_1 \mid (k,l) <_{\text{lex}'}(i,j)\bigr) \biggr) : (x y_iy_j z_1),
\end{equation*}
where $<_{\text{lex}'}$ denotes the lex ordering on the $y_i$ with the variables in reverse order from the $<_{\text{lex}}$ used in Stage (2b).
By symmetry with Stage (2b), this $Q'$ is generated by variables, with an identical proof. From this, we see
\[ Q' =
\begin{cases}
(y_1, \ldots, y_n, z_1, z_2) & j \neq 1, j < i-1 \\
(y_1, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n, z_1, z_2) & j \neq 1, j+1 = i \\
(y_1, \ldots, y_{j-2}, y_{j+1}, \ldots, y_n, z_1, z_2) & j \neq 1, i = j \\
(y_2, \ldots, y_{n}, z_2) & j = 1.
\end{cases}
\]
Clearly $Q' \subset Q$. We note that $Q$ and $Q'$ only differ by a colon ideal of the form
$$\bigl(x y_k y_l z_2 \mid 1 \leq k \leq l \leq n, k < n\bigr):(xy_iy_jz_1).$$
The generators of $Q$ which are not in $Q'$ are of the form $x y_k y_l z_2 : x y_i y_j z_1$ and hence all must be divisible by $z_2$.
Since $z_2 \in Q'$ in all cases, we see that $Q$ is generated by variables for all monomials $M$ added in this stage.
\subsubsection{Stage (2d):} For the final case of Stage 2, we add all monomials in $J\cdot K$ of the form $m=x y_i y_j y_k$ ordered via $\text{lex}$ with $y_1>y_2>\cdots>y_n$. Our colon ideals then are of the form
\begin{align*}
Q&=I_{x y_i y_j y_k}:(x y_i y_j y_k)\\
&= \biggl(J^2+\bigl(z_1z_2 xy_t \mid 1 \leq t \leq n\bigr)\\
&+ \bigl(x y_s y_t z_2 \mid 1 \leq s \leq t \leq n, s < n\bigr) + \bigl(x y_s y_t z_1 \mid 1 \leq s \leq t \leq n, 1 < t\bigr)\\
&+ \bigl(x y_{i'} y_{j'} y_{k'} \mid 1 \leq i' \leq j' \leq k' \leq n, i'+2 \leq k', (i',j',k') >_\text{lex} (i,j,k)\bigr)\biggr): (x y_i y_j y_k).
\end{align*}
The last set of generators in $I_{xy_iy_jy_k}$ are given by
$$ \bigl(x y_{i'} y_{j'} y_{k'} \mid 1 \leq i' \leq j' \leq k' \leq n, i'+2 \leq k', (i',j',k') >_\text{lex} (i,j,k)\bigr)$$
as the variables can be arranged with indices $i',j',k'$ in increasing order, and at least one pair of $\{y_{i'},y_{j'},y_{k'}\}$ must form an edge of the anticycle, i.e., be nonadjacent on the underlying cycle. This forces the inequality $i'+2\leq k'$.
Our colon ideals now satisfy the following inclusions, via the elements noted.
\begin{itemize}
\item $Q \supseteq (y_l \mid l < j)$ as $y_l = x y_i y_l y_k : x y_i y_j y_k$
\item $Q \supseteq (z_2)$ as $z_2 = x y_i y_k z_2 : x y_i y_j y_k$
\item $Q \supseteq (z_1)$ as $z_1 = x y_i y_k z_1 : x y_i y_j y_k$
\item $Q \supseteq (y_l \mid l \geq j +2)$ as $y_l = y_i y_j y_k y_l : x y_i y_j y_k$
\item $Q \supseteq (y_{j+1})$ when $i +1 \leq j$ and $j+2 \leq k$ as $y_{j+1} = y_i y_j y_{j+1} y_k: x y_i y_j y_k$
\item $Q \supseteq (y_j)$ when $i + 2 \leq j$ and $j+2 \leq k$ as $y_{j} = y_i y_j^2 y_k : x y_i y_j y_k$.
\end{itemize}
Together this gives
\[ Q \supseteq
\begin{cases}
(y_1, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n, z_1, z_2) & i = j - 1 \text{ and } j + 2 \leq k\\
(y_1, \ldots, y_{j-1}, y_{j+2}, \ldots, y_n, z_1, z_2) & i = j \text{ or } j \in \{k-1, k\} \\
(y_1, \ldots, y_n, z_1, z_2) & \text{otherwise.}
\end{cases}
\]
Assume $m \in Q$ is a minimal monomial generator that is not linear. That is $m = m' : x y_i y_j y_k$ for some $m'$ before $M=x y_i y_j y_k$. As $m$ is minimal, its support cannot contain any of the variables in $Q$. Also if $x |m$ then $x^2| m'$. As this does not happen for any $m'$ before $x y_iy_jy_k$, $x\not\in\supp{(m)}$. Thus the support of $m$ satisfies
\[ \supp(m) \subseteq \begin{cases}
\{y_j \} & i = j - 1 \text{ and } j + 2 \leq k\\
\{y_j, y_{j+1} \} & i = j \text{ or } j \in \{k-1, k\} \\
\emptyset & \text{otherwise.}
\end{cases}
\]
\begin{description}
\item[Case ($i = j - 1 \text{ and } j + 2 \leq k$)]
In this case, $m$ must be divisible only by $y_j$ and cannot be linear. Thus $y_j^2 | m$ and $y_j^3 | m'$ which does not hold for any generator $m'\in I(G)^2$.
\item[Case ($i = j$ or $j \in \{k-1, k\}$)]
In this case, $m$ has its support contained in $\{y_j, y_{j+1}\}$. As in the previous case, if $y_j$ is in the support of $m$, we obtain a contradiction.
If $y_{j+1}$ is in the support of $m$, then $m'$ must be the product of $y_{j+1}^2$ and two of $x, y_i, y_j, y_k$. However, for this to be a generator of $I(G)^2$, the two chosen vertices must both be adjacent to $y_{j+1}$. If $i = j$, then $m' = x y_{j+1}^2 y_k$ is the only possibility, but this comes after $x y_j^2 y_k$ in our ordering. If $j=k$ or $j=k-1$ then $m'= x y_i y_{j+1}^2$ is the only possibility. This again lies after $M=x y_i y_j y_k$ in the ordering.
\item[Other Cases]
In the other cases, the quotient contains all variables except $x$ (but no term divisible by $x^2$ occurs prior to $M$ in the ordering). Hence, $Q$ must be generated by linear terms. \end{description}
\subsubsection{Stage (3a):}
Now we move on to adding those terms in $K^2$, meaning monomials in $I(G)^2$ which came from pairs of edges $xy_i$ and $xy_j$. Our colon ideals will be of the form:
\begin{align*}
Q &=I_{x^2y_iy_j}:(x^2y_iy_j)\\
&= \biggl(J^2 + \bigl(z_1z_2 xy_t \mid 1 \leq t \leq n\bigr)+ \bigl(x y_s y_t z_2 \mid 1 \leq s \leq t \leq n, s < n\bigr)\\
& + \bigl(x y_s y_t z_1 \mid 1 \leq s \leq t \leq n, 1 < t\bigr)+ \bigl(x y_s y_t y_u \mid 1 \leq s \leq t \leq u \leq n, s+2 \leq u\bigr)\\
&+ \bigl(x^2 y_k y_l \mid 1 \leq k \leq l \leq n, 1 < l, (k,l) >_\text{lex} (i,j)\bigr)\biggr): (x^2 y_i y_j).
\end{align*}
These colon ideals satisfy the following inclusions via the elements noted.
\begin{itemize}
\item $Q \supseteq (y_1)$ when $j > 2$ as $y_1 = x y_1 y_i y_j : x^2 y_i y_j$
\item $Q \supseteq (y_1)$ when $i > 1$ as $y_1 = x^2 y_1 y_i : x^2 y_i y_j$
\item $Q \supseteq (y_k \mid 1 < k < j)$ as $y_k = x^2 y_i y_k : x^2 y_i y_j$
\item $Q \supseteq (y_k \mid i + 2 \leq k \leq n)$ as $y_k = x y_i y_j y_k : x^2 y_i y_j$
\item $Q \supseteq (z_2)$ when $i \neq n$ as $z_2 = x y_i y_j z_2 : x^2 y_i y_j$
\item $Q \supseteq (z_1)$ when $j \neq 1$ as $z_1 = x y_i y_j z_1 : x^2 y_i y_j$
\end{itemize}
Together this gives
\[ Q \supseteq
\begin{cases}
(y_3, \ldots, y_n, z_1, z_2) & i=1, j=2\\
(y_1, \ldots, y_n, z_1, z_2) & i + 2 \leq j\\
(y_1, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n, z_1, z_2) & 1 < i = j -1\\
(y_1, \ldots, y_{j-1}, y_{j+2}, \ldots, y_n, z_1, z_2) & 1 < i = j < n\\
(y_1, \ldots, y_{n-1}, z_1) & {i = j = n}.
\end{cases}
\]
Assume $m \in Q$ is a minimal monomial generator that is not linear. That is $m = m' : x^2 y_i y_j$ for some $m'$ before $M=x^2 y_i y_j$. Again, as $m$ is minimal its support cannot contain any of the variables in $Q$. Also if $x | m$ then $x^3|m'$ which does not happen for any $m'\in I(G)^2$. Thus the support of $m$ satisfies
\[ \supp(m) \subseteq \begin{cases}
\{y_1, y_2\} & i = 1, j = 2 \\
\emptyset & i + 2 \leq j \\
\{y_j \} & i = j - 1 \\
\{y_j, y_{j+1} \} & 1 < i = j < n \\
\{y_n, z_2 \} & i = j = n.
\end{cases}
\]
We examine each of these cases individually.
\begin{description}
\item[Case ($i=1, j=2$)]
In this case $m$ is divisible by one of $y_1^2$, $y_1 y_2$, or $y_2^2$, and hence $m'$ is divisible by $y_1^3$, $y_1^2 y_2^2$, or $y_2^3$ respectively. None of these can hold for $m'$ a generator of $I(G)^2$.
\item[Case ($i + 2 \leq j $)]
There is nothing to check as $x$ does not divide $m'$ and all other variables are in $Q$.
\item[Case ($i = j - 1$)]
In this case $m$ must be a power of $y_j$. As $m$ is not linear, $y_j^2 |m$ and hence $y_j^3 |m'$. No generators of $I(G)^2$ are divisible by $y_j^3$ (or by the third power of any variable).
\item[Case ($1 < i = j < n$)]
In this case $m$ is divisible by one of $y_j^2$, $y_j y_{j+1}$ or $y_{j+1}^2$. If $m'$ is to appear before $x^2 y_i y_j$ in our list, it cannot be $x^2 y_j^2, x^2 y_j y_{j+1},$ nor $x^2 y_{j+1}^2$. As $i=j$, the remaining possibilities for $m'$ are $xy_j^3, xy_j^2 y_{j+1}, x y_j y_{j+1}^2$, or a monomial of degree four in $y_j$ and $y_{j+1}$. However, none of these are generators of $I(G)^2$.
\item[Case ($i = j = n$)]
In this case $m$ is divisible by one of $y_n^2, y_n z_2, z_2^2$. So $m'$ is divisible by one of $y_n^4$, $y_n^3z_2$, $z_2^2$.
There are no $m' \in I(G)^2$ such that the first two hold. For the last, if $z_2^2 | m'$ and $y_n$ does not divide $m$ then $m'$ must be one of $z_2^4, z_2^3 x, z_2^3 y_1, z_2^2 x^2, z_2^2 x y_n, z_2^2 y_n^2$. None of these are in $I(G)^2$.
\end{description}
From this, we see that $I(G)^2$ has a linear quotients through Stage (3a).
\subsubsection{Stage (3b):}
Finally, we add our generator $x^2y_1^2$ to our ideal $I_{x^2y_1^2}$. We only need to check that for this one remaining generator, the following colon ideal is generated by variables:
\begin{align*}
Q &= I_{x^2y_1^2}:(x^2y_1^2)\\
&=\biggl(J^2 + \bigl(z_1z_2 xy_j \mid 1 \leq j \leq n\bigr) + \bigl(x y_k y_l z_2 \mid 1 \leq k \leq l \leq n, k < n\bigr) \\
&+ \bigl(x y_k y_l z_1 \mid 1 \leq k \leq l \leq n, 1 < l\bigr) + \bigl(x y_i y_j y_k \mid 1 \leq i \leq j \leq k \leq n, i+2 \leq k\bigr)\\
&+ \bigl(x^2 y_k y_l \mid 1 \leq k \leq l \leq n, 1< l\bigr)\biggr): (x^2 y_1^2).
\end{align*}
We have the following inclusions by the elements noted:
\begin{itemize}
\item $Q \supseteq (y_k \mid 1 < k \leq n)$ as $y_k = x^2 y_1 y_k : x^2 y_1^2$
\item $Q \supseteq (z_2)$ as $z_2 = x y_1^2 z_2 : x^2 y_1^2$.
\end{itemize}
This gives us that our colon ideal satisfies $Q \supseteq (y_2, \ldots, y_n, z_2)$.
So, if $m \in Q$ is a minimal non-linear monomial generator, then $\supp(m) \subseteq \{y_1, z_1\}$ and $m = m':x^2 y_1^2$ for some $m' \in I(G)^2$ before $x^2 y_1^2$. If $y_1 | m$ then $m'$ must be divisible by $y_1^3$. There is no such $m' \in I(G)^2$. Thus $\supp(m) = \{z_1\}$.
Since by assumption $m$ is not linear, $z_1^2 | m$. Thus $z_1^2 | m'$, and the other variables dividing $m'$ can only be $z_1$, $x$, or $y_1$. There is no way to form a generator of $I(G)^2$ using only these variables, as no edge of the anticycle joins $z_1$ to $x$, to $y_1$, or to itself. Hence, $Q=(y_2, \ldots, y_n, z_2)$.
So this provides a linear quotients ordering on $I(G)^2$.
\end{proof}
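The entire staged ordering can also be replayed computationally for small $n$. The sketch below is our own reconstruction (the vertex labelling, the ascending lex orders within Stages (2b)--(3a), and the mirrored order in Stage (2c) are our reading of the conventions above, so treat it as an illustration rather than the paper's code); it checks both that the staged list is exactly the generating set of $I(G)^2$ and that every colon ideal from Stage (2a) onward is generated by variables.

```python
from itertools import combinations

def check_linear_quotients(n):
    # Assumed conventions: cycle order x, z1, y1, ..., yn, z2; G = complement.
    V = n + 3                        # 0 = x, 1 = z1, 2..n+1 = y1..yn, n+2 = z2
    X, Z1, Z2 = 0, 1, n + 2
    y = lambda i: i + 1
    cyc = {frozenset((v, (v + 1) % V)) for v in range(V)}
    edges = [e for e in combinations(range(V), 2) if frozenset(e) not in cyc]
    J = [e for e in edges if X not in e]

    def mono(*vs):                   # exponent vector of a product of vertices
        m = [0] * V
        for v in vs:
            m[v] += 1
        return tuple(m)

    def quot(a, b):                  # exponent vector of the quotient a : b
        return tuple(max(p - q, 0) for p, q in zip(a, b))

    stage1 = sorted({mono(*e, *f) for e in J for f in J})
    gens = list(stage1)
    gens += [mono(X, Z1, Z2, y(i)) for i in range(1, n + 1)]           # (2a)
    p2b = [(i, j) for i in range(1, n) for j in range(i, n + 1)]
    gens += [mono(X, y(i), y(j), Z2) for i, j in p2b]                  # (2b)
    gens += [mono(X, y(n + 1 - j), y(n + 1 - i), Z1) for i, j in p2b]  # (2c)
    gens += [mono(X, y(i), y(j), y(k)) for i in range(1, n + 1)        # (2d)
             for j in range(i, n + 1) for k in range(j, n + 1) if i + 2 <= k]
    gens += [mono(X, X, y(i), y(j)) for i in range(1, n + 1)           # (3a)
             for j in range(i, n + 1) if (i, j) != (1, 1)]
    gens += [mono(X, X, y(1), y(1))]                                   # (3b)

    # the staged list is exactly the generating set of I(G)^2, no repeats
    assert len(gens) == len(set(gens))
    assert set(gens) == {mono(*e, *f) for e in edges for f in edges}

    # every generator after Stage 1 has a colon ideal generated by variables
    for t in range(len(stage1), len(gens)):
        M = gens[t]
        qs = [quot(m, M) for m in gens[:t]]
        lin = {q.index(1) for q in qs if sum(q) == 1}
        assert all(any(q[v] for v in lin) for q in qs)

check_linear_quotients(5)
```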
\section{Future Research}
For higher powers of the edge ideal $I(A_n)^k$ of the anticycle, it is still unknown if all powers have a linear resolution, much less linear quotients. Construction of linear quotient orderings on $I(A_n)^k$ would accomplish this.
\begin{question} Does $I(A_n)^k$ have linear quotients for $k\geq 3$?
\end{question}
We produced an ordering above on $I(A_n)^2$ by decomposing $A_n$ into complementary subgraphs $P_{n-1}$ and $A_n\setminus P_{n-1}$. While this order is nonunique, ordering the edges of $I(A_n)^2$ by decomposing the graph into the complementary subgraphs $H$ and $G\setminus H$, then considering pairs of edges as appropriate, seems to produce linear quotients orderings with the clearest descriptions. Extending this order to $I(G)^k$ in a similar fashion has proven fairly difficult, even in the case of $I(G)^3$, but would be a natural next step after Theorem~\ref{thm:mainanticycletheorem}.
A problem of more general interest is to complete Theorem~\ref{thm:HHZ} of Herzog, Hibi and Zheng by answering the following question:
\begin{question} Let $G$ be the complement of a chordal graph. Does $I(G)^k$ have linear quotients for $k\geq 2$?
\end{question}
We might also ask for a description of all edge ideals whose powers eventually have linear resolutions.
\begin{question}\label{ques:classesofgraphs} Can we exhibit classes of graphs $G$ such that for all sufficiently large $k$,
\begin{enumerate}[(i)]
\item\label{ques:subclass1} $I(G)^k$ has a linear resolution, or
\item\label{ques:subclass2} $I(G)^k$ has linear quotients?
\end{enumerate}
\end{question}
In \cite{NP2009}, it was conjectured that graphs satisfying Question~\ref{ques:classesofgraphs}(\ref{ques:subclass1}) are precisely those graphs $G$ with a $C_4$-free complement. General conditions for the second class, however, remain open. It appears that anticycles $A_n$ form such a class, but we wish to find more general conditions for the powers of an edge ideal of a graph to stabilize on linear quotients.
\medskip
\textbf{Acknowledgements.}
We would like to thank Irena Peeva and Eran Nevo for getting us interested in this topic and Adam Van Tuyl for many stimulating conversations.
The first author would like to thank his advisor, Sara Faridi, for her direction and encouragement on this project.
The second author would like to thank her advisor, Mike Stillman, for his detailed commentary on this paper, greatly improving the exposition.
\bibliographystyle{amsalpha}
\section{Introduction}
In this paper we explore signed decompositions of integers by various sequences. After briefly reviewing the literature, we state our results about uniqueness of decomposition, number of summands, and gaps between summands. In the course of our analysis we find a new way to interpret an earlier result about far-difference representations, which leads to a new characterization of the Fibonacci numbers.
\subsection{Background}
Zeckendorf \cite{Ze} discovered an interesting property of the Fibonacci numbers $\{F_n\}$; he proved that every positive integer can be written uniquely as a sum of non-consecutive Fibonacci numbers\footnote{If we were to use the standard definition of $F_0 = 0$, $F_1 = 1$ then we would lose uniqueness.}, where $F_{n+2} = F_{n+1} + F_n$ and $F_1 = 1, F_2 = 2$. It turns out this is an alternative characterization of the Fibonacci numbers; they are the unique increasing sequence of positive integers such that any positive number can be written uniquely as a sum of non-consecutive terms.
Zeckendorf's theorem inspired many questions about the number of summands in these and other decompositions. Lekkerkerker \cite{Lek} proved that the average number of summands in the decomposition of an integer in $[F_n, F_{n+1})$ is $\frac{n}{\varphi^2+1} + O(1)$, where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden mean (which is the largest root of the characteristic polynomial associated with the Fibonacci recurrence). More is true; as $n\to\infty$, the distribution of the number of summands of $m \in [F_n, F_{n+1})$ converges to a Gaussian. This means that as $n\to\infty$ the fraction of $m \in [F_n, F_{n+1})$ such that the number of summands in $m$'s Zeckendorf decomposition is in $[\mu_n - a\sigma_n, \mu_n + b\sigma_n]$ converges to $\frac1{\sqrt{2\pi}} \int_{-a}^b e^{-t^2/2}dt$, where $\mu_n = \frac{n}{\varphi^2+1} + O(1)$ is the mean number of summands for $m \in [F_n, F_{n+1})$ and $\sigma_n^2 = \frac{\varphi}{5(\varphi+2)}n-\frac{2}{25}$ is the variance (see \cite{KKMW} for the calculation of the variance). \emph{Henceforth in this paper whenever we say the distribution of the number of summands converges to a Gaussian, we mean in the above sense.} There are many proofs of this result; we follow the combinatorial approach used in \cite{KKMW}, which proved these results by converting the question of how many numbers have exactly $k$ summands to a combinatorial one.
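The greedy algorithm producing the Zeckendorf decomposition also illustrates Lekkerkerker's count. The following sketch is our code (in the paper's indexing $F_1 = 1$, $F_2 = 2$); it checks the non-consecutive property and that the average number of summands over $[F_{25}, F_{26})$ is within $O(1)$ of $n/(\varphi^2+1)$.

```python
PHI = (1 + 5 ** 0.5) / 2

fib = [0, 1, 2]                     # fib[i] = F_i with F_1 = 1, F_2 = 2
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

def zeckendorf(m):
    """Greedy Zeckendorf decomposition: the indices of the summands."""
    out = []
    for i in range(len(fib) - 1, 0, -1):
        if fib[i] <= m:
            out.append(i)
            m -= fib[i]
    return out

n = 25
counts = []
for m in range(fib[n], fib[n + 1]):
    idx = zeckendorf(m)
    assert sum(fib[i] for i in idx) == m
    assert all(a - b >= 2 for a, b in zip(idx, idx[1:]))   # non-consecutive
    counts.append(len(idx))
mean = sum(counts) / len(counts)
assert abs(mean - n / (PHI ** 2 + 1)) < 2   # Lekkerkerker: n/(phi^2+1)+O(1)
```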
These results hold for other recurrences as well. Most of the work in the field has focused on Positive Linear Recurrence Relations (PLRS), which are recurrence relations of the form $G_{n+1} = c_1G_n + \cdots + c_L G_{n+1-L}$ for non-negative integers $L,c_1,c_2,\dots,c_L$ with $L, c_1,$ and $c_L > 0$ (these are called $G$-ary digital expansions in \cite{St}). There is an extensive literature for this subject; see \cite{Al,BCCSW,Day,GT,Ha,Ho,Ke,Len,MW1,MW2} for results on uniqueness of decomposition and \cite{DG,FGNPT,GTNP,KKMW,Lek,LT,MW1,St} for Gaussian behavior.
Much less is known about signed decompositions, where we allow negative summands in our decompositions. This opens up a number of possibilities, as in this case we can overshoot the value we are trying to reach in a given decomposition, and then subtract terms to reach the desired positive integer. We formally define this idea below.
\begin{defn}[Far-difference representation]
A \emph{far-difference representation} of a positive integer $x$ by a sequence $\{a_n\}$ is a signed sum of terms from the sequence which equals $x$.
\end{defn}
The Fibonacci case was first considered by Alpert \cite{Al}, who proved the following analogue of Zeckendorf's theorem. Note that the restrictions on the gaps between adjacent indices in the decomposition is a generalization of the non-adjacency condition in the Zeckendorf decomposition.
\begin{thm} \label{thm:alpert}
Every $x \in \mathbb{Z}$ has a unique Fibonacci far-difference representation such that every two terms of the same sign differ in index by at least 4 and every two terms of opposite sign differ in index by at least 3.
\end{thm}
For example, 2014 can be decomposed as follows:
\be
2014 \ = \ 2584 - 610 + 55 - 13 - 2 \ = \ F_{17} - F_{14} + F_9 - F_6 - F_2.
\ee
Alpert's proof uses induction on a partition of the integers, and the method generalizes easily to other recurrences which we consider in this paper.
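That induction translates into a short recursive algorithm (a sketch of ours, not Alpert's code): peel off $F_n$ for the unique $n$ with $R_4(n-1) < x \le R_4(n)$, where $R_4$ is the summation defined in the next subsection, and recurse on the (possibly negative) remainder. It recovers the decomposition of 2014 displayed above.

```python
fib = [0, 1, 2]                     # fib[i] = F_i with F_1 = 1, F_2 = 2
while fib[-1] < 10 ** 5:
    fib.append(fib[-1] + fib[-2])

def R4(n):
    return sum(fib[i] for i in range(n, 0, -4)) if n > 0 else 0

def far_difference(x):
    """Far-difference representation as (index, sign) pairs, found by
    peeling off F_n for the unique n with R4(n-1) < x <= R4(n)."""
    if x == 0:
        return []
    if x < 0:
        return [(i, -s) for i, s in far_difference(-x)]
    n = 1
    while R4(n) < x:
        n += 1
    return [(n, 1)] + far_difference(x - fib[n])

rep = far_difference(2014)
assert rep == [(17, 1), (14, -1), (9, 1), (6, -1), (2, -1)]
assert sum(s * fib[i] for i, s in rep) == 2014
# the gap conditions of the theorem, checked on a range of inputs
for x in range(-2500, 2501):
    r = far_difference(x)
    assert sum(s * fib[i] for i, s in r) == x
    for (i, s), (j, t) in zip(r, r[1:]):
        assert i - j >= (4 if s == t else 3)
```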
Given that there is a unique decomposition, it is natural to inquire if generalizations of Lekkerkerker's Theorem and Gaussian behavior hold as well. Miller and Wang \cite{MW1} proved that they do. We first set some notation, and then describe their results (our choice of notation is motivated by our generalizations in the next subsection).
First, let $R_4(n)$ denote the following summation
\begin{equation} \label{R4(n)}
R_4(n) \ := \
\begin{cases}
\sum_{0 < n-4i \le n} F_{n-4i} \ = \ F_n + F_{n-4} + F_{n-8} + \cdots & \text{ if } n > 0 \\
0 & \text{ otherwise.}
\end{cases}
\end{equation}
Using this notation, we state the motivating theorem from Miller-Wang.
\begin{thm}[Miller-Wang] \label{thm:MW1 result}
Let $\mathcal{K}_n$ and $\mathcal{L}_n$ be the corresponding random variables denoting the number of positive summands and the number of negative summands in the far-difference representation (using the signed Fibonacci numbers) for integers in $(R_4(n-1), R_4(n)]$. As $n$ tends to infinity, $\mathbb{E}[\mathcal{K}_n] = \frac{1}{10}n + \frac{371-113\sqrt{5}}{40} + o(1)$, and is $\frac{1+\sqrt{5}}{4} = \frac{\varphi}{2}$ greater than $\mathbb{E}[\mathcal{L}_n]$. The variance of both is $\frac{15 + 21\sqrt{5}}{1000}n + O(1)$. The standardized joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$ converges to a bivariate Gaussian with negative correlation $\frac{10\sqrt{5}-121}{179} = -\frac{21-2\varphi}{29+2\varphi} \approx -0.551$, and $\mathcal{K}_n + \mathcal{L}_n$ and $\mathcal{K}_n - \mathcal{L}_n$ converge to independent random variables.
\end{thm}
Their proof used generating functions to show that the moments of the distribution of summands converge to those of a Gaussian. The main idea is to show that the conditions which imply Gaussianity for positive-term decompositions also hold for the Fibonacci far-difference representation. One of our main goals in this paper is to extend these arguments further to the more general signed decompositions. In the course of doing so, we find a simpler way to handle the resulting algebra.
We then consider an interesting question about the summands in a decomposition, namely \emph{how are the lengths of index gaps between adjacent summands distributed in a given integer decomposition?} Equivalently, how long must we wait after choosing a term from a sequence before the next term is chosen in a particular decomposition? In \cite{BBGILMT}, the authors solve this question for the Fibonacci far-difference representation, as well as other PLRS, provided that all the coefficients are positive. Note this restriction therefore excludes the $k$-Skipponaccis for $k \ge 2$.
\begin{thm}[\cite{BBGILMT}]\label{thm:skipgaps}
As $n \to \infty$, the probability $P(j)$ of a gap of length $j$ in a far-difference decomposition of integers in $(R_4(n-1), R_4(n)]$ converges to geometric decay for $j \ge 4$, with decay constant equal to the golden mean $\varphi$. Specifically, if $a_1 = \varphi / \sqrt{5}$ (which is the coefficient of the largest root of the recurrence polynomial in Binet's Formula\footnote{As our Fibonacci sequence is shifted by one index from the standard representation, for us Binet's Formula reads $F_n = \frac{\varphi}{\sqrt{5}} \varphi^n - \frac{1-\varphi}{\sqrt{5}} (1-\varphi)^n$. For any linear recurrence whose characteristic polynomial is of degree $d$ with $d$ distinct roots, the $n$\textsuperscript{{\rm th}} term is a linear combination of the $n$\textsuperscript{{\rm th}} powers of the $d$ roots; we always let $a_1$ denote the coefficient of the largest root.} expansion for $F_n$), then $P(j) = 0$ if $j \le 2$ and
\begin{equation} \label{thm:FibonacciGaps}
P(j) \ = \
\begin{cases}
\frac{10a_1\varphi}{\varphi^4-1}\varphi^{-j} & \text{ if } j \ge 4 \\
\frac{5a_1}{\varphi^2(\varphi^4-1)} & \text{ if } j = 3.
\end{cases}
\end{equation}
\end{thm}
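The geometric decay can be observed directly by enumerating all far-difference representations with leading term $+F_n$ for a moderate $n$; by Alpert's theorem these decompose exactly the integers in $(R_4(n-1), R_4(n)]$. The brute-force sketch below is ours (as is the numerical tolerance).

```python
from collections import Counter

PHI = (1 + 5 ** 0.5) / 2
fib = [0, 1, 2]                     # fib[i] = F_i with F_1 = 1, F_2 = 2
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

def reps_with_leading(n):
    """All signed index sequences with leading +F_n obeying the gap rules
    (same sign >= 4 apart, opposite sign >= 3 apart)."""
    out = []
    def rec(prefix, i, s):
        out.append(prefix)
        for j in range(i - 1, 0, -1):
            for t in (1, -1):
                if i - j >= (4 if t == s else 3):
                    rec(prefix + [(j, t)], j, t)
    rec([(n, 1)], n, 1)
    return out

n = 25
reps = reps_with_leading(n)
R4 = lambda m: sum(fib[i] for i in range(m, 0, -4))
assert len(reps) == R4(n) - R4(n - 1)       # one rep per integer in range
gaps = Counter(a - b for r in reps for (a, _), (b, _) in zip(r, r[1:]))
assert min(gaps) == 3                       # no gaps of length <= 2
assert gaps[4] > gaps[5] > gaps[6] > gaps[7] > gaps[8]
for j in (4, 5, 6):                         # decay ratio approaches 1/phi
    assert abs(gaps[j + 1] / gaps[j] - 1 / PHI) < 0.1
```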
\subsection{New Results}
In this paper, we study far-difference relations related to certain generalizations of the Fibonacci numbers, called the $k$-Skipponacci numbers.
\begin{defi}[$k$-Skipponacci Numbers] For any non-negative integer $k$, the $k$-Skipponaccis are the sequence of integers defined by $S^{(k)}_{n+1} = S^{(k)}_n + S^{(k)}_{n-k}$. We index the $k$-Skipponaccis such that the first few terms are $S^{(k)}_1 = 1$, $S^{(k)}_2 = 2$, ..., $S^{(k)}_{k+1} = k+1$, and $S^{(k)}_n = 0$ for all $n \le 0$. \end{defi}
Some common $k$-Skipponacci sequences are the 0-Skipponaccis (which are powers of 2, and lead to binary decompositions) and the 1-Skipponaccis (the Fibonaccis). Our first result is that a generalized Zeckendorf theorem holds for far-difference representations arising from the $k$-Skipponaccis.
\begin{thm}\label{Thm:Far-Diff}
Every $x \in \mathbb{Z}$ has a unique far-difference representation for the $k$-Skipponaccis such that every two terms of the same sign are at least $2k+2$ apart in index and every two terms of opposite sign are at least $k+2$ apart in index.
\end{thm}
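Theorem \ref{Thm:Far-Diff} can be confirmed by brute force for small indices (the following sketch is ours): enumerating every signed sum obeying the gap conditions, the sums must hit each integer of $[-R, R]$ exactly once, where $R = S^{(k)}_N + S^{(k)}_{N-(2k+2)} + \cdots$ is the quantity $R_k(N)$ defined just below.

```python
def skipponacci(k, N):
    S = [0] * (N + 1)
    for n in range(1, N + 1):
        S[n] = n if n <= k + 1 else S[n - 1] + S[n - 1 - k]
    return S

def check_theorem(k, N):
    """Every integer in [-R, R] must arise from exactly one signed sum of
    S_1, ..., S_N obeying the gap conditions of the theorem."""
    S = skipponacci(k, N)
    sums = [0]                               # the empty representation
    def rec(total, i, s):
        sums.append(total)
        for j in range(i - 1, 0, -1):
            for t in (1, -1):
                if i - j >= (2 * k + 2 if t == s else k + 2):
                    rec(total + t * S[j], j, t)
    for j in range(N, 0, -1):
        for t in (1, -1):
            rec(t * S[j], j, t)
    R = sum(S[j] for j in range(N, 0, -(2 * k + 2)))
    assert sorted(sums) == list(range(-R, R + 1))

for k in (1, 2, 3):
    check_theorem(k, 12)
```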
Before stating our results on Gaussianity, we first need to set some new notation, which generalizes the summation in \eqref{R4(n)}: \begin{equation} \label{Rn}
R_k(n) \ := \
\begin{cases}
\sum_{0 < n-b(2k+2) \le n} S^{(k)}_{n-b(2k+2)} \ = \ S^{(k)}_n + S^{(k)}_{n-2k-2} + S^{(k)}_{n-4k-4} + \cdots & \text{ if } n > 0
\\
0 & \text{ otherwise.}
\end{cases}
\end{equation}
\begin{thm} \label{thm:Gaussianity[MW]} Fix a positive integer $k$. Let $\mathcal{K}_n$ and $\mathcal{L}_n$ be the corresponding random variables denoting the number of positive and the number of negative summands in the far-difference representation for integers in $(R_k(n-1),R_k(n)]$ from the $k$-Skipponaccis. As $n\to\infty$, the expected values of $\mathcal{K}_n$ and $\mathcal{L}_n$ both grow linearly with $n$ and differ by a constant, as do their variances. The standardized joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$ converges to a bivariate Gaussian with a computable correlation. More generally, for any non-negative numbers $a, b$ not both equal to $0$, the random variable $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a normal distribution as $n\to\infty$.
\end{thm}
\noindent This theorem is an analogue to Theorem \ref{thm:MW1 result} of \cite{MW1} for the case of Fibonacci numbers. Their proof, which is stated in Section 6 of \cite{MW1}, relies heavily on Section 5 of the same paper where the authors proved Gaussianity for a large subset of sequences whose generating function satisfies some specific constraints. In this paper we state a sufficient condition for Gaussianity in the following theorem, which we prove in \S\ref{sec:gaussianity}. We show that it applies in our case, yielding a significantly simpler proof of Gaussianity than the one in \cite{MW1}.
\begin{thm}\label{thm_generalGaussian}
Let $\kappa$ be a fixed positive integer. For each $n$, let a discrete random variable $X_n$ in $I_n=\{0,1,\dots,n\}$ have
\be {\rm Prob}(X_n=j)\ = \
\begin{cases}
\rho_{j;n}/\sum_{\ell\in I_n} \rho_{\ell;n} & \text{{\rm if} } j\in I_n \\
0 &\text{{\rm otherwise}}
\end{cases}
\ee
for some positive real numbers $\rho_{0;n},\dots, \rho_{n;n}$. Let $g_n(x) := \sum_j \rho_{j;n}x^j$ be the generating function of $X_n$. If $g_n$ has the form $g_n(x)\ = \ \sum_{i=1}^\kappa q_i(x)\alpha_i^n(x)$ where
\begin{itemize}
\item[(i)] for each $i\in\{1,\dots,\kappa\}$, $q_i,\alpha_i:\mathbb{R}\to\mathbb{R}$ are three times differentiable functions which do not depend on $n$;
\item[(ii)] there exists some small positive $\epsilon$ and some positive constant $\lambda<1$ such that for all $x\in I_\epsilon=[1-\epsilon,1+\epsilon]$, $|\alpha_1(x)|>1$ and $\frac{|\alpha_i(x)|}{|\alpha_1(x)|}<\lambda<1$ for all $i=2,\dots,\kappa$;
\item[(iii)] $\alpha_1'(1)\neq 0$ and $\left.\frac{d}{dx}\left[\frac{x\alpha_1'(x)}{\alpha_1(x)}\right]\right|_{x=1}\neq 0$;
\end{itemize}
then
\begin{itemize}
\item[(a)] The mean $\mu_n$ and variance $\sigma_n^2$ of $X_n$ both grow linearly with $n$. Specifically,
\begin{equation}
\mu_n\ = \ A n+B+o(1)
\end{equation}
\begin{equation}
\sigma_n^2\ = \ C \cdot n+ D+o(1)
\end{equation}
where \begin{equation}A\ = \ \frac{\alpha_1'(1)}{\alpha_1(1)}, \ \ \ \ B\ = \ \frac{q_1'(1)}{q_1(1)}
\end{equation}
\begin{equation}
C\ = \ \frac{d}{dx}\left(\frac{x\alpha_1'(x)}{\alpha_1(x)}\right)\Bigg|_{x=1}\ = \ \frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)]-\alpha_1'(1)^2}{\alpha_1(1)^2}
\end{equation}
\begin{equation}
D\ = \ \frac{d}{dx}\left(\frac{xq_1'(x)}{q_1(x)}\right)\Bigg|_{x=1} \ = \ \frac{q_1(1)[q_1'(1)+q_1''(1)]-q_1'(1)^2}{q_1(1)^2}.
\end{equation}
\item[(b)] As $n\to\infty$, $X_n$ converges in distribution to a normal distribution.
\end{itemize}
\end{thm}
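A quick sanity check of part (a) on a toy example (ours, not from the paper): take $q_1(x) = 1$ and $\alpha_1(x) = 1+x$, so $g_n(x) = (1+x)^n$ and $X_n$ is Binomial$(n, 1/2)$. The formulas give $A = \alpha_1'(1)/\alpha_1(1) = 1/2$, $B = 0$, $C = \bigl(\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)]-\alpha_1'(1)^2\bigr)/\alpha_1(1)^2 = 1/4$, and $D = 0$, matching the exact binomial mean $n/2$ and variance $n/4$.

```python
from math import comb

# q_1(x) = 1 and alpha_1(x) = 1 + x give g_n(x) = (1+x)^n, a Binomial(n, 1/2)
# random variable; the theorem predicts A = 1/2, B = 0, C = 1/4, D = 0.
for n in (10, 40):
    coeffs = [comb(n, j) for j in range(n + 1)]
    total = sum(coeffs)
    mean = sum(j * c for j, c in enumerate(coeffs)) / total
    var = sum(j * j * c for j, c in enumerate(coeffs)) / total - mean ** 2
    assert mean == n / 2            # A*n + B
    assert var == n / 4             # C*n + D
```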
Next we generalize previous work on gaps between summands. This result makes use of a standard result, the Generalized Binet's Formula; see \cite{BBGILMT} for a proof for a large family of recurrence relations which includes the $k$-Skipponaccis. We restate the result here for the specific case of the $k$-Skipponaccis.
\begin{lem} \label{Binet-Skipponacci}
Let $\lambda_1,\dots,\lambda_{k+1}$ be the roots of the characteristic polynomial for the $k$-Skipponaccis. Then $\lambda_1 > |\lambda_2| \ge \cdots \ge |\lambda_{k+1}|$, $\lambda_1 > 1$, and there exists a constant $a_1$ such that
\begin{equation}
S^{(k)}_n \ = \ a_1\lambda_1^n + O(n^{\max(0,k-2)}\lambda_2^n).
\end{equation}
\end{lem}
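The lemma is easy to illustrate numerically (our sketch): the characteristic polynomial $x^{k+1} - x^k - 1$ is negative at $1$, positive at $2$, and increasing between, so $\lambda_1$ can be found by bisection, and the ratio $S^{(k)}_n/\lambda_1^n$ should stabilize at $a_1$.

```python
def skipponacci(k, N):
    S = [0] * (N + 1)
    for n in range(1, N + 1):
        S[n] = n if n <= k + 1 else S[n - 1] + S[n - 1 - k]
    return S

def lambda1(k):
    """Largest root of x^(k+1) - x^k - 1, by bisection on [1, 2] (the
    polynomial changes sign there and is increasing on that interval)."""
    f = lambda x: x ** (k + 1) - x ** k - 1
    lo, hi = 1.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

for k in (1, 2, 3):
    S = skipponacci(k, 80)
    lam = lambda1(k)
    r70, r80 = S[70] / lam ** 70, S[80] / lam ** 80
    assert abs(r70 - r80) < 1e-6 * r80   # S_n / lambda_1^n stabilizes at a_1
```

For $k=1$ the bisection recovers the golden mean, and $a_1 = \varphi/\sqrt{5}$ as in the footnote above.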
\begin{thm} \label{thm:gapresult} Consider the $k$-Skipponacci numbers $\{S^{(k)}_n\}$. For each $n$, let $P_n(j)$ be the probability that the size of a gap between adjacent terms in the far-difference decomposition of a number $m \in (R_k(n-1),R_k(n)]$ is $j$. Let $\lambda_1$ denote the largest root of the recurrence relation for the $k$-Skipponacci numbers, and let $a_1$ be the coefficient of $\lambda_1$ in the Generalized Binet's formula expansion for $S^{(k)}_n$. As $n\to\infty$, $P_n(j)$ converges to geometric decay for $j \ge 2k+2$, with computable limiting values for other $j$. Specifically, we have $ \lim_{n\to\infty}P_n(j) = P(j) = 0$ for $j \le k+1$, and
\begin{equation}
P(j) \ = \ \begin{cases}
\frac{a_1\lambda_1^{-3k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j} & \text{if }\; k+2 \le j < 2k+2 \\
\frac{a_1\lambda_1^{-2k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j} & \text{if }\; j \ge 2k+2.
\end{cases}
\end{equation}
where $A_{1,1}$ is a constant defined in \eqref{E[K+L]}.
\end{thm}
Our final results explore a complete characterization of sequences that exhibit far-difference representations. That is, we study integer decompositions on a sequence of terms in which same sign summands are $s$ apart in index and opposite sign summands are $d$ apart in index. We call such representations \emph{(s,d) far-difference representations}, which we formally define below.
\begin{defn}[$(s,d)$ far-difference representation]\label{def:sdfardiffrep} A sequence $\{a_n\}$ has an \emph{$(s,d)$ far-difference representation} if every integer can be written uniquely as a sum of terms $\pm a_n$ in which every two terms of the same sign are at least $s$ apart in index and every two terms of opposite sign are at least $d$ apart in index.
\end{defn}
Thus the Fibonaccis lead to a $(4,3)$ far-difference representation. More generally, the $k$-Skipponaccis lead to a $(2k+2,k+2)$ one. We can consider the reverse problem; if we are given a pair of positive integers $(s,d)$, is there a sequence such that each number has a unique $(s,d)$ far-difference representation? The following theorem shows that the answer is yes, and gives a construction for the sequence.
\begin{thm}\label{farDiffRec} Fix positive integers $s$ and $d$, and define a sequence $\{a_n\}_{n=1}^{\infty}$ by
\begin{itemize}
\item[i.] For $n=1,2,\dots,\min(s,d)$, let $a_n=n$.
\item[ii.] For $\min(s,d)< n\leq \max(s,d)$, let
\be a_n \ = \ \left\{
\begin{array}{l l}
a_{n-1}+a_{n-s} & \quad \text{{\rm if}\ $s<d$}\\
a_{n-1}+a_{n-d}+1 & \quad \text{{\rm if}\ $d\leq s$.}
\end{array} \right.\ee
\item[iii.] For $n> \max(s,d)$, let $a_n=a_{n-1}+a_{n-s}+a_{n-d}$.
\end{itemize}
Then the sequence $\{a_n\}$ has a unique $(s,d)$ far-difference representation.
\end{thm}
In particular, as the Fibonaccis give rise to a $(4,3)$ far-difference representation, we should have $F_n = F_{n-1} + F_{n-4} + F_{n-3}$. We see this is true by repeatedly applying the standard Fibonacci recurrence: \bea F_n \ = \ F_{n-1} + F_{n-2} \ = \ F_{n-1} + \left(F_{n-3} + F_{n-4}\right) \ = \ F_{n-1} + F_{n-4} + F_{n-3}. \eea
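The construction in the theorem can be checked exhaustively for small $(s,d)$; the sketch below (helper names are ours) builds the candidate sequence and counts, for every reachable integer, its legal $(s,d)$ sign patterns on the first $N$ terms:

```python
from itertools import combinations, product

def sd_sequence(s, d, N):
    """First N terms of the candidate (s,d) sequence (1-based; a[0] unused)."""
    a = [0] * (N + 1)
    for n in range(1, N + 1):
        if n <= min(s, d):
            a[n] = n
        elif n <= max(s, d):
            a[n] = a[n-1] + a[n-s] if s < d else a[n-1] + a[n-d] + 1
        else:
            a[n] = a[n-1] + a[n-s] + a[n-d]
    return a

def representation_counts(s, d, N):
    """Count, for every integer reachable from +-a_1,...,+-a_N, the number
    of legal (s,d) far-difference sign patterns summing to it."""
    a = sd_sequence(s, d, N)
    counts = {}
    for signs in product((-1, 0, 1), repeat=N):
        used = [(i, e) for i, e in enumerate(signs, start=1) if e]
        # same-sign indices must be >= s apart, opposite-sign >= d apart
        if all((j - i >= s if ei == ej else j - i >= d)
               for (i, ei), (j, ej) in combinations(used, 2)):
            x = sum(e * a[i] for i, e in used)
            counts[x] = counts.get(x, 0) + 1
    return a, counts
```

For $(s,d)=(4,3)$ the construction recovers $1,2,3,5,8,13,\dots$, the Fibonaccis, and every integer in $[-R,R]$ with $R=a_N+a_{N-s}+a_{N-2s}+\cdots$ is found to have exactly one representation (the empty sum representing $0$).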
To prove our results we generalize the techniques from \cite{Al, BBGILMT, MW1} to our families. In \S\ref{sec:fardiffrepskip} we prove that for any $k$-Skipponacci recurrence relation, a unique far-difference representation exists for all positive integers. In \S\ref{sec:gaussianity} we prove that the number of summands in any far-difference representation approaches a Gaussian, and then we study the distribution of gaps between summands in \S\ref{sec:distrgaps}. We end in \S\ref{sec:genfardiffseq} by exploring generalized $(s,d)$ far-difference representations.
\section{Far-difference representation of $k$-Skipponaccis}\label{sec:fardiffrepskip}
Recall the $k$-Skipponaccis satisfy the recurrence $S^{(k)}_{n+1} = S^{(k)}_n + S^{(k)}_{n-k}$ with $S^{(k)}_i = i$ for $1 \le i \le k+1$. Some common $k$-Skipponacci sequences are the 0-Skipponaccis (the binary sequence) and the 1-Skipponaccis (the Fibonaccis). We prove that every integer has a unique far-difference representation arising from the $k$-Skipponaccis. The proof is similar to Alpert's proof for the Fibonacci numbers.
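For concreteness, a short illustrative snippet (ours) generates these sequences directly from the recurrence:

```python
def skipponacci(k, N):
    """First N k-Skipponacci numbers: S_i = i for 1 <= i <= k+1,
    then S_{n+1} = S_n + S_{n-k}."""
    S = list(range(1, k + 2))
    while len(S) < N:
        S.append(S[-1] + S[-1 - k])
    return S
```

Here `skipponacci(0, 6)` returns the powers of two $1,2,4,8,16,32$, and `skipponacci(1, 7)` returns the shifted Fibonaccis $1,2,3,5,8,13,21$.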
We break the analysis into integers in intervals $(R_k(n-1), R_k(n)]$, with $R_k(n)$ as in \eqref{Rn}. We need the following fact.
\begin{lem} \label{Lem:R+R=S-1} Let $\{S^{(k)}_n\}$ be the $k$-Skipponacci sequence. Then
\begin{equation} \label{lemma1}
S^{(k)}_{n} - R_k(n-k-2) - R_k(n-1)=1.
\end{equation}
\end{lem}
The proof follows by a simple induction argument, which for completeness we give in Appendix \ref{sec:proofsfromsecfardiffreplemmas}.
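The identity is easy to test numerically. The sketch below (function names ours) takes $R_k(n)=S^{(k)}_n+S^{(k)}_{n-(2k+2)}+S^{(k)}_{n-(4k+4)}+\cdots$, with $R_k(n)=0$ for $n\le 0$, as in \eqref{Rn}:

```python
def skipponacci(k, N):
    """S_i = i for 1 <= i <= k+1, then S_{n+1} = S_n + S_{n-k}."""
    S = list(range(1, k + 2))
    while len(S) < N:
        S.append(S[-1] + S[-1 - k])
    return S

def R(k, n, S):
    """R_k(n) = S_n + S_{n-(2k+2)} + S_{n-(4k+4)} + ... (zero for n <= 0):
    the largest legal sum whose leading term is S_n."""
    total = 0
    while n >= 1:
        total += S[n - 1]
        n -= 2 * k + 2
    return total
```

Checking $S^{(k)}_{n} - R_k(n-k-2) - R_k(n-1)=1$ over a range of $k$ and $n$ confirms the lemma in every tested case.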
\begin{proof}[Proof of Theorem \ref{Thm:Far-Diff}] It suffices to consider the decomposition of positive integers, as negative integers follow similarly. Note the number 0 is represented by the decomposition with no summands.
We claim that the positive integers are the disjoint union over all closed intervals of the form $[S^{(k)}_n - R_k(n-k-2), R_k(n)]$. To prove this, it suffices to show that $S^{(k)}_{n} - R_k(n-k-2) = R_k(n-1) + 1$ which follows immediately from Lemma \ref{Lem:R+R=S-1}.
Assume a positive integer $x$ has a $k$-Skipponacci far-difference representation in which $S^{(k)}_n$ is the leading term (i.e., the term of largest index). By the gap conditions, the largest number that can be decomposed with leading term $S^{(k)}_n$ is $ S^{(k)}_n+S^{(k)}_{n-2k-2}+S^{(k)}_{n-4k-4}+\cdots=R_k(n)$, and the smallest is $S^{(k)}_n-S^{(k)}_{n-k-2}-S^{(k)}_{n-3k-4}-\cdots=S^{(k)}_n-R_k(n-k-2)$; hence $S^{(k)}_n-R_k(n-k-2)\leq x\leq R_k(n)$. Since we proved that $\{[S^{(k)}_n - R_k(n-k-2), R_k(n)]\}_{n=1}^\infty$ is a disjoint cover of the positive integers, for any $x\in \mathbb{Z}^+$ there is a unique $n$ such that $S^{(k)}_n - R_k(n-k-2) \le x \le R_k(n)$. Further, if $x$ has a $k$-Skipponacci far-difference representation, then $S^{(k)}_n$ must be its leading term.
Therefore if a decomposition of such an $x$ exists it must begin with $S^{(k)}_n$. We are left with proving a decomposition exists and that it is unique. We proceed by induction.
For the base case, let $n=0$. Notice that the only value for $x$ on the interval $0 \le x \le R_k(0)$ is $x=0$, and the $k$-Skipponacci far-difference representation of $x$ is empty for any $k$. Assume that every integer $x$ satisfying $0 \le x \le R_k(n-1)$ has a unique far-difference representation. We now consider $x$ such that $R_k(n-1) < x \le R_k(n)$. From our partition of the integers, $x$ satisfies $S^{(k)}_n - R_k(n-k-2) \le x \le R_k(n)$. There are two cases.
\begin{itemize}
\item[(1)] $S^{(k)}_n - R_k(n-k-2) \le x \le S^{(k)}_n$. \\
Note that for this case, it is equivalent to say $0 \le S^{(k)}_n - x \le R_k(n-k-2)$. It then follows from the inductive step that $S^{(k)}_n - x$ has a unique $k$-Skipponacci far-difference representation with $S^{(k)}_{n-k-2}$ as the upper bound for the main term.
\item[(2)] $S^{(k)}_n \le x \le R_k(n)$. \\
For this case, we can once again subtract $S^{(k)}_n$ from both sides of the inequality to get $0 \le x-S^{(k)}_n \le R_k(n-2k-2)$. It then follows from the inductive step that $x-S^{(k)}_n$ has a unique far-difference representation with main term at most $S^{(k)}_{n-2k-2}$.
\end{itemize}
In either case, we can generate a unique $k$-Skipponacci far-difference representation for $x$ by adding $S^{(k)}_n$ to the representation for $x - S^{(k)}_n$ (which, from the definition of $R_k(m)$, in both cases has the index of its largest summand sufficiently far away from $n$ to qualify as a far-difference representation). \end{proof}
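The induction above is constructive and translates directly into a decomposition algorithm; an illustrative sketch (ours), which locates the unique interval containing $x$, peels off the leading term, and recurses:

```python
def far_difference(x, k, N=45):
    """Decompose x as in the proof: find the unique n with
    S_n - R_k(n-k-2) <= x <= R_k(n), take S_n, recurse on x - S_n.
    Returns (index, sign) pairs with indices strictly decreasing."""
    S = list(range(1, k + 2))
    while len(S) < N:
        S.append(S[-1] + S[-1 - k])

    def R(n):
        total = 0
        while n >= 1:
            total += S[n - 1]
            n -= 2 * k + 2
        return total

    def rec(y):
        if y == 0:
            return []
        if y < 0:                       # negative integers by symmetry
            return [(n, -e) for n, e in rec(-y)]
        n = 1
        while not (S[n - 1] - R(n - k - 2) <= y <= R(n)):
            n += 1
        return [(n, 1)] + rec(y - S[n - 1])

    return rec(x)
```

For example, with $k=1$ one finds $10 = S^{(1)}_6 - S^{(1)}_3 = 13 - 3$, and every reconstructed representation satisfies the gap conditions (same-sign indices at least $2k+2$ apart, opposite-sign at least $k+2$ apart).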
\section{Gaussian Behavior}\label{sec:gaussianity}
In this section we follow the method of Section 6 of \cite{MW1} to prove Gaussianity for the number of summands. We first find the generating function for the problem, and then analyze that function to complete the proof.
\subsection{Derivation of the Generating Function}\label{sec:derivgenfns}
Let $p_{n,m,\ell}$ be the number of integers in $(R_k(n-1), R_k(n)]$ with exactly $m$ positive summands and exactly $\ell$ negative summands in their far-difference decomposition via the $k$-Skipponaccis (as $k$ is fixed, for notational convenience we suppress $k$ in the definition of $p_{n,m,\ell}$). When $n \le 0$ we let $p_{n,m,\ell}$ be 0. We first derive a recurrence relation for $p_{n,m,\ell}$ by a combinatorial approach, from which the generating function immediately follows.
\begin{lem} Notation as above, for $n > 1$ we have
\begin{equation} \label{prec1}
p_{n,m,\ell}\ = \ p_{n-1,m,\ell}+ p_{n-(2k+2),m-1,\ell} + p_{n-(k+2),\ell,m-1}.
\end{equation}
\end{lem}
\begin{proof} First note that $p_{n,m,\ell} = 0$ if $m \le 0$ or $\ell < 0 $. In \S\ref{sec:fardiffrepskip} we partitioned the integers into the intervals $[R_k(n-1)+1,R_k(n)]$, and noted that if an integer $x$ in this interval has a far-difference representation, then it must have leading term $S^{(k)}_n$, and thus $x - S^{(k)}_n \in [R_k(n-1)+1-S^{(k)}_n,R_k(n)-S^{(k)}_n]$. From Lemma \ref{Lem:R+R=S-1} we have
\bea\label{S-R_n-1-R_n-k-2=1}
S^{(k)}_n - R_k(n-1) - R_k(n-k-2)
\ = \ 1,
\eea which implies $R_k(n-1) + 1 - S^{(k)}_n = -R_k(n-k-2)$. Thus $p_{n,m,\ell}$ is the number of far-difference representations for integers in $[-R_k(n-k-2), R_k(n-2k-2)]$ with $m-1$ positive summands and $\ell$ negative summands (as we subtracted away the main term $S^{(k)}_n$).
Let $n > 2k+2$. There are two possibilities.\\
\noindent \texttt{Case 1: $(m-1,\ell) = (0,0)$.}
\noindent Since $S^{(k)}_n - R_k(n-1) - R_k(n-k-2) = 1$ by \eqref{S-R_n-1-R_n-k-2=1}, we know that $S^{(k)}_{n-1} \le R_k(n-1) < S^{(k)}_n$ for all $n > 1$. This means there must be exactly one $k$-Skipponacci number in the interval $[R_k(n-1)+1,R_k(n)]$ for all $n > 1$. It follows that $p_{n,1,0} = p_{n-1,1,0} = 1$, and the recurrence in \eqref{prec1} follows since $p_{n-k-2,0,0}$ and $p_{n-2k-2,0,0}$ are both 0 for all $n > 2k+2$. \\
\noindent \texttt{Case 2: $(m-1,\ell)$ is not $(0,0)$.}
\noindent Let $N(I,m,\ell)$ be the number of far-difference representations of integers in the interval $I$ with $m$ positive summands and $\ell$ negative summands. Thus
\begin{align} \label{pnml_sum1}
p_{n,m,\ell}
\;\ = \ &\; N\left[ (0,R_k(n-2k-2)],m-1,\ell \right] + N\left[ (-R_k(n-k-2),0],m-1,\ell \right] \nonumber \\
\;\ = \ &\; N\left[ (0,R_k(n-2k-2)],m-1,\ell \right] + N\left[ (0,R_k(n-k-2)],\ell,m-1 \right] \nonumber \\
\;\ = \ &\; \sum_{i=1}^{n-2k-2} p_{i,m-1,\ell} + \sum_{i=1}^{n-k-2} p_{i,\ell,m-1}.
\end{align}
Since $n > 1$, we can replace $n$ with $n-1$ in \eqref{pnml_sum1} to get
\begin{equation} \label{pnml_sum2}
p_{n-1,m,\ell}
\;\ = \ \; \sum_{i=1}^{n-2k-3} p_{i,m-1,\ell} + \sum_{i=1}^{n-k-3} p_{i,\ell,m-1}.
\end{equation}
Subtracting \eqref{pnml_sum2} from \eqref{pnml_sum1} gives us the desired expression for $p_{n,m,\ell}$. \end{proof}
The generating function $G_k(x,y,z)$ for the far-difference representations by $k$-Skipponacci numbers is defined by \be G_k(x,y,z)\ =\ \sum p_{n,m,\ell}x^my^{\ell}z^n. \ee
\begin{thm} \label{Thm:G_k(x,y,z)} Notation as above, we have
\begin{equation} \label{genfn}
G_k(x,y,z)
\;\ = \ \; \frac{xz-xz^2+xyz^{k+3}-xyz^{2k+3}}{1-2z+z^2-(x+y)z^{2k+2}+(x+y)z^{2k+3}-xyz^{2k+4}+xyz^{4k+4}}.
\end{equation}
\end{thm}
\begin{proof} Note that the equality in \eqref{prec1} holds for all triples $(n,m,\ell)$ except for the case where $n=1$, $m=1$, and $\ell=0$ under the assumption that $p_{n,m,\ell}=0$ whenever $n\leq 0$. To prove the claimed formula for the generating function in \eqref{genfn}, however, we require a recurrence relation in which each term is of the form $p_{n-n_0,m-m_0,\ell-\ell_0}$. This can be achieved with some simple substitutions. Replacing $(n,m,\ell)$ in \eqref{prec1} with $(n-k-2,\ell,m-1)$ gives
\begin{equation} \label{prec2}
p_{n-k-2,\ell,m-1}\ = \ p_{n-(k+3),\ell,m-1}+ p_{n-(3k+4),\ell-1,m-1} + p_{n-(2k+4),m-1,\ell-1},
\end{equation} which holds for all triples except $(k+3,1,1)$. Rearranging the terms of \eqref{prec1}, we get
\begin{equation} \label{prec3}
p_{n-(k+2),\ell,m-1} \ = \ p_{n,m,\ell} - p_{n-1,m,\ell} - p_{n-(2k+2),m-1,\ell}.
\end{equation}
We replace $(n,m,\ell)$ in \eqref{prec3} with $(n-1,m,\ell)$ and $(n-2k-2,m,\ell-1)$, which yields
\begin{equation} \label{prec4}
p_{n-(k+3),\ell,m-1} \ = \ p_{n-1,m,\ell} - p_{n-2,m,\ell} - p_{n-(2k+3),m-1,\ell},
\end{equation} which only fails for the triple $(2,1,0)$, and
\begin{equation} \label{prec5}
p_{n-(3k+4),\ell-1,m-1} \ = \ p_{n-(2k+2),m,\ell-1} - p_{n-(2k+3),m,\ell-1} - p_{n-(4k+4),m-1,\ell-1},
\end{equation} which only fails for the triple $(2k+3,1,1)$. We substitute equations \eqref{prec2}, \eqref{prec4} and \eqref{prec5} into \eqref{prec1} and obtain the following expression for $p_{n,m,\ell}$:
\begin{align} \label{pnmlrec}
p_{n,m,\ell}
\;\ = \ &\; 2p_{n-1,m,\ell} - p_{n-2,m,\ell} + p_{n-(2k+2),m-1,\ell} + p_{n-(2k+2),m,\ell-1} \nonumber \\
\;&\; - p_{n-(2k+3),m-1,\ell} - p_{n-(2k+3),m,\ell-1} + p_{n-(2k+4),m-1,\ell-1} - p_{n-(4k+4),m-1,\ell-1}.
\end{align}
Using this recurrence relation, we prove that the generating function in \eqref{genfn} is correct. Consider the characteristic polynomial associated with the recurrence in \eqref{pnmlrec}:
\begin{equation} \label{Pxyz}
P(x,y,z)
\ = \ 1 - 2z + z^2 -(x+y)z^{2k+2} + (x+y)z^{2k+3} - xyz^{2k+4} + xyz^{4k+4}.
\end{equation}
We take the product of this polynomial with the generating function to get
\begin{align} \label{GenRec}
P(x,y,z)G_k(x,y,z)
\;\ = \ &\; \left( 1 - 2z + z^2 -(x+y)z^{2k+2} + (x+y)z^{2k+3} - xyz^{2k+4}\right. \nonumber \\
\;&\; \left. + xyz^{4k+4}\right) \cdot \sum_{n,m,\ell} p_{n,m,\ell}x^my^{\ell}z^n \nonumber \\
\;\ = \ &\; \sum_{n,m,\ell} \big( p_{n,m,\ell} - 2p_{n-1,m,\ell} + p_{n-2,m,\ell} - p_{n-(2k+2),m-1,\ell} \nonumber \\
\;&\; - p_{n-(2k+2),m,\ell-1} + p_{n-(2k+3),m-1,\ell} + p_{n-(2k+3),m,\ell-1} \nonumber \\
\;&\; - p_{n-(2k+4),m-1,\ell-1} + p_{n-(4k+4),m-1,\ell-1} \big)x^my^{\ell}z^n.
\end{align}
Notice that the combination from \eqref{pnmlrec} appears within the summation, and the corresponding coefficient is zero whenever the equality in \eqref{pnmlrec} holds. We have shown that the only triples $(n,m,\ell)$ that fail to satisfy one of \eqref{prec1}, \eqref{prec2}, \eqref{prec4} or \eqref{prec5} are $(1,1,0)$, $(2,1,0)$, $(k+3,1,1)$ and $(2k+3,1,1)$; since \eqref{pnmlrec} is a combination of these relations, these are also the only triples that can fail \eqref{pnmlrec}. Thus within the summation in \eqref{GenRec} only these four triples contribute a non-zero coefficient to $x^my^{\ell}z^n$. We collect these terms and are left with the following:
\begin{equation}
P(x,y,z)G_k(x,y,z) \ = \ xz - xz^2 + xyz^{k+3} - xyz^{2k+3}.
\end{equation}
Rearranging these terms and substituting in our value for $P(x,y,z)$ gives us the desired equation for the generating function.
\end{proof}
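The recurrence \eqref{prec1} underlying this generating function can be validated against brute-force counts; a sketch (helper names ours) decomposes every integer in $(R_k(n-1), R_k(n)]$ via the algorithm from \S\ref{sec:fardiffrepskip} and compares with the counts generated by the recurrence:

```python
from bisect import bisect_left
from collections import defaultdict

def brute_force_p(k, n_max):
    """Count integers in (R_k(n-1), R_k(n)] with m positive and l negative
    summands by explicitly decomposing each one."""
    N = n_max + 1
    S = [0] * (N + 1)
    for i in range(1, N + 1):
        S[i] = i if i <= k + 1 else S[i - 1] + S[i - 1 - k]
    R = [0] * (N + 1)
    for i in range(1, N + 1):
        R[i] = S[i] + (R[i - 2*k - 2] if i - 2*k - 2 >= 1 else 0)
    counts = defaultdict(int)
    for n in range(1, n_max + 1):
        for x in range(R[n - 1] + 1, R[n] + 1):
            m = l = 0
            sign, y = 1, x
            while y:
                if y < 0:
                    sign, y = -sign, -y
                i = bisect_left(R, y, lo=1)  # unique i with R[i-1] < y <= R[i]
                if sign == 1:
                    m += 1
                else:
                    l += 1
                y -= S[i]
            counts[(n, m, l)] += 1
    return counts

def recurrence_p(k, n_max):
    """Generate p_{n,m,l} from the recurrence
    p_{n,m,l} = p_{n-1,m,l} + p_{n-(2k+2),m-1,l} + p_{n-(k+2),l,m-1},
    seeded at the single exceptional triple (1,1,0)."""
    p = {(1, 1, 0): 1}
    for n in range(2, n_max + 1):
        for m in range(1, n + 1):
            for l in range(0, n + 1):
                v = (p.get((n - 1, m, l), 0)
                     + p.get((n - 2*k - 2, m - 1, l), 0)
                     + p.get((n - k - 2, l, m - 1), 0))
                if v:
                    p[(n, m, l)] = v
    return p
```

The two tables agree exactly for every tested $k$ and $n$, which also confirms \eqref{genfn}, since the generating function is an algebraic repackaging of the recurrence.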
Going forward, we often need the modified version of our generating function in which we factor out the term $(1-z)$ from both the numerator and the denominator:
\begin{align} \label{Genfn2}
G_k(x,y,z)
\;\ = \ &\; \frac{ xz + \frac{1-z^k}{1-z}xyz^{k+3} }{1-z-(x+y)z^{2k+2} + \frac{1-z^{2k}}{1-z}\left(-xyz^{2k+4}\right) } \nonumber \\
\;\ = \ &\; \frac{xz + xy\sum_{j=k+3}^{2k+2}z^j}{1-z-(x + y)z^{2k+2}-xy\sum_{j=2k+4}^{4k+3}z^j}.
\end{align}
For some calculations, it is more convenient to use this form of the generating function because the terms of the denominator are of the same sign (excluding the constant term).
\subsection{Proof of Theorem \ref{thm:Gaussianity[MW]}}\label{sec:subsecgaussianity}
Now that we have the generating function, we turn to proving Gaussianity. As the calculation is long and technical, we quickly summarize the main idea. We find, for $\kappa = 4k+3$, that we can write the relevant generating function as a sum of $\kappa$ terms. Each term is a product, and there is no $n$-dependence in the product (the $n$ dependence surfaces by taking one of the terms in the product to the $n$\textsuperscript{th} power). We then mimic the proof of the Central Limit Theorem. Specifically, we show only the first of the $\kappa$ terms contributes in the limit. We then Taylor expand and use logarithms to understand its behavior. The reason everything works so smoothly is that we almost have a fixed term raised to the $n$\textsuperscript{th} power; if we had that, the Central Limit Theorem would follow immediately. All that remains is to do some book-keeping to see that the mean is of size $n$ and the standard deviation of size $\sqrt{n}$.\\
To prove Theorem \ref{thm:Gaussianity[MW]}, we first prove that for each non-negative $(a,b)\neq (0,0)$, $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a normal distribution as $n$ approaches infinity.
Let $x=w^a$ and $y=w^b$; then the coefficient of $z^n$ in \eqref{genfn} is given by $\sum_{m,\ell} p_{n,m,\ell}x^my^{\ell}=\sum_{m,\ell} p_{n,m,\ell} w^{am+b\ell}$. Define
\begin{equation}
g_n(w) \ := \ \sum_{m>0,\ell\ge 0} p_{n,m,\ell}w^{am + b\ell}.
\end{equation}
Then $g_n(w)$ is the generating function of $X_n$ because for each $i\in\{1,\dots,n\}$,
\begin{equation}
P(X_n=i)\ = \ \sum_{am+b\ell =i}p_{n,m,\ell}.
\end{equation}
We want to prove $g_n(w)$ satisfies all the conditions stated in Theorem \ref{thm_generalGaussian}. The following proposition, which is proved in Appendix \ref{sec:propmainres}, is useful for that purpose.
\begin{prop} There exists $\epsilon \in (0,1)$ such that for any $w \in I_{\epsilon} = (1-\epsilon,1+\epsilon)$:\label{prop:mainres}
\begin{itemize}
\item[(a)] $A_w(z)$ has no multiple roots, where $A_w(z)$ is the denominator of \eqref{genfn}.
\item[(b)] There exists a single positive real root $e_1(w)$ such that $e_1(w) < 1$ and there exists some positive $\lambda<1$ such that $|e_1(w)|/|e_i(w)|<\lambda$ for all $i \ge 2$.
\item[(c)] Each root $e_i(w)$ is continuous, infinitely differentiable, and
\begin{equation} \label{eprime}
e_1'(w)\ = \ -\frac{(aw^{a-1}+bw^{b-1})e_1(w)^{2k+2}+(a+b)w^{a+b-1}\sum_{j=2k+4}^{4k+3}e_1(w)^j}{1+(w^a+w^b)(2k+2)e_1(w)^{2k+1}+w^{a+b}
\sum_{j=2k+4}^{4k+3}je_1(w)^{j-1}}.
\end{equation}
\end{itemize}
\end{prop}
In the next step, we use partial fraction decomposition of $G_k(x,y,z)$ (from Theorem \ref{Thm:G_k(x,y,z)}) to find a formula for $g_n(w)$. Let $A_w(z)$ be the denominator of $G_k$. Making the substitution $(x,y) = (w^a,w^b)$, we have
\begin{align} \label{pfA_w(z)}
\frac{1}{A_w(z)}
\;\ = \ &\; \frac{1}{w^{a+b}} \sum_{i=1}^{4k+3} \frac{1}{(z-e_i(w))\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;\ = \ &\; \frac{1}{w^{a+b}}\sum_{i=1}^{4k+3} \frac{1}{(1-\frac{z}{e_i(w)})} \cdot \frac{1}{e_i(w)\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{align}
Using the fact that $\frac{1}{1-\frac{z}{e_i(w)}}$ represents a geometric series, we combine the numerator of our generating function with our expression for the denominator in \eqref{pfA_w(z)} to get
\begin{align}
g_n(w)
\;\ = \ &\; \sum_{i=1}^{4k+3} \frac{1}{w^b e_i^n(w)\prod_{j \neq i}(e_j(w) - e_i(w))} -\sum_{i=1}^{4k+3} \frac{1}{w^b e_i^{n-1}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;&\; + \sum_{i=1}^{4k+3} \frac{1}{e_i^{n-k-2}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} - \sum_{i=1}^{4k+3} \frac{1}{e_i^{n-2k-2}(w)\prod_{j \neq i}(e_j(w) - e_i(w))} \nonumber \\
\;\ = \ &\; \sum_{i=1}^{4k+3} \frac{w^{-b}(1 - e_i(w)) + e_i^{k+2}(w) - e_i^{2k+2}(w)}{e_i^n(w)\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{align}
Let $q_i(w)$ denote all terms of $g_n(w)$ that do not depend on $n$:
\begin{equation} \label{q(w)}
q_i(w) \ := \ \frac{w^{-b}(1 - e_i(w)) + e_i^{k+2}(w) - e_i^{2k+2}(w)}{\prod_{j \neq i}(e_j(w) - e_i(w))}.
\end{equation}
Setting $\alpha_i := 1/e_i$, we can write $g_n(w) = \sum_{i=1}^{4k+3} q_i(w)\alpha_i^n(w)$. We want to apply Theorem \ref{thm_generalGaussian} to $X_n$; all notation is as there, with $\kappa:=4k+3$.
Indeed, by part (c) of Proposition \ref{prop:mainres}, the $e_i(w)$ are infinitely differentiable for $i=1,\dots,4k+3$. Since $0$ is not a root of $A_w(z)$, for sufficiently small $\epsilon$ we have $e_i(w)\neq 0$ for all $w\in I_\epsilon$. Therefore $\alpha_i$ and $q_i$, as rational functions of $e_1,\dots,e_{4k+3}$, are also infinitely differentiable; in particular they are three times differentiable, and thus satisfy condition $(i)$ in Theorem \ref{thm_generalGaussian}. By part (b) of Proposition \ref{prop:mainres}, $|e_1(w)|<1$ and $|e_1(w)|/|e_i(w)|<\lambda<1$ for $i\geq 2$. This implies $|\alpha_1(w)|>1$ and $|\alpha_i(w)|/|\alpha_1(w)|<\lambda<1$ for $i\geq 2$, so $g_n$ satisfies condition $(ii)$ in Theorem \ref{thm_generalGaussian}. The following lemma, whose proof is given in Appendix \ref{sec:proof_lem_variance_grow}, verifies the last condition.
\begin{lem}\label{lem_variance_grow} Given conditions as above:
\begin{equation}\label{nonzero_mean}
\frac{\alpha_1'(1)}{\alpha_1(1)}\ = \ \frac{-e'_1(1)}{e_1(1)}\ \neq \ 0.
\end{equation}
\begin{equation}\label{nonzero_variance}
\frac{d}{dw}\left[\frac{w\alpha_1'(w)}{\alpha_1(w)}\right] \Big|_{w=1}\ = \ -
\frac{d}{dw}\left[\frac{we_1'(w)}{e_1(w)}\right] \Big|_{w=1}\ \neq \ 0.
\end{equation}
\end{lem}
We can now apply Theorem \ref{thm_generalGaussian} to conclude that $X_n$ converges to a Gaussian as $n$ approaches infinity. Moreover, we have formulas for the mean and variance of $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ for each $(a,b)$ non-negative and not both zero. We have
\begin{equation} \label{E[K+L]}
\mathbb{E}[a\mathcal{K}_n+b\mathcal{L}_n] \ = \ A_{a,b}n + B_{a,b} + o(1),\end{equation}
where $A_{a,b}=\alpha'_1(1)/\alpha_1(1)$ and $B_{a,b}=q_1'(1)/q_1(1)$, which depend only on our choice of $a$ and $b$. Further,
\begin{equation} \label{Var[K+L]}
{\rm Var}(a\mathcal{K}_n + b\mathcal{L}_n) \ = \ C_{a,b}n + D_{a,b} + o(1),
\end{equation} where $C_{a,b}\ = \ \left(\frac{w\alpha_1'(w)}{\alpha_1(w)}\right)'\Big|_{w=1}$ and
$D_{a,b}\ = \ \left(\frac{wq_1'(w)}{q_1(w)}\right)'\Big|_{w=1}$,
which depend only on $a$ and $b$. By Lemma \ref{lem_variance_grow}, $A_{a,b}$ and $C_{a,b}$ are non-zero; thus the mean and variance of $X_n$ grow linearly with $n$.
As proved above, $X_n=a\mathcal{K}_n+b\mathcal{L}_n$ converges to a Gaussian distribution as $n\to\infty$. Taking $(a,b)=(1,0)$ and $(0,1)$, we see that $\mathcal{K}_n$ and $\mathcal{L}_n$ individually converge to Gaussians. By \eqref{E[K+L]}, their means both grow linearly with $n$:
\begin{equation}
\mathbb{E}[\mathcal{K}_n]=A_{1,0}n+B_{1,0}+o(1)
\end{equation}
\begin{equation}
\mathbb{E}[\mathcal{L}_n]=A_{0,1}n+B_{0,1}+o(1)
\end{equation}
Moreover, $A_{a,b}=A_{b,a}$ because $A_{a,b}=\frac{\alpha_1'(1)}{\alpha_1(1)}=\frac{-e_1'(1)}{e_1(1)}$, where $e_1(1)$ is a constant and $e'_1(1)$ is symmetric in $a$ and $b$, as shown in \eqref{eprime}. In particular $A_{1,0}=A_{0,1}$, hence $\mathbb{E}[\mathcal{K}_n]-\mathbb{E}[\mathcal{L}_n]$ converges to a constant as $n\to\infty$. This implies that the average numbers of positive and negative summands differ by a constant in the limit.
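This constancy can be observed numerically by generating the counts $p_{n,m,\ell}$ from \eqref{prec1} and computing the resulting means; an illustrative sketch (ours):

```python
def mean_summands(k, n_max):
    """Return (E[K_n], E[L_n]) for n = 1, ..., n_max, computed from the
    counts p_{n,m,l} generated by the recurrence for p_{n,m,l}."""
    p = {(1, 1, 0): 1}                      # exceptional seed triple
    for n in range(2, n_max + 1):
        for m in range(1, n + 1):
            for l in range(0, n + 1):
                v = (p.get((n - 1, m, l), 0)
                     + p.get((n - 2*k - 2, m - 1, l), 0)
                     + p.get((n - k - 2, l, m - 1), 0))
                if v:
                    p[(n, m, l)] = v
    totals = {}
    for (n, m, l), v in p.items():
        t = totals.setdefault(n, [0, 0, 0])
        t[0] += v          # number of integers in the n-th interval
        t[1] += m * v      # total positive summands
        t[2] += l * v      # total negative summands
    return [(t[1] / t[0], t[2] / t[0]) for n, t in sorted(totals.items())]
```

For $k=1$, $\mathbb{E}[\mathcal{K}_n]$ and $\mathbb{E}[\mathcal{L}_n]$ both grow roughly linearly while their difference stays in a narrow band, consistent with $\mathbb{E}[\mathcal{K}_n]-\mathbb{E}[\mathcal{L}_n]\to B_{1,0}-B_{0,1}$.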
Equation \eqref{Var[K+L]} gives us a way to calculate variance of any joint density of $\mathcal{K}_n$ and $\mathcal{L}_n$. We can furthermore calculate the covariance and correlation of any two joint densities as a function of $e_1$ and $q_1$.
In particular, we prove that $\mathcal{K}_n+\mathcal{L}_n$ and $\mathcal{K}_n-\mathcal{L}_n$ have correlation decaying to zero with $n$. Indeed, from \eqref{Var[K+L]}:
\begin{equation}
{\rm Var}[\mathcal{K}_n]\ = \ C_{1,0}n+D_{1,0}+o(1).
\end{equation}
\begin{equation}
{\rm Var}[\mathcal{L}_n]\ = \ C_{0,1}n+D_{0,1}+o(1).
\end{equation}
\noindent Note that $C_{0,1}=C_{1,0}$ because again we have \be C_{a,b}\ = \ \left(\frac{w\alpha_1'(w)}{\alpha_1(w)}\right)'\Bigg|_{w=1}\ = \ - \left(\frac{we_1'(w)}{e_1(w)}\right)'\Big|_{w=1}\ee
where $e_1(w)$ does not depend on $a,b$ and $e'_1(w)$ is symmetric between $a,b$. Therefore,
\begin{equation}
{\rm Cov}[\mathcal{K}_n+\mathcal{L}_n,\mathcal{K}_n-\mathcal{L}_n] \ = \ \frac{{\rm Var}[2\mathcal{K}_n]-{\rm Var}[2\mathcal{L}_n]}{4} \ = \ {\rm Var}[\mathcal{K}_n]-{\rm Var}[\mathcal{L}_n]\ = \ O(1).
\end{equation}
Therefore
\begin{equation}
{\rm Corr}[\mathcal{K}_n+\mathcal{L}_n,\mathcal{K}_n-\mathcal{L}_n]\ = \ \frac{{\rm Cov}[\mathcal{K}_n+\mathcal{L}_n, \mathcal{K}_n-\mathcal{L}_n]}{\sqrt{{\rm Var}[\mathcal{K}_n+\mathcal{L}_n]\,{\rm Var}[\mathcal{K}_n-\mathcal{L}_n]}}\ = \ \frac{O(1)}{\theta(n)}\ =\ o(1)
\end{equation} (where $\theta(n)$ represents a function which is on the order of $n$). This implies $\mathcal{K}_n+\mathcal{L}_n$ and $\mathcal{K}_n-\mathcal{L}_n$ become uncorrelated as $n\to\infty$. This completes the proof of Theorem \ref{thm:Gaussianity[MW]}. \hfill $\Box$
\subsection{Proof of Theorem \ref{thm_generalGaussian}}
We now collect the pieces. The argument here is different from the one used in \cite{MW1}, and leads to a conceptually simpler proof (though we do have to wade through a good amount of algebra). The rest of this section simply mimics the standard proof of the Central Limit Theorem, while at the same time isolating the values of the mean and variance.\\
To prove part $(a)$, we use the generating function $g_n(x)$ to calculate $\mu_n$ and $\sigma^2_n$ as follows:
\begin{equation}
\mu_n\ = \ \mathbb{E}[X_n]\ = \ \frac{\sum_{i=1}^n \rho_{i;n}\cdot i}{\sum_{i=1}^n \rho_{i;n}}\ = \ \frac{g_n'(1)}{g_n(1)}
\end{equation}
\begin{equation}
\sigma_n^2\ = \ \mathbb{E}[X_n^2]-\mu_n^2\ = \ \frac{\sum_{i=1}^n \rho_{i;n}\cdot i^2}{\sum_{i=1}^n \rho_{i;n}}-\mu_n^2 \ = \ \frac{[xg'_n(x)]'\big|_{x=1}}{g_n(1)}-\left(\frac{g_n'(1)}{g_n(1)}\right)^2.
\end{equation}
The calculations are then straightforward:
\begin{equation}
g_n'(x)\ = \ \sum_{i=1}^\kappa [q_i(x)\alpha_i^n(x)]'\ = \ \sum_{i=1}^\kappa [q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha'_i(x)]
\end{equation}
\begin{align}\label{variance_formula}
[xg'_n(x)]' & \ = \ \sum_{i=1}^\kappa \left(x[q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha'_i(x)]\right)'\nonumber\\
&\ = \ \sum_{i=1}^\kappa \left( q_i'(x)\alpha_i^n(x)+q_i(x)n\alpha_i^{n-1}(x)\alpha_i'(x)+\right.\nonumber\\
& \left. x\left[ q_i''(x)\alpha_i^n(x)+2q_i'(x)n\alpha_i^{n-1}(x)\alpha_i'(x)+q_in\alpha_i^{n-1}\alpha_i''(x)+q_i(x)n(n-1)\alpha_i^{n-2}(\alpha_i'(x))^2\right]\right).
\end{align}
Since $|\alpha_i(1)/\alpha_1(1)|<\lambda<1$ for each $i\geq 2$, we have
\begin{equation}
\sum_{i=2}^\kappa q_i(1)\alpha_i^n(1)\ = \ \alpha_1^n(1)\sum_{i=2}^\kappa q_i(1)\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\ = \ o(\lambda^n)\alpha_1^n(1).
\end{equation}
Similarly,
\begin{equation}
\sum_{i=2}^\kappa [q_i(x)\alpha_i^n(x)]'\Big|_{x=1}\ = \ \alpha_1^n(1)\sum_{i=2}^\kappa \left[q'_i(1)+\frac{nq_i(1)\alpha'_i(1)}{\alpha_i(1)}
\right]\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\ = \ o(\lambda^n)\alpha_1^n(1)
\end{equation} and
\begin{equation}
\sum_{i=2}^\kappa \Big(x[q_i(x)\alpha_i^n(x)]'\Big)'\Big|_{x=1}\ = \ o(\lambda^n)\alpha_1^n(1).
\end{equation}
Hence
\begin{align}
\mu_n\ = \ \frac{g'_n(1)}{g_n(1)}& \ = \ \frac{[q_1'(1)\alpha_1^n(1)+q_1(1)n\alpha_1^{n-1}(1)\alpha'_1(1)]+ o(\lambda^n) \alpha_1^n(1)}{q_1(1)\alpha_1^n(1)+o(\lambda^n) \alpha_1^n(1)}\nonumber\\
&\ = \ \frac{q_1'(1)+q_1(1)n\frac{\alpha'_1(1)}{\alpha_1(1)}+o(\lambda^n)}{q_1(1)+o(\lambda^n)}
\ = \ \frac{q_1'(1)}{q_1(1)}+n\frac{\alpha_1'(1)}{\alpha_1(1)}+o(1).
\end{align}
Similarly,
\begin{align}
\sigma_n^2 & \ = \ \frac{[xg'_n(x)]'\big|_{x=1}}{g_n(1)}-\mu_n^2\nonumber\\ &\ = \ \frac{\big(x[q_1(x)\alpha_1^n(x)]'\big)'\Big|_{x=1}+o(\lambda^n)\alpha_1^n(1)}{q_1(1)\alpha_1^n(1)+o(\lambda^n)\alpha_1^n(1)}-\mu_n^2\nonumber\\
&\ = \ \frac{q_1'}{q_1}+\frac{n\alpha_1'}{\alpha_1}+\frac{q''_1}{q_1}+\frac{2q'_1n\alpha_1'}{q_1\alpha_1}+\frac{n\alpha_1''}{\alpha_1}+\frac{n(n-1)(\alpha'_1)^2}{\alpha_1^2}-\left(\frac{\alpha'_1}{\alpha_1}n+\frac{q'_1}{q_1}+o(1)\right)^2\nonumber\\
& \ = \ \frac{\alpha_1(\alpha_1'+\alpha_1'')-(\alpha_1')^2}{\alpha_1^2}\cdot n+\frac{q_1(q_1'+q_1'')-(q_1')^2}{q_1^2}+o(1).
\end{align}
Here we apply \eqref{variance_formula} and write $q_1,\alpha_1$ for $q_1(1),\alpha_1(1)$. The last things we need are
\be \frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)]-\alpha_1'(1)^2}{\alpha_1(1)^2}
\ = \ \left(\frac{x\alpha_1'(x)}{\alpha_1(x)}\right)'\Bigg|_{x=1}\ee
and
\be\frac{q_1(1)[q_1'(1)+q_1''(1)]-q_1'(1)^2}{q_1(1)^2}\ = \ \left(\frac{xq_1'(x)}{q_1(x)}\right)'\Bigg|_{x=1},\ee
which are simple enough to check directly. This completes the proof of part $(a)$ of Theorem \ref{thm_generalGaussian}.\\ \
To prove part $(b)$ of the theorem, we use the method of moment generating functions, showing that moment generating function of $X_n$ converges to that of a Gaussian distribution as $n\to\infty$. (We could use instead the characteristic functions, but the moment generating functions have good convergence properties here.) The moment generating function of $X_n$ is
\be M_{X_n}(t)=\mathbb{E}[e^{tX_n}]\ = \ \frac{\sum_i \rho_{i;n} e^{ti}}{\sum_i {\rho_{i;n}}}\ = \ \frac{g_n(e^t)}{g_n(1)}\ = \ \frac{\sum_{i=1}^\kappa q_i(e^t)\alpha_i^n(e^t)}{\sum_{i=1}^\kappa q_i(1)\alpha_i^n(1)}.\ee
Since $|\alpha_i(e^t)|<|\alpha_1(e^t)|$ for any $i\geq 2$, the main term of $g_n(e^t)$ is $q_1(e^t)\alpha_1^n(e^t)$. We thus write
\begin{align}
M_{X_n}(t) &\ = \ \frac{\sum_{i=1}^\kappa q_i(e^t)\alpha_i^n(e^t)}{\sum_{i=1}^\kappa q_i(1)\alpha_i^n(1)}
\ = \ \frac{q_1(e^t)\alpha_1^n(e^t)\left[1+\sum_{i=2}^\kappa \frac{q_i(e^t)}{q_1(e^t)}\left(\frac{\alpha_i(e^t)}{\alpha_1(e^t)}\right)^n\right]}{q_1(1)\alpha_1^n(1)\left[1+\sum_{i=2}^\kappa \frac{q_i(1)}{q_1(1)}\left(\frac{\alpha_i(1)}{\alpha_1(1)}\right)^n\right]}\nonumber\\
&\ = \ \frac{q_1(e^t)\alpha_1^n(e^t)[1+ O(\kappa Q\lambda^n)]}{q_1(1)\alpha_1^n(1)[1+ O(\kappa Q\lambda^n)]}
\ = \ \frac{q_1(e^t)}{q_1(1)}\left(\frac{\alpha_1(e^t)}{\alpha_1(1)}\right)^n\left(1+O(\kappa Q\lambda^n)\right),
\end{align}
where $Q=\max_{i\geq 2} \sup_{t\in [-\delta,+\delta]} \left|\frac{q_i(e^t)}{q_1(e^t)}\right|$. As $0<\lambda<1$, $\kappa Q\lambda^n$ decays rapidly as $n$ grows. Taking the logarithm of both sides yields
\begin{equation}
\log M_{X_n}(t)\ = \ \log \frac{q_1(e^t)}{q_1(1)}+n\log\frac{\alpha_1(e^t)}{\alpha_1(1)}+\log\left(1+O(\kappa Q\lambda^n)\right)\ = \ \log \frac{q_1(e^t)}{q_1(1)}+n\log\frac{\alpha_1(e^t)}{\alpha_1(1)}+o(1).
\end{equation}
Let $Y_n=\frac{X_n-\mu_n}{\sigma_n}$; then the moment generating function of $Y_n$ is
\begin{equation}
M_{Y_n}(t)\ = \ \mathbb{E}[e^{t(X_n-\mu_n)/\sigma_n}]\ = \ M_{X_n}(t/\sigma_n) e^{-t\mu_n/\sigma_n}.
\end{equation}
Therefore
\begin{equation}\label{log_M_Yn}
\log M_{Y_n}(t)\ = \ \frac{-t\mu_n}{\sigma_n}+ \log \frac{q_1(e^{t/\sigma_n})}{q_1(1)}+n\log\frac{\alpha_1(e^{t/\sigma_n})}{\alpha_1(1)}+o(1).
\end{equation}
Since $\sigma_n=\theta (\sqrt{n})$, $t/\sigma_n\to 0$ as $n\to\infty$. Hence \begin{equation}\label{log_q}
\lim_{n\to\infty} \log \frac{q_1(e^{t/\sigma_n})}{q_1(1)}\ = \ \log 1\ = \ 0.
\end{equation}
Using the degree-two Taylor expansion at $1$, we can write $\alpha_1(x)$ as
\begin{equation}
\alpha_1(x)\ = \ \alpha_1(1)+\alpha_1'(1)(x-1)+\frac{\alpha_1''(1)}{2} (x-1)^2+O((x-1)^3).
\end{equation}
Substituting $x=e^{t/\sigma_n}=1+\frac{t}{\sigma_n}+\frac{t^2}{2\sigma_n^2}+O\left(\frac{t^3}{\sigma_n^3}\right)$ and noting that $\sigma_n=\theta(n^{1/2})$, we get
\begin{equation}
\alpha_1(e^{t/\sigma_n})\ = \ \alpha_1(1)+\alpha_1'(1)\left(\frac{t}{\sigma_n}+\frac{t^2}{2\sigma_n^2}+O(n^{-3/2})\right)+\frac{\alpha_1''(1)}{2} \left[\frac{t^2}{\sigma^2_n}+O(n^{-3/2})\right]+O(n^{-3/2}).
\end{equation}
Taking the logarithm and using the Taylor expansion $\log(1+x)=x-x^2/2+O(x^3)$ gives us:
\begin{align}\label{log_alpha}
\log \frac{\alpha_1(e^{t/\sigma_n})}{\alpha_1(1)} & \ = \ \log \left( 1+\frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}+O(n^{-3/2})\right)\nonumber\\
& \ = \ \frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2\frac{t^2}{2\sigma_n^2}+O(n^{-3/2}).
\end{align}
Substituting \eqref{log_q} and \eqref{log_alpha} into \eqref{log_M_Yn}:
\begin{align}
\log M_{Y_n}(t) & \ = \ -\frac{t\mu_n}{\sigma_n}+n\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\frac{t}{\sigma_n}+\frac{\alpha_1'(1)+\alpha''_1(1)}{\alpha_1(1)}\frac{t^2}{2\sigma^2_n}-\left(\frac{\alpha_1'(1)}{\alpha_1(1)}\right)^2\frac{t^2}{2\sigma_n^2}+O(n^{-3/2}) \right)+o(1)\nonumber\\
& \ = \ \left(n\frac{\alpha'_1(1)}{\alpha_1(1)} - \mu_n\right)\frac{t}{\sigma_n} + n\frac{\alpha_1(1)[\alpha_1'(1)+\alpha_1''(1)] - \alpha_1'(1)^2}{\alpha_1(1)^2}\frac{t^2}{2\sigma_n^2}+o(1).
\end{align}
Using the same notation $A,B,C,D$ as in Theorem \ref{thm_generalGaussian}:
\begin{align}
\log M_{Y_n}(t) & \ = \ \frac{An-\mu_n}{\sigma_n}\cdot t+\frac{Cn}{\sigma_n^2}\cdot \frac{t^2}{2}+o(1)\nonumber\\
& \ = \ \frac{-B+o(1)}{\sqrt{Cn+D+o(1)}}\cdot t+\frac{Cn}{Cn+D+o(1)}\cdot \frac{t^2}{2}+o(1)\nonumber\\
& \ = \ \frac{t^2}{2}+o(1).
\end{align}
This implies that the moment generating function of $Y_n$ converges to that of the standard normal distribution. Hence as $n\to\infty$, $Y_n$ converges in distribution to the standard normal; that is, $X_n$, appropriately normalized, converges to a Gaussian.
\hfill $\Box$
\section{Distribution of Gaps}\label{sec:distrgaps}
\subsection{Notation and Counting Lemmas}
In this section we prove our results about gaps between summands arising from $k$-Skipponacci far-difference representations. Specifically, we are interested in the probability of finding a gap of size $j$ among all gaps in the decompositions of integers $x \in [R_k(n),R_k(n+1)]$. In this section, we adopt the notation used in \cite{BBGILMT}. If $\epsilon_i \in \{-1, 1\}$ and
\be x \ = \ \epsilon_j S^{(k)}_{i_j} + \epsilon_{j-1} S^{(k)}_{i_{j-1}} + \cdots + \epsilon_1 S^{(k)}_{i_1} \ee
is a legal far-difference representation (which implies that $i_j = n$), then the gaps are
\be i_j - i_{j-1}, \ \ \ \ i_{j-1} - i_{j-2}, \ \ \ \ \dots, \ \ \ \ i_2 - i_1. \ee
Note that we do not consider the `gap' from the beginning up to $i_1$, though if we wished to include it there would be no change in the limit of the gap distributions. Thus in any $k$-Skipponacci far-difference representation, there is one fewer gap than summands. The greatest difficulty in the subject is avoiding double counting of gaps, which motivates the following definition.
\begin{defn}[Analogous to Definition 1.4 in \cite{BBGILMT}] \label{GapNotation} \
\begin{itemize}
\item Let $X_{i,i+j}(n)$ denote the number of integers $x \in [R_k(n),R_k(n+1)]$ that have a gap of length $j$ that starts at $S^{(k)}_i$ and ends at $S^{(k)}_{i+j}$.
\item Let $Y(n)$ be the total number of gaps in the far-difference decomposition for \\ $x \in [R_k(n), R_k(n+1)]$:
\begin{equation} \label{Y(n)}
Y(n) \ := \ \sum_{i=1}^n \sum_{j=0}^n X_{i,i+j}(n).
\end{equation}
Notice that $Y(n)$ is equivalent to the total number of summands in all decompositions for all $x$ in the given interval \emph{minus} the number of integers in that interval. The main term is thus the total number of summands, which is
\be \left[A_{1,1}n + B_{1,1} + o(1)\right] \cdot [R_k(n+1)-R_k(n)] \ = \ A_{1,1} n [R_k(n+1)-R_k(n)], \ee
as we know from \S\ref{sec:subsecgaussianity} that $\mathbb{E}[\mathcal{K}_n+\mathcal{L}_n]=A_{1,1}n + B_{1,1} + o(1)$.
\item Let $P_n(j)$ denote the proportion of gaps from decompositions of $x$ $\in$ $[R_k(n)$, $R_k(n+1)]$ that are of length $j$:
\begin{equation} \label{P_n(j)}
P_n(j) \ := \ \frac{\sum_{i=1}^{n-j} X_{i,i+j}(n)}{Y(n)},
\end{equation}
and let
\begin{equation} \label{P(j)}
P(j) \ := \ \lim_{n\to\infty} P_n(j)
\end{equation} (we will prove this limit exists).
\end{itemize}
\end{defn}
Our proof of Theorem \ref{thm:gapresult} starts by counting the number of gaps of constant size in the $k$-Skipponacci far-difference representations of integers. To accomplish this, it is useful to adopt the following notation.
\begin{defi} Notation for counting integers with particular $k$-Skipponacci summands. \label{defi:N(S)notation}
\begin{itemize}
\item Let $N(\pm S^{(k)}_i,\pm S^{(k)}_j)$ denote the number of integers whose decomposition begins with $\pm S^{(k)}_i$ and ends with $\pm S^{(k)}_j$.
\item Let $N(\pm S^{(k)}_i)$ be the number of integers whose decomposition ends with $\pm S^{(k)}_i$.
\end{itemize}
\end{defi}
The following results, which are easily derived using the counting notation in Definition \ref{defi:N(S)notation}, are also useful.
\begin{lem} \label{lem:counting}
\begin{equation} \label{N(S^{(k)}_n)shift}
N(\pm S^{(k)}_i,\pm S^{(k)}_j) \ = \ N(\pm S^{(k)}_1,\pm S^{(k)}_{j-i+1}).
\end{equation}
\begin{equation} \label{N(S^{(k)}_n)in-exclusion}
N(-S^{(k)}_1, +S^{(k)}_j) + N(+S^{(k)}_1, +S^{(k)}_j) \ = \ N(+S^{(k)}_j) - N(+S^{(k)}_{j-1}).
\end{equation}
\begin{equation} \label{N(S^{(k)}_n)cardinality}
N(+S^{(k)}_i) \ = \ R_k(i) - R_k(i-1).
\end{equation}
\end{lem}
\begin{proof} First, note that \eqref{N(S^{(k)}_n)shift} describes a shift of indices, which does not change the number of possible decompositions. For \eqref{N(S^{(k)}_n)in-exclusion}, we can apply inclusion-exclusion to get
\bea
& & N(-S^{(k)}_1, +S^{(k)}_j) + N(+S^{(k)}_1, +S^{(k)}_j)
\nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - \left[N(+S^{(k)}_2, +S^{(k)}_j) + N(+S^{(k)}_3, +S^{(k)}_j) + \cdots\right] \nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - \left[N(+S^{(k)}_1, +S^{(k)}_{j-1}) + N(+S^{(k)}_2, +S^{(k)}_{j-1}) + \cdots\right] \nonumber\\ & & \ \ \ \ \ = \ N(+S^{(k)}_j) - N(+S^{(k)}_{j-1}).
\eea
Finally, for \eqref{N(S^{(k)}_n)cardinality}, recall that the $k$-Skipponaccis partition the integers into intervals of the form $[S^{(k)}_n-R_k(n-k-2), R_k(n)]$, where $S^{(k)}_n$ is the main term of all of the integers in this range. Thus $N(+S^{(k)}_i)$ is the size of this interval, which is just $R_k(i) - R_k(i-1)$, as desired. \end{proof}
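The identity \eqref{N(S^{(k)}_n)cardinality} is also easy to check numerically. The following sketch (our own code, not part of the argument; all function names are ours) recursively counts legal decompositions with largest term $+S^{(k)}_i$ for the Fibonacci case $k=1$ and compares against $R_k(i)-R_k(i-1)$.

```python
# Our own verification sketch (not part of the argument): count legal
# far-difference decompositions with largest term +S_i for the k = 1
# (Fibonacci) case and compare with R_k(i) - R_k(i-1).
from functools import lru_cache

K = 1                                  # 1-Skipponaccis = Fibonacci numbers
SAME, OPP = 2 * K + 2, K + 2           # minimal index gaps (same / opposite sign)

S = [0, 1, 2]                          # 1-indexed: S[1] = 1, S[2] = 2
for n in range(3, 40):
    S.append(S[-1] + S[-1 - K])        # S_n = S_{n-1} + S_{n-1-k}

def R(n):                              # R_k(n) = S_n + S_{n-2k-2} + ...
    return sum(S[m] for m in range(n, 0, -(2 * K + 2))) if n >= 1 else 0

@lru_cache(maxsize=None)
def count(i, sign):                    # decompositions with largest term sign*S_i
    total = 1                          # the single-term decomposition
    total += sum(count(j, sign) for j in range(1, i - SAME + 1))
    total += sum(count(j, -sign) for j in range(1, i - OPP + 1))
    return total

for i in range(1, 16):
    assert count(i, +1) == R(i) - R(i - 1)
```

By symmetry $N(+S^{(k)}_i)=N(-S^{(k)}_i)$, which the recursion reflects.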
\subsection{Proof of Theorem \ref{thm:gapresult}}
We take a combinatorial approach to proving Theorem \ref{thm:gapresult}. We derive expressions for $X_{i,i+j}(n)$ in both regimes by counting, use the Generalized Binet's Formula for the $k$-Skipponaccis in Lemma \ref{Binet-Skipponacci} to reach the desired expression for $P_n(j)$, and finally take the limit as $n\to\infty$.
\begin{proof}[Proof of Theorem \ref{thm:gapresult}] We first consider gaps of length $j$ for $k+2 \le j < 2k+2$, and then show that the case of gaps of length $j \ge 2k+2$ follows from a similar calculation. It is important to separate these two ranges, as there are sign interactions that must be accounted for in the former that do not affect our computation in the latter. From Theorem \ref{Thm:Far-Diff}, we know that there are no gaps of length $k+1$ or smaller. Using Lemma \ref{lem:counting}, we find a nice formula for $X_{i,i+j}(n)$. For convenience of notation, we write $R_m$ for $R_k(m)$ in the following equations:
\begin{align} \label{X(i,i+c)}
X_{i,i+j}(n)
\;\ = \ &\; N(+S^{(k)}_i)\left[N(+S^{(k)}_{n-i-j+1}) - N(+S^{(k)}_{n-i-j})\right] \nonumber \\
\;\ = \ &\; (R_i - R_{i-1})\left[(R_{n-i-j+1} - R_{n-i-j}) - (R_{n-i-j} - R_{n-i-j-1})\right] \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot (R_{n-i-j-k} - R_{n-i-j-k-1}) \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot R_{n-i-j-2k-1}.
\end{align}
To continue, we need a tractable expression for $R_k(n)$. Using the results from the Generalized Binet's Formula in Lemma \ref{Binet-Skipponacci}, we can express $R_k(n)$ as
\begin{align} \label{R_nBinet}
R_k(n)
\;\ = \ &\; S^{(k)}_n + S^{(k)}_{n-2k-2} + S^{(k)}_{n-4k-4} + S^{(k)}_{n-6k-6} + \cdots \nonumber \\
\;\ = \ &\; a_1\lambda_1^n + a_1\lambda_1^{n-2k-2} + a_1\lambda_1^{n-4k-4} + a_1\lambda_1^{n-6k-6} + \cdots \nonumber \\
\;\ = \ &\; a_1\lambda_1^n\left[1 + \lambda_1^{-2k-2} + \lambda_1^{-4k-4} + \lambda_1^{-6k-6} + \cdots\right] \nonumber \\
\;\ = \ &\; a_1\lambda_1^n\left[1 + \left(\lambda_1^{-2k-2}\right) + \left(\lambda_1^{-2k-2}\right)^2 + \left(\lambda_1^{-2k-2}\right)^3 + \cdots\right] \nonumber \\
\;\ = \ &\; \frac{a_1\lambda_1^n}{1-\lambda_1^{-2k-2}} + O_k(1)
\end{align} (where the $O_k(1)$ error depends on $k$ and arises from the contributions of the smaller roots of the characteristic polynomial and from extending the finite geometric series to infinity). We substitute this expression for $R_k(n)$ into the formula from \eqref{X(i,i+c)} for $X_{i,i+j}(n)$, and find
\bea \label{X(i,i+c)Binet}
X_{i,i+j}(n)
& \ = \ & R_{i-k-1} \cdot R_{n-i-j-2k-1} \nonumber\\ & = & \frac{a_1\lambda_1^{i-k-1}(1 + O_k(\lambda_1^{-i}))}{1-\lambda_1^{-2k-2}} \cdot \frac{a_1\lambda_1^{n-i-j-2k-1}(1 + O_k(\lambda_1^{-n+i+j}))}{1-\lambda_1^{-2k-2}} \nonumber\\
& \ = \ & \frac{a_1^2\lambda_1^{n-j-3k-2}\left(1 + O_k(\lambda_1^{-i} + \lambda_1^{-n+i+j})\right)}{\left(1-\lambda_1^{-2k-2}\right)^2}.
\eea
We then sum $X_{i,i+j}(n)$ over $i$. Note that almost all $i$ satisfy $\log\log n \ll i \ll n - \log \log n$, which means the error terms above are of significantly lower order (we have to be careful, as if $i$ or $n-i$ is of order 1 then the error is of the same size as the main term). Using our expression for $Y(n)$ from Definition \ref{GapNotation} we find
\begin{align} \label{P_n(c)proof}
P_n(j)
\;\ = \ &\; \frac{\sum_{i=1}^{n-j} X_{i,i+j}(n)}{Y(n)} \nonumber \\
\;\ = \ &\; \frac{a_1^2\lambda_1^{n-j-3k-2}(n-j)(1 + o_k(1))}{\left[A_{1,1}n+B_{1,1} + o(1)\right] \cdot \left(1-\lambda_1^{-2k-2}\right)^2 \cdot a_1\lambda_1^n(\lambda_1-1) + O(\lambda_1^n)}.
\end{align}
Taking the limit as $n\to\infty$ yields
\begin{align} \label{P(c)proof}
P(j) \ = \ \lim_{n\to\infty} P_n(j) \ = \ \frac{a_1\lambda_1^{-3k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j}.
\end{align}
For the case where $j \ge 2k+2$, the calculation is even easier, as we no longer have to worry about sign interactions across the gap (that is, $S^{(k)}_i$ and $S^{(k)}_{i+j}$ no longer have to be of opposite sign). Thus the calculation of $X_{i,i+j}(n)$ reduces to
\begin{align} \label{X(i,i+j)}
X_{i,i+j}(n)
\;\ = \ &\; N(+S^{(k)}_i)N(+S^{(k)}_{n-i-j})\nonumber \\
\;\ = \ &\; (R_i - R_{i-1})(R_{n-i-j} - R_{n-i-j-1}) \nonumber \\
\;\ = \ &\; R_{i-k-1} \cdot R_{n-i-j-k-1}.
\end{align}
We again use \eqref{R_nBinet} to get
\begin{equation}
X_{i,i+j}(n)
\;\ = \ \; R_{i-k-1} \cdot R_{n-i-j-k-1}
\;\ = \ \; \frac{a_1^2\lambda_1^{n-j-2k-2}\left(1 + O_k(\lambda_1^{-i} + \lambda_1^{-n+i+j})\right)}{\left(1-\lambda_1^{-2k-2}\right)^2}.
\end{equation}
By a similar argument as before, this gives us
\begin{equation} \label{P(j)proof}
P(j)
\;\ = \ \; \frac{a_1\lambda_1^{-2k-2}}{A_{1,1} \left(1-\lambda_1^{-2k-2}\right)^2 (\lambda_1-1)}\lambda_1^{-j},
\end{equation} completing the proof.\end{proof}
\section{Generalized Far-Difference Sequences}\label{sec:genfardiffseq}
The $k$-Skipponaccis give rise to unique far-difference representations where same signed indices are at least $2k+2$ apart and opposite signed indices are at least $k+2$ apart. We consider the reverse problem: given a pair $(s,d)$ of positive integers, when does there exist a sequence $\{a_n\}$ such that every integer has a unique far-difference representation in which same signed indices are at least $s$ apart and opposite signed indices are at least $d$ apart? We call such representations $(s,d)$ far-difference representations.
\subsection{Existence of Sequences}
\begin{proof}[Proof of Theorem \ref{farDiffRec}]
Define
\be
R^{(s,d)}_n \ = \ \sum_{i=0}^{\lfloor n/s\rfloor}a_{n-is}\ = \ a_n+a_{n-s}+a_{n-2s}+\cdots.
\ee
For each $n$, the largest number that can be decomposed using $a_n$ as the largest summand is $R^{(s,d)}_n$, while the smallest one is $a_n-R^{(s,d)}_{n-d}$. It is therefore natural to break our analysis up into intervals $I_n=[a_n-R^{(s,d)}_{n-d},R^{(s,d)}_n]$.
We first prove by induction that
\begin{equation}\label{condition1}
a_n\ = \ R^{(s,d)}_{n-1}+R^{(s,d)}_{n-d}+1,
\end{equation} or equivalently, $a_n-R^{(s,d)}_{n-d}=R^{(s,d)}_{n-1}+1$ for all $n$, so that these intervals $\{I_n\}_{n=1}^\infty$ are disjoint and cover $\mathbb{Z}^+$.
Indeed, direct calculation (with the convention that $R^{(s,d)}_m=0$ for $m\leq 0$) proves \eqref{condition1} is true for $n=1,\dots,\max(s,d)$. For $n>\max(s,d)$, assume it is true for all positive integers up to $n-1$. We have
\begin{align}
a_{n-s}
\ = \ R^{(s,d)}_{n-s-1}+R^{(s,d)}_{n-s-d}+1
\ =& \ (R^{(s,d)}_{n-1}-a_{n-1})+(R^{(s,d)}_{n-d}-a_{n-d})+1 \nonumber \\
\Rightarrow \ R^{(s,d)}_{n-1}+R^{(s,d)}_{n-d}+1
\ =& \ a_{n-s}+a_{n-1}+a_{n-d}\ = \ a_n.
\end{align}
This implies that \eqref{condition1} is true for $n$ and thus true for all positive integers.
We prove that every integer is uniquely represented as a sum of $\pm a_n$'s in which every two terms of the same sign are at least $s$ apart in index and every two terms of opposite sign are at least $d$ apart in index. We proceed by induction, showing that any number in the interval $I_n$ has a unique $(s,d)$ far-difference representation with main term (the largest term) $a_n$.
It is easy to check for $n\leq \max(s,d)$. For $n>\max(s,d)$, assume it is true up to $n-1$. Let $x$ be a number in $I_n$, where $a_n-R^{(s,d)}_{n-d}\leq x\leq R^{(s,d)}_n$. There are two cases to consider.
\begin{enumerate}
\item If $a_n\leq x\leq R^{(s,d)}_n$, then either $x=a_n$ or $1\leq x-a_n\leq R^{(s,d)}_n-a_n=R^{(s,d)}_{n-s}$. By the induction assumption, we know that $x-a_n$ has a far-difference representation with main term at most $a_{n-s}$. It follows that $x=a_n+(x-a_n)$ has a legal decomposition.
\item If $a_n-R^{(s,d)}_{n-d}\leq x<a_n$ then $1\leq a_n-x\leq R^{(s,d)}_{n-d}$. By the induction assumption, we know that $a_n-x$ has a far-difference representation with main term at most $a_{n-d}$. It follows that $x=a_n-(a_n-x)$ has a legal decomposition.
\end{enumerate}
To prove uniqueness, assume that $x$ has two different decompositions $\sum_i \pm a_{n_i}=\sum_i \pm a_{m_i}$, where $n_1>n_2>\dots$ and $m_1>m_2>\dots$. Then it must be the case that $x$ belongs to both $I_{n_1}$ and $I_{m_1}$. However, these intervals are disjoint, so we must have $n_1=m_1$. Uniqueness follows by induction.
\end{proof}
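The construction in this proof is effectively algorithmic and can be checked by machine. Below is a sketch (the helper names \texttt{standard\_sequence} and \texttt{decompose} are our own, not from the paper) that builds the standard $(s,d)$ sequence from \eqref{condition1} and decomposes integers by the two induction cases above.

```python
# Our own sanity-check sketch: build the standard (s,d) sequence from
# a_n = R_{n-1} + R_{n-d} + 1 and decompose integers by the two cases
# of the induction in the proof above.
def standard_sequence(s, d, nmax):
    a = [0]                                      # 1-indexed; a[0] unused
    R = lambda n: sum(a[m] for m in range(n, 0, -s)) if n >= 1 else 0
    for n in range(1, nmax + 1):
        a.append(R(n - 1) + R(n - d) + 1)
    return a

def decompose(x, a, s, d):
    """Far-difference representation of x > 0 as a list of (sign, index)."""
    R = lambda n: sum(a[m] for m in range(n, 0, -s)) if n >= 1 else 0
    terms, sign = [], 1
    while x > 0:
        n = next(m for m in range(len(a) - 1, 0, -1)
                 if a[m] - R(m - d) <= x <= R(m))   # x lies in I_n
        terms.append((sign, n))
        if x >= a[n]:
            x -= a[n]                            # case 1: same sign continues
        else:
            x, sign = a[n] - x, -sign            # case 2: the sign flips
    return terms

# check coverage and the gap conditions for the Fibonacci case (s,d) = (4,3)
a = standard_sequence(4, 3, 12)                  # 1, 2, 3, 5, 8, 13, ...
top = sum(a[m] for m in range(12, 0, -4))        # R(12)
for x in range(1, top + 1):
    t = decompose(x, a, 4, 3)
    assert sum(sg * a[n] for sg, n in t) == x
    for (s1, n1), (s2, n2) in zip(t, t[1:]):
        assert n1 - n2 >= (4 if s1 == s2 else 3)
```

The loop confirms that every integer up to $R^{(4,3)}_{12}$ receives a legal $(4,3)$ decomposition, matching the tiling of $\mathbb{Z}^+$ by the intervals $I_n$.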
\begin{remark}
As the recurrence relation for $a_n$ is symmetric in $s$ and $d$, it is the initial terms that determine whether a sequence has an $(s,d)$ or a $(d,s)$ far-difference representation.
\end{remark}
\begin{cor}
The Fibonacci numbers $\{1,2,3,5,8,\dots\}$ have a $(4,3)$ far-difference representation.
\end{cor}
\begin{proof}
We can rewrite the Fibonacci sequence as $F_1=1, F_2=2, F_3=3$, $F_4=F_3+F_1+1$, and $F_n=F_{n-1}+F_{n-2} = F_{n-1} + (F_{n-3}+F_{n-4})$ for $n\geq 5$.
\end{proof}
\begin{cor}
The $k$-Skipponacci numbers, which are defined by $a_n=n$ for $n\leq k+1$ and $a_{n+1}=a_n+a_{n-k}$ for $n\geq k+1$, have a $(2k+2,k+2)$ far-difference representation.
\end{cor}
\begin{proof}
This follows from writing the recurrence relation as $a_n=a_{n-1}+a_{n-k-1}=a_{n-1}+a_{n-k-2}+a_{n-2k-2}$ and using the same initial conditions.
\end{proof}
\begin{cor}
Every positive integer can be represented uniquely as a sum of distinct terms $\pm 3^n$, $n=0,1,2,\dots$.
\end{cor}
\begin{proof}
The sequence $a_n=3^{n-1}$ satisfies $a_n=3a_{n-1}=a_{n-1}+a_{n-1}+a_{n-1}$, which by our theorem has a $(1,1)$ far-difference representation.
\end{proof}
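This $(1,1)$ representation by powers of $3$ is classical balanced ternary. The following conversion routine, included purely as an illustration (the code and function name are ours), makes the decomposition explicit.

```python
# Balanced ternary: the (1,1) far-difference representation by powers of 3.
# Our own illustrative conversion, not part of the proof.
def balanced_ternary(x):
    """Digits d_i in {-1, 0, 1} with x = sum d_i * 3**i."""
    digits = []
    while x != 0:
        r = x % 3
        if r == 2:
            r = -1                 # borrow: 2*3^i = 3^(i+1) - 3^i
        digits.append(r)
        x = (x - r) // 3
    return digits

# e.g. 100 = 3^4 + 3^3 - 3^2 + 3^0
assert sum(d * 3**i for i, d in enumerate(balanced_ternary(100))) == 100
```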
\begin{cor}
Every positive integer can be represented uniquely as $\sum_i \pm 2^{n_i}$ where $n_1>n_2>\dots$ and $n_{i-1}\geq n_{i}+2$, so any two indices differ by at least two.
\end{cor}
\begin{proof}
The sequence $a_n=2^n$ satisfies $a_n=a_{n-1}+2a_{n-2}=a_{n-1}+a_{n-2}+a_{n-2}$, which by our theorem has a $(2,2)$ far-difference representation.
\end{proof}
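This $(2,2)$ representation by powers of $2$ is the classical non-adjacent form (NAF) of binary arithmetic. As an illustration (our own code, using the standard NAF conversion), it can be computed digit by digit:

```python
# The (2,2) representation by powers of 2 is the classical non-adjacent
# form (NAF).  Our own illustrative conversion, not part of the proof.
def naf(x):
    """Digits d_i in {-1, 0, 1}, no two consecutive digits both nonzero."""
    digits = []
    while x != 0:
        if x % 2:
            d = 2 - (x % 4)        # +1 if x = 1 (mod 4), -1 if x = 3 (mod 4)
            x -= d
        else:
            d = 0
        digits.append(d)
        x //= 2
    return digits

for x in range(1, 200):
    d = naf(x)
    assert sum(c * 2**i for i, c in enumerate(d)) == x
    assert all(d[i] == 0 or d[i + 1] == 0 for i in range(len(d) - 1))
```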
\subsection{Non-uniqueness}
We consider the inverse direction of Theorem \ref{farDiffRec}. Given positive integers $s$ and $d$, how many increasing sequences are there that have an $(s,d)$ far-difference representation?
The following argument suggests that any sequence $a_n$ that has an $(s,d)$ far-difference representation should satisfy the recurrence relation $a_n=a_{n-1}+a_{n-s}+a_{n-d}$. If we want the intervals $[a_n-R_{n-d},R_n]$ to be disjoint, which is essential for uniqueness of the representation, we must have
\begin{equation}
a_n-R_{n-d}\ = \ R_{n-1}+1.
\end{equation}
Replacing $n$ by $n-s$ gives us
\begin{equation}
a_{n-s}-R_{n-d-s}\ = \ R_{n-1-s}+1.
\end{equation}
When we subtract those two equations and note that $R_k-R_{k-s}=a_k$, we get
\begin{equation}
a_n-a_{n-s}-a_{n-d}\ = \ a_{n-1}
\end{equation}
or $a_n=a_{n-1}+a_{n-s}+a_{n-d}$, as desired. What complicates this problem is the choice of initial terms for this sequence. Ideally, we want to choose the starting terms so that we can guarantee that every integer will have a unique far-difference representation. We have shown this to be the case for the initial terms defined in Theorem \ref{farDiffRec}, which we refer to as the \emph{standard} $(s,d)$ sequence. However, it is not always the case that the initial terms must follow the standard model to have a unique far-difference representation. In fact, it is not even necessary that the sequence starts with $1$.
In other types of decompositions where only positive terms are allowed, it is often obvious that the desired sequence is the unique increasing sequence with initial term $1$. However, in far-difference representations, where negative terms are allowed, a small number (such as 1) may arise through subtraction of terms that appear later in the sequence. Indeed, if $(s,d)=(1,1)$, we find several examples where the sequence need not start with 1.
\begin{exa}\label{exa:one}
The following sequences have a $(1,1)$ far-difference representation.
\begin{itemize}
\item $a_1=2,a_2=6$ and $a_n=3^{n-1}$ for $n\geq 3$
\item $a_1=3,a_2=4$ and $a_n=3^{n-1}$ for $n\geq 3$
\item $a_1=1,a_2=9,a_3=12$ and $a_n=3^{n-1}$ for $n\geq 4$
\end{itemize}
\end{exa}
\begin{exa}\label{exa:two} For each positive integer $k$, the sequence $B_{k}$, defined by $B_{k,i}= 2 \cdot 3^{i-1}$ for $i=k+1$ and $B_{k,i}= 3^{i-1}$ otherwise, has a $(1,1)$ far-difference representation. \end{exa}
\noindent
We prove this by showing that there is a bijection between decompositions using the standard sequence $b_n=3^{n-1}$ and decompositions using $B_{k}$. First we give an example: for $k=2$, the sequence is $1,3,2\cdot 3^2,3^3,3^4,\dots$
\begin{align*}
763 &\ = \ 1-3+3^2+3^3+3^6\nonumber\\
&\ = \ 1-3+(3^3-2\cdot 3^2)+3^3+3^6\nonumber\\
&\ = \ 1-3-2\cdot 3^2+2\cdot 3^3+3^6\nonumber\\
&\ = \ 1-3-2\cdot 3^2+3^4-3^3+3^6\nonumber\\
&\ = \ B_{2,1}-B_{2,2}-B_{2,3} - B_{2,4} + B_{2,5} + B_{2,7}.
\end{align*}
\noindent Conversely,
\begin{align*}
763 &\ = \ B_{2,1}-B_{2,2}-B_{2,3} - B_{2,4} + B_{2,5} + B_{2,7} \nonumber\\
&\ = \ 1-3-2\cdot 3^2-3^3+3^4+3^6\nonumber\\
&\ = \ 1-3- (3^3-3^2)-3^3+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2-2\cdot 3^3+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2-(3^4-3^3)+3^4+3^6\nonumber\\
&\ = \ 1-3+3^2+3^3+3^6.
\end{align*}
\noindent To prove the first direction, assume $x=\sum_{i\in I} 3^i-\sum_{j\in J}3^j$ where $I,J$ are disjoint subsets of $\mathbb{Z}^+$. If $k$ is not in $I\cup J$, this representation is automatically a representation of $x$ using $B_{k}$. Otherwise, assume $k\in I$ (the case $k\in J$ is symmetric); we replace the term $3^k$ by $3^{k+1}-2 \cdot 3^k=B_{k,k+2}-B_{k,k+1}$. If $k+1\notin I$, again $x$ has a $(1,1)$ far-difference representation over $B_{k}$. Otherwise, $x$ now has the term $2 \cdot 3^{k+1}$ in its representation, which we replace by $3^{k+2}-3^{k+1}$. We continue this process, stopping if $k+2\notin I$ and replacing the extra term if $k+2\in I$. Hence we can always decompose $x$ using the $\pm B_{k,i}$.
Conversely, suppose $x=\sum_{i\in I} B_{k,i}-\sum_{j\in J} B_{k,j}$. If $k+1\notin I\cup J$, this representation is automatically a representation of $x$ using $\pm 3^n$. If not, assume $k+1\in I$; we replace $B_{k,k+1}=2\cdot 3^k$ by $3^{k+1}-3^k$. If $k+2\notin I$ we are done; if not, $x$ has a term $2\cdot 3^{k+1}$, which we replace by $3^{k+2}-3^{k+1}$. Continuing in this way, we always obtain a decomposition using $\pm 3^n$. Since there is only one such decomposition, the decomposition using $\pm B_{k,i}$ must also be unique. \hfill $\Box$
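The replacement argument can also be verified mechanically. The sketch below is our own code and our own conventions (\texttt{to\_B} returns the coefficients over powers of $3$, with the $2\cdot 3^k$ term tracked separately): it converts a balanced-ternary representation into one over $B_k$ and checks that the value and the digit constraints are preserved.

```python
# Our own verification sketch of the replacement argument: convert the
# standard +-3^i representation of x into one over B_k, where the term
# 3^k is replaced by 2*3^k in the sequence.
def balanced_ternary(x):
    digits = []
    while x != 0:
        r = x % 3
        if r == 2:
            r = -1
        digits.append(r)
        x = (x - r) // 3
    return digits

def to_B(x, k):
    """Coefficients (c, b2) with x = sum_{i != k} c[i]*3**i + b2 * 2*3**k."""
    c = balanced_ternary(x) + [0] * (k + 3)
    b2 = 0
    if c[k]:                         # replace 3^k by 3^(k+1) - 2*3^k
        b2, c[k + 1], c[k] = -c[k], c[k + 1] + c[k], 0
        j = k + 1
        while abs(c[j]) == 2:        # cascade: 2*3^j = 3^(j+1) - 3^j
            s = c[j] // 2
            c[j] = -s
            if j + 1 == len(c):
                c.append(0)
            c[j + 1] += s
            j += 1
    return c, b2

for k in range(1, 4):
    for x in range(1, 2000):
        c, b2 = to_B(x, k)
        assert c[k] == 0 and all(abs(v) <= 1 for v in c)
        assert sum(v * 3**i for i, v in enumerate(c)) + b2 * 2 * 3**k == x
```

For $x=763$ and $k=2$ this reproduces the worked example above, $763 = 1-3-2\cdot 3^2-3^3+3^4+3^6$.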
\begin{remark}
From Example \ref{exa:two}, we know that there is at least one infinite family of sequences that have $(1,1)$ far-difference representations. Example \ref{exa:one} suggests that there are many other sequences with that property and, in all examples we have found to date, there exists a number $k$ such that the recurrence relation $a_n=3a_{n-1}$ holds for all $n\geq k$.
\end{remark}
\section{Conclusions and Further Research}
In this paper we extend the results of \cite{Al, MW1, BBGILMT} on the Fibonacci sequence to all $k$-Skipponacci sequences. Furthermore, we prove that there exists a sequence with an $(s,d)$ far-difference representation for any pair of positive integers $(s,d)$. This new sequence definition further generalizes the idea of far-difference representations by focusing on the index restrictions that allow for unique decompositions. Still, many open questions remain that we would like to investigate in the future. A few that we believe to be the most important and interesting include:
\begin{itemize}
\item[(1)]
Can we characterize all sequences that have $(1,1)$ far-difference representations? Does every such sequence eventually satisfy the recurrence $a_n=3a_{n-1}$ after the first few terms?
\item[(2)] For $(s,d)\neq (1,1)$, are there any \emph{non-standard} increasing sequences that have an $(s,d)$ far-difference representation? If there is such a sequence, does it satisfy the recurrence relation stated in Theorem \ref{farDiffRec} after the first few terms?
\item[(3)] Will the results for Gaussianity in the number of summands still hold for any sequence that has an $(s,d)$ far-difference representation?
\item[(4)] How are the gaps in a general $(s,d)$ far-difference representation distributed?
\end{itemize}
\section{Introduction}
Frustrated quantum systems pose a significant challenge to condensed matter theory due to their extensive ground state degeneracy \cite{Wannier1950,Anderson1987} and can show fractional quasi-particle statistics as known from quantum Hall physics \cite{Wen1989}. There are a wide variety of interesting phenomena in frustrated systems. Examples include spin liquids, time-reversal symmetry breaking, and kinetic constraints \cite{Balents2010,Batista2016,Zhou2017}. While small systems can be solved with tremendous computational resources, predictions for the low-temperature phases in the thermodynamic limit are scarce and often debated \cite{Yoshioka2009,Shirakawa2017,Szasz2020}. Existing condensed matter realizations are complicated materials and simpler model systems are sought after.
Ultracold atoms provide a unique way to explore quantum many-body physics through quantum simulations of frustrated quantum systems based on first principles. Prominent examples for quantum simulation with ultracold atoms include the direct detection of antiferromagnetic correlations \cite{Greif2013,Hart2015,Drewes2016b,Parsons2016,Boll2016,Cheuk2016,Brown2017,Gall2021} and the observation of many-body localization \cite{Gross2017}. Ultracold atoms in optical lattices implement Hubbard models \cite{Lewenstein2007,BlochDalibardZwerger2008,Esslinger2010}, where neighboring sites are coupled by hopping and atoms interact if they meet on the same lattice site. Fermi-Hubbard systems were first realized with ultracold atoms in square lattices \cite{Joerdens2008,Schneider2008}.
Frustrated lattice geometries have been studied with absorption imaging of ultracold bosonic atoms \cite{Becker2010}, which led to quantum simulation of classical frustration \cite{Struck2011}. Other geometrically frustrated two-dimensional lattice geometries like kagome lattices \cite{Jo2012} and the Lieb lattice \cite{Taie2015} have been studied with bosonic atoms, and recently individual bosonic atoms have been imaged in a triangular lattice \cite{Yamamoto2020}.
But for the implementation of antiferromagnetic interactions, fermions are the more natural choice \cite{Tieleman2013}. Revealing intricate correlations on short length scales calls for a fermionic quantum gas microscope, in which all ultracold atoms in the many-body system can be imaged simultaneously. Existing fermionic quantum gas microscopes were used to study Hubbard models on square lattices \cite{Cheuk2015,Parsons2015,Haller2015,Edge2015,Omran2015,Greif2016,Brown2017}. However, to obtain a geometrically frustrated system a non-bipartite lattice geometry is required. The triangular lattice is the paradigmatic example of a frustrated lattice \cite{Wannier1950}, because a triangle is the simplest structure where antiferromagnetic constraints cannot be simultaneously satisfied on all bonds. For triangular lattices the frustration of antiferromagnetic order leads to a remarkable quantum phase transition for varying interaction between a magnetically ordered state and a disordered state which may be a chiral spin liquid \cite{Shirakawa2017,Szasz2020}.
Here, we demonstrate the first realization of a site-resolved quantum gas microscope for ultracold fermionic atoms in a triangular lattice, thereby paving the way for a new platform to study frustrated Hubbard physics in a lattice with a spacing of \SI{1003}{\nano\meter} and strong tunneling in the tight-binding limit. We load a degenerate Fermi gas into the triangular lattice and obtain densities above half filling.
Our experiment uses fermionic $^{6}$Li because it possesses intriguing properties like broad Feshbach resonances and a low mass, allowing us to realize Hubbard models at larger tunneling and stronger interactions than with other species. However, the low mass makes it difficult to localize the atoms during imaging. Therefore, we rely on fluorescence imaging during Raman sideband cooling near the ground state in the lattice. Raman sideband cooling of lithium is challenging due to the large lattice depth required to suppress tunneling and to reach the Lamb-Dicke regime. The triangular lattice adds to this difficulty due to the extensive optical access required and constraints on the beam geometry. We designed a sophisticated lattice setup to overcome these obstacles, allowing optically resolved imaging of individual fermionic atoms in the triangular optical lattice with high fidelity.
In this paper, we first discuss our experimental setup to prepare $^6$Li degenerate Fermi gases and our novel approach to create a triangular lattice for ultracold atoms. Then we present detailed information about the implementation of single-site resolved imaging in the triangular lattice via Raman sideband cooling. We discuss reconstruction of the lattice occupation from imaging results and the imaging fidelity in a comparative study of single-atom imaging in three different optical lattices. We conclude with an outlook on the study of Fermi-Hubbard physics and frustrated quantum physics in our setup.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{{\bf Triangular-lattice quantum gas microscope.} (\emph{left}) Sketch of triangular lattice and Raman sideband imaging beams and their alignment relative to the vacuum chamber. The stainless steel octagon chamber is equipped with an outer copper coil pair for the MOT field and inner coil pair for the Feshbach field. The triangular lattice is formed by recycling the lattice beam through the recessed top and bottom windows, leaving just enough space for the objective at the top window. The second and third focus are created by 1\,:\,1 imaging systems, which are not shown. Three orange arrows (T1, T2 and T3) indicate the direction of the three beams which cross at the position of the atoms where the triangular lattice is formed. The polarization configuration used for imaging in the lattice is illustrated in the bottom middle inset. The Raman cooling beams (R1 and R2) and the Raman repump beam (RP) are sent through the side windows. (\emph{top right}) Kapitza-Dirac scattering of $^{6}$Li molecular Bose-Einstein condensate (BEC) from the triangular lattice. This image is an average of 10 absorption images after a time-of-flight of \SI{1.5}{\milli\second}, using about 1\% of the maximum lattice laser power and a pulse length of \SI{2}{\micro\second}. For this picture, we used polarization angles of $0^\circ$ for lattice beams T1, T2, and T3 to demonstrate a symmetric lattice. (\emph{bottom right}) Raw site-resolved fluorescence image of $^{6}$Li atoms in the triangular lattice.}
\label{fig: Sketch of the lattice setup and Raman sideband imaging setup}
\end{figure*}
\section{Experimental Setup}
\subsection{Preparation of degenerate Fermi gas}
In the following, we describe the experimental setup and the path to a degenerate Fermi gas. For stability and fast cycle time of the experiment, we designed a single-chamber experiment that allows for sufficient optical access for all required laser beams [Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup}].
We start with 600,000 $^{6}$Li atoms in a $\sim \SI{1070}{\nano\meter}$ crossed optical dipole trap (CDT) loaded from a magneto-optical trap (MOT). The MOT is loaded via a Zeeman slower. To increase loading efficiency of the CDT, we use a compressed MOT stage where the power for both the cooling light and the repump light is decreased to 0.01\% and the detuning is changed from \SI{-30}{\mega\hertz} to \SI{-5}{\mega\hertz} within \SI{4}{\milli\second}. The CDT is formed by crossing an incoming laser beam (YLR-300-LP-AC-Y14) with its retroreflection under an angle of $\sim$ 10$^{\circ}$, where the power of each beam is \SI{125}{\watt} with a beam waist of \SI{90}{\micro\meter} at the crossing point. The atoms evenly populate the states $\ket{1}$ $\equiv$ $\ket{\textrm{2$^{2}$S$_{1/2}$ F = 1/2 m$_{F}$ = 1/2}}$ and $\ket{2}$ $\equiv$ $\ket{\textrm{2$^{2}$S$_{1/2}$ F = 1/2 m$_{F}$ = $-1/2$}}$ after loading into the CDT, and the initial density is $\sim$ \SI{1d12}{\centi\meter^{-3}} with a temperature of $\sim$ \SI{220}{\micro\kelvin}. Thereafter, a three-stage evaporation (plain evaporation for \SI{1}{\second}; forced evaporation I for \SI{0.7}{\second}; forced evaporation II for \SI{5}{\second}) leads to a $^{6}$Li degenerate Fermi gas. During the evaporation, a Feshbach field is ramped up to 810\,G, where the scattering length between state $\ket{1}$ and $\ket{2}$ is $a_{s}\approx17,000a_{0}$ \cite{Bartenstein2005, Zuern2013}, where $a_0$ is the Bohr radius. The intensity of the dipole trap stays unchanged in plain evaporation. In forced evaporation I, the intensity of the dipole trap is reduced to 6\% of the initial value following an exponential decay curve with a time constant $\tau_{1}$ = \SI{300}{\milli\second}. In forced evaporation II, the intensity of the dipole trap continues to reduce to 0.4\% of the initial value with a time constant $\tau_{2}$ = \SI{6}{\second}. 
To prevent the formation of deeply bound lithium molecules and obtain degenerate Fermi gases, the Feshbach magnetic field is switched from 810\,G to 300\,G ($a_{s} = -288 a_{0}$) within \SI{10}{\milli\second} and about \SI{500}{\milli\second} before the end of the forced evaporation stage II, where the density is not yet high enough to form lithium dimers via three-body collisions. With this experimental cycle of \SI{12}{\second} duration, we obtain a degenerate Fermi gas with about 3,000 atoms and temperature below one fifth of the Fermi temperature, determined by a Fermi fit to a non-interacting gas.
\subsection{Triangular lattice setup}
Fluorescence imaging of atoms in the triangular geometry requires a strong three-dimensional confinement at each lattice site. Therefore, we need to find a triangular lattice configuration that provides sufficient lattice depth at the limited available laser power. For this purpose, we interfere three laser beams to create a triangular array of one-dimensional light tubes and add a strongly oblate ``light sheet" beam to complete the three-dimensional confinement. The strongly oblate light sheet has beam waists of $\SI{4.2}{\micro\meter}\times\SI{50}{\micro\meter}\times\SI{70}{\micro\meter}$ and uses a power of \SI{24}{\watt} at \SI{1070}{\nano\meter}. The trap frequency along the $z$ axis is $\sim$ \SI{160}{\kilo\Hz}. In order to create a deep triangular lattice with resolvable lattice spacing we use an unusual approach. We recycle a single \SI{1064}{\nano\meter} laser beam (MOPA 55W, Nd:YAG, Coherent) twice and cross all three beams at the position of the light sheet, thereby reusing the laser power three times [Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup}]. The phases of the three lattice beams do not need to be stabilized because phase drifts only lead to translations of the triangular lattice. To keep these translations within tolerable bounds of about one lattice site per minute, the setup is very rigid and temperature-controlled via water cooling and air conditioning.
All three lattice beams propagate from the negative $z$ direction (down) to the positive $z$ direction (up) at an angle of 45$^{\circ}$ out of the $x$-$y$ plane. Their projections onto the $x$-$y$ plane cross each other at an angle of 120.0(6)$^{\circ}$. The powers of the three beams are \SI{42}{\watt}, \SI{40}{\watt} and \SI{38}{\watt}, respectively, due to losses caused by optics during the recycling. All three beams have a Gaussian beam waist of $\sim$ \SI{30}{\micro\meter} at the crossing. This yields a triangular lattice with a lattice spacing of $a_\text{latt}$ = \SI{1003}{\nano\meter}. Our configuration for the lattice is compatible with a standard octagon vacuum chamber but requires very careful consideration of the objective mount and magnetic field coils, which typically block the optical access exploited here, as illustrated in Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup}. In addition, we use a custom-designed anti-reflection coating for the vacuum windows to reduce the reflection at the 45$^{\circ}$ angle of incidence.
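The quoted lattice constant follows directly from the beam geometry. As a back-of-the-envelope cross-check (our own script, not part of the experimental sequence), the spacing of a triangular lattice formed by three beams whose in-plane wavevectors are $120^\circ$ apart, each tilted $45^\circ$ out of the plane, is:

```python
# Cross-check of the lattice spacing from the beam geometry (our own
# back-of-the-envelope script): three 1064 nm beams, 45 deg out of plane,
# in-plane directions 120 deg apart.
import math

lam = 1064e-9                               # lattice laser wavelength (m)
kp = (2 * math.pi / lam) * math.cos(math.radians(45))   # in-plane |k|
G = 2 * kp * math.sin(math.radians(60))     # |k1 - k2| for beams 120 deg apart
a_latt = 4 * math.pi / (math.sqrt(3) * G)   # triangular-lattice constant
print(f"a_latt = {a_latt * 1e9:.0f} nm")    # -> a_latt = 1003 nm
```

Equivalently, $a_\text{latt} = 2\sqrt{2}\,\lambda/3 \approx \SI{1003}{\nano\meter}$ for $\lambda = \SI{1064}{\nano\meter}$.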
Since the interference pattern between the three crossing beams depends both on the wavevector direction and the polarization of each beam, these parameters have to be carefully adjusted for each beam. The angles between the lattice beams are restricted to about $1^\circ$ by the optical access and we use half-wave plates to control the polarizations of all lattice passes. For the following experiments, we adjusted these to obtain the strongest possible interference pattern in the triangular lattice. We found that the lattice depth is maximal for incoming linear polarization angles of about $40^\circ, -40^\circ$, and $80^\circ$ for lattice beams L1, L2, and L3, respectively, relative to the vertical polarization closest aligned to the $z$ axis [Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup}].
Due to birefringence in the vacuum windows and coatings, the polarizations may be slightly modified at the atom position. In our current configuration, this asymmetry leads to anisotropic tunneling in the lattice. We confirmed by explicit calculation that anisotropic triangular lattice geometries can be adiabatically transformed to a symmetric configuration by varying the polarization of one of the three lattice beams. To implement such a scheme, we plan to add the capability to dynamically switch between the maximum-lattice-depth and an isotropic-tunneling configuration during the experimental cycle by upgrading to a motorized wave-plate mount in the future.
To prepare a quasi-two-dimensional Fermi gas in the triangular lattice, we first load the degenerate Fermi gas from the CDT into the light sheet and evaporate for another \SI{250}{\milli\second}. The intensity of the light sheet is reduced to 0.2\% of its initial value following an exponential decay curve with a time constant $\tau_3$ = \SI{100}{\milli\second}. This evaporation is necessary to remove excitations created during the loading procedure. Next, the intensity of the light sheet is increased to its initial value again, and the triangular lattice is adiabatically switched on within \SI{100}{\milli\second}. This configuration with maximal lattice and light-sheet depth is used for imaging the atoms by collecting fluorescence during Raman sideband cooling.
For calibration of the lattice depth, we carried out Kapitza-Dirac scattering [analogous to Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup}] and measured the atom number in the zeroth order as a function of lattice intensity. By fitting the decay curve with a Bessel function, we find a maximum lattice depth of \mbox{$\sim5000\,E_{r}$} with $E_{r} \equiv \hbar^2\pi^2/(2ma_{\text{latt}}^2)=\SI{8.2}{\kilo\Hz}$.
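As a quick numerical check of the quoted recoil energy (assuming the $^6$Li mass and the lattice spacing given above):

```python
import math

hbar = 1.054571817e-34        # reduced Planck constant (J s)
h = 2 * math.pi * hbar
u = 1.66053907e-27            # atomic mass unit (kg)
m = 6.015 * u                 # mass of 6Li
a_latt = 1003e-9              # triangular lattice spacing (m)

# Recoil energy defined with the lattice spacing:
# E_r = hbar^2 pi^2 / (2 m a_latt^2)
E_r = hbar**2 * math.pi**2 / (2 * m * a_latt**2)
print(f"E_r = h x {E_r / h / 1e3:.1f} kHz")   # ~8.2 kHz, as quoted
```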
\section{Raman sideband cooling}
In order to keep the atoms localized at each single site during the fluorescence imaging, we utilize Raman sideband cooling to collect scattered photons while keeping the atoms near the ground state of the harmonic potential. Variations of Raman sideband cooling have been used to detect various atomic species in optical lattices with single-atom resolution \cite{Li2012,Haller2015,Cheuk2015,Parsons2015,Edge2015,Omran2015}. A two-photon Raman sideband transition transfers atoms from one hyperfine ground state to the other while lowering the vibrational level in the on-site harmonic trap. The frequency difference between the two photons needs to be calibrated to match the frequency difference between the two hyperfine ground states plus the lattice on-site harmonic oscillator frequency $\omega_\text{latt}$. To close the cooling cycle, the atoms need to be transferred back to the initial hyperfine ground state without changing their vibrational levels. This is implemented through an optical pumping process using the Raman repump laser. In order to keep the heating in the repump process low, a large $\omega_\text{latt}$ is required to suppress recoil heating in the $x$-$y$ plane by operating in the Lamb-Dicke regime. After many cycles, most atoms occupy the ground vibrational level, which is a dark state in the absence of heating processes. The scattered photons in the optical pumping process are then collected to image the atoms.
Further specifics of our Raman sideband cooling setup are as follows. A two-photon Raman transition via the $D_1$ line transfers the atoms from the $\ket{\textrm{2S$_{1/2}$ F = 3/2}}$ manifold to $\ket{\textrm{2S$_{1/2}$ F = 1/2}}$ while lowering the vibrational state by one. The incoming Raman cooling beam (R1) is locked \SI{5}{GHz} red-detuned from the $D_1$ line and is linearly polarized. It has a power of \SI{2.2}{\milli\watt} and a beam waist of \SI{100}{\micro\meter} on the atoms. After passing through the chamber, we use a double-pass configuration of an acousto-optic modulator (AOM) to generate the second Raman beam (R2) with a detuning of $\SI{228.2}{\mega\Hz}+\omega_\text{latt}/(2\pi)$ and 70\% efficiency.
We choose the angles between the two Raman beams and relative to the lattice to obtain sufficient coupling in-plane as well as in the $z$ direction [Fig.~\ref{fig: Raman beams configuration and Raman cooling process}{\bf(a,b)}].
To determine $\omega_\text{latt}$, we take sideband spectra by applying a pulse of both Raman beams directly after loading into the lattice, transferring a fraction of the atoms from $\ket{\textrm{2S$_{1/2}$ F = 1/2}}$ to $\ket{\textrm{2S$_{1/2}$ F = 3/2}}$. These atoms are then detected by absorption imaging, and the sidebands show the lattice vibrational spacing of $\omega_\text{latt} = 2\pi\times\SI{870(20)}{\kilo\Hz}$ [Fig.~\ref{fig: Raman beams configuration and Raman cooling process}{\bf(d)}].
The Raman repump beam (RP) has a power of \SI{0.15}{\milli\watt} and a beam waist of \SI{500}{\micro\meter} at the focus on the atoms. It is locked \SI{9.6(5)}{\mega\Hz} blue-detuned from the $\ket{\textrm{2S$_{1/2}$ F = 1/2}}$ to $\ket{\textrm{2P$_{1/2}$ F = 1/2}}$ atomic transition and is circularly polarized. Atoms excited by the Raman repump beam preferentially decay into the $\ket{\textrm{2S$_{1/2}$ F = 3/2}}$ state rather than the $\ket{\textrm{2S$_{1/2}$ F = 1/2}}$ state, with a branching ratio of 8\,:\,1. Their vibrational state in the lattice remains mostly unchanged due to the Lamb-Dicke factor $\eta\equiv\sqrt{\hbar k_R^2/(2m\omega_\text{latt})}= 0.29$ in our experiment, where $k_R$ is the wavevector of the Raman repump light. Higher vibrational states may be excited along the vertical light-sheet direction. However, due to the depth of the light sheet and the absence of nearby wells the atoms could tunnel to, the impact of elevated temperatures in the $z$ dimension on imaging fidelity is low, and we can rely on coupling to the other dimensions for cooling, which is provided by the small angle $\alpha$ [Fig.~\ref{fig: Raman beams configuration and Raman cooling process}{\bf(b)}].
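The quoted Lamb-Dicke factor can be reproduced from the numbers above (the $^6$Li $D_1$ wavelength of $\approx\SI{671}{\nano\meter}$ is assumed for the repump light):

```python
import math

hbar = 1.054571817e-34                 # J s
u = 1.66053907e-27                     # kg
m = 6.015 * u                          # 6Li mass (kg)
lam_repump = 671e-9                    # D1-line wavelength (m), approximate
k_R = 2 * math.pi / lam_repump
omega_latt = 2 * math.pi * 870e3       # on-site trap frequency (rad/s)

# Lamb-Dicke factor eta = sqrt(hbar k_R^2 / (2 m omega_latt))
eta = math.sqrt(hbar * k_R**2 / (2 * m * omega_latt))
print(f"eta = {eta:.2f}")              # ~0.29, i.e. Lamb-Dicke regime
```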
The spatial configuration of the Raman cooling beams and repump beam is shown in Fig.~\ref{fig: Sketch of the lattice setup and Raman sideband imaging setup} and Fig.~\ref{fig: Raman beams configuration and Raman cooling process}. To get the best imaging result, we optimize the offset magnetic fields, leading to a magnetic field of 1.1(2)\,G rather than zero field. The parameters for all offset magnetic fields are shown in Fig.~\ref{fig: Raman beams configuration and Raman cooling process}. The lifetime of atoms under continuous Raman cooling in the triangular lattice is \SI{44(2)}{\second}, possibly limited by background gas collisions.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig2.pdf}
\caption{
{\bf Raman sideband cooling.}
(a), (b) Raman sideband cooling beam configuration. Blue dots mark the triangular lattice sites. The Raman repump beam propagates in the lattice plane ($x$-$y$ plane). The first Raman beam (R1) has horizontal linear polarization and propagates in the negative $z$ direction with a shallow angle of $\alpha=7.5(2)^\circ$ relative to the lattice plane. The second Raman beam (R2) is perpendicular to R1 and consists of a mix of horizontal and vertical linear polarizations in a ratio of 4\,:\,1. Two green arrows show the projections of the magnetic field onto the $x$-$y$ and $x$-$z$ planes, with angles of $-45^\circ$ and $-70^\circ$ relative to the $x$ axis, respectively. (c) Raman sideband cooling transition scheme showing the levels connected by the Raman repump RP and the Raman beams R1 and R2 and the respective detunings $\delta$ and $\Delta$.
(d) Raman sideband spectrum in the triangular lattice. The center peak is the carrier corresponding to hyperfine splitting in the ground state while the sidebands show the lattice vibrational spacing of $\omega_\text{latt}=2\pi\times\SI{870(20)}{\kilo\Hz}$. The amplitude ratio of the sidebands indicates an average number of vibrational quanta of $2^{+3}_{-1}$ in $x$ and $y$ direction. The dots represent experimental data and the solid line is a Gaussian fit. Error bars are the standard deviation of four repetitions.}
\label{fig: Raman beams configuration and Raman cooling process}
\end{figure}
\section{High-resolution imaging}
To image $^{6}$Li atoms in the triangular lattice with single-site resolution, a high-resolution imaging system is used to collect the fluorescence during Raman sideband cooling. The imaging system consists of a custom objective (54-25-25@671nm, Navitar) with a focal length of \SI{25}{\milli\meter} and a numerical aperture (NA) of 0.5, and an achromatic doublet (AC508-750-B) with a focal length of \SI{750}{\milli\meter}, leading to a theoretical magnification of 30. The measured magnification is 33. Scattered photons are detected with an exposure time of \SI{500}{\milli\second} by a low-noise scientific CMOS camera (Andor Zyla 4.2 plus) with a quantum efficiency of 77\% and a pixel size of $6.5\times6.5$ \SI{}{\micro\meter}$^2$. The total transmission of the imaging optics and narrow-band filters is $\sim$ 80\%, leading to a total photon collection efficiency of $\sim$ 5.4\%. From our pictures we conclude that we detect about 1000 photons per atom, corresponding to a scattering rate of about \SI{34}{\kilo\Hz} per atom, calculated by dividing the number of photons per atom by the total collection efficiency and the exposure time.
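The collection budget can be sketched as follows (an illustrative estimate with the rounded numbers quoted above; the NA sets the captured solid-angle fraction):

```python
import math

NA = 0.5                    # numerical aperture of the objective
T = 0.80                    # transmission of optics and filters
# Fraction of the full solid angle captured by the objective
theta = math.asin(NA)
omega_frac = (1 - math.cos(theta)) / 2        # ~6.7%
eff = omega_frac * T                          # ~5.4% total collection
print(f"collection efficiency = {eff*100:.1f} %")

# Scattering rate per atom from detected photons (rounded inputs)
n_photons = 1000
t_exp = 0.5                                   # exposure time (s)
rate = n_photons / (eff * t_exp)
print(f"scattering rate ~ {rate/1e3:.0f} kHz")
```

With these rounded inputs the estimate lands within roughly 10\% of the quoted $\sim$\SI{34}{\kilo\Hz}.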
To verify that individual lattice sites are well resolved, it is necessary to check the point spread function of the system. For this purpose, we take pictures of dilute systems with many well-separated atoms. The point spread function (PSF) is obtained by overlapping and averaging approximately 800 single atoms. After azimuthal averaging, we fit it with a Gaussian function [Fig.~\ref{fig: 3}{\bf(a)}]. This reveals a full width at half maximum (FWHM) of \SI{720(18)}{\nano\meter}, consistent with our expectation of \SI{711}{\nano\meter}. From the first minimum of an Airy fit, we extract the resolution according to the Rayleigh criterion as \SI{818(8)}{\nano\meter} (4.3 pixels). This is smaller than our triangular lattice spacing of \SI{1003}{\nano\meter} (5.1 pixels), and we therefore resolve individual atoms in the triangular lattice without post-processing.
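The extracted Rayleigh resolution is consistent with the diffraction limit of the objective at the fluorescence wavelength:

```python
lam = 671e-9      # fluorescence wavelength (m)
NA = 0.5          # numerical aperture
a_latt = 1003e-9  # triangular lattice spacing (m)

# Rayleigh criterion: first Airy minimum at 0.61 * lambda / NA
rayleigh = 0.61 * lam / NA
print(f"Rayleigh resolution = {rayleigh*1e9:.0f} nm")   # ~819 nm
assert rayleigh < a_latt    # individual lattice sites are resolvable
```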
\section{Image reconstruction and analysis}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{{\bf Image analysis.} (a) Point spread function. Azimuthal average of the point spread function ({red}), with Gaussian fit ({blue}), and Airy fit ({green}). The measured FWHM of \SI{720(18)}{\nano\meter} of the PSF is consistent with the FWHM of \SI{711}{\nano\meter} expected from the numerical aperture of the objective. The inset shows the PSF obtained by averaging isolated atoms. (b) Single atom count histogram. The left peak corresponds to empty sites and the right peak indicates sites occupied by single atoms. The threshold value between no atom and a single atom (vertical orange line) is determined as the intersection point of two Gaussian fits to the background and atom signal distributions, respectively. The reconstruction error caused by the overlap is negligible compared to the observed hopping and loss. (c) Determining lattice angles and lattice constant. Single atom positions (shown in blue as an example) are orthogonally projected onto a line of varying angle, here exemplified by A1 and A2. For each angle, a histogram of the projected positions is depicted. At the lattice angle, the experimental histogram has perfect contrast (right graph), corresponding to lattice vector $b_1$. At other angles, for example A2, there is almost no structure in the histograms.}
\label{fig: 3}
\end{figure}
We apply a reconstruction algorithm to extract a digitized occupation matrix of the lattice \cite{Sherson2010}. In order to obtain the geometric parameters of the triangular lattice, we determine the lattice angles and lattice constants. This relies on identifying individual isolated atoms and determining their centers via Gaussian fits. Then, we project the coordinates of the isolated atoms onto an axis with varying rotational angle in the lattice plane. By introducing equidistant bins on this axis, we generate a histogram of atom projections [Fig.~\ref{fig: 3}{\bf (c)}]. If the rotation angle is very close to the lattice angle, the histogram shows multiple peaks with minimal width, and the separation between the peaks is related to the lattice constant. The angles of the two lattice vectors with respect to the $x$ axis are determined with high precision to $-45.85(3)^\circ$ and $13.51(1)^\circ$, leading to 59.36(3)$^\circ$ between the lattice vectors. These lattice angles allow us to determine the relative angles between the lattice beams to be 120.0(6)$^{\circ}$. The lattice constants in pixels are $5.09(9)$ and $5.10(4)$, consistent with a symmetric triangular lattice.
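The projection-histogram method lends itself to a compact implementation. The following toy version (not the experiment's reconstruction code; synthetic atom positions, and the variance of the binned counts as the contrast metric) finds the projection angles at which the histogram contrast peaks. These angles are perpendicular to the densest lattice planes and fix the lattice orientation modulo $60^\circ$:

```python
import math
import random

def histogram_contrast(points, angle_deg, bin_width=0.1):
    """Project points onto an axis at the given angle and return the
    variance of the binned counts -- large when the projections pile
    up into sharp, equidistant peaks (axis normal to a lattice plane)."""
    a = math.radians(angle_deg)
    proj = [x * math.cos(a) + y * math.sin(a) for x, y in points]
    lo = min(proj)
    nbins = int((max(proj) - lo) / bin_width) + 1
    counts = [0] * nbins
    for p in proj:
        counts[int((p - lo) / bin_width)] += 1
    mean = sum(counts) / nbins
    return sum((c - mean) ** 2 for c in counts) / nbins

# Synthetic isolated atoms on a triangular lattice rotated by 13 deg
random.seed(0)
rot = math.radians(13)
pts = []
for _ in range(400):
    i, j = random.randrange(-15, 15), random.randrange(-15, 15)
    x, y = i + 0.5 * j, (3 ** 0.5 / 2) * j           # triangular basis
    pts.append((x * math.cos(rot) - y * math.sin(rot),
                x * math.sin(rot) + y * math.cos(rot)))

# Scan the projection angle; the contrast peaks perpendicular to the
# lattice rows, i.e. at one of 43, 103 or 163 deg for a 13-deg rotation
best = max(range(0, 180), key=lambda d: histogram_contrast(pts, d))
print("best projection angle:", best, "deg")
```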
While the lattice angles only vary because of alignment changes, the phase of the lattice usually drifts due to thermal effects. To estimate the phase in a picture, we generate the lattice structure and compare it to the position of isolated single atoms and then measure the phase difference between every single atom and the nearest lattice site. With knowledge of the lattice angle, lattice constant, and lattice phase, the exact position of all lattice sites in image coordinates is revealed.
To obtain the occupation of each lattice site, we simultaneously fit 2D Gaussian functions to all lattice sites with a significant signal of more than about a hundred detected photons per site. The resulting histogram of all Gaussian amplitudes in Fig.~\ref{fig: 3}{\bf (b)} shows a well-separated peak of the single-atom signal. Due to light-induced collisions, doubly occupied sites are detected as empty sites \cite{Anderson1994,Fuhrmanek2012,Endres2013}. From the histogram, we obtain an optimized threshold between the signal of no atom and a single atom to decide whether a lattice site is occupied. As a result of the reconstruction, a matrix with entries zero (empty) or one (occupied) is generated. To handle the triangular lattice structure, we interpret it as a square lattice with diagonal tunneling, sheared by 30$^\circ$.
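The threshold between the empty-site and single-atom peaks can be computed as the intersection of the two fitted Gaussians. A minimal sketch with hypothetical fit parameters (the count values below are illustrative, not our actual camera data):

```python
import math

def gaussian(x, A, mu, sigma):
    return A * math.exp(-((x - mu) ** 2) / (2 * sigma**2))

def threshold(bg, atom, tol=1e-9):
    """Intersection of the background and single-atom count
    distributions, found by bisection between the two peak centers.
    bg and atom are (amplitude, mean, sigma) tuples."""
    lo, hi = bg[1], atom[1]
    f = lambda x: gaussian(x, *bg) - gaussian(x, *atom)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:      # background still dominates
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical fit results, counts in arbitrary camera units
bg = (1.0, 0.0, 30.0)       # empty sites
atom = (0.5, 500.0, 80.0)   # single atoms
thr = threshold(bg, atom)
print(f"threshold = {thr:.0f} counts")
```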
\subsection{Imaging fidelity}
We evaluate the imaging fidelity by taking a series of five images with \SI{500}{\milli\second} exposure time each and \SI{50}{\milli\second} separation in between. Hopping and loss rates are estimated by comparing two adjacent images [Fig.~\ref{fig: 4}]. The hopping rate is defined as the fraction of sites detected as occupied in the second image only, while the loss rate is given by the fraction of atoms lost from picture to picture. We define the imaging fidelity as the fraction of atoms that remain in the same lattice sites. Our single-site imaging has a field of view of $90\times\SI{90}{\micro\meter^2}$, with low hopping rates in a region of $25\times\SI{25}{\micro\meter^2}$ in the center of the atom distribution, which includes 625 lattice sites.
We obtain a hopping rate of 2.0(2)\% and a loss rate of 0.4(2)\% at a detected occupation of up to 50\% by averaging over twelve pairs of pictures. This demonstrates an imaging fidelity of 97.6(3)\%. The detected density is reduced by light-induced pairwise losses at the beginning of the first image, which lower the density of the loaded Fermi gas from its initial value of approximately 1.3 atoms per lattice site, determined independently by high-field absorption imaging.
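For completeness, the quoted fidelity and its uncertainty follow from the hopping and loss rates, with independent errors added in quadrature:

```python
hop, d_hop = 2.0, 0.2     # hopping rate (%) and uncertainty
loss, d_loss = 0.4, 0.2   # loss rate (%) and uncertainty

fidelity = 100 - hop - loss
d_fid = (d_hop**2 + d_loss**2) ** 0.5
print(f"fidelity = {fidelity:.1f}({d_fid*10:.0f})%")   # 97.6(3)%
```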
\begin{figure}[ht]
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{{\bf Imaging fidelity.} (a), (b) Two adjacent images of individual $^{6}$Li atoms ({white} dots) in a triangular lattice ({black} dots) imaged with \SI{500}{\milli\second} exposure and separation of \SI{50}{\milli\second}. (c) Reconstructed occupation of picture (a) convolved with the PSF. (d) Hopping and loss during imaging, stationary atoms ({blue}), hopped atoms ({green}) and lost atoms ({red}).}
\label{fig: 4}
\end{figure}
\subsection{Comparison to square lattices}
In addition to the triangular lattice, we implemented a versatile square lattice in the same experimental setup, which can be superimposed with the triangular lattice. The square lattice setup can be used at \SI{532}{\nano\meter} or \SI{752}{\nano\meter} lattice spacing. We create the square lattices using the recycled lattice setup described in Refs.~\cite{Sebby2006, Brown2017} [Fig.~\ref{fig: Single atoms and Raman sideband in square lattice}{\bf(a)}]. For vertical polarization, four-beam interference leads to a \SI{752}{\nano\meter} spacing lattice, while an in-plane polarization creates a \SI{532}{\nano\meter} spacing lattice. The powers of the four passes are \SI{41}{\watt}, \SI{39}{\watt}, \SI{37}{\watt} and \SI{36}{\watt}, respectively, with a Gaussian beam waist of \SI{70}{\micro\meter}. The trap depths are 1900\,$E_r^{\text{532nm}}$ and 7500\,$E_r^{\text{752nm}}$, and the trap frequencies are \SI{1.36(2)}{\mega\Hz} and \SI{1.90(4)}{\mega\Hz} for the \SI{532}{\nano\meter} and \SI{752}{\nano\meter} spacing lattices, respectively [Fig.~\ref{fig: Single atoms and Raman sideband in square lattice}{\bf(b)}].
The square lattices have smaller lattice spacings than the triangular lattice; nevertheless, our reconstruction algorithm is able to determine the lattice occupation with an error limited only by the observed hopping and loss [Fig.~\ref{fig: Single atoms and Raman sideband in square lattice}{\bf(c,d)}]. We confirmed this by comparing different fitting subroutines, which lead to differences much smaller than the imaging infidelity. The \SI{532}{\nano\meter} spacing lattice is imaged using the same Raman cooling configuration as the triangular lattice, while for the \SI{752}{\nano\meter} square lattice the Raman beam R2 is the retroreflection of the incoming Raman beam R1, instead of the orthogonal configuration described above. For the triangular and \SI{532}{\nano\meter} spacing square lattices with smaller trap frequencies, we observed that the orthogonal Raman beam configuration is necessary, but for trap frequencies beyond \SI{1.5}{\mega\Hz} the retroreflected configuration works well. The square lattices have imaging fidelities of 84(3)\% and 97(1)\%, with detected fillings up to 50\%, in the \SI{532}{\nano\meter} and \SI{752}{\nano\meter} spacing lattices, respectively.
Our imaging fidelity in the \SI{532}{\nano\meter} spacing lattice is slightly lower than observed previously in a three-dimensional \SI{532}{\nano\meter} spacing lattice, possibly caused by our weaker $z$ confinement \cite{Omran2015}. However, the imaging fidelity in the \SI{752}{\nano\meter} spacing lattice is comparable with previous results \cite{Brown2017}. Due to the large sideband frequency in our \SI{752}{\nano\meter} spacing lattice, it would be possible to double the system size while maintaining sufficient lattice depth for high-fidelity imaging.
Superimposing the triangular lattice with the square lattice can form a two-dimensional quasi-crystalline lattice \cite{Sbroscia2020}, which could be used to study many-body localization in a non-separable two-dimensional quasi-periodic lattice. Our setup is ready to superimpose both lattices by splitting the laser power between the two simultaneously realized optical paths and will be capable of studying such systems on the single-atom level.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Fig5.pdf}
\caption{{\bf Comparison to square lattices.} (a) Square lattice setup. Orange and blue arrows denote polarizations of \SI{532}{\nano\meter} and \SI{752}{\nano\meter} spacing square lattices, respectively. (b) Raman sideband spectra in \SI{532}{\nano\meter} spacing (orange) and \SI{752}{\nano\meter} spacing (blue) square lattice. The dots denote experimental data and solid lines are Gaussian fits. The sidebands are at \SI{1.36(2)}{\mega\Hz} and \SI{1.90(4)}{\mega\Hz} for \SI{532}{\nano\meter} and \SI{752}{\nano\meter} square lattices, respectively. The asymmetry of the sidebands shows that the atoms are predominantly in the 2d vibrational ground state after loading into the lattice. We find an average number of vibrational quanta per dimension in 2d of $0.1(1)$ in the \SI{532}{\nano\meter} lattice and $0.2^{+0.8}_{-0.2}$ in the \SI{752}{\nano\meter} lattice. (c), (d) Single-site-resolved images of $^{6}$Li atoms with lattice structure overlay in the \SI{532}{\nano\meter} spacing and \SI{752}{\nano\meter} spacing lattice, respectively. The gray circles indicate occupied lattice sites. For the \SI{532}{\nano\meter} lattice, the Raman configuration is the same as for the triangular, but for the \SI{752}{\nano\meter} lattice we use counter-propagating Raman beams. }
\label{fig: Single atoms and Raman sideband in square lattice}
\end{figure}
\section{Conclusion and Outlook}
We have presented the first single-site imaging of ultracold fermionic atoms in a triangular lattice, demonstrating a state-of-the-art imaging fidelity of 97.6(3)\%. Our triangular lattice with a spacing of \SI{1003}{\nano\meter} enables fast tunneling rates of $\sim$\,\SI{700}{\Hz} in the strongly interacting Hubbard regime. The interactions are tunable via the Feshbach resonance in lithium and are limited only by multi-band effects.
In our current configuration, we estimate that about 20 sites are in the Hubbard regime when loading the lattice at maximum light sheet depth, making it very challenging to observe interaction effects.
Through the addition of a vertical lattice, we will increase the vertical confinement to suppress multi-band effects and obtain Hubbard systems of several hundred atoms.
Our platform will enable studies of the Fermi-Hubbard model in the triangular lattice and, in the limit of strong interactions, the triangular Heisenberg spin model. By varying the polarizations of the three lattice beams, we can adiabatically change the triangular lattice between symmetric and asymmetric tunneling configurations, enabling the study of the complete tunneling-imbalance parameter space.
Furthermore, the platform is ideally suited to directly measure emergent quantum correlations, study signatures of frustration and possibly even detect signatures of quantum spin liquids, depending on the lowest entropy states that can be prepared.
It will become possible to study spin-spin correlations in analogy with results for square lattices in the Mott-insulating regime \cite{Parsons2016,Boll2016,Cheuk2016,Brown2017}, possibly detecting the cross-over from three-sublattice order to a non-magnetic state \cite{Shirakawa2017,Szasz2020}.
Even at temperatures previously reached in ultracold Hubbard simulations, remnants of chiral correlations could be detected which would directly show time-reversal symmetry breaking \cite{Wen1989,Shirakawa2017}. Our new quantum gas microscope platform provides the basis for measuring these three-point correlations. Moreover, the triangular lattice Hubbard model exhibits kinetic frustration, which could be probed using a transient grating approach \cite{Brown2019,Vranic2020} or by detecting magnon-hole bound states. The bound states in the triangular lattice have binding energies that scale with the tunneling energy and are therefore at experimentally accessible temperatures \cite{Zhang2018}.
\raggedbottom
\begin{acknowledgments}
This work was supported by the University of Virginia.
We thank W. S. Bakr, S. S. Kondov and C. A. Sackett for comments on the manuscript and acknowledge discussions with D. Mitra, P. T. Brown, and E. Guardado-Sanchez. We thank J. W. Kim for early contributions to the experiment and S. Kuhr for sharing the initial code base for experiment control and reconstruction software which we extended for generalized lattice geometries.
\end{acknowledgments}
\section{Introduction}
Graphene is the first experimentally realizable, stable, true atomic monolayer. It has a host of unusual electronic properties
\cite{castro_neto_review2009,chakraborty_review,meso_review}.
After its discovery, the physical properties of graphene became the subject of
intense scientific effort. In addition to single-layer graphene, bilayer
graphene is also actively studied. This interest is driven by the desire to
extend the family of graphene-like materials and to create materials with a
gap in the electronic spectrum, which could be of interest for
applications.
Bilayer graphene exists in two stacking modifications. The most common is
the so-called Bernal, or AB, stacking of graphene bilayers (AB-BLG). In
such a stacking, half of the carbon atoms in the top layer are located
above the hexagon centers in the lower layer; and half of the atoms in the
top layer lie above the atoms in the lower layer. A different layer
arrangement, in which carbon atoms in the upper layer are located on top of
the equivalent
atoms of the bottom layer, is referred to as AA-stacked graphene bilayer
(AA-BLG),
Fig.~\ref{AABLG}.
So far, most efforts have focused on studying the
AB-BLG~\cite{mccann2006},
for which high-quality samples are
available~\cite{susp_bilayer2009,Mayorov_Sci2011}.
In recent years, the experimental realization of the AA-BLG has been also
reported~\cite{aa first,aa_experiment2008,borysiuk_aa2011}.
However, the AA-BLG has received only a limited amount of theoretical attention
\cite{aa_dft2008,spin-orbit2011,borysiuk_aa2011,aa_adsorbtion2010,
aa_optics_2010}.
The tight-binding analysis shows that both AA and AB-BLGs have four bands
(two hole bands and two electron bands). However, the structure of these
bands is different. In the undoped AB-BLG, two bands (one hole band and one
electron band) touch each other at two Fermi points, and the low-energy
band dispersion is nearly
parabolic~\cite{ABBLG}.
The AA-BLG has two bands near the Fermi energy, one electron-like and one
hole-like~\cite{aa_dft2008,spin-orbit2011}.
The low-energy dispersion in the AA-BLG is linear, similar to the monolayer
graphene. Unlike the latter, however, the AA-BLG has Fermi surfaces instead of
Fermi points.
An important feature of the AA-BLG is that the hole and electron Fermi
surfaces coincide in the undoped material. It was shown in
Ref.~\onlinecite{our_preprint}
that these degenerate Fermi surfaces are unstable when an arbitrarily weak
electron interaction is present, and the bilayer becomes an
antiferromagnetic (AFM) insulator with a finite electron gap. This
electronic instability is strongest when the bands cross at the Fermi
level. Doping shifts the Fermi level and suppresses the AFM
instability~\cite{our_preprint2}.
Assuming a homogeneous ground state, here we demonstrate that the AFM gap
$\Delta$ decreases when the doping $x$ grows, and vanishes for dopings above
some critical value
$x_c$.
However, the homogeneously-doped state, depending on temperature and
doping, may become unstable with respect to phase separation into an undoped
AFM insulator and a doped
metal~\cite{our_preprint2}.
In the phase-separated state, the concentration of the AFM insulator
decreases when doping increases. Above a certain threshold value of doping
$x^*$,
the insulator-to-metal transition occurs.
In this paper we present a detailed study of the electronic properties of
the AA-BLG. In Sec.~\ref{TBH}, we write down its tight-binding model
Hamiltonian and briefly analyze its properties. In Sec.~\ref{CAFM}, we add
the on-site Coulomb interaction to the Hamiltonian and derive the mean-field
equations for the commensurate AFM gap for finite doping and temperature.
The incommensurate AFM state is analyzed in
Sec.~\ref{ICAFM}.
In
Sec.~\ref{PS},
we demonstrate that the AA-BLG is unstable with respect to phase separation
within some doping and temperature range. The obtained results are
discussed in
Sec.~\ref{Discussion}.
\begin{figure
\centering
\includegraphics[width=0.85\columnwidth]{AABLGstruct.eps}
\caption{(Color online) Crystal structure of the AA-stacked bilayer
graphene. The circles denote carbon atoms in the
${\cal A}$
(red) and
${\cal B}$
(blue) sublattices in the bottom (1) and top (2) layers. The unit cell of
the AA-BLG consists of four atoms $A1$, $A2$, $B1$, and $B2$. Hopping
integrals $t$ and $t_0$ correspond to the in-plane and inter-plane
nearest-neighbor hopping.
\label{AABLG}}
\end{figure}
\section{Tight-binding Hamiltonian}\label{TBH}
The crystal structure of the AA-BLG is shown in Fig.~\ref{AABLG}. The
AA-BLG consists of two graphene layers, $1$ and $2$. Each carbon atom of the
upper layer is located above the corresponding atom of the lower layer.
Each layer consists of two triangular sublattices ${\cal A}$ and ${\cal
B}$. The elementary unit cell of the AA-BLG contains four carbon atoms
$A1$, $A2$, $B1$, and $B2$.
We write the single-particle Hamiltonian of the AA-BLG in the form
\begin{eqnarray}\label{H0}
H_0&=&-t\sum_{\langle\mathbf{nm}\rangle i\sigma}\left(d^{\dag}_{\mathbf{n}i{\cal A}\sigma}
d^{\phantom{\dag}}_{\mathbf{m}i{\cal B}\sigma}+H.c.\right)-\\
&&t_0\sum_{\mathbf{n}a\sigma}\left(d^{\dag}_{\mathbf{n}1a\sigma}d^{\phantom{\dag}}_{\mathbf{n}2a\sigma}+H.c.\right)
-\mu\sum_{\mathbf{n}ia\sigma}d^{\dag}_{\mathbf{n}ia\sigma}d^{\phantom{\dag}}_{\mathbf{n}ia\sigma}\,.\nonumber
\end{eqnarray}
Here $d^{\dag}_{\mathbf{n}ia\sigma}$
and
$d^{\phantom{\dag}}_{\mathbf{n}ia\sigma}$
are the creation and annihilation operators of an electron with spin
projection $\sigma$ in the layer
$i=1,\,2$
on the sublattice
$a={\cal A},{\cal B}$
at the position
$\mathbf{n}$,
$\mu$ is the chemical potential, and
$\langle ...\rangle$
denotes a nearest-neighbor pair. The amplitude $t$ ($t_0$) in
Eq.~\eqref{H0} describes the in-plane (inter-plane) nearest-neighbor
hopping. For calculations, we will use the values of the hopping integrals
$t\approx2.57$\,eV, $t_0\approx0.36$\,eV
computed by DFT for multilayer AA systems in
Ref.~\onlinecite{Charlier}.
If we perform the unitary transformation
\begin{equation}\label{U1}
h^{\phantom{\dag}}_{\mathbf{n}a\sigma}=\frac{d^{\phantom{\dag}}_{\mathbf{n}1a\sigma}+d^{\phantom{\dag}}_{\mathbf{n}2a\sigma}}{\sqrt{2}}\,,\;\;
g^{\phantom{\dag}}_{\mathbf{n}a\sigma}=\frac{d^{\phantom{\dag}}_{\mathbf{n}1a\sigma}-d^{\phantom{\dag}}_{\mathbf{n}2a\sigma}}{\sqrt{2}}\,,
\end{equation}
then Eq.~\eqref{H0} can be rewritten as
\begin{eqnarray}\label{H01}
&&\!\!\!H_0\!=\!-t\!\!\!\sum_{\langle\mathbf{nm}\rangle\sigma}\!\!\!\left(h^{\dag}_{\mathbf{n}{\cal A}\sigma}
h^{\phantom{\dag}}_{\mathbf{m}{\cal B}\sigma}+H.c.\right)- (\mu+t_0)\!\!\sum_{\mathbf{n}a\sigma}h^{\dag}_{\mathbf{n}a\sigma}h^{\phantom{\dag}}_{\mathbf{n}a\sigma}\nonumber\\
&&\!\!\!-t\!\!\sum_{\langle\mathbf{nm}\rangle\sigma}\!\!\!\left(g^{\dag}_{\mathbf{n}{\cal A}\sigma}
g^{\phantom{\dag}}_{\mathbf{m}{\cal B}\sigma}+H.c.\right)- (\mu-t_0)\!\!\sum_{\mathbf{n}a\sigma}g^{\dag}_{\mathbf{n}a\sigma}g^{\phantom{\dag}}_{\mathbf{n}a\sigma}\,.
\end{eqnarray}
Therefore, in this representation the Hamiltonian $H_0$ is a sum of two
single-layer graphene
Hamiltonians~\cite{castro_neto_review2009},
with different effective chemical potentials
$\mu \pm t_0$.
The Hamiltonian~\eqref{H01} can be readily diagonalized. To perform the
diagonalization, we switch
to the fermion operators
$h^{\phantom{\dag}}_{\mathbf{k}a\sigma}$
and
$g^{\phantom{\dag}}_{\mathbf{k}a\sigma}$,
which are defined in the momentum representation, and make the unitary
transformation
\begin{eqnarray}
\gamma^{\phantom{\dag}}_{\mathbf{k}1\sigma}=\frac{h^{\phantom{\dag}}_{\mathbf{k}\cal{A}\sigma}+h^{\phantom{\dag}}_{\mathbf{k}\cal{B}\sigma}e^{i\varphi_{\mathbf{k}}}}{\sqrt{2}}\,,\;
\gamma^{\phantom{\dag}}_{\mathbf{k}2\sigma}=\frac{h^{\phantom{\dag}}_{\mathbf{k}\cal{A}\sigma}-h^{\phantom{\dag}}_{\mathbf{k}\cal{B}\sigma}e^{i\varphi_{\mathbf{k}}}}{\sqrt{2}}\,,\nonumber\\
\gamma^{\phantom{\dag}}_{\mathbf{k}3\sigma}=\frac{g^{\phantom{\dag}}_{\mathbf{k}\cal{A}\sigma}+g^{\phantom{\dag}}_{\mathbf{k}\cal{B}\sigma}e^{i\varphi_{\mathbf{k}}}}{\sqrt{2}}\,,\;
\gamma^{\phantom{\dag}}_{\mathbf{k}4\sigma}=\frac{g^{\phantom{\dag}}_{\mathbf{k}\cal{A}\sigma}-g^{\phantom{\dag}}_{\mathbf{k}\cal{B}\sigma}e^{i\varphi_{\mathbf{k}}}}{\sqrt{2}}\,,
\end{eqnarray}
where
$\varphi_{\mathbf{k}}=\arg \left(f_{\mathbf{k}}\right)$,
\begin{equation}\label{f}
f_{\mathbf{k}}
=
1
+2\exp\!\left(\frac{3ik_xa_0}{2}\right)
\!
\cos\!\left(\!\!\frac{\sqrt{3} k_ya_0}{2}\!\!\right)\,,
\end{equation}
and $a_0$ is the in-plane carbon-carbon distance. As a result,
Hamiltonian~\eqref{H01}
becomes
\begin{equation}
\label{H0diag}
H_0=\!\sum_{\mathbf{k}s\sigma}\!
\left(\varepsilon^{(s)}_{0\mathbf{k}}-\mu\right)
\gamma^{\dag}_{\mathbf{k}s\sigma}
\gamma^{\phantom{\dag}}_{\mathbf{k}s\sigma}\,.
\end{equation}
In this equation, the band index $s$ runs from 1 to 4, and the band spectra
$\varepsilon^{(s)}_{0\mathbf{k}}$
are
\begin{eqnarray}
\label{E0k}
&&\varepsilon^{(1)}_{0\mathbf{k}}=-t_0-t\zeta_{\bf k}\,,\qquad
\varepsilon^{(2)}_{0\mathbf{k}}=-t_0+t\zeta_{\bf k}\,,\nonumber\\
&&\varepsilon^{(3)}_{0\mathbf{k}}=+t_0-t\zeta_{\bf k}\,,\qquad
\varepsilon^{(4)}_{0\mathbf{k}}=+t_0+t\zeta_{\bf k}\,,
\end{eqnarray}
where
$\zeta_{\bf k} = |f_{\bf k}|$.
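As a cross-check of Eq.~\eqref{E0k}, the Bloch Hamiltonian can be assembled and diagonalized numerically. The Python sketch below uses the hopping values quoted above with $a_0=1$; the ordering of the basis $(1{\cal A},\,2{\cal A},\,1{\cal B},\,2{\cal B})$ is our choice for illustration. It verifies that the numerical eigenvalues reproduce $\pm t_0\pm t\zeta_{\bf k}$ and that $f_{\bf k}$ vanishes at the Dirac point $\mathbf{K}$.

```python
import numpy as np

# Numerical cross-check of the band spectrum (illustrative sketch).
# Hoppings in eV, as quoted in the text; a0 is set to 1.
t, t0, a0 = 2.57, 0.36, 1.0

def f(kx, ky):
    # structure factor f_k = 1 + 2*exp(3i*kx*a0/2)*cos(sqrt(3)*ky*a0/2)
    return 1.0 + 2.0 * np.exp(1.5j * kx * a0) * np.cos(0.5 * np.sqrt(3.0) * ky * a0)

def bands(kx, ky):
    # Bloch Hamiltonian in the assumed basis (1A, 2A, 1B, 2B)
    fk = f(kx, ky)
    H = -np.array([[0.0, t0, t * fk, 0.0],
                   [t0, 0.0, 0.0, t * fk],
                   [t * np.conj(fk), 0.0, 0.0, t0],
                   [0.0, t * np.conj(fk), t0, 0.0]])
    return np.sort(np.linalg.eigvalsh(H))

kx, ky = 0.37, -1.21                      # arbitrary test momentum
zeta = abs(f(kx, ky))
analytic = np.sort([-t0 - t * zeta, -t0 + t * zeta, t0 - t * zeta, t0 + t * zeta])
K = 2.0 * np.pi * np.array([np.sqrt(3.0), 1.0]) / (3.0 * np.sqrt(3.0) * a0)
```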
The band structure obtained is shown in Fig.~\ref{FigSpec0}. The bands
$s=2$ and $s=3$ cross the Fermi level near the Dirac points
$\mathbf{K}=2\pi (\sqrt{3},\,1 )/(3\sqrt{3}a_0)$
and
$\mathbf{K}'=2\pi (\sqrt{3},\,-1 )/(3\sqrt{3}a_0)$
[see
Fig.~\ref{FigSpec0}(b)].
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigSpec0a.eps}\\
\includegraphics[width=0.95\columnwidth]{FigSpec0bc.eps}
\caption{(Color online) (a) The single-particle band structure of the
AA-stacked bilayer graphene. It consists of two single-layered graphene
spectra shifted relative to each other by the energy $2t_0$. (b) The
$\mathbf{k}$-dependence of the spectra
$\varepsilon^{(s)}_{0\mathbf{k}}$ near the Dirac point ${\cal K}$ located
at momentum $\mathbf{K}$.
Here,
$\mathbf{k}=\mathbf{K}+\delta k_y\mathbf{e}_y$.
The intersection of the bands $s=2$ and $s=3$
occurs exactly at zero energy, which corresponds to the Fermi level of the
undoped system. (c) The first Brillouin zone (hexagon) and the reciprocal
lattice unit cell (rhombus) of the AA-BLG. The circles around
${\bf K}$ and ${\bf K}'$ points correspond to Fermi surfaces of the doped system.}
\label{FigSpec0}
\end{figure}
For undoped systems
($\mu = 0$, half-filling)
the Fermi surfaces are given by the equation
$|f_{\bf k}|=t_0/t$.
Since
$t_0/t\ll1$,
we can expand the function
$|f_{\bf k}|$
near the Dirac points and find that the Fermi surface consists of two
circles with radius
$k_{r}=2t_0/(3ta_0)$.
These Fermi surfaces transform into four circles in doped AA-BLG [see
Fig.~\ref{FigSpec0}(c)].
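The linearized radius can be checked directly: at half-filling the Fermi circle satisfies $|f_{\bf k}|=t_0/t$, so the point $\mathbf{K}+k_r\mathbf{e}_y$ should obey this equation to leading order in $t_0/t$. A short Python sketch (parameter values from the text, $a_0=1$; the few-percent residual comes from higher-order corrections to the linearized spectrum):

```python
import numpy as np

# Check that k_r = 2*t0/(3*t*a0) solves |f_k| = t0/t near the Dirac point K
# to leading order (illustrative sketch; a0 = 1).
t, t0, a0 = 2.57, 0.36, 1.0

def zeta(kx, ky):
    return abs(1.0 + 2.0 * np.exp(1.5j * kx * a0) * np.cos(0.5 * np.sqrt(3.0) * ky * a0))

K = 2.0 * np.pi * np.array([np.sqrt(3.0), 1.0]) / (3.0 * np.sqrt(3.0) * a0)
k_r = 2.0 * t0 / (3.0 * t * a0)
on_circle = zeta(K[0], K[1] + k_r)        # should be close to t0/t
```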
The most important feature of this tight-binding band structure is that at
half-filling the Fermi surfaces of both bands coincide. That is, the
electron and hole components of the Fermi surface are perfectly nested.
This property of the Fermi surfaces is quite stable against changes in the
tight-binding Hamiltonian. It survives even if longer-range hoppings are
taken into account, or a system with two non-equivalent layers is
considered (e.g., similar to the single-side hydrogenated
graphene~\cite{sshg}).
However, the electron interactions can destabilize such a degenerate
spectrum, generating a
gap~\cite{our_preprint}.
\section{Commensurate antiferromagnetic state}\label{CAFM}
The single-electron spectrum described in the previous section
changes qualitatively when interaction is included. Specifically, using
mean field theory, we will demonstrate that the degenerate Fermi surface of
the undoped AA-BLG is unstable with respect to the spontaneous generation
of AFM order.
\subsection{Mean-field equations}\label{CAFMA}
We approximate the electron-electron interaction by the Hubbard-like
interaction Hamiltonian:
\begin{equation}\label{U}
H_{\text{int}}=\frac{U}{2}\sum_{\mathbf{n}ia\sigma}
\left(n_{\mathbf{n}ia\sigma}-\frac{1}{2}\right)\left(n_{\mathbf{n}ia\bar{\sigma}}-\frac{1}{2}\right)\,,
\end{equation}
where
$n_{\mathbf{n}ia\sigma}=d^{\dag}_{\mathbf{n}ia\sigma}d^{\phantom{\dag}}_{\mathbf{n}ia\sigma}$, and $\bar{\sigma}=-\sigma$.
It is known that the on-site Coulomb interaction $U$ in graphene and other
carbon systems is rather strong, but the estimates available in the
literature vary
considerably~\cite{Ut,U69},
ranging from 4--5 to 9--10~eV.
We analyze the properties of the Hamiltonian $H=H_{0}+H_{\text{int}}$ in
the mean-field approximation. We choose the $x$-axis as the spin
quantization axis, and write the order parameters as
\begin{eqnarray}
\Delta_{ia}\equiv U\left\langle
d^{\dag}_{\mathbf{n}ia\uparrow}
d^{\phantom{\dag}}_{\mathbf{n}ia\downarrow}
\right\rangle\,,
\\
\label{GtypeDelta}
\Delta_{1\cal{A}}
=
\Delta_{2\cal{B}}=-\Delta_{1\cal{B}}=-\Delta_{2\cal{A}}\equiv\Delta\,,
\end{eqnarray}
and $\Delta$ is real. Such AFM order, in which the spin at any given site is
antiparallel to the spins at all four nearest-neighbor sites, is referred to
in the literature as G-type AFM. Other types of spin order are either
unstable or metastable.
In the mean-field approximation, the interaction Hamiltonian has the form
\begin{eqnarray}
\label{UMF}
H^{\rm MF}_{\text{int}}
&=&
{\cal N}\left[
\frac{4\Delta^2}{U}-U(n^2-1)
\right]
+
\frac{Ux}{2}
\sum_{\mathbf{n}ia\sigma}n_{\mathbf{n}ia\sigma}
\nonumber\\
&-&\sum_{\mathbf{n}ia}
\Delta_{ia}
\left(
d^{\dag}_{\mathbf{n}ia\uparrow}
d^{\phantom{\dag}}_{\mathbf{n}ia\downarrow}
+
d^{\dag}_{\mathbf{n}ia\downarrow}
d^{\phantom{\dag}}_{\mathbf{n}ia\uparrow}
\right)
\,,
\end{eqnarray}
where $x=n-1$ is the doping level, $n$ is the number of electrons per site,
and ${\cal N}$ is the number of unit cells in the sample (a unit cell of
AA-BLG consists of four carbon atoms, see
Fig.~\ref{AABLG}).
Below, when quoting numerical estimates for doping, we will write $x$ as a
percentage of the total number of carbon atoms in the sample.
We introduce the four-component spinor
\begin{eqnarray}
\psi^{\dag}_{\mathbf{k}\sigma}
=
(
d^{\dag}_{\mathbf{k}1\cal{A}\sigma},
d^{\dag}_{\mathbf{k}2\cal{A}\sigma},
d^{\dag}_{\mathbf{k}1\cal{B}\sigma},
d^{\dag}_{\mathbf{k}2\cal{B}\sigma}
),
\end{eqnarray}
which can be used to build an eight-component spinor
$\Psi^{\dag}_{\mathbf{k}}
=
(
\psi^{\dag}_{\mathbf{k}\uparrow},
\psi^{\dag}_{\mathbf{k}\downarrow}
).$
In terms of this spinor, the mean field Hamiltonian
$H^{\rm MF}=H_{0}+H^{\rm MF}_{\text{int}}$
can be written as
\begin{eqnarray}
\label{HtotM}
H^{\rm MF}
=
{\cal N}E_0
+
\sum_{\mathbf{k}}
\Psi^{\dag}_{\mathbf{k}}
\left(
\begin{matrix}
\hat{H}_{0\mathbf{k}} - \mu' &\hat{\Delta}&\cr
\hat{\Delta}&\hat{H}_{0\mathbf{k}} - \mu' \cr
\end{matrix}\!\!\!
\right)
\Psi^{\phantom{\dag}}_{\mathbf{k}}\,,
\end{eqnarray}
where
\begin{eqnarray}
E_0 = \frac{4\Delta^2}{U} - U(n^2-1),\ \ \mu'=\mu-\frac{Ux}{2}.
\end{eqnarray}
In these equations,
$E_0$
is a $c$-number, $\mu'$ is the renormalized chemical potential, and
$\hat{H}_{0\mathbf{k}}$
and
$\hat{\Delta}$
are the
$4\times4$
matrices
\begin{equation}\label{Hk}
\hat{H}_{0\mathbf{k}}=-\left(
\begin{matrix}
0&t_0&tf_{\bf k}&0\cr
t_0&0&0&tf_{\bf k}\cr
tf_{\bf k}^{*}&0&0&t_0\cr
0&tf_{\bf k}^{*}&t_0&0\cr
\end{matrix}\right)\,,
\end{equation}
\begin{equation}\label{DeltaMatr}
\hat{\Delta}=\left(
\begin{matrix}
-\Delta&0&0&0\cr
0&\Delta&0&0\cr
0&0&\Delta&0\cr
0&0&0&-\Delta\cr
\end{matrix}\right)\,.
\end{equation}
We diagonalize the $8\times8$ matrix in Eq.~\eqref{HtotM} and obtain four
doubly-degenerate bands
\begin{eqnarray}\label{Ek}
\varepsilon^{(1,4)}_{\mathbf{k}}
=
\mp\sqrt{\Delta^2+\left(t\zeta_{\mathbf{k}}+t_0\right)^2}\,,
\nonumber\\
\varepsilon^{(2,3)}_{\mathbf{k}}
=
\mp\sqrt{\Delta^2+\left(t\zeta_{\mathbf{k}}-t_0\right)^2}\,.
\end{eqnarray}
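Equation~\eqref{Ek} can be verified by assembling the $8\times8$ matrix of Eq.~\eqref{HtotM} at a test momentum and diagonalizing it numerically. In the Python sketch below the gap value $\Delta=0.12$~eV is illustrative, and $\mu'$ is set to zero since it merely shifts all eigenvalues:

```python
import numpy as np

# Verify the doubly degenerate mean-field bands of Eq. (Ek) against direct
# diagonalization of the 8x8 matrix (sketch; Delta is an illustrative value,
# mu' = 0 since it only shifts all eigenvalues uniformly).
t, t0, Delta = 2.57, 0.36, 0.12

def f(kx, ky):
    return 1.0 + 2.0 * np.exp(1.5j * kx) * np.cos(0.5 * np.sqrt(3.0) * ky)

kx, ky = 0.9, 0.4
fk = f(kx, ky)
zeta = abs(fk)

H0 = -np.array([[0.0, t0, t * fk, 0.0],
                [t0, 0.0, 0.0, t * fk],
                [t * np.conj(fk), 0.0, 0.0, t0],
                [0.0, t * np.conj(fk), t0, 0.0]])
D = np.diag([-Delta, Delta, Delta, -Delta])   # the matrix \hat{\Delta}
H = np.block([[H0, D], [D, H0]])              # 8x8 mean-field matrix

numeric = np.sort(np.linalg.eigvalsh(H))
e14 = np.sqrt(Delta**2 + (t * zeta + t0)**2)
e23 = np.sqrt(Delta**2 + (t * zeta - t0)**2)
analytic = np.sort([-e14, -e14, -e23, -e23, e23, e23, e14, e14])
```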
To determine the AFM gap $\Delta$ we should minimize the grand potential
$\Omega$. The grand potential per unit cell is
\begin{equation}\label{Omega}
\Omega=E_0-2T\!\sum_{s=1}^{4}\!\int\!\frac{d\mathbf{k}}{V_{\text{BZ}}}\ln\left[1+e^{(\mu'-\varepsilon^{(s)}_{\mathbf{k}})/T}\right]\,,
\end{equation}
where $V_{\text{BZ}}$ is the volume of the first Brillouin zone.
To evaluate integrals over the Brillouin zone it is convenient to introduce
the density of states
\begin{equation}\label{ro}
\rho_0(\zeta)=\!\int\!\frac{d\mathbf{k}}{V_{\text{BZ}}}\delta(\zeta-\zeta_{\mathbf{k}})\,.
\end{equation}
This function is non-zero only for
$0<\zeta<3$.
It is related to the graphene density of states
$\rho_{\text{gr}}(E)$
as
$\rho_{\text{gr}}(E)=\rho_{0}(|E/t|)/t$
(see
Ref.~\onlinecite{castro_neto_review2009}).
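The density of states $\rho_0(\zeta)$ can be tabulated by sampling $\zeta_{\mathbf{k}}$ on a uniform grid covering one period of $f_{\mathbf{k}}$. The Python sketch below (grid size and window width are our own choices) also checks the small-$\zeta$ behavior $\rho_0(\zeta)\approx2\zeta/(\sqrt{3}\pi)$, which follows from the linear graphene DOS and is consistent with the $t_0\ll t$ limit of Eq.~\eqref{xca} below:

```python
import numpy as np

# Grid estimate of rho_0(zeta), Eq. (ro), sampled over one period of f_k
# (grid and window sizes are arbitrary illustrative choices; a0 = 1).
a0 = 1.0
n = 1500
kx = np.linspace(0.0, 4.0 * np.pi / (3.0 * a0), n, endpoint=False)
ky = np.linspace(0.0, 4.0 * np.pi / (np.sqrt(3.0) * a0), n, endpoint=False)
KX, KY = np.meshgrid(kx, ky)
zeta = np.abs(1.0 + 2.0 * np.exp(1.5j * KX * a0) * np.cos(0.5 * np.sqrt(3.0) * KY * a0))

def rho0(z, half_width=0.02):
    # fraction of the zone with zeta_k inside the window, per unit zeta
    inside = np.count_nonzero(np.abs(zeta - z) < half_width)
    return inside / (zeta.size * 2.0 * half_width)

zeta0 = 0.36 / 2.57                            # t0/t from the text
linear = 2.0 * zeta0 / (np.sqrt(3.0) * np.pi)  # small-zeta (Dirac-cone) limit
```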
Minimization of $\Omega$ with respect to $\Delta$ gives the equation
\begin{eqnarray}
1&=&\frac{U}{4t}\!\int\limits_{0}^{3}\!\!d\zeta\,\rho_0(\zeta)\!\!\label{DeltaT}
\left[F\left(\sqrt{\delta^2+(\zeta+\zeta_0)^2}\right)\right.+\nonumber\\
&&\left.F\left(\sqrt{\delta^2+(\zeta-\zeta_0)^2}\right)\right]\,,
\end{eqnarray}
where $\delta=\Delta/t$, $\zeta_0=t_0/t$, and
\begin{equation}
F(\varepsilon)
=
\frac{f(-t\varepsilon - \mu')-f(t\varepsilon - \mu')}{\varepsilon},\;\;
f(E)=\frac{1}{e^{\frac{\scriptstyle E}{\scriptstyle T}}+1}\,.
\end{equation}
Equation~\eqref{DeltaT} determines the gap $\Delta$ as a function of the
renormalized chemical potential $\mu'$. To find $\Delta$ as a function of
doping, we need to relate the doping and the chemical potential. It is easy
to prove that
\begin{eqnarray}
n=1+x=-\frac14\frac{\partial(\Omega - E_0)}{\partial\mu'}.
\end{eqnarray}
Then, using Eqs.~\eqref{Omega} and \eqref{ro} we derive
\begin{eqnarray}
x=\frac12\!\int\limits_{0}^{3}\!\!d\zeta\,\rho_0(\zeta)\!\!\label{xT}
\left[G\left(\sqrt{\delta^2+(\zeta+\zeta_0)^2}\right)\right.+\nonumber\\
\left.G\left(\sqrt{\delta^2+(\zeta-\zeta_0)^2}\right)\right]\,,
\\
{\rm where\ \ }
G(\varepsilon)=f(-t\varepsilon - \mu')+f(t\varepsilon - \mu')-1\,.
\end{eqnarray}
Solving
Eqs.~\eqref{DeltaT}
and~\eqref{xT}
we obtain the AFM gap
$\Delta(x,T)$
and the chemical potential
$\mu(x,T)$.
The solutions of
Eqs.~\eqref{DeltaT}
and~\eqref{xT}
satisfy the following relations:
$\Delta(-x,T)=\Delta(x,T)$
and
$\mu(-x,T)=-\mu(x,T)$.
They are consequences of the particle-hole symmetry of the model
Hamiltonian. The next-nearest-neighbor hopping breaks this symmetry.
However, our analysis shows that the corrections introduced by these terms
do not exceed
1--2\%
for the range of parameters characteristic of graphene systems. Assuming
particle-hole symmetry, below we only consider electron doping,
$x>0$.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigDelta0a.eps}\\
\includegraphics[width=0.95\columnwidth]{FigDelta0b.eps}
\caption{(Color online) (a) The AFM gap ratio
$\Delta/\Delta_0$
versus doping
$x/x_c$
for different values of the on-site Coulomb repulsion $U$: (red) squares
correspond to
$U=5.5$\,eV,
(blue) circles to
$U=7$\,eV,
(green) triangles to
$U=9$\,eV.
The solid (black) curve is
$\Delta(x)/\Delta_0=\sqrt{1-x/x_c}$.
(b) The dependencies
$\Delta(x=0,T=0) \equiv \Delta_0$
and critical doping
$x_c$
versus $U$. The solid curves are numerical solutions of
Eqs.~\eqref{EqDelta2}
and
\eqref{EqMu2},
while the dashed curves are calculated using the approximate analytical
solution,
Eqs.~\eqref{xca}
and~\eqref{DeltaAFM}.
}
\label{Gap}
\end{figure}
\subsection{Zero temperature}
If $T=0$, Eqs.~\eqref{DeltaT} and~\eqref{xT} become
\begin{eqnarray}\label{EqDelta2}
1&=&\frac{U}{4t}\int\limits_0^3\!\!d\zeta\,\rho_0(\zeta)\!\!
\left[
\frac{1-\Theta\left(\displaystyle \mu'/t-
\sqrt{\delta^2+\left(\zeta+\zeta_0\right)^2}\right)}%
{\sqrt{\delta^2+\left(\zeta+\zeta_0\right)^2}}
\right.+\nonumber\\
&&\left.\frac{1-
\Theta\left(
\displaystyle \mu'/t-
\sqrt{\delta^2+\left(\zeta-\zeta_0\right)^2}
\right)}
{\sqrt{\delta^2+\left(\zeta-\zeta_0\right)^2}}\right]\,,
\end{eqnarray}
\begin{eqnarray}
\label{EqMu2}
x&=&\frac12\int\limits_0^3\!\!d\zeta\,\rho_0(\zeta)\!\!%
\left[\Theta\left(\displaystyle \mu'/t-\sqrt{\delta^2+\left(\zeta+\zeta_0\right)^2}\right)\right.+\nonumber\\
&&\left.
\Theta\left(\displaystyle \mu'/t-\sqrt{\delta^2+\left(\zeta-\zeta_0\right)^2}\right)\right]\,,
\end{eqnarray}
where $\Theta(x)$ is the step function.
At half-filling, $n=1$, $x=0$, and $\mu=\mu'=0$. The lower two bands are filled,
the upper two bands are empty, and both $\Theta$-functions in Eq.~\eqref{EqDelta2} are zero for any $\zeta$.
When doping is introduced, analysis of these equations shows that
$\mu'$
jumps abruptly from zero to a value
$\mu'>\Delta$.
The gap
$\Delta(x,T=0)$
decreases monotonically from
$\Delta(x=0,T=0) \equiv \Delta_0$
to $0$, when $x$ increases from $0$ to some critical doping $x_c$. To find
$x_c$ we must put
$\delta = 0$
into
Eqs.~\eqref{EqDelta2}
and~\eqref{EqMu2}
and solve them for
$\mu'$
and
$x = x_c$.
Equations~(\ref{EqDelta2})
and
\eqref{EqMu2}
can be solved analytically if
$\Delta_0\ll t,t_0$.
Using the asymptotic expansions of integrals in these equations for small
$\delta$ we
obtain~\cite{our_preprint2}
\begin{eqnarray}
\label{Delta_vs_x}
\Delta(x,0)&=&\Delta_0\sqrt{1\!-\frac{x}{x_c}}\,,
\\
\label{DelMu}
\mu(x,0)&=&\Delta_0\left[{\rm sgn\,}(x)-\frac{x}{2x_c}\right]+\frac{Ux}{2}\,,
\\
\label{xca}
x_c
&=&
\frac{\Delta_0\rho_0(\zeta_0)}{2t}
\cong
\frac{\Delta_0t_0}{\pi\sqrt{3}t^2}\;\;{\rm \ when\ }t_0\ll t\,.
\end{eqnarray}
In this limit the value of $\Delta_0$ is given by the relation~\cite{our_preprint}
\begin{equation}\label{DeltaAFM}
\Delta_0=2\sqrt{t_0(3t-t_0)}\exp\left\{-\frac{4t-U\eta(\zeta_0)}{2U\rho_0(\zeta_0)}\right\}\,,
\end{equation}
where
\begin{equation}
\eta(\zeta_0)\!=\!\int\limits_0^3\!\!d
\zeta\left[ \frac{\rho_0(\zeta)}{\zeta+\zeta_0}
+ \frac{\rho_0(\zeta)-\rho_0(\zeta_0)}
{\left|\zeta-\zeta_0\right|}\right].\;\;\
\end{equation}
The dependence of the ratio $\Delta(x,0)/\Delta_0$ on $x/x_c$ for different
values of $U$ is shown in Fig.~\ref{Gap}(a). Figure~\ref{Gap}(b) shows
$\Delta_0$ and $x_c$ as functions of $U$ calculated both numerically
[Eqs.~\eqref{EqDelta2} and \eqref{EqMu2}] and analytically
[Eqs.~\eqref{xca} and~\eqref{DeltaAFM}]. Equations~(\ref{DelMu}),
(\ref{xca}) together with Eq.~\eqref{DeltaAFM} for $\Delta_0$ are valid if
$U\lesssim6$\,eV. However,
Eq.~(\ref{Delta_vs_x})
is accurate for any $U$, provided that $x_c$ and $\Delta_0$ are calculated
numerically from
Eqs.~\eqref{EqDelta2}
and~\eqref{EqMu2}
[see
Fig.~\ref{Gap}(a)].
\subsection{Finite temperatures}
In this subsection we will analyze the finite-temperature solutions of the
mean field
equations~(\ref{DeltaT})
and~(\ref{xT}).
However, it is necessary to remember that in 2D systems no long-range order
is possible if
$T>0$.
In such a situation, the mean-field solutions characterize the short-range
order, which survives at sufficiently low $T$. The effects beyond the mean
field approximation will be discussed in
subsection~\ref{xover}.
Solving numerically the mean field
equations~(\ref{DeltaT})
and~(\ref{xT}),
we find $\Delta$ as a function of doping $x$ and temperature $T$ (see
Fig.~\ref{FigDeltaT}).
The temperature
$T_{\rm MF}$
at which $\Delta$ vanishes is the mean field transition temperature (see
inset of
Fig.~\ref{FigDeltaT}).
The transition temperature, considered as a function of doping $x$, is not
single-valued. Instead, it exhibits pronounced re-entrant
behavior.
We discuss this unusual phenomenon in more detail
in Sections \ref{ICAFM}, \ref{PS}, and \ref{Discussion}.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigDeltaT.eps}
\caption{(Color online) The dependence of
$\Delta(x,T)$
on doping $x$ calculated for
$U=5.5$\,eV
and different
$T/\Delta_0$:
(1)~$T/\Delta_0= 0.06$,
(2)~$T/\Delta_0= 0.17$,
(3)~$T/\Delta_0= 0.33$,
(4)~$T/\Delta_0= 0.41$,
(5)~$T/\Delta_0= 0.47$,
(6)~$T/\Delta_0= 0.52$,
(7)~$T/\Delta_0= 0.55$,
and
(8)~$T/\Delta_0= 0.58$.
Inset: The dependence of the mean-field transition temperature on
doping. The re-entrance from PM to AFM state exists in the doping range
$x_c<x<1.231 x_c$; $x_c=0.128$\% and $\Delta_0=0.124$~eV.\label{FigDeltaT}}
\end{figure}
Equations~\eqref{DeltaT}
and
\eqref{xT}
can be simplified in the case of small gap, when
$\Delta_0\ll t_0,t$.
Neglecting terms of the order of
$\Delta_0^2/t^2$
in
Eq.~\eqref{DeltaT}
and taking into account
Eq.~\eqref{DeltaAFM}
for
$\Delta_0$,
we obtain the following equation for $\Delta$
\begin{eqnarray}\label{DeltaTa}
\ln\frac{\Delta_0}{\Delta}
&=&
\frac14\!\!
\int\limits_{\Delta/T}^{\infty}\!\!dz
\arch\left(\frac{zT}{\Delta}\right)%
\\
\nonumber
&\times&
\!\!\left[
{\cosh^{-2}\left(\displaystyle\frac{z-\mu'/T}{2}\right)}
+
{\cosh^{-2}\left(\displaystyle\frac{z+\mu'/T}{2}\right)}
\right].
\end{eqnarray}
In the same limit, we derive from
Eq.~\eqref{xT}
the relation between $\mu'$ and $x$ in the form
\begin{eqnarray}
\label{xTa}
&&\frac{x}{x_c}
=
\frac{T}{2\Delta_0}\!\!\!
\int\limits_{\Delta/T}^{\infty}\!dz
\sqrt{z^2-\frac{\Delta^2}{T^2}}\!%
\\
\nonumber
&&\times
\left[
{\cosh^{-2}\left(\displaystyle\frac{z-\mu'/T}{2}\right)}
-
{\cosh^{-2}\left(\displaystyle\frac{z+\mu'/T}{2}\right)}\right]\,.
\end{eqnarray}
At half-filling ($x=0$) we find from
Eq.~\eqref{DeltaTa}
the BCS-like result
$T_{\rm MF}(0)\cong0.567\Delta_0$.
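The coefficient $0.567$ is the standard BCS weak-coupling ratio $T_c/\Delta_0=e^{\gamma}/\pi$, with $\gamma$ Euler's constant, as a one-line numerical check confirms:

```python
import numpy as np

# The quoted mean-field ratio T_MF(0)/Delta_0 equals the BCS weak-coupling
# value e^gamma / pi (gamma is Euler's constant).
ratio = np.exp(np.euler_gamma) / np.pi
```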
If we normalize $x$ by
$x_c$
and $\Delta$, $\mu'$ and $T$ by
$\Delta_0$,
then,
Eqs.~\eqref{DeltaTa}
and~\eqref{xTa}
do not include any parameter characterizing the AA-BLG band structure.
Thus, if the electron interaction $U$ is not large, the obtained results do
not depend on details specific to the AA-BLG and are valid for other
systems with imperfect nesting
\cite{RiceMod,our_Rice_model,our_pnic}.
\subsection{Crossover temperature}
\label{xover}
In 2D systems, finite-temperature fluctuations destroy the AFM long-range
order. Then, the results obtained above in the mean-field approximation are
valid only if the mean-field correlation length
$\xi=v_{\rm F}/\Delta$
is smaller than the spin-wave correlation length
$\xi_{sw}$
(here
$v_{\rm F}\cong3a_0t/2$
is the Fermi velocity in our model). Otherwise, short-range ordering of
spins disappears, and we cannot define the AFM order even locally.
In the limit
$\xi_{\rm sw} > \xi$,
the spin fluctuations can be described using the nonlinear $\sigma$-model
with
Lagrangian~\cite{chak,Manousakis}
\begin{eqnarray}
\label{sigma_m}
{\cal L}_{\rm sw}
=\frac{\rho}{2}\left[(\partial_t {\bf D})^2 - c_{\rm sw}^2 (\partial_{\bf r} {\bf D})^2\right],
\end{eqnarray}
where ${\bf D}$ is the unit vector along the local AFM magnetization. The
spin-wave stiffness $\rho$ and velocity
$c_{\rm sw}$
can be evaluated from Eqs.~(7.89) and (7.90) of Ref.~\onlinecite{schakel}
\begin{eqnarray}
c_{\rm sw}=\frac{v_{\rm F}}{\sqrt{2}},\quad\rho=
\begin{cases}
t_0/(8\pi v_{\rm F}^2), & \text{if $t_0 \gg \Delta$},
\\
\Delta/(16\pi v_{\rm F}^2), & \text{if $t_0 \ll \Delta$}.
\end{cases}
\end{eqnarray}
The correlation function
$K(\mathbf{r})=\langle\mathbf{D}(\mathbf{r})\mathbf{D}(0)\rangle$
can be obtained using the
Lagrangian~\eqref{sigma_m}.
At large distances it behaves
as~\cite{chak,Manousakis}
\begin{eqnarray}
K(\mathbf{r})
\approx
1-\frac{T}{\pi\rho v_{\rm F}^2}
\ln\left(\frac{e^{\gamma}\sqrt{2}\,rT}{3a_0t}\right)\,,
\end{eqnarray}
where $\gamma$ is Euler's constant. The spin-wave correlation length
$\xi_{\rm sw}$
describing the characteristic size of the short-range AFM
order can be estimated using the equation
$K(\xi_{\rm sw})=0$.
Thus, we have
\begin{equation}
\xi_{\rm sw}
\approx
\frac{a_0t}{T}\exp\left(\frac{2\pi\rho c_{\rm sw}^2}{T}\right)\,.
\end{equation}
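Equating $\xi_{\rm sw}$ and $\xi$ gives a transcendental equation for $T^*$ that is easily solved by bisection. In the regime $t_0\gg\Delta$ one has $2\pi\rho c_{\rm sw}^2=t_0/8$, and a Python sketch reads as follows (the gap value here is an illustrative constant rather than the self-consistent $\Delta(x,T)$, so the resulting $T^*$ is only indicative):

```python
import numpy as np

# Solve xi_sw(T*) = xi by bisection in the regime t0 >> Delta, where
# 2*pi*rho*c_sw^2 = t0/8.  Delta is an illustrative (not self-consistent)
# gap value; energies in eV, lengths in units of a0 = 1.
t, t0, Delta, a0 = 2.57, 0.36, 0.12, 1.0
vF = 1.5 * a0 * t                          # Fermi velocity 3*a0*t/2
xi = vF / Delta                            # mean-field correlation length

def xi_sw(T):
    return (a0 * t / T) * np.exp(t0 / (8.0 * T))

lo, hi = 0.01, 1.0                         # bracketing temperatures in eV
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if xi_sw(mid) > xi:                    # short-range order still present
        lo = mid
    else:
        hi = mid
T_star = 0.5 * (lo + hi)
```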
Solving the equation
$\xi_{\rm sw}=\xi$,
we find the crossover temperature
$T^*$
between the short-range AFM and the PM. The short-range AFM order exists
over distances of about
$\xi_{\rm sw}\gg a_0$,
if
$T<T^*$,
and it is destroyed if
$T>T^*$.
Our numerical analysis shows that
$T^*(x)/T_{\rm MF}(x)\approx0.8$\,-\,$0.9$
for any ratio
$\Delta/t$.
Thus, the mean-field transition temperature gives an appropriate estimate
for the AFM-to-PM crossover temperature.
\section{Incommensurate antiferromagnetic state}\label{ICAFM}
The G-type AFM state considered above has the lowest grand
thermodynamic potential $\Omega$ among all states with commensurate
magnetic order. However, further optimization of $\Omega$ can be achieved
if we allow the local direction of the AFM magnetization to rotate slightly
from site to
site~\cite{RiceMod}.
The translational invariance with the lattice period is then lost. Such
a state is referred to as an incommensurate (or helical) AFM. The complex
order parameter for this state has the form
\begin{equation}\label{Dq}
\Delta_{\mathbf{n}ia}= U\left\langle d^{\dag}_{\mathbf{n}ia\uparrow}d^{\phantom{\dag}}_{\mathbf{n}ia\downarrow}\right\rangle
=e^{i\mathbf{qn}}\Delta_{ia}\,,
\end{equation}
where
$\mathbf{q}$ describes the spatial variation of the AFM magnetization
direction, the position vector
$\mathbf{n}$
specifies the location of a given carbon atom, and
$\Delta_{ia}$
satisfies
Eqs.~\eqref{GtypeDelta}.
The averaged electron spin
$\mathbf{S}_{\mathbf{n}ia}$
at site
$\mathbf{n}$
lies in the $xy$-plane. It is related to the order parameter as
$\langle
d^{\dag}_{\mathbf{n}ia\uparrow}
d^{\phantom{\dag}}_{\mathbf{n}ia\downarrow}
\rangle
=
S^{x}_{\mathbf{n}ia}+iS^{y}_{\mathbf{n}ia}$.
As a result, we obtain
\begin{equation}
\mathbf{S}_{\mathbf{n}ia}
=
\frac{\Delta_{ia}}{U}
\left(
\cos(\mathbf{qn}),\,\sin(\mathbf{qn})
\right)\,.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigDeltaQ.eps}
\caption{(Color online) The dependence of $\Delta$ (red solid curve) and
$|\mathbf{q}|$
(blue dashed curve) on doping $x$ calculated for
$T/\Delta_0=0.06$.
The dot-dashed curve is the gap $\Delta$ calculated for
$\mathbf{q}=0$
and
$U=8$\,eV.
The doping $x$ is normalized by the critical doping
$x_c$,
calculated for the commensurate AFM state. The incommensurate AFM exists in
a slightly larger doping range than the commensurate AFM. Note that the gap
for commensurate AFM remains non-zero even for
$x > x_c$.
This is a manifestation of the re-entrance, see inset of
Fig.~\ref{FigDeltaT}.
\label{FigDeltaQ}}
\end{figure}
The mean-field version of the interaction
Hamiltonian~\eqref{U}
corresponding to the order parameter
$\Delta_{\mathbf{n}ia}$,
Eq.~\eqref{Dq},
can be written in the momentum representation as [cf.
Eq.~\eqref{UMF}]
\begin{eqnarray}
\label{UMFq}
H^{\rm MF}_{\text{int}}&\!\!=\!\!&{\cal N}\!\left[\frac{4\Delta^2}{U}-U(n^2-1)\right]+\frac{Ux}{2}\!\sum_{\mathbf{k}ia\sigma}n_{\mathbf{k}ia\sigma}-\\
&&\!\!\sum_{\mathbf{k}ia}\Delta_{ia}\left(d^{\dag}_{\mathbf{k}+\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}ia\uparrow}
d^{\phantom{\dag}}_{\mathbf{k}-\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}ia\downarrow}+
d^{\dag}_{\mathbf{k}-\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}ia\downarrow}
d^{\phantom{\dag}}_{\mathbf{k}+\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}ia\uparrow}\right)\,.\nonumber
\end{eqnarray}
It is convenient to redefine the spinor
$\Psi$
(see Sec.~\ref{CAFMA}):
\begin{eqnarray}
\Psi^{\dag}_{\mathbf{kq}}
=
(
\psi^{\dag}_{\mathbf{k}+\mathbf{q}/2\uparrow},
\psi^{\dag}_{\mathbf{k}-\mathbf{q}/2\downarrow}
).
\end{eqnarray}
We can rewrite the mean-field Hamiltonian
$H_0+H_{\rm int}^{\rm MF}$
in the form
\begin{equation}\label{HtotMq}
H
=
{\cal N}E_0
+\!\sum_{\mathbf{k}}
\Psi^{\dag}_{\mathbf{kq}}\!
\left( \begin{matrix}
\hat{H}_{
0\mathbf{k}
+
\frac{
\scriptstyle\mathbf{q}
}
{
\scriptstyle2
}
} - \mu'\!\!&\hat{\Delta}&\cr
\hat{\Delta}&\!\!
\hat{H}_{
0\mathbf{k}
-
\frac{
\scriptstyle\mathbf{q}
}
{
\scriptstyle2
}
} - \mu'\cr
\end{matrix}\!\!\!\right)\!\Psi^{\phantom{\dag}}_{\mathbf{kq}},
\end{equation}
where
$\hat{H}_{0\mathbf{k}}$
and
$\hat{\Delta}$
are given by
Eqs.~\eqref{Hk}
and
\eqref{DeltaMatr},
respectively.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigPhDiagU55_Q.eps}\vspace{0.3cm}\\
\includegraphics[width=0.95\columnwidth]{FigPhDiagU65_Q.eps}\vspace{0.3cm}\\
\includegraphics[width=0.95\columnwidth]{FigPhDiagU8_Q.eps}
\caption{(Color online) The phase diagram of the model in the ($x,\,T$)
plane, calculated for the electron doping $x>0$ and
$U=5.5$\,eV (a),
$U=6.5$\,eV (b),
and
$U=8$\,eV (c).
Solid (red) curves are
$T_{\rm MF}(x)$,
(blue) dashed curves are
$T^{q}(x)$,
at which the commensurate-incommensurate transition occurs.
The dotted (red) curves are
$T_{\rm MF}(x)$,
calculated without taking into account the incommensurate AFM state. The
dot-dashed (green) curves show the region of phase separation. For hole
doping
($x<0$)
the results are the same.
\label{FigPhDiagT}}
\end{figure}
The electron spectrum in the incommensurate AFM state is found by
diagonalization of the $8\times8$ matrix in Eq.~\eqref{HtotMq}. It consists
of $8$ non-degenerate bands, $E_{\mathbf{k},\mathbf{q}}^{(s)}$,
$s=1,2,\dots,8$. The analytical expression for
$E_{\mathbf{k},\mathbf{q}}^{(s)}$ can be obtained in the limit
$|\mathbf{q}|\ll1/a_0$,
\begin{equation}
\frac{E_{\mathbf{k},\mathbf{q}}^{(s)}}{t}\!\approx\!\pm\frac{\zeta_{\mathbf{k}+\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}}-
\zeta_{\mathbf{k}-\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}}}{2}\!\pm\!
\sqrt{\frac{\!\Delta^2}{t^2}+\!\!\left[\frac{t_0}{t}\pm\frac{\zeta_{\mathbf{k}+\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}}+
\zeta_{\mathbf{k}-\frac{\scriptstyle\mathbf{q}}{\scriptstyle2}}}{2}\right]^2}\!\!.
\end{equation}
If
$\mathbf{q}=0$,
this spectrum coincides with the spectrum of Eqs.~\eqref{Ek}.
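Both statements, the small-$|\mathbf{q}|$ approximation and its $\mathbf{q}=0$ limit, can be checked by direct diagonalization of the $8\times8$ matrix in Eq.~\eqref{HtotMq}. A Python sketch (with an illustrative value of $\Delta$, $a_0=1$, and a tolerance for the small-$q$ comparison that is our own choice):

```python
import numpy as np

# Diagonalize the 8x8 matrix of Eq. (HtotMq) and compare with (i) the
# commensurate bands of Eq. (Ek) at q = 0, and (ii) the approximate
# analytic spectrum at small |q|.  Delta is illustrative; a0 = 1.
t, t0, Delta = 2.57, 0.36, 0.12

def f(kx, ky):
    return 1.0 + 2.0 * np.exp(1.5j * kx) * np.cos(0.5 * np.sqrt(3.0) * ky)

def H0k(kx, ky):
    fk = f(kx, ky)
    return -np.array([[0.0, t0, t * fk, 0.0],
                      [t0, 0.0, 0.0, t * fk],
                      [t * np.conj(fk), 0.0, 0.0, t0],
                      [0.0, t * np.conj(fk), t0, 0.0]])

D = np.diag([-Delta, Delta, Delta, -Delta])

def exact(kx, ky, qy):
    # full 8x8 matrix with momenta shifted by +-q/2 in the two spin sectors
    H = np.block([[H0k(kx, ky + 0.5 * qy), D], [D, H0k(kx, ky - 0.5 * qy)]])
    return np.sort(np.linalg.eigvalsh(H))

def approx(kx, ky, qy):
    # the small-|q| analytic spectrum: all eight sign combinations
    zp, zm = abs(f(kx, ky + 0.5 * qy)), abs(f(kx, ky - 0.5 * qy))
    vals = [0.5 * s1 * t * (zp - zm)
            + s2 * np.sqrt(Delta**2 + (t0 + 0.5 * s3 * t * (zp + zm))**2)
            for s1 in (1, -1) for s2 in (1, -1) for s3 in (1, -1)]
    return np.sort(vals)
```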
The expression for the grand potential $\Omega$ has a structure similar to
that of Eq.~\eqref{Omega},
but now the summation runs over eight bands:
\begin{equation}\label{OmegaQ}
\Omega=E_0-T\sum_{s=1}^{8}\!\int\!\frac{d\mathbf{k}}{V_{\text{BZ}}}\ln\left[1+e^{(\mu'-E^{(s)}_{\mathbf{k},\mathbf{q}})/T}\right].
\end{equation}
Minimization of $\Omega$ with respect to $\Delta$ and
$\mathbf{q}$,
together with the condition relating $x$ and $\mu'$ gives the closed system
of equations for calculating
$\Delta(x,T)$,
$\mathbf{q}(x,T)$,
and
$\mu(x,T)$:
\begin{equation}
\label{SystemQ}
\frac{\partial\Omega}{\partial\Delta}=0\,,\;\;
\frac{\partial\Omega}{\partial\mathbf{q}}=0\,,\;\;
1+x=-\frac{\partial(\Omega - E_0)}{\partial\mu'}\,.
\end{equation}
We calculate the functions $\Delta(x,T)$, $\mathbf{q}(x,T)$, and $\mu(x,T)$
numerically for different values of $U$. Typical curves $\Delta(x)$ and
$|\mathbf{q}(x)|$ are shown in Fig.~\ref{FigDeltaQ}. For comparison, the
curve $\Delta(x)$ calculated for the commensurate AFM is also plotted. We
see that the incommensurate AFM state exists in a slightly wider doping
range than the commensurate one. The incommensurate phase arises at
arbitrarily small doping if
$T=0$.
At non-zero $T$ the commensurate AFM state is stable until doping exceeds
some $T$-dependent threshold
$x^{q}(T)$.
The curve $T^{q}(x)$ separates the incommensurate and commensurate AFM
states; the more symmetrical AFM state with
$\mathbf{q}=0$
lies above
$T^{q}(x)$.
The phase diagrams of the model in the $x$--$T$ plane are shown in
Fig.~\ref{FigPhDiagT}
for three different values of $U$. The diagrams for small
$U$ ($\lesssim6$\,eV)
and large
$U$ ($\gtrsim6$\,eV)
demonstrate a qualitative difference. Namely, for small $U$
[Fig.~\ref{FigPhDiagT}(a), $U=5.5$\,eV]
the re-entrance, seen in the inset of
Fig.~\ref{FigDeltaT},
disappears. It is masked by the incommensurate AFM phase. For larger $U$,
however, it survives,
Fig.~\ref{FigPhDiagT}(b,c).
Re-entrance is an unusual phenomenon because the ordering appears as the
temperature increases. Whether re-entrance is a genuine feature of the
model or an artifact of the mean-field approximation, whose reliability
deteriorates as $U$ grows, we do not know. Similar behavior was predicted
theoretically for the quarter-filled Hubbard model at moderate interaction
strength~\cite{theory_reentrance},
and numerically for a classical rotor
model~\cite{rotor_reentrance1999}.
\section{Phase separation}\label{PS}
In our discussion above we implicitly assumed that the ground state of the
AA-BLG is spatially homogeneous. However, this is not always true: it was
predicted in
Ref.~\onlinecite{our_preprint2}
that there is a finite doping range where the AA-BLG separates into two
phases with unequal electron densities
$n_{1,2}=1+x_{1,2}$.
Indeed, if
$\Delta_0\ll t,t_0$
we can use
Eq.~\eqref{DelMu}
and obtain that
\begin{eqnarray}
\frac{\partial\mu}{\partial x}<0, {\rm \ \ if\ \ }
\frac{U}{t} < \frac{\pi\sqrt{3}t}{t_0}.
\end{eqnarray}
The negative value of the derivative
$\partial\mu/\partial x$
indicates the instability of the homogeneous state toward phase
separation~\cite{thermodyn}.
If the possibility of the incommensurate AFM is ignored, then a
zero-temperature phase
separation~\cite{our_preprint2}
occurs between the AFM insulator
($x_1=0$)
and the PM
($U\lesssim6$\,eV)
or the AFM
($U\gtrsim6$\,eV)
metal
($x_2>0$).
Here we study phase separation taking into account the incommensurate
AFM phase and non-zero temperature. We numerically analyze the stability
of the homogeneous state using the dependence of the chemical potential
$\mu$ on the doping $x$.
A typical dependence $\mu(x)$ for non-zero temperature is shown in
Fig.~\ref{GapMu}.
The derivative
$\partial\mu/\partial x$
is negative in a certain doping range, and the system separates into
commensurate
($\mathbf{q}=0$, $x_1<x$)
and incommensurate
($\mathbf{q}\neq0$, $x_2>x$)
AFM phases. The doping concentrations
$x_1$
and
$x_2$
are found using the Maxwell
construction~\cite{thermodyn}:
the (black) horizontal line is drawn in such a manner that the areas of the
shaded regions in
Fig.~\ref{GapMu}
are equal to each other. When temperature increases, the doping range
$x_1<x<x_2$,
where the phase separation exists, becomes narrower and disappears at some critical temperature.
Our calculations show that the separated phases are AFM with
$\mathbf{q}=0$
and
$\mathbf{q}\neq0$
for any values of the model parameters. The region of the phase separation
in the
$(x, T)$-phase diagram is shown in Fig.~\ref{FigPhDiagT} by (green)
dot-dashed lines.
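The Maxwell construction itself is easy to automate. The following Python sketch applies it to a toy non-monotonic curve $\mu(x)=x^3-x$ (a stand-in, not the computed AA-BLG curve), for which symmetry fixes the equal-area level at $\mu^*=0$ with boundaries $x_{1,2}=\mp1$:

```python
import numpy as np

# Maxwell construction on a toy non-monotonic mu(x) curve (a stand-in for
# the computed chemical potential; by symmetry mu* = 0, x1 = -1, x2 = +1).
def mu(x):
    return x**3 - x

xs = np.linspace(-2.0, 2.0, 20001)
dx = xs[1] - xs[0]

def equal_area(mu_star):
    # outermost crossings of mu(x) = mu_star bound the coexistence region
    sign = np.sign(mu(xs) - mu_star)
    idx = np.where(np.diff(sign) != 0)[0]
    x1, x2 = xs[idx[0]], xs[idx[-1]]
    mask = (xs >= x1) & (xs <= x2)
    area = np.sum(mu(xs[mask]) - mu_star) * dx   # signed area curve vs. line
    return x1, x2, area

lo, hi = -0.3, 0.3                               # bracket for the level mu*
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if equal_area(mid)[2] > 0.0:                 # line too low: raise it
        lo = mid
    else:
        hi = mid
mu_star = 0.5 * (lo + hi)
x1, x2, _ = equal_area(mu_star)
```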
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{FigMuQ.eps}
\caption{(Color online) Chemical potential $\mu$ of the homogeneous state
versus doping $x$;
$U=5.5$\,eV
and
$T=0.014$\,eV.
The vertical dot-dashed line separates the AFM states with
$\mathbf{q}=0$
and
$\mathbf{q}\neq0$.
In the doping range
$x_1<x<x_2$,
phase separation occurs. The values
$x_{1,2}$
are determined by the Maxwell construction: the horizontal (black) line is
drawn in such a manner that the areas of the shaded regions are equal to
each other.
\label{GapMu}
}
\end{figure}
\section{Discussion}
\label{Discussion}
In this paper we study the evolution of the electron properties of AA-BLG
with doping $x$ and temperature $T$. We calculate the phase diagram of the
system in the
$(x,T)$-plane.
This diagram includes regions of the AFM commensurate and incommensurate
states, a region of phase separation, and the PM state. With good accuracy,
the electronic properties of the AA-BLG are symmetric with respect to the
electron
($x>0$)
or hole
($x<0$)
doping. The maximum crossover temperature between the short-range AFM and
the PM states depends on the on-site Coulomb repulsion $U$. For example,
AFM ordering can exist up to temperatures of about 50~K if
$U=5.5$\,eV
and to temperatures much higher than room temperature if
$U\gtrsim 6.5$\,eV.
At present, there is no consensus on the value of the on-site Coulomb
repulsion in graphene-based materials. However, it is commonly accepted
that $U$ lies in the range
$6<U<10$\,eV
and, consequently, the AFM state can be observed in the AA-BLG.
The critical doping value $x_c$, at which the AFM is replaced by the PM,
also strongly depends on $U$, changing from
$\sim0.1$\,\%
if
$U=5.5$\,eV
to
$\sim10$\,\%
if
$U=8$\,eV.
For graphene systems, doping of about
$10\,\%$
and even higher was
achieved~\cite{CaK}.
Similar to single-layered graphene, the AA-BLG can be doped by using
appropriate
dopants~\cite{CaK,absor},
choosing the substrate and applying a gate
voltage~\cite{Geim,Kim},
or by combinations of these methods.
We predict the existence of phase separation in the AA-BLG. The
separated phases have different electron concentrations, $x_1$ and $x_2$,
and the phase separation will be frustrated by long-range Coulomb
repulsion~\cite{Cul}. In this case the formation of nano-scale
inhomogeneities is more probable. The electron-rich phase (incommensurate
AFM) is metal and the electron-poor phase (commensurate AFM insulator if
$T=0$) is insulator or ``bad'' metal. Thus, the percolative
insulator-metal transition will occur when the doping $x$ exceeds some
threshold
value~\cite{our_preprint2},
which is about
$0.5(x_1+x_2)$
in 2D systems. Phase separation exists in the doping range
$x_1<x<x_2$,
and
$x_2\lesssim1$\,\%
for any value of $U$. Depending on $U$, phase separation could be observed
from 30--40~K up to room temperature and even higher (see
Fig.~\ref{FigPhDiagT}).
This makes AA-BLG promising for applications.
The incommensurate AFM phase is mathematically equivalent to the
Fulde--Ferrell--Larkin--Ovchinnikov state in
superconductors~\cite{fflo,sheehy2007},
which is sensitive to
disorder~\cite{Takada}
and difficult to observe experimentally. Consequently, it is reasonable to
expect that the incommensurate AFM phase can be destroyed by factors our
study did not account for. Our calculations predict that in this case the
region of phase separation changes only slightly in the phase diagram.
However, the separated phases would be the AFM insulator and PM
($U\lesssim6$\,eV)
or AFM metal
($U\gtrsim6$\,eV).
To conclude, we studied the phase diagram of the AA-stacked graphene
bilayers on the doping--temperature plane. It consists of paramagnetic and
antiferromagnetic (both commensurate and incommensurate) homogeneous
phases. In addition, a region of phase separation is also identified.
Magnetic properties of the AA-BLG may survive even at room temperature.
\section*{Acknowledgments}
The work was supported by ARO,
Grant-in-Aid for Scientific Research (S),
MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program,
the Russian Foundation for Basic Research (projects 11-02-00708,
11-02-00741, 12-02-92100-JSPS, and
12-02-00339). A.O.S. acknowledges support from the RFBR project 12-02-31400
and the Dynasty Foundation.
\section{Introduction \label{sec:0}}
Recent advances in the synthesis and experimental investigation of electronic-structure and chemical properties of superheavy elements, as well as the prospects for further progress in this field~\cite{Oganessian:2016:901, Oganessian:2017:023003, Nazarewicz:2018:537, Giuliani:2019:011001, Dullmann:2019:587, Oganessian:2019:5, Eichler:2007:72, Oganessian:2012:162501, Sato:2015:209, Laatiaoui:2016:495, Chhetri:2018:263003, Raeder:2018:232503}, prompt theoretical studies of these complex many-particle systems. The interest is also fueled by the fact that a strong interplay between the correlation, relativistic, and quantum-electrodynamics (QED) effects for a large number of core and valence electrons may manifest itself in qualitatively new properties of superheavy elements compared to their lighter homologues. In this regard, the ability of oganesson (Og, $Z\!=\!118$), despite its noble-gas electronic configuration, to form a negative ion has already become a textbook example~\cite{Eliav:1996:5350, Goidenko:2003:020102_R, Lackenby:2018:042512, Guo:2021:107, Kaygorodov:2021:012819}.
The question of whether the periodic law holds for chemical elements beyond the seventh period~\cite{Fricke:1971:235, Seaborg:1996:3899, Nefedov:2006:149, Pyykko:2011:161, Jerabek:2018:053001, Kaygorodov:2020:036} is one of the fundamental motivations to study the physics and chemistry of superheavy elements. These investigations are unfeasible without the proper treatment of the QED effects. The state-of-the-art QED calculations of the middle- and high-$Z$ systems are performed within the $1/Z$ perturbation theory or its generalization based on a modification of the zeroth-order approximation by including some effective screening potential, see, e.g., Refs.~\cite{Sapirstein:2008:25,Glazov:2011:71,Volotka:2013:636,Shabaev:2018:60,Indelicato:2019:232001} for review. The corresponding sophisticated and laborious \textit{ab initio} methods cannot be directly incorporated into standard approaches based on the Dirac--Coulomb--Breit Hamiltonian~\cite{Grant:1970:747, Desclaux:1975:31, Bratzev:1977:173, Indelicato:1992:2426, Dzuba:1996:3948, Safronova:1999:4476, Tupitsyn:2003:022511, Kozlov:2015:199, Dzuba:2017:012503, Glazov:2017:46, Saue:2020:204104}. For this reason, there is a vital need for a simple and effective approximation for taking into account the QED corrections in the electronic-structure calculations. A number of such approaches has been proposed in the literature~\cite{Indelicato:1990:5139, Pyykko:2003:1469, Draganic:2003:183001, Flambaum:2005:052115, Thierfelder:2010:062503, Pyykko:2012:371, Tupitsyn:2013:682, Ginges:2016:052509}. For the purpose of describing the QED effects on binding and transition energies in relativistic many-electron systems, our group has suggested the model-QED operator in Ref.~\cite{Shabaev:2013:012513}. 
This operator has been successfully applied to the approximate QED calculations in various atomic systems including the superheavy ones~\cite{Tupitsyn:2016:253001, Pasteka:2017:023002, Yerokhin:2017:042505:2017:069901:join_pr, Machado:2018:032517, Si:2018:012504, Muller:2018:033416, Kaygorodov:2019:032505, Zaytsev:2019:052504, Shabaev:2020:052502, Kaygorodov:2021:012819, Savelyev:2022:012806, Kaygorodov:2022:preprint}, see also Refs.~\cite{Yerokhin:2020:042816, Skripnikov:2021:201101}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\columnwidth]{./qed_1order_v2.eps}
\caption{\label{fig:qed_1order}
Lowest-order QED terms: one-photon-exchange~(a), self-energy~(b), and vacuum-polarization (c) diagrams. The double line corresponds to the electron propagator in the local binding potential. The wavy line denotes the photon propagator.}
\end{center}
\end{figure}
The model-QED-operator approach is worked out within the two-time Green's function (TTGF) method~\cite{TTGF} and based on the fact that the QED corrections can be systematically treated by constructing an effective Hamiltonian which acts in an appropriate active space~\cite{Shabaev:1993:4703}. In the case of \textit{ab initio} QED calculations for highly charged ions, unperturbed wave functions for a single level or a set of few quasi-degenerate states generally span the active space, see, e.g., Refs.~\cite{Yerokhin:2001:032109, Artemyev:2005:062104, Malyshev:2021:183001} and references therein. For the needs of the model-QED-operator construction, one should include into the active space all the Slater determinants made up of positive-energy Dirac-equation eigenfunctions with the total (many-electron) energies lying lower than the pair-creation threshold. Since our consideration is restricted to the lowest-order QED terms depicted in Fig.~\ref{fig:qed_1order}, the active space can actually be extended beyond this threshold~\cite{Shabaev:1993:4703}. The model-QED-operator approach formulates the effective Hamiltonian in a form suitable for the atomic-structure calculations. The \texttt{QEDMOD} Fortran package for computing the model-QED operator in the range $3 \leqslant Z \leqslant 120$ was presented in Ref.~\cite{Shabaev:2015:175:2018:69:join_pr}. The main goal of the present paper is to extend the region of supported nuclear charges up to $Z=170$. The nuclear charge $Z\approx 173$ is usually considered as a critical one, at which the $1s$ state of a hypothetical hydrogenlike ion with the extended nucleus reaches the negative-energy continuum~\cite{Pomeranchuk:1945:97, Gershtein:1970:358, Pieper:1969:327, Zeldovich:1972:673}. Therefore, the proposed operator should make it possible to study the QED effects on binding and transition energies in a wide range of the superheavy elements. 
Merging the model-QED operator with various electron-correlation methods allows one to take advantage of the rigorous calculations for hydrogenlike ions in many-electron systems where the \textit{ab initio} treatment is currently rather problematic.
In the next section, we briefly describe the structure of the model-QED operator and outline the modifications made compared to Ref.~\cite{Shabaev:2013:012513}. Then we discuss the evaluation of the self-energy and vacuum-polarization matrix elements within the rigorous QED approach in the range $110 \leqslant Z \leqslant 170$. Finally, the model-QED operator is tested by performing the radiative-correction calculations for superheavy ions with hydrogen- and alkali-metal-like electronic configurations as well as for neutral atoms and comparing the obtained results with the \textit{ab initio} ones.
Relativistic units ($\hbar=1$ and $c=1$) and Heaviside charge unit ($e^2=4\pi\alpha$, where $e<0$ is the electron charge) are used throughout the paper.
\section{Model-QED operator \label{sec:1}}
The Dirac equation represents the natural zeroth-order approximation for middle- and high-$Z$ atomic systems:
\begin{align}
\label{eq:dirac}
h^{\rm D} \psi \equiv
\left[ \bm{\alpha} \cdot \bm{p} + \beta m + V \right] \psi =
\varepsilon \psi \, .
\end{align}
In the case of the potential induced by a point nucleus, $V(r)=-\alpha Z/r$, the eigenvalue of the Dirac Hamiltonian for the principal quantum number~$n$ and relativistic angular quantum number~$\kappa=(-1)^{j+l+1/2}(j+1/2)$ reads as
\begin{align}
\label{eq:en_point}
\varepsilon_{n\kappa} = m \left[ 1 + \left( \frac{\alpha Z}{n_r + \lambda} \right)^2 \right]^{-1/2} \, ,
\end{align}
where $n_r=n-|\kappa|$ and $\lambda = \sqrt{\kappa^2-(\alpha Z)^2}$. The solutions for $|\kappa|=1$ do not formally exist for $Z>137$ due to a singularity at $\alpha Z=1$. In order to deal with the higher values of $Z$, one has to regularize the Hamiltonian~(\ref{eq:dirac}), see, e.g., Ref.~\cite{Gitman:2013:038104}. The most direct way is to employ a more realistic nuclear-potential model, i.e., to take into account the finite size of the nucleus. Then the energy of the $1s$ state will keep decreasing with $Z$ and at $Z=170$ almost reach the onset of the negative-energy continuum. Despite the fact that for high $Z$ the binding energy of low-lying levels exceeds $m$ and, accordingly, $\varepsilon_{n\kappa}<0$, we will refer to these states as the positive-energy ones to distinguish them from the negative-energy continuum. Attributing the electron--nucleus interaction to the initial approximation with subsequent accounting for interactions of electrons with each other and with the quantized electromagnetic field by perturbation theory leads to the Furry picture of QED~\cite{Furry:1951:115}.
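The point-nucleus eigenvalue above is straightforward to evaluate; the standalone snippet below (illustrative only) returns `None` once $\lambda$ becomes imaginary, which for $|\kappa|=1$ happens at $Z>137$:

```python
import math

ALPHA = 1 / 137.035999  # fine-structure constant

def dirac_energy(n, kappa, Z, m=1.0):
    """Point-nucleus Dirac eigenvalue; returns None once
    lambda = sqrt(kappa^2 - (alpha Z)^2) turns imaginary."""
    az = ALPHA * Z
    if kappa * kappa <= az * az:
        return None                       # no solution for |kappa| <= alpha*Z
    lam = math.sqrt(kappa * kappa - az * az)
    n_r = n - abs(kappa)
    return m / math.sqrt(1.0 + (az / (n_r + lam)) ** 2)

# 1s state (n=1, kappa=-1) of H-like uranium
e_1s = dirac_energy(1, -1, 92)
```

For the $1s$ state ($n_r=0$) the formula reduces to $\varepsilon=m\lambda$, which provides a convenient consistency check.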
To the first order in $\alpha$, the QED effects can be described by an effective Hamiltonian acting in the subspace which is spanned by Slater determinants made up of the positive-energy solutions of Eq.~(\ref{eq:dirac}), see Ref.~\cite{Shabaev:1993:4703} for details. The total effective Hamiltonian can be expressed as
\begin{align}
\label{eq:H_eff}
H = \Lambda^{(+)}
\left[
\sum_i \left( h_i^{\rm D} + h_i^{\rm QED} \right)
+
\sum_{i<j} h_{ij}^{\rm int}
\right]
\Lambda^{(+)} \, ,
\end{align}
where the sums run over all the atomic electrons and $\Lambda^{(+)}$ is the product of the one-electron projectors on the positive-energy eigenfunctions of $h^{\rm D}$. The interaction term~$h^{\rm int}$ arises from the one-photon exchange diagram in Fig.~\ref{fig:qed_1order}(a), see Refs.~\cite{Shabaev:1993:4703, TTGF, Shabaev:2013:012513}. We draw attention to the fact that the whole discussed formalism remains true if the potential~$V$ in Eq.~(\ref{eq:dirac}) includes some local screening potential $V_{\rm scr}$. In this case, the operator~$h^{\rm int}$ corresponds to residual electron--electron interaction. The term~$h^{\rm QED}$ represents the one-electron QED operator originating from the self-energy (SE) and vacuum-polarization (VP) diagrams in Figs.~\ref{fig:qed_1order}(b) and \ref{fig:qed_1order}(c), respectively. Within the TTGF method, one can derive the following symmetric expression for it~\cite{Shabaev:1993:4703}
\begin{align}
\label{eq:H_qed}
\!\!\!
h^{\rm QED} &\equiv h^{\rm SE} + h^{\rm VP} =
\!\!\!\! \sum_{i,k}^{\varepsilon_i,\varepsilon_k>-m} \!\!\!\!
| \psi_i \rangle \langle \psi_i |
\nonumber \\
& \times
\left\{
\frac{1}{2} \left[ \Sigma^{\rm SE}_R(\varepsilon_i) + \Sigma^{\rm SE}_R(\varepsilon_k) \right] + V^{\rm VP}_R
\right\}
| \psi_k \rangle \langle \psi_k | \, ,
\end{align}
where $\Sigma^{\rm SE}_R(\varepsilon)$ and $V^{\rm VP}_R$ are the renormalized SE and VP operators, respectively, and the sums over $i$ and $k$ run over all the positive-energy one-electron Dirac states. Below we discuss how the operator~$h^{\rm QED}$ can be adapted for convenient use in the practical relativistic electronic-structure calculations. In this section, we do not address the issue of the \textit{ab initio} calculations of the SE and VP corrections, see the next section for the discussion and relevant references.
Let us start with the SE part of the QED operator~(\ref{eq:H_qed}). We have to solve two issues simultaneously. On the one hand, due to the lack of simple enough algorithms for the \textit{ab initio} calculations of the $\Sigma$-operator matrix elements for arbitrary levels (including the continuum-spectrum states), one has to restrict the summation in Eq.~(\ref{eq:H_qed}) to a finite number of the low-lying one-electron eigenfunctions of~$h^{\rm D}$. On the other hand, it is necessary to ensure the short interaction range for the operator~$h^{\rm SE}$. In Ref.~\cite{Shabaev:2013:012513}, it was suggested to represent $h^{\rm SE}$ by a sum of short-range semilocal and nonlocal potentials. The nonlocal part was defined using the set of functions which are localized at smaller distances than the Dirac--Coulomb ones. The separation of the semilocal part with the support of the order of the Compton wavelength, $\lambdabar =1/m$, was justified by the fact that for low-$Z$ systems the dominant part of the SE correction indeed can be described by the local term~\cite{Welton:1948:1157}. For the range of the nuclear charges studied in the present work, we found, however, that the SE operator is hardly described with a simple local formula. For this reason, we now retain only the nonlocal potential in the model operator. So, the one-electron SE operator is approximated as follows
\begin{align}
\label{eq:se_model}
\tilde h^{\rm SE} = \sum_{j,l}^n
| \phi_j \rangle B_{jl} \langle \phi_l | \, ,
\end{align}
where, as in Ref.~\cite{Shabaev:2013:012513}, we chose the functions $\{\phi_i\}_{i=1}^n$ to be
\begin{align}
\label{eq:proj}
\phi_i(\bm{r}) = \frac{1}{2} \left[ \, I - (-1)^{s_i} \beta \, \right] \rho_{l_i}(r) \psi_i(\bm{r}) \, .
\end{align}
Here the index $s_i=n_i-l_i$ (with $n_i$ being the principal quantum number and $l_i=|\kappa_i+1/2|-1/2$ being the orbital angular momentum) enumerates the positive-energy states for the given angular symmetry, $I$ and $\beta$ are the identity and the standard Dirac matrices, respectively, and the factors $\rho_{l_i}(r) = \exp \left[ -2\alpha Z(r/\lambdabar)/(1+l_i) \right]$ serve to provide the stronger localization of the functions $\{\phi_i\}_{i=1}^n$ as compared to the Dirac--Coulomb ones $\{\psi_i\}_{i=1}^n$ (in practical calculations, we found that the replacement $1+l_i \rightarrow |\kappa_i|$ in $\rho_{l_i}$, affecting only the positive values of $\kappa_i$, may additionally improve the performance of the model-QED-operator approach for $Z \gtrsim 160$). Finally, the coefficients $B_{jl}$ in Eq.~(\ref{eq:se_model}) are determined from the condition that the model operator~$\tilde h^{\rm SE}$ has to reproduce exactly the matrix elements of the operator~$h^{\rm SE}$ in the space spanned by the functions $\{\psi_i\}_{i=1}^n$, that is
\begin{align}
\label{eq:condition}
\sum_{j,l}^n
\langle \psi_i | \phi_j \rangle B_{jl} \langle \phi_l | \psi_k \rangle
=
\frac{1}{2}
\langle \psi_i |
\left[ \Sigma^{\rm SE}_R(\varepsilon_i) + \Sigma^{\rm SE}_R(\varepsilon_k) \right]
| \psi_k \rangle
\end{align}
for $i,k=1\ldots n$, see Ref.~\cite{Shabaev:2013:012513} for details. We stress that the SE operator conserves the angular quantum numbers, and the matrix $B_{jl}$ has, accordingly, a block diagonal structure.
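Within each $\kappa$ block, the determining condition above is a finite linear system $S B S^{T} = D$, where $S_{ij}=\langle\psi_i|\phi_j\rangle$ and $D_{ik}$ is the symmetrized SE matrix. A minimal sketch (the $2\times2$ numbers are invented for the consistency check, and real radial matrix elements are assumed):

```python
import numpy as np

def model_se_coefficients(S, D):
    """Solve S B S^T = D for the coefficient matrix B of the model
    SE operator (one kappa block, real matrix elements assumed)."""
    S_inv = np.linalg.inv(S)
    return S_inv @ D @ S_inv.T

# toy 2x2 block with made-up overlaps and SE matrix elements
S = np.array([[0.9, 0.2],
              [0.1, 0.8]])
D = np.array([[1.0, 0.3],
              [0.3, 0.5]])
B = model_se_coefficients(S, D)
```

By construction, $S B S^{T}$ reproduces $D$ exactly, i.e., the model operator returns the \textit{ab initio} matrix elements within the reference subspace.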
The one-electron VP operator is equal to the sum of two local potentials, the Uehling and Wichmann--Kroll (WK) ones, $V^{\rm VP}_R = V_{\rm Ue} + V_{\rm WK}$. The dominant Uehling contribution is given by the expression
\begin{align}
\label{eq:Ue}
V_{\rm Ue}(r) = - &\frac{2\alpha^2 Z}{3mr} \,
\int_0^\infty \! dr' \, r' \rho(r') \nonumber \\
&\times \left[ K_0(2m|r-r'|) - K_0(2m|r+r'|) \right] \, ,
\end{align}
where
\begin{align}
\label{eq:K0}
K_0(x) = \int_1^\infty \! dt \,
e^{-xt} \left( \frac{1}{t^3} + \frac{2}{t^5} \right) \sqrt{t^2-1} \, ,
\end{align}
and the inducing-charge density~$\rho$ is normalized in accordance with $\int\!d{\bm{r}}\,\rho(r)=1$. The potential~$V_{\rm Ue}$ can be easily calculated either directly or using the approximate formulas from Ref.~\cite{Fullerton:1976:1283}. The evaluation of the WK part of the VP operator represents a much more difficult problem, see the discussion below. In Ref.~\cite{Shabaev:2013:012513}, this issue was solved to a sufficient level of accuracy by employing the approximate expressions obtained in Ref.~\cite{Fainshtein:1991:559}. These expressions were derived for the point-nucleus case, which makes them unsuitable for superheavy elements. Therefore, in the present work we have performed the \textit{ab initio} calculations of the potential~$V_{\rm WK}$ for extended nuclei. For the sake of convenience, we approximate this contribution in a way similar to that applied for the operator~$h^{\rm SE}$. Namely, to represent the WK part of~$h^{\rm VP}$, the operator $\tilde{h}^{\rm WK}$ analogous to Eq.~(\ref{eq:se_model}) is introduced with the matrix~$B_{jl}$ determined by Eq.~(\ref{eq:condition}), where the evaluated WK potential stands in place of the SE operator, $\Sigma^{\rm SE}_R \rightarrow V_{\rm WK}$. The Uehling term is treated as in Ref.~\cite{Shabaev:2013:012513} without any changes. In principle, the SE operator and the WK part of the VP potential can be considered together during the procedure of the model-QED-operator construction. Then the nonlocal potential~$\tilde{h}^{\rm SE+WK}$ with the appropriately defined coefficients $B_{jl}$ will simultaneously approximate the SE and WK contributions.
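As a sketch of the direct evaluation mentioned above, the kernel $K_0(x)$ can be computed after the substitution $t=1+v^2$, which removes the $\sqrt{t-1}$ behavior at the lower limit (standalone illustration; the accuracy is set only by the grid parameters):

```python
import numpy as np

def K0(x, npts=40000):
    """Numerical evaluation of the Uehling kernel K0(x) after the
    substitution t = 1 + v^2 (Jacobian dt = 2 v dv)."""
    # integrand decays like exp(-x v^2); cut the grid accordingly
    v_max = np.sqrt(45.0 / x) if x < 45.0 else 1.0
    v = np.linspace(0.0, v_max, npts)
    t = 1.0 + v**2
    f = np.exp(-x * t) * (t**-3 + 2.0 * t**-5) * np.sqrt(t**2 - 1.0) * 2.0 * v
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))
```

In the limit $x\to0$ the integral tends to $3\pi/8$, which provides a simple check of the quadrature.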
Concluding the description of the model-QED operator, it should be noted that in what follows, as in Ref.~\cite{Shabaev:2013:012513}, we construct the model operator employing the functions~(\ref{eq:proj}) which correspond to the $ns$ states with the principal quantum number $n \leqslant 3$ and the $np_{1/2}$, $np_{3/2}$, $nd_{3/2}$, and $nd_{5/2}$ states with $n \leqslant 4$.
\section{Rigorous QED evaluation of the SE and VP matrix elements \label{sec:2}}
The \textit{ab initio} calculations of the SE and VP matrix elements are performed for extended nuclei employing the standard two-parameter Fermi model for the nuclear-charge distribution. The size of the nuclei is determined using the approximate formulas given in Ref.~\cite{Pieper:1969:327}. First, we relate the atomic mass number $A$ with the charge $Z$ via
\begin{align}
\label{eq:A}
A = 0.00733 \, Z^2 + 1.30 \, Z + 63.6 \, ,
\end{align}
rounding it up to the nearest integer. Second, we evaluate the root-mean-square radius~$R$ in fm according to
\begin{align}
\label{eq:R}
R=\sqrt{\frac{3}{5}} \, R_{\rm sphere} \, ,
\end{align}
where $R_{\rm sphere} = 1.2\, A^{1/3}$. The uncertainties of the presented below results do not include errors associated with the choice of $R$ and the nuclear model, but are determined only by studying numerical aspects.
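The two formulas above combine into a short helper (illustrative sketch; "rounding up" is read literally as the ceiling):

```python
import math

def rms_radius_fm(Z):
    """Root-mean-square nuclear radius in fm: A(Z) from the empirical
    fit, rounded up, then R = sqrt(3/5) * 1.2 * A**(1/3)."""
    A = math.ceil(0.00733 * Z**2 + 1.30 * Z + 63.6)
    return math.sqrt(3.0 / 5.0) * 1.2 * A ** (1.0 / 3.0)
```

For $Z=120$ this gives $A=326$ and $R\approx6.40$~fm.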
\begin{figure}
\begin{center}
\includegraphics[width=0.94\columnwidth]{./se_calc_scheme_with_se2_2line.eps}
\caption{\label{fig:se}
Decomposition of the self-energy diagram. The single line represents the free-electron propagator. The double line with a cross on it corresponds to the mass counterterm. The line ended with a small cross denotes the interaction with the binding potential~$V$.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./vp_calc_scheme_v3.eps}
\caption{\label{fig:vp}
Decomposition of the vacuum-polarization diagram.}
\end{center}
\end{figure}
The one-loop SE operator reads as follows, see, e.g., Ref.~\cite{Mohr:1998:227},
\begin{align}
\label{eq:Sigma}
\Sigma_R^{\rm SE}(\varepsilon,\bm{r}_1,\bm{r}_2) &=
2 i \alpha \int_{-\infty}^\infty \!\! d\omega \,
D^{\mu\nu}(\omega,\bm{r}_{12}) \nonumber \\
&\times
\alpha_\mu G(\varepsilon-\omega,\bm{r}_1,\bm{r}_2)\alpha_\nu - \beta \delta m \, ,
\end{align}
where $\delta m$ is the mass counterterm, $D^{\mu\nu}$ is the photon propagator, $G$ is the Dirac--Coulomb Green's function (compared to Ref.~\cite{Mohr:1998:227}, we define it with the opposite sign, $G(\omega,\bm{r}_1,\bm{r}_2)=(\omega-h^{\rm D})^{-1}$), $\alpha^\mu = \gamma^0 \gamma^\mu$ and $\beta = \gamma^0$ are the Dirac matrices, and it is implicitly assumed that the integration contour extending from $-\infty$ to $\infty$ provides the proper bypass of all the singularities in the complex $\omega$-plane. The nonperturbative (in $\alpha Z$) calculations of the SE contribution have been extensively discussed in the literature, for review see Refs.~\cite{Mohr:1974:26:1974:52:join_pr, Snyderman:1991:43, Blundell:1991:R1427, Mohr:1993:158, Jentschura:1999:53, Cheng:1993:1817, Yerokhin:1999:800} and references therein. The expression~(\ref{eq:Sigma}) suffers from the ultraviolet divergences. To isolate them and calculate the diagonal and off-diagonal matrix elements of the resulting renormalized operator $\Sigma^{\rm SE}_R$ with the Dirac--Coulomb wave functions, we employ the method worked out in Ref.~\cite{Yerokhin:1999:800} with some modifications proposed in Refs.~\cite{Artemyev:2007:173004, Artemyev:2013:032518}. The method is based on the expansion of the bound-electron propagator in powers of the interaction with the binding potential~$V$. The divergences are contained only in the so-called zero- and one-potential contributions shown in brackets in the first line of Fig.~\ref{fig:se}. These two terms are renormalized together with the mass counterterm and evaluated in the momentum representation. In principle, the remaining part of the SE operator is finite and can be calculated in the coordinate space. However, aiming to improve the partial-wave-expansion convergence, we additionally separate the slowly convergent term with two interactions. 
The corresponding two-potential contribution is evaluated using the analytical expression for the free-electron propagator~\cite{Mohr:1974:26:1974:52:join_pr}. The higher-order remainder shown in brackets in the second line of Fig.~\ref{fig:se} is calculated as the pointwise difference of the two similar expressions in the $\omega$-integration by employing the finite-basis-set representation for the free- and bound-electron Green's functions. We will refer to this term as the many-potential one. We note that in Ref.~\cite{Yerokhin:1999:800} such a designation was applied to the contribution with two and higher interactions. The finite basis sets are constructed from B splines~\cite{Johnson:1988:307, Sapirstein:1996:5213} in the framework of the dual-kinetic-balance approach~\cite{splines:DKB}. In the calculations, the maximal value of $|\kappa|$ reaches 45 and 20 for the two- and many-potential terms, respectively. The remainders of the $|\kappa|$-series are estimated using a polynomial fitting in $1/|\kappa|$.
The vacuum-polarization potential can be written as follows, see, e.g., Ref.~\cite{Mohr:1998:227},
\begin{align}
\label{eq:V_vp}
V^{\rm VP} (\bm{r}) = \frac{\alpha}{2\pi i}
\int \! d\bm{r}'
\int_{-\infty}^\infty \!\! d\omega \,
\frac{{\rm Tr} \, G(\omega,\bm{r}',\bm{r}')}{|\bm{r}-\bm{r}'|} \, .
\end{align}
The formal expression~(\ref{eq:V_vp}) is ultraviolet divergent, and it demands charge renormalization. To do so, one has to expand the Dirac--Coulomb Green's function~$G$ in terms of the free-electron propagators. According to Furry's theorem~\cite{Furry:1937:125}, only the fermion loops with an even number of vertices contribute, whereas the diagrams with an odd number of interactions vanish owing to a charge-conjugation argument, see the first line of Fig.~\ref{fig:vp}. Thereby, the first nonzero contribution arises from the diagram linear with respect to the external field~$V$, being of order $\alpha(\alpha Z)$. Applying the charge-renormalization procedure to this term, one arrives at the Uehling potential given by Eq.~(\ref{eq:Ue}), see Refs.~\cite{Uehling:1935:55, Serber:1935:49}. The remaining part of the vacuum-polarization potential, which includes the terms of order $\alpha (\alpha Z)^3$ and higher, corresponds to the WK contribution. The all-order (in $\alpha Z$) calculations of this potential have a long history, for review see, e.g., Refs.~\cite{Wichmann:1956:843, Brown:1975:581:1975:596:1975:609:join_pr, Gyulassy:1975:497, Soff:1988:5066, Manakov:1989:673, Persson:1993:2772, Sapirstein:2003:042111} and references therein. The WK potential is finite. However, special care has to be taken when dealing with the light-by-light scattering contribution represented by the second term of the expansion in Fig.~\ref{fig:vp}, since it may contain a spurious gauge-noninvariant piece. Without going into details, we just point out that a single subtraction of the Uehling contribution from the total vacuum-polarization potential leads to a correct result for the WK potential provided the calculations are arranged so that the terms with positive and negative values of $\kappa$ in the electron propagator are treated together~\cite{Gyulassy:1975:497}, see also Ref.~\cite{Soff:1988:5066}.
This prescription for the evaluation of the WK potential is shown schematically in brackets in the second line of Fig.~\ref{fig:vp}, where we additionally removed from the Dirac--Coulomb Green's function the free-electron-propagator contribution, which vanishes due to Furry's theorem. It is convenient to rotate the integration contour in the complex $\omega$-plane to the imaginary axis and change the variable of integration to $\eta$, where $\omega=i\eta$. During this rotation, the bound-state poles of the Green's function~$G$ located on the negative real $\omega$-axis are picked up as residues. Finally, taking into account that the induced vacuum-polarization charge must be zero, we can express the WK potential as follows
\begin{align}
\label{eq:WK}
V_{\rm WK}(r) &= \frac{2\alpha}{\pi} \,
\int_r^\infty \! dr' \, r' \left( 1 - \frac{r'}{r} \right) \nonumber \\
&\times
\left[\,
\sum_\kappa |\kappa| \, {\rm Re} \int_0^\infty \!\! d\eta \,\,
{\rm Tr} \, G_\kappa^{(2+)}(i\eta,r',r') \right. \nonumber \\
&\quad\left.
- \,
\pi \! \sum^{-m<\varepsilon_{n\kappa}<0}_{n\kappa} \! |\kappa| \left( g_{n\kappa}^2(r') + f_{n\kappa}^2(r') \right)
\,\right] \, ,
\end{align}
where
\begin{align}
\label{eq:G2plus}
G_\kappa^{(2+)}(\omega,x,y) =
&\int_0^\infty \! dz \, z^2 G^{(0)}_\kappa(\omega,x,z) V(z) \nonumber \\
&\times
\left[ G_\kappa(\omega,z,y) - G^{(0)}_\kappa(\omega,z,y) \right]
\end{align}
with $G^{(0)}_\kappa$ and $G_\kappa$ being the radial parts of the free- and bound-electron Green's functions~\cite{Mohr:1998:227}, respectively. We find them numerically by solving a corresponding system of differential equations. In Eq.~(\ref{eq:WK}), $g_{n\kappa}$ and $f_{n\kappa}$ denote the large and small radial components of the Dirac wave function, normalized according to
\begin{align}
\label{eq:wf_norm}
\int_0^\infty \! dr \, r^2 \left( g_{n\kappa}^2(r) + f_{n\kappa}^2(r) \right) = 1 \, .
\end{align}
We terminate the summation over~$\kappa$ in Eq.~(\ref{eq:WK}) at $|\kappa|=9$--11 and study the convergence of the matrix elements $\langle \psi_i | V_{\rm WK} | \psi_k \rangle$, which are employed thereafter to determine the model-QED operator. In principle, to represent the evaluated WK potential within the model-QED-operator approach, besides the method suggested in the previous section, there is also an option to store it point-by-point on an appropriate integration grid.
\input{table_SE_s.tex}
\input{table_SE_p1.tex}
\input{table_SE_p3.tex}
\input{table_SE_d3.tex}
\input{table_SE_d5.tex}
\input{table_SE_170.tex}
The results of the \textit{ab initio} calculations of the first-order one-electron QED contributions are conveniently expressed in terms of the function~$F_{n_in_k}(\alpha Z)$ defined by
\begin{align}
\label{eq:F}
\langle \psi_i | h^{\rm QED} | \psi_k \rangle =
\frac{\alpha}{\pi} \frac{(\alpha Z)^4}{(n_in_k)^{3/2}} \, F_{n_in_k}(\alpha Z) \, mc^2 \, ,
\end{align}
where $n_i$ and $n_k$ are the principal quantum numbers of the $i$ and $k$ states, respectively. Our results for the SE matrix elements obtained for the $ns$, $np_{1/2}$, $np_{3/2}$, $nd_{3/2}$, and $nd_{5/2}$ states with $n$ up to 5 are given in Tables~\ref{tab:se_s}, \ref{tab:se_p1}, \ref{tab:se_p3}, \ref{tab:se_d3}, and \ref{tab:se_d5}, respectively. As noted above, all the values are calculated for the extended nuclei. The uncertainties are obtained by studying the convergence with respect to the partial-wave expansion in the two- and many-potential terms as well as the dependence of the many-potential terms on the size of the finite basis set employed. If no error is specified, the value is assumed to be accurate to all digits quoted. For $Z=170$, the breakdown of the SE correction into the zero-, \mbox{one-,} two-, and many-potential terms is presented in Table~\ref{tab:se_170} for the lowest-energy levels with different values of $\kappa$. Our result (in terms of the function $F$) for the $1s$ state, $3.8831$, is in reasonable agreement with the value of $3.909$ given in Ref.~\cite{Soff:1982:1465}, where the nuclear size was adjusted in such a way that the $K$-electron energy differs only by 1~meV from the borderline of the negative-energy continuum and the homogeneously-charged sphere was assumed to describe the nuclear-charge distribution. In other cases in Ref.~\cite{Soff:1982:1465}, the atomic mass number was chosen to be $A=2.5Z$ that led to $F$ equal to $1.972$, $2.913$, and $3.517$ for $Z=130$, $Z=150$, and $Z=160$, respectively. These results are in agreement with our values of $1.9832$, $2.8941$, and $3.4565$ as well. Finally, we note a drastic growth of the SE contribution for the $np_{1/2}$ states in the high-$Z$ region, see Table~\ref{tab:se_p1}. For $Z=170$, the absolute value of the SE correction for the $2p_{1/2}$ state almost reaches in magnitude the corresponding value for the $1s$ state. 
Apparently, this trend is explained by the behavior of the small components of the wave functions for the $np_{1/2}$ states, which penetrate the region $r<\lambdabar$ for very large~$Z$.
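For quick cross-checks against the tables, the scaling above is easy to invert; the helper below is a standalone sketch with the electron rest energy hard-coded in eV:

```python
import math

ALPHA = 1 / 137.035999   # fine-structure constant
MC2_EV = 510998.95       # electron rest energy, eV

def shift_from_F(F, Z, n_i, n_k):
    """QED matrix element (in eV) from the tabulated dimensionless F."""
    az = ALPHA * Z
    return (ALPHA / math.pi) * az**4 / (n_i * n_k) ** 1.5 * F * MC2_EV

def F_from_shift(delta_e_ev, Z, n_i, n_k):
    """Inverse mapping: dimensionless F from a matrix element in eV."""
    az = ALPHA * Z
    return delta_e_ev * math.pi * (n_i * n_k) ** 1.5 / (ALPHA * az**4 * MC2_EV)
```

With $F=3.8831$ for the $1s$ state at $Z=170$, the corresponding SE matrix element comes out on the 10~keV scale.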
\input{table_WK_s.tex}
\input{table_WK_p1.tex}
\input{table_WK_p3.tex}
\input{table_WK_d3.tex}
\input{table_WK_d5.tex}
The results for the matrix elements of the WK potential evaluated with the Dirac--Coulomb wave functions for the $ns$, $np_{1/2}$, $np_{3/2}$, $nd_{3/2}$, and $nd_{5/2}$ states with $n$ up to 5 are presented in Tables~\ref{tab:wk_s}, \ref{tab:wk_p1}, \ref{tab:wk_p3}, \ref{tab:wk_d3}, and \ref{tab:wk_d5}, respectively. For the $d$ states, the WK correction is small. Therefore, for a better representation of this contribution as a function of $Z$, in Tables~\ref{tab:wk_d3} and \ref{tab:wk_d5} we give an additional significant digit, compared to the other similar tables. In Ref.~\cite{Soff:1988:5066}, the extended nucleus was modeled by a homogeneously-charged spherical shell. For $Z=170$, the radius of the nucleus was assumed to be $R_{\rm shell} = 7.1$~fm and the WK contributions (in terms of the function $F$) for the $1s$, $2s$, and $2p_{1/2}$ states were found to be $0.519$, $0.766$, and $3.76$, respectively. These values are in reasonable agreement with our results given in Tables~\ref{tab:wk_s} and \ref{tab:wk_p1}. The qualitative agreement is found for the partial-wave contributions to the vacuum-polarization charge density as well.
The results given in Tables~\ref{tab:se_s}--\ref{tab:se_d5} and \ref{tab:wk_s}--\ref{tab:wk_d5} are employed to represent the SE and WK contributions by means of the nonlocal model-QED operator in the range $110 \leqslant Z \leqslant 170$ according to the prescriptions formulated in Sec.~\ref{sec:1}. To obtain the function~$F_{n_in_k}(\alpha Z)$ for values of $Z$ not listed in the tables, a polynomial interpolation can be used
\begin{align}
\label{eq:interpol}
F_{n_in_k}(\alpha Z) =
\sum_{n=1}^N F_{n_in_k}(\alpha Z_n)
\prod_{m\neq n} \frac{Z-Z_m}{Z_n-Z_m} \, .
\end{align}
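As an illustrative sketch (not the authors' code), Eq.~(\ref{eq:interpol}) is the standard Lagrange interpolating polynomial and can be implemented directly; the node values below are placeholders rather than the tabulated $F_{n_in_k}(\alpha Z)$ data.

```python
# Hedged sketch of the polynomial interpolation of Eq. (eq:interpol),
# i.e. the Lagrange form.  The node values below are illustrative
# placeholders, not the tabulated F_{n_i n_k}(alpha Z) data.

def lagrange_interpolate(z_nodes, f_values, z):
    """Evaluate the Lagrange interpolating polynomial at z."""
    total = 0.0
    for n, (zn, fn) in enumerate(zip(z_nodes, f_values)):
        weight = 1.0
        for m, zm in enumerate(z_nodes):
            if m != n:
                weight *= (z - zm) / (zn - zm)
        total += fn * weight
    return total

# Nodes at Z = 110, 120, 130, 140 with made-up F values:
z_nodes = [110.0, 120.0, 130.0, 140.0]
f_values = [1.50, 1.70, 1.98, 2.35]

# The polynomial reproduces the tabulated nodes exactly and gives a
# smooth estimate for intermediate Z, e.g. Z = 125.
f_125 = lagrange_interpolate(z_nodes, f_values, 125.0)
```

By construction, the interpolant passes through every tabulated node, so the model operator remains exact at the listed values of $Z$.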
In contrast to Ref.~\cite{Shabaev:2013:012513}, we do not follow the recipe from Ref.~\cite{Mohr:1983:453}, which for the $s$ states implies the interpolation of the function $F_{n_in_k}(\alpha Z)$ with subtraction of the term describing the small-$\alpha Z$ behavior. For the range of $Z$ under consideration, this subtraction is not justified. In the overlapping region $110 \leqslant Z \leqslant 120$, both implementations of the model-QED-operator approach, the original~\cite{Shabaev:2013:012513} and the current one, are applicable and can be compared. We note, however, that in Ref.~\cite{Shabaev:2013:012513} the finite-nuclear-size corrections to the SE contributions were evaluated for slightly different values of nuclear radii. For this reason, in what follows, when comparing these two versions of the model-QED operator, we always use the matrix elements of the SE operator obtained in the present work. Therefore, the corresponding comparison boils down to the analysis of the operators with and without the local potential isolated.
\section{Test of the model-QED operator \label{sec:3}}
The model-QED operator is constructed employing the SE and WK matrix elements evaluated with the Dirac--Coulomb wave functions for the $ns$ states with $n \leqslant 3$ and the $np$ and $nd$ states with $n \leqslant 4$. Thus, by construction, the radiative corrections for these states are reproduced exactly by the operator. The natural test is therefore to probe the ``predictive'' power of the developed operator by calculating the QED corrections for states with higher values of the principal quantum number. In Tables~\ref{tab:se_s} and \ref{tab:wk_s}, the matrix elements for the $4s$ state are given. We have also performed the \textit{ab initio} calculations of the diagonal matrix elements for the $n=5$ states for a number of $Z$. In Table~\ref{tab:se_prediction}, we present the SE contributions for the $4s$, $5s$, $5p_{1/2}$, $5p_{3/2}$, $5d_{3/2}$, and $5d_{5/2}$ states in hydrogenlike ions. The rows labeled ``Exact'' correspond to the results of the \textit{ab initio} calculations taken from Tables~\ref{tab:se_s}--\ref{tab:se_d5}, whereas the values in the lines ``Mod. op.'' and ``Ref.~\cite{Shabaev:2013:012513}'' are obtained by averaging the current and original versions of the model-QED operator~$\tilde{h}^{\rm SE}$ with the Dirac--Coulomb wave functions, respectively. For the $s$ states, the deviation of the model-QED-operator predictions from the exact values does not exceed 1\%. For $5d_{3/2}$, the situation is slightly worse than for the other states. This is explained mainly by the smallness of the SE correction, since for all the $nd_{3/2}$ states the corresponding functions~$F_{n_in_k}$ change sign at $Z\approx 110$. In general, the model-QED operator leads to values for the SE contributions which are very close to the \textit{ab initio} ones.
In Table~\ref{tab:wk_prediction}, we give a similar comparison for the WK contribution. The notations are the same as in Table~\ref{tab:se_prediction}. Again, for demonstration purposes we show extra digits for the function~$F$ in the cases of the $5p_{3/2}$ and $5d$ states because of the smallness of these contributions. From Table~\ref{tab:wk_prediction} it is seen that the proposed nonlocal potential reproduces the WK contribution to good accuracy.
In Table~\ref{tab:se_alkali}, we demonstrate the capacity of the model-QED-operator approach to the radiative-correction calculations in many-electron ions of superheavy elements possessing alkali-metal-like configurations, namely, ${\rm [Ne]}3s$, ${\rm [Ne]}3s^2 3p^6 4s$, and ${\rm [Ne]}3s^2 3p^6 3d^{10} 4s^2 4p^6 5s$. First, following Refs.~\cite{Labzowsky:1999:2707, Sapirstein:2002:042501}, we have evaluated the one-loop SE contributions for the valence $ns$ electrons within the rigorous QED approach including a local screening potential into the initial approximation, see the related discussion after Eq.~(\ref{eq:H_eff}) and, e.g., Refs.~\cite{TTGF, Sapirstein:2008:25} for review. We have chosen the Kohn--Sham potential~\cite{pot:KS} with the Latter correction~\cite{Latter:1955:510} introduced for restoring the proper asymptotic behavior to model the interelectronic-interaction effects in the zeroth-order Hamiltonian. The results of these \textit{ab initio} calculations are given in rows labeled ``Exact''. Then the model-QED-operator predictions have been obtained by averaging the operator~$\tilde{h}^{\rm SE}$ with the valence-electron wave functions determined from the Dirac equation~(\ref{eq:dirac}), in which the nuclear potential is replaced with the Kohn--Sham one. As in Table~\ref{tab:se_prediction}, for $Z=110$ and $Z=120$ we present also the results evaluated by means of the original version of the operator~\cite{Shabaev:2013:012513}. For completeness, the \textit{ab initio} values from Table~\ref{tab:se_s} corresponding to hydrogenlike ions are shown in lines labeled ``H-like''. From Table~\ref{tab:se_alkali}, it can be seen that the model-QED operator constructed using the rigorous calculations with the Dirac--Coulomb basis works reasonably well in this nonhydrogenic case.
In Ref.~\cite{Cheng:1976:1943}, the SE shifts for the $1s$ level in superheavy elements were evaluated employing the Dirac--Fock--Slater potential constructed with the Slater-exchange term~\cite{Slater:1951:385}. In Table~\ref{tab:se_1s_DHFS}, we compare the results of Ref.~\cite{Cheng:1976:1943} with our theoretical predictions. The lines labeled ``Mod. op.'', ``Ref.~\cite{Shabaev:2013:012513}'', and ``H-like'' have the same meaning as in Table~\ref{tab:se_alkali}. We have generated the Dirac--Fock--Slater potential for neutral atoms of superheavy elements in accordance with the relativistic ground-state configurations given in Ref.~\cite{Fricke:1977:83}. From Table~\ref{tab:se_1s_DHFS}, it is seen how the averaging of the model-QED-operator with the corresponding wave functions shifts the SE corrections in such a way that they are in perfect agreement with the values from Ref.~\cite{Cheng:1976:1943} within the indicated error bars.
In addition, we mention that averaging the model-QED operator with one- or many-electron wave functions does not exhaust all the possibilities for applying this approach. As noted in Ref.~\cite{Shabaev:2013:012513}, this operator can be self-consistently included into the Dirac--Fock equations, also referred to as the relativistic Hartree--Fock ones. This may be especially important in cases when the leading-order contributions cancel out for some reasons, see, e.g., Ref.~\cite{Shabaev:2020:052502}. Treating the model-QED operator in this way allows one to take into account single-particle excitations into the negative-energy continuum as well as to partly include the higher-order QED contributions.
\input{table_SE_prediction.tex}
\input{table_WK_prediction.tex}
\input{table_SE_alkali.tex}
\input{table_SE_1s_DHFS.tex}
\section{Conclusions \label{sec:4}}
The model-QED-operator approach, proposed in Ref.~\cite{Shabaev:2013:012513}, has been extended to the region of nuclear charges $110 \leqslant Z \leqslant 170$. The self-energy and the Wichmann--Kroll part of the vacuum-polarization potential are represented by nonlocal operators. The model-QED operator can be easily incorporated into any of the existing methods for solving the Dirac--Coulomb--Breit equation. The capacity of the approach has been demonstrated for a number of systems by comparing the model-QED-operator predictions with the results of the corresponding \textit{ab initio} calculations. The developed model-QED operator can be used to evaluate the QED effects in superheavy elements in a wide range of~$Z$.
\section*{Acknowledgments}
The work is supported by the Ministry of Science and Higher Education of the Russian Federation within the Grant No. 075-10-2020-117.
\section{Introduction}\label{sec:intro}
A nova is a powerful eruption following a thermonuclear runaway (TNR) that occurs below the surface of a white dwarf (WD) \cite[]{Starrfield1972,Shara1981,Starrfield2008}. The TNR is the inevitable result of a critical amount of (mostly) hydrogen being pulled away from the less evolved companion star and accumulating on the degenerate surface of the WD. As this mass piles up in a degenerate environment, the pressure below the surface increases, causing the temperature to rise until it becomes sufficiently high to ignite the hydrogen, entailing fusion in a runaway process and the violent ejection of the envelope \cite[e.g.,][]{Shara1981}, exhibited as an enormous visual brightening \cite[]{Pay1957} of order $\sim10^{4-5}$ times the solar luminosity --- the nova eruption \cite[]{Hellier2001,Warner2003,Knigge2011}. Novae are usually discovered, following the eruption, in the optical band, but it is hardly the only range in which a nova may be observed.
Over the course of a nova cycle --- accretion, eruption and decline --- a nova-producing system could possibly be observed in the infrared (IR), ultraviolet (UV), soft and hard X-rays, and even $\gamma$-rays \cite[]{MacDonald1985,Itoh1990,Orio1994,Orio2009,Schaefer2010,Hillman2014,DellaValle2020,Chomiuk2021,Konig2022}. Each band, if observed, may provide clues as to the nature of the eruption and the system's unique behavior that could help distinguish one system from the next.
However, the technical capabilities for capturing observations are not the same in all bands, resulting in mostly visual records. While the visual rise indicates the expansion of the WD's envelope and the ejection of the mass \cite[e.g.,][]{Prialnik1986}, $\gamma$-rays, if observed, are detected only days after the visual peak \cite[]{Sokolovsky2022}, implying that they must originate from somewhere other than the WD's surface \cite[]{Metzger2015,Martin2018}.
In the past decade, $\gamma$-rays were detected in a handful of systems emitting at energies higher than $100$ MeV using the Fermi-Large Area Telescope (Fermi-LAT) \cite[]{Razzaque2010,Ackermann2014,Cheung2016,Martin2018}. \cite{Ackermann2014} investigated the likelihood of the $\gamma$-rays originating in both hadronic and leptonic processes for the symbiotic nova (SymN) V407 Cyg and the three classical novae (CNe) V1324 Sco, V959 Mon and V339 Del, but did not come to a firm conclusion regarding which emitting process is the more likely source. \cite{Cheung2016} explored detected $\gamma$-rays for an additional two CNe --- V1369 Cen and V5668 Sgr \cite[]{Li2016,Li2017} --- and interpreted this high energy emission as due to particles accelerated up to $\sim$ 100 GeV at the reverse shock that undergo hadronic interactions in the dense cooling layer downstream of the shock \cite[]{Martin2018}.
Recently, the MAGIC (Major Atmospheric Gamma Imaging Cherenkov) and the H.E.S.S. (High Energy Stereoscopic System) telescopes have detected $\gamma$-rays of energies higher than $100$ GeV from the 2021 outburst of RS Ophiuchi --- a recurrent nova in a symbiotic system that erupts every $\sim15$ years \cite[]{Acciari2022,Hess2022}. When a nova eruption occurs in a symbiotic system, the ejected mass will inevitably collide with the dense wind of the red giant (RG) companion, shocking a fraction of the particles and accelerating them, resulting in the emission of high energy radiation in the $\gamma$-ray range such as seen in RS Oph \cite[]{Hess2022} and V407 Cyg \cite[]{Abdo2010,Martin2018}.
Systems with a red dwarf (RD) donor (i.e., cataclysmic variables (CVs)) might also produce shocks in the event that the ejected mass is not expelled in a unified manner, but rather in stages or in clumps with different velocities; thus a fast clump of mass could collide with a previously ejected, slower moving clump. It is also plausible that the ejected mass shell may interact with an expanding mass shell that was ejected in a previous nova eruption, provided the recurrence period is short enough and enough mass was ejected. However, none of these scenarios is expected to produce detectable $\gamma$-rays, since the gas cloud into which the ejected mass collides is much less dense than the wind of an RG \cite[]{Cheung2016}, and in novae with short recurrence times the amount of ejected mass is low \cite[]{Prikov1995,Yaron2005,Hillman2015,Hillman2016,Shara2018,Hillman2019}. Nevertheless, there are peculiar detections of $\gamma$-rays in some novae hosting a RD donor, as mentioned earlier \cite[]{Ackermann2014,Cheung2016,Martin2018}.
In symbiotic systems, where high energy $\gamma$-rays are plausible (as explained above), it is still not entirely clear what nuclear process is emitting them. This high energy emission has mainly been interpreted as due to hadronic particle acceleration in shocks \cite[]{Steinberg2020,Acciari2022,Hess2022}. High energy protons, accelerated in the shock region, may interact with other protons in the dense environment, giving rise to neutral pions ($\pi^{0}$) that then decay to high energy $\gamma$-rays. A proton-proton interaction will also produce charged pions ($\pi^{\pm}$) that will decay into high energy neutrinos. Modeling $\gamma$-ray emission from an astrophysical source with
a $\pi^{0}$ model thus inevitably predicts a high-energy neutrino flux
from the same source \cite[e.g.,][]{Stecker1970}.
Therefore, if the high energy $\gamma$-ray emission has an hadronic origin, we expect the process to be accompanied by the production of neutrinos.
This work aims to test the origin of the physical processes responsible for the $\gamma$-ray emission that is sometimes observed in nova eruptions. If the high-energy emission observed in these transients has a hadronic origin, it follows that it should be accompanied by a flux of neutrinos \cite[]{Razzaque2010,Metzger2016,Bednarek2022}. We estimate the neutrino flux that might be associated with the recent eruption of nova RS Oph, and thus predict the number of events that, in principle, could be detected during nova explosions by present and future neutrino telescopes.
In \S \ref{sec:telescopes} we specify the technical capabilities of each neutrino detector that we refer to in this work. \S \ref{sec:100GeV} specifies our method of calculation for the different energy ranges of the neutrino flux, followed by our results in \S \ref{sec:results}. We discuss the implications of our results and compare the expected number of neutrino events with those derived in previous works in \S \ref{sec:discussion}, and provide our conclusions in \S \ref{sec:conclusions}.
\section{Neutrino Telescopes}\label{sec:telescopes}
High-energy neutrinos interact with nucleons, producing secondary particles which travel faster than the speed of light in the sea or ice, inducing Cherenkov radiation inside the detector. The photons that are emitted by this process are detected by optical sensors that are deployed in the sea or ice (depending on the detector). In the following we briefly describe the basic characteristics of each telescope considered in this work.
\subsection{IceCube and DeepCore}
The IceCube high-energy neutrino telescope is a neutrino detector located at the geographic South Pole \cite[]{detIce}. In the final detector configuration, the digital optical modules, deployed in the Antarctic ice, are arranged on 86 vertical strings of 60 sensors each, spread over depths between 1450 m and 2450 m with vertical distances of 17 m between adjacent sensors. Seventy-eight strings have a horizontal spacing of about 125 m and cover a hexagon with a surface area of roughly 1 $\rm km^2$. Eight additional strings, together with seven surrounding IceCube strings, form the more densely instrumented central DeepCore detector \cite[]{Ice2,Ice3}.
The module density in DeepCore is about five times greater than in the rest of IceCube, which
allows for a much lower energy detection threshold of a few GeV.
The IceCube detector has been collecting
data since 2006, and so far no neutrino event has been
associated with a nova eruption. The effective areas vs. neutrino energy for IceCube and DeepCore are shown in Figure \ref{fig:telescopes}.
\subsection{ANTARES}
The ANTARES neutrino detector is located in the Northern Hemisphere and is currently the only deep-sea high-energy neutrino telescope \cite[]{detAN}. The telescope covers an area of about 0.1 $\rm km^2$ on the sea bed, at a depth of 2475 m, 40 km off the coast of Toulon, France. In its full configuration, it is composed of 12 detection lines, each comprising up to 25 triplets of photo-multiplier tubes \cite[]{AN2}. Each triplet is located in one of the storeys, regularly distributed along 350 m, the first storey being located 100 m above the sea bed. The telescope reached its nominal configuration, with 12 lines immersed and taking data, in May 2008.
Figure \ref{fig:telescopes} shows the effective area of the ANTARES neutrino detector, with selection and reconstruction criteria optimized for the search of point like sources, as a function of the neutrino energy \cite[]{ANT}.
\subsection{KM3NeT}
The KM3NeT detector \cite[]{detKM3} is the future generation of underwater neutrino telescopes. The infrastructure will consist of three so-called building blocks, each made of 115 strings of 18 optical modules that carry 31 photo-multiplier tubes each. KM3NeT will comprise KM3NeT/ARCA, consisting of two building blocks to be deployed at a depth of 3500 m at a site 80 km South-East of Porto Palo di Capo Passero, Sicily, Italy, and a third building block, KM3NeT/ORCA, which will be located at a depth of 2200 m at a site close to ANTARES (Toulon), France.
KM3NeT/ARCA will have large spacings between adjacent strings in order to target astrophysical neutrinos at TeV energies.
The KM3NeT/ORCA will be sensitive to neutrinos down to energies of a few
GeV thanks to its denser, more compact array. Figure \ref{fig:telescopes} shows the effective areas of KM3NeT/ARCA and KM3NeT/ORCA as a function of the neutrino energy \cite[]{detKM3,Zegarelli2022}.
\subsection{Hyper-Kamiokande (Hyper-K)}
The Hyper-Kamiokande is a next-generation water
Cherenkov detector with a sensitivity that is far beyond that of the Super-Kamiokande (Super-K) detector. The Hyper-K is designed to detect proton decays, atmospheric neutrinos, and neutrinos from astronomical origins. The baseline design of Hyper-K is based on the highly successful Super-K, taking full advantage of a well-proven technology \cite[]{HyperKfig}.
In Figure \ref{fig:telescopes} we show the effective area of Hyper-K as a function of the neutrino energy. As may be seen in this figure, the detector has good low energy performance, which should allow detection down to a few GeV.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig_4_telescope_eff_area_energy.eps}
\caption{The effective areas vs. energy (on a log-log scale) for the six detectors, reproduced from: \cite{IcePoint} (IceCube); \cite{Zegarelli2022} (DeepCore and KM3NeT/ORCA); \cite{ANT} (ANTARES); \cite{detKM3} (KM3NeT/ARCA); and \cite{HyperKfig} (Hyper-K).}
\label{fig:telescopes}
\end{center}
\end{figure}
\section{Expected neutrino fluxes}\label{sec:100GeV}
In this section, we derive the neutrino flux expected from RS Oph based on the assumption that the high energy photon emission is due to hadronic processes. Relativistic protons may produce $>$ GeV $\gamma$-rays either by photo-meson production or inelastic nuclear collisions. In \cite{Bednarek2022} the authors show that p-p interactions are the most likely mechanism for pion production in novae. A possible mechanism that can produce the very high energy (VHE) photons that were detected by H.E.S.S. \cite[]{Hess2022} and MAGIC \cite[]{Acciari2022} may be the decay of neutral pions ($\pi^0$) produced through nuclear collisions of relativistic
protons.
The same process that produces the neutral pions, and subsequently the sub-TeV photons, would also generate charged pions ($\pi^\pm$) that decay into neutrinos of similar energy. The following equation describes the three processes:
\begin{equation}
p+p \rightarrow \pi^0, \pi^+, \pi^-, p, n, \ldots
\end{equation}
From this kind of interaction we expect almost the same number of $\pi^+$, $\pi^0$, and $\pi^-$ particles, due to isospin symmetry \cite[e.g.,][]{Povh2004}. $\pi^0$ particles decay into two $\gamma$-rays, each having, in the pion rest frame, an energy equal to half of the $\pi^0$ mass, as described below:
\begin{equation}
\pi^0\rightarrow \gamma + \gamma
\end{equation}
On the other hand, the charged pions decay into neutrinos as follows:
\begin{equation}
\pi^+ \rightarrow \mu^+ + \nu_{\mu} \rightarrow e^+ + \nu_e + \bar{\nu}_{\mu}+ \nu_{\mu}
\end{equation}
\begin{equation}
\pi^- \rightarrow \mu^- + \bar{\nu}_{\mu} \rightarrow e^- + \bar{\nu}_e + \nu_{\mu} + \bar{\nu}_{\mu}
\end{equation}
where $\nu_\mu$ and $\nu_e$ are the muon and electron neutrinos respectively.
Considering the relation between the photon flux and the neutrino flux given in Eq. 4 of \cite{Razzaque2010}, we derive that \cite[]{DiPalma2017}:
\begin{equation}\label{eq:pions}
\frac{dN_{\nu+\bar{\nu}}}{dE_{\nu}}=\frac{dN_{\gamma}}{dE_{\gamma}}
\end{equation}
and therefore
\begin{equation}
\int_{E_{\nu}^{\rm min}}^{E_{\nu}^{\rm max}}E_{\nu}\frac{dN_{\nu}}{dE_{\nu}}dE_{\nu}= \int_{E_{\gamma}^{\rm min}}^{E_{\gamma}^{\rm max}}E_{\gamma}\frac{dN_{\gamma}}{dE_{\gamma}}dE_{\gamma}
\end{equation}
where $ E_{\gamma}^{\rm min}$ ($E_{\nu}^{\rm min}$) and $E_{\gamma}^{\rm max}$ ($E_{\nu}^{\rm max}$) are the minimum and maximum photon (neutrino) energies respectively.
The number of photons per unit energy interval, time, and surface area can be written as:
\begin{equation}
\label{eq:TeVspec}
\frac{dN_{\gamma}}{dE_{\gamma}}=N_0\left(\frac{E}{E_0}\right)^{-\Gamma}
\end{equation}
where $N_0$, $E_0$, and $\Gamma$ are the amplitude at the reference energy, the reference energy, and the spectral index, respectively, as determined from observations.
Following the line of the work by \cite{Alvarez2002}, \cite{Guetta2003} and \cite{DiPalma2017}, we compute the high energy neutrino flux at Earth and estimate the number of events that may be detected by the telescopes described in \S\ref{sec:telescopes}.
The total number of expected astrophysical events during an exposure time $T$ of a neutrino telescope
is given by:
\begin{equation}\label{eq:integral}
N=\int_{0.1\,{\rm TeV}}^{100\,{\rm TeV}} T\, \frac{dN_{\nu}}{dE_{\nu}}A(E_{\nu})\,dE_{\nu}
\end{equation}
where $\frac{dN_{\nu}}{dE_{\nu}}$ is derived from the photon spectrum $\frac{dN_{\gamma}}{dE_{\gamma}}$ of Equation \ref{eq:TeVspec} according to Equation \ref{eq:pions}, and $A(E_{\nu})$ is the effective area of the considered neutrino telescope as a function of the neutrino energy $E_{\nu}$, as shown in Figure \ref{fig:telescopes}.
The effective area of a detector may depend on the declination of the observed celestial object. The declination of RS Oph is $-06^{\circ}42^{'}28.5^{''}$, and we use the corresponding effective areas of the relevant detectors where applicable.
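As a minimal numerical sketch of Eq.~(\ref{eq:integral}) (not the authors' code), the following integrates a power-law spectrum of the form of Eq.~(\ref{eq:TeVspec}) against a toy effective area; the normalization, spectral index, exposure time, and effective-area shape are all placeholder assumptions, not the measured H.E.S.S. parameters.

```python
# Hedged sketch of Eq. (eq:integral): N = T * Int dN/dE * A(E) dE over
# 0.1-100 TeV for a power-law spectrum, Eq. (eq:TeVspec).  All numbers
# below (N0, Gamma, exposure, effective-area shape) are placeholders.
import math

def dn_de(e_tev, n0, e0_tev, gamma):
    """Power-law neutrino (= photon, Eq. eq:pions) spectrum."""
    return n0 * (e_tev / e0_tev) ** (-gamma)

def effective_area(e_tev):
    """Toy effective area in cm^2, growing with energy (placeholder)."""
    return 1.0e4 * e_tev ** 0.5

def expected_events(t_sec, n0, e0_tev, gamma,
                    e_min=0.1, e_max=100.0, steps=10000):
    """Trapezoidal integration of T * dN/dE * A(E) on a log-spaced grid,
    since the spectrum spans three decades in energy."""
    xs = [math.log(e_min) + i * (math.log(e_max) - math.log(e_min)) / steps
          for i in range(steps + 1)]
    total = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        ea, eb = math.exp(x0), math.exp(x1)
        fa = dn_de(ea, n0, e0_tev, gamma) * effective_area(ea)
        fb = dn_de(eb, n0, e0_tev, gamma) * effective_area(eb)
        total += 0.5 * (fa + fb) * (eb - ea)
    return t_sec * total

# Illustrative one-hour exposure with made-up spectral parameters:
n_events = expected_events(t_sec=3600.0, n0=1.0e-11, e0_tev=1.0, gamma=3.0)
```

For this particular power law and effective area the integral is analytic, which provides a direct check of the quadrature.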
VHE ($>$ 100 GeV) $\gamma$-rays were reported by H.E.S.S. and MAGIC from the recurrent nova RS Ophiuchi up to a month after its 2021 outburst \cite[]{Acciari2022,Hess2022}. The VHE emission has a temporal profile similar to lower-energy GeV emission \cite[][Fig. 3]{Hess2022a}, indicating a common origin, with a two-day delay in peak flux.
Referring to tables S1 and S2 in \cite{Hess2022}, we consider the five days of detection (August 9--12, 2021). We use their amplitude at reference energy, reference energy and spectral index from their table S2 ($N_0$, $E_0$ and $\Gamma$, respectively) and their exposure times from table S1, in Equations \ref{eq:TeVspec} and \ref{eq:integral}, in order to obtain an hourly average.
We estimate the neutrino flux in the energy range 100 GeV--100 TeV, which is the range in which IceCube and ANTARES operate. We assume the same range for the future KM3NeT/ARCA detector.
We also extend our estimate to lower energies and consider the 1–500 GeV energy range data taken by the Fermi-LAT instrument over the same time period as the H.E.S.S. and MAGIC observations \cite[]{Acciari2022,Hess2022}.
The H.E.S.S. collaboration find that the best fit to the observed photon flux in this energy range is given by a log-parabola spectral function of the form:
\begin{equation}\label{eq:lowE}
\frac{dN_{\gamma}}{dE_{\gamma}}=N_0\left(\frac{E}{E_0}\right)^{-\alpha-\beta\ln(E/E_0)}
\end{equation}
We refer to the joint Fermi-LAT and H.E.S.S. calculation in \cite{Hess2022} and take $N_0$ and $E_0$ from their table S3, as well as the spectral index and curvature ($\alpha$ and $\beta$ respectively). This data is used to estimate the expected number of neutrinos in the energy range 1-500 GeV for the IceCube-DeepCore, the KM3NeT/ORCA and the Hyper-Kamiokande detectors.
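For illustration, the log-parabola shape of Eq.~(\ref{eq:lowE}) can be evaluated as follows; the normalization, pivot energy, index, and curvature below are placeholders, not the fitted values of table S3 of \cite{Hess2022}.

```python
# Hedged sketch of the log-parabola spectral shape of Eq. (eq:lowE).
# N0, E0, alpha and beta below are illustrative placeholders, not the
# joint Fermi-LAT / H.E.S.S. best-fit parameters.
import math

def log_parabola(e, n0, e0, alpha, beta):
    """dN/dE = N0 * (E/E0)^(-alpha - beta*ln(E/E0))."""
    x = e / e0
    return n0 * x ** (-alpha - beta * math.log(x))

# With beta = 0 the curvature vanishes and the shape reduces to the
# pure power law of Eq. (eq:TeVspec) with Gamma = alpha:
pl = 2.0e-11 * (10.0 / 1.0) ** (-2.2)
assert log_parabola(10.0, 2.0e-11, 1.0, 2.2, 0.0) == pl

# A positive curvature beta steepens the spectrum above the pivot:
lp = log_parabola(10.0, 2.0e-11, 1.0, 2.2, 0.1)
```

The curvature term is what distinguishes the GeV-range fit from the simple power law used at TeV energies.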
\section{Results of calculations}\label{sec:results}
In this section we show the results of our calculations regarding the 2021 RS Oph eruption for the high and low energy ranges. We also apply our analysis to additional novae (elaborated on in \S \ref{sec:intro}) that were detected at $>0.1$ GeV by Fermi-LAT.
\subsection{RS Oph - High energy ($>100$ GeV)}\label{sec:high_energy}
For the high energy regime we use Equations \ref{eq:TeVspec} and \ref{eq:integral} and the data from CT1-4 stereo analysis from \cite{Hess2022} to calculate estimates of the total number of neutrinos expected to have been detected from the latest RS Oph eruption by IceCube, ANTARES and KM3NeT/ARCA. For each detector, we calculate the average number of events per hour over the five exposure epochs. According to \cite{Hess2022} the source was observed in eruption for $\sim30$ days. In order to obtain an upper limit of the total number of expected neutrino events, we multiply the average number that we obtained by 30 days.
Since the errors given in \cite{Hess2022} are very large, of order 30$\%$, we do not calculate a range, but regard our results as estimates. We find the total expected number of neutrino events for IceCube, ANTARES and KM3NeT/ARCA to be $\sim1.4\times 10^{-3}$, $\sim3.6\times 10^{-4}$ and $\sim2.1\times 10^{-2}$ respectively. These numbers indicate a detection to be highly improbable.
\subsection{RS Oph - Low energy ($\sim0.1-100$ GeV)}\label{sec:low_energy}
Next, we calculate the expected flux for the case in which the neutrinos originate in $\pi$ decays at lower energies. We use Equations \ref{eq:integral} and \ref{eq:lowE} and data from \cite{Hess2022} as described in \S\ref{sec:100GeV}.
As with the high energy data, here too, the Fermi-LAT data from \cite{Hess2022} has large error bars, thus our results here are estimates for $\sim30$ days. We obtain 0.014, 0.061 and 0.046 for Hyper-K, DeepCore and KM3NeT/ORCA respectively --- these values, although higher than those derived via the high energy calculation, are still too low for any of the current or future telescopes to be able to detect.
We also use the joint spectral fit of H.E.S.S. and Fermi-LAT for the total energy range ($>$0.1GeV) to compute the expected number of neutrino events that would have been detected by the high energy detectors, and derive $\sim4.6\times10^{-4}$, $\sim1.5\times10^{-4}$ and $\sim7.5\times10^{-3}$ for IceCube, ANTARES and KM3NeT/ARCA respectively. These are $1-2$ orders of magnitude lower than the expected detection from the low energy telescopes.
All our methods of calculation, considering both energy ranges, yield neutrino fluxes for RS Oph that are too low for any of the current or future telescopes to detect.
\subsection{Additional novae}\label{sec:additional_novae}
We now consider the Fermi-LAT detections of the six novae specified in \S \ref{sec:intro}. We take the photon fluxes from \cite{Ackermann2014} and \cite{Cheung2016} and use them in Equations \ref{eq:TeVspec} and \ref{eq:integral} to determine the expected number of neutrino events, while assuming, as before, that the energy emitted in neutrinos is of the same order as the energy emitted in photons. We use $E_0=1$ GeV as the reference energy and $\Gamma=2.1$ as the spectral index. The values for the exposure time and the amplitude at reference energy (extracted from \cite{Ackermann2014} and \cite{Cheung2016}) are specified in Table \ref{tab:six_novae}, as well as our resulting number of expected neutrino detections by each of the low energy telescopes up to 100 GeV.
Our results predict, for the six novae, substantially smaller numbers of expected events relative to RS Oph, and within the six novae, we expect a higher detection rate for the SymN V407 Cyg relative to the five CNe. We note that none of the low energy telescopes yield a feasible number of expected events for any of these novae.
\begin{table*}[!h]
\begin{center}
\begin{tabular}{c|ccc|ccc|c}
{ }&{D[kpc]}&{$N_0$} & {T[days]} & {$N_{\nu}^{\rm DeepCore}$} & {$N_{\nu}^{\rm Hyper-K}$}& {$N_{\nu}^{\rm ORCA}$} &{$E_{\nu}^{\rm TOT}$} \T\B\\
\hline
{V339 Del}& {4.2}&{5.0}&{27} &{0.013}&{0.0026} &{0.009}&{6.0}\T\B\\
{V959 Mon}&{3.6}&{7.0}&{22} & {0.015}&{0.0030} &{0.011}&{7.1}\T\B\\
{V1324 Sco}&{4.5}&{10.0}&{17} & {0.016}&{0.0033} &{0.012}&{13}\T\B\\
{V407 Cyg}&{2.7}&{10.0}&{22} & {0.021}&{0.0043} &{0.015}&{6.1}\T\B\\
{V1369 Cen}&{2.5}&{2.5}&{18} & {0.004}&{0.0009} &{0.003}&{3.0}\T\B\\
{V5668 Sgr}&{2.0}&{1.0}&{47} & {0.005} &{0.0009} &{0.003}&{1.2}\T\B\\
\hline\hline
{RS Oph}&{2.3}&{7.1}&{30} & {0.060} &{0.0140} &{0.046}&{20}\T\B\\
\end{tabular}
\caption{Summary of six novae. D and T are the distance to the system and the exposure time \cite[]{Ackermann2014,Cheung2016}, and $N_0$ is given in units of $10^{-11}\rm erg^{-1}cm^{-2}s^{-1}$. The following three columns are the derived expected numbers of neutrinos for the three low energy detectors. In the last column we give the total energy emitted in neutrinos in units of $10^{41}$ erg. The data of RS Oph are included for comparison.}\label{tab:six_novae}
\end{center}
\end{table*}
Additionally, we extrapolate the above calculation to predict the number of neutrino events for the hypothetical case in which those novae emit in the high energy range ($>100$ GeV). We accomplish this by extending the Fermi-LAT photon flux to higher energies. Our results are shown in Table \ref{tab:six_novae_high_nrg}; they suggest that the high energy telescopes would have a good chance of detecting neutrinos from these six novae, but not from RS Oph, which actually did emit in the high energy range. We discuss this further in \S \ref{sec:discussion}.
\begin{table*}[!h]
\begin{center}
\begin{tabular}{c|ccc|cc|c|c|c}
{ }& {$N_{\nu}^{\rm IceCube}$} &{$N_{\nu}^{\rm ANTARES}$} &{$N_{\nu}^{\rm ARCA}$} &{IceCube+DeepCore} &{KM3NeT}& {$N_\nu^{\rm Bkgrnd}$}& {D$_1^{3\sigma}$[kpc]} & {D$_2^{3\sigma}$[kpc]}\T\B\\
\hline
{V339 Del} &{0.467}&{0.026} &{2.163}&{0.552}&{2.172}&{1.9}&{0.94}&{1.9}\T\B\\
{V959 Mon}&{0.533}&{0.029} &{2.467}&{0.630}&{2.478}&{1.6}&{0.89}&{1.8}\T\B\\
{V1324 Sco}& {0.588}&{0.033} &{2.724}&{0.696}&{2.736}&{1.2}&{1.2}&{2.4}\T\B\\
{V407 Cyg}& {0.761}&{0.042} &{3.525}&{0.900}&{3.540}&{1.6}&{0.8}&{1.6}\T\B\\
{V1369 Cen}& {0.156}&{0.009} &{0.721}&{0.185}&{0.724}&{1.2}&{0.35}&{0.7}\T\B\\
{V5668 Sgr}& {0.163} &{0.009} &{0.753}&{0.193}&{0.756}&{3.3}&{0.25}&{0.5}\T\B\\
\hline\hline
{RS Oph}& {0.0014} &{0.0004} &{0.020}&{0.061}&{0.045}&{2.2}&{0.2}&{0.15}\T\B\\
\end{tabular}
\caption{Summary of six novae. The first three columns give the hypothetical expected number of neutrinos derived for the three high energy detectors. The fourth and fifth columns are the sums of events over the high and low energy ranges, and the sixth column is the average number of atmospheric background neutrino events from \cite{Metzger2016}. The last two columns are the computed maximum distances at which the systems would need to be located in order to allow a $3\sigma$ detection above the atmospheric neutrino background, for IceCube+DeepCore (D$_1^{3\sigma}$) and KM3NeT (D$_2^{3\sigma}$). The $3\sigma$ confidence levels for the background were calculated using the prescription of \cite{Gehrels1986} for small numbers of events. The data of RS Oph are included for comparison.}\label{tab:six_novae_high_nrg}
\end{center}
\end{table*}
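The maximum-distance columns D$_1^{3\sigma}$ and D$_2^{3\sigma}$ follow from scaling the expected signal as $1/D^2$ until it no longer exceeds the upward fluctuation of the atmospheric background. The sketch below illustrates this logic with an exact Poisson tail in place of the small-number prescriptions of \cite{Gehrels1986}; its inputs and outputs are illustrative and are not the table values, which use the detector-specific event counts.

```python
from math import exp, factorial, sqrt

def poisson_tail(n, mu):
    """P(N >= n) for N ~ Poisson(mu), from the exact probability mass function."""
    return 1.0 - sum(exp(-mu) * mu**k / factorial(k) for k in range(n))

def max_distance_3sigma(n_expected, d_kpc, bkg, p_tail=1.35e-3):
    """Largest distance at which a signal of n_expected events at d_kpc,
    rescaled as 1/D^2, still reaches the smallest count that would be a
    one-sided 3-sigma fluctuation of the background alone."""
    n_crit = 0
    while poisson_tail(n_crit, bkg) > p_tail:
        n_crit += 1
    s_min = n_crit - bkg  # mean signal required on top of the background
    return d_kpc * sqrt(n_expected / s_min)
```

For a background of 1.9 events this criterion requires roughly six signal events, so a source yielding one expected event would have to sit at about 40\% of its actual distance to be detectable at $3\sigma$.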
\subsection{Expected atmospheric events}
We have presented quantitative estimates of the detection
prospects of low-energy neutrinos from novae with current (IceCube-DeepCore) and under-construction (KM3NeT/ORCA and Hyper-Kamiokande) neutrino telescopes.
At multi-GeV energies the atmospheric
background severely limits the identification of
cosmic signals. The main component of the background is the flux of atmospheric neutrinos, produced when cosmic rays (high energy protons and nuclei) interact with particles in the Earth's atmosphere; the decay of the charged pions and kaons produced in these interactions generates a flux of atmospheric neutrinos and muons. In order to reduce the effect of the background, the search for multi-GeV neutrinos from novae should be restricted to upward going events, since Earth-filtered events reduce the atmospheric muon background significantly. Moreover, only events due to $\nu_{\mu}$ charged current (CC) interactions should be considered: the muons originating in such interactions leave long tracks that allow the direction of the incoming neutrino to be determined with good accuracy, pointing back to the source.
An approximate estimate of the background events has been given in \cite{Metzger2016}, who find
$\sim 1$ background neutrino over the two-week duration of typical nova LAT emission for IceCube-DeepCore. Following their work we have estimated the background events and report them in Table \ref{tab:six_novae_high_nrg} for comparison; they are of the same order as the expected high energy and total event counts. This means that even if these novae were emitting at high energies, it would be difficult to distinguish source events from background events.
\section{Discussion}\label{sec:discussion}
In this paper we estimate the neutrino flux from RS Oph and other novae that emitted energy in the GeV range. We find that no nova to date has provided a strong enough neutrino signal to be detected by any present or future neutrino telescope. The systems are either too far away or the interactions in the ejecta are not energetic enough. Being more energetic means the basic system parameters (e.g., WD mass, donor type, mass or evolutionary stage, separation, accretion rate, kinetic energy, etc.) would have to be different. However, understanding which system parameters may produce sufficiently energetic interactions is not so simple. For instance, consider the extreme, rapidly recurring nova M31N 2008-12a \cite[]{Darnley2016}, which erupts every year. It should be producing multiple mass shells that expand away from the WD, and since they would not all expand at the \textit{exact} same velocity, collisions between different shells are inevitable. Inhomogeneity in the ejecta can produce clumping, which can lead to collisions as well. This line of reasoning can mislead one to the simplistic conclusion that a system with a shorter recurrence period should be the place to look for highly energetic shocks. However, the amount of mass ejected in a nova decreases with decreasing time between eruptions, so being a recurrent nova is not necessarily the only requirement. RS Oph, being a SymRN, is embedded in the dense wind of its companion, and the nova eruption sends the ejected shell hurtling into that wind, which is the source of the GeV radiation. This being the case, should we then expect to find this energy range in all SymNe? \cite{Ackermann2014} and \cite{Razzaque2010} have investigated the SymN V407 Cyg and found, for the relevant energy range, lower fluxes than found for RS Oph (based on kinetic energy considerations). We find similar results here for the low energy detectors. 
The difference between these two systems (RS Oph and V407 Cyg) that leads to different energy range output will require deep investigation of the many system parameters that determine the outcome of the eruption, as mentioned above.
Considering the GeV flux given in \cite{Ackermann2014} and \cite{Cheung2016} for five CNe, we estimate the expected number of neutrino events in the low energy range using the method described in \S \ref{sec:low_energy} and find them to be substantially lower than for RS Oph --- entirely undetectable with any current or future planned neutrino telescope. These results are consistent with those found by \cite{Metzger2016} for V1324 Sco in the low energy range.
They also extrapolated the low energy flux to high energies and reported that this yields an extremely overestimated number of expected events. We followed this procedure for the six novae and obtained the hypothetical high values shown in Table \ref{tab:six_novae_high_nrg}. When the same method is applied to RS Oph it yields the entirely unrealistic prediction of $\sim14$ neutrinos, whereas when we use the actual values obtained from observations of RS Oph, our expected number of events remains low (see \S \ref{sec:additional_novae} and the last row of Tables \ref{tab:six_novae} and \ref{tab:six_novae_high_nrg}). This emphasizes that such extrapolations should be carried out with great caution.
We note that we may have been systematically underestimating the neutrino emission from RS Oph due to the fact that we have not considered absorption of GeV$-$TeV photons from the surrounding environment. This calculation is not straightforward, since it involves modelling of the environment, including possible ancient shells that have expanded parsecs away from the source.
It has been suggested that the expected signal event rate may be increased by combining search among low and high energy neutrino
detectors, i.e., KM3NeT/ORCA + KM3NeT/ARCA and
DeepCore + IceCube \cite[]{Zegarelli2022}.
Another option that could greatly increase the detection significance is summing the contributions of many novae (stacking). However, the same holds for the atmospheric background, so that complex stacking techniques are required in order to reach a significant detection level (see \cite{Zegarelli2022} for a detailed description of this procedure).
\section{Conclusions}\label{sec:conclusions}
In this paper we have estimated the number of neutrino events expected for RS Oph and other novae observed at both low and high energy ranges. A number of interesting results have emerged:
\begin{enumerate}[(i)]
\item
Given the current telescope sensitivities we cannot put any constraint on whether the GeV$-$TeV emission is the result of a hadronic process or a leptonic one. Our predictions for the number of neutrino events, for both the high and low energy ranges, are quite low. For IceCube-DeepCore we estimate that a nova eruption similar to RS Oph would have to occur at a distance no larger than $\sim 1$ kpc in order to yield a 3$\sigma$ detection above the background. All the novae in the sample explored in this work lie at distances greater than 2 kpc. Our calculations imply that the situation should improve with KM3NeT, whose higher sensitivity extends the detection threshold up to $\sim 2$ kpc. This would still not be sufficient to detect any of the novae in our sample, but given the current rate of nova explosions in the Milky Way, it will make neutrino observations of nova explosions a realistic prospect (see item \ref{item:4}).
\item
In the low energy regime, we find an expected number of events that is lower than found by \cite{Razzaque2010} and consistent with \cite{Metzger2015}. The discrepancy with \cite{Razzaque2010} may be due to the fact that these authors did not have sufficient information on the actual effective area of the detector at the time. On the basis of our results we estimate that the total energy emitted in neutrinos by novae is of the order of $10^{41}-10^{42}$ erg, which corresponds to about $10^{-5}-10^{-3}$ of the bolometric energy budget estimated for a nova explosion \cite[]{Gallagher1978}.
\item\label{item:3}
In the high energy range we find a result lower than that of \cite{Metzger2016}, due to the fact that we use the observed high energy photon flux while \cite{Metzger2016} extrapolated the low energy flux to high energies. Since we do not have any observations at $>100$ GeV for the other novae that were detected at GeV energies by Fermi-LAT, we have extrapolated the low energy flux to high energies, following the approach taken by \cite{Metzger2016}. The expected number of events that we derive at high energies is consistent with what was found by \cite{Metzger2016} for V1324 Sco. However, we stress that extrapolating the calculation in this way can introduce very large errors, as we have shown by computing the expected number of events for RS Oph by both methods.
\item\label{item:4}
The global nova rate in the Milky Way has been measured many times by several authors over the past decades (see \cite{DellaValle2020} for a summary). Currently, the frequency of nova occurrence within the Galaxy is uncertain by a factor of two; its value is believed to lie between 20 \cite[]{DellaValle1994} and 50 novae/year \cite[]{Shafter2017}. Given the relatively low neutrino fluxes expected from novae (see Table \ref{tab:six_novae_high_nrg}), only nearby objects have the potential to be observed by neutrino observatories. A close inspection of Table \ref{tab:six_novae_high_nrg} reveals that IceCube has the potential to detect neutrino fluxes from novae up to a distance of $\sim$1 kpc, whereas KM3NeT may be able to detect them up to a distance of $\sim$2 kpc. Our location in the outskirts of the galactic disk, together with the requirement of distances less than 2 kpc, limits our interest to the disk nova component alone. Following \cite{DellaValle1993} and using modern values for nova rates, we compute an upper limit for the nova eruption density in the disk of $\sim 1 \times 10^{-10}$ pc$^{-3}$ yr$^{-1}$. This figure implies a rate of about $1^{+2.3}_{-0.8}$ novae every 4 years within 2 kpc and every $\sim 15$ years within 1 kpc. These values are not unrealistic and may provide, in the near future, a valuable test bed for assessing the validity of the physical processes, described in the literature, that are believed to underlie the high-energy emission observed in novae.
\end{enumerate}
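The eruption rates quoted in the last item can be approximately reproduced with a simple slab model of the local disk. The rate density is the upper limit derived above, while the effective disk half-thickness of $\sim 100$ pc is our own assumption, introduced only for illustration.

```python
from math import pi

def novae_per_year_within(d_pc, rate_density=1.0e-10, half_thickness_pc=100.0):
    """Disk-nova eruption rate inside a cylinder of radius d_pc centred on
    the Sun. rate_density is the upper limit quoted in the text
    [pc^-3 yr^-1]; the slab half-thickness is an assumed placeholder."""
    volume_pc3 = pi * d_pc**2 * 2.0 * half_thickness_pc  # slab, not a sphere
    return rate_density * volume_pc3

rate_2kpc = novae_per_year_within(2000.0)  # ~0.25 per yr, one every ~4 yr
rate_1kpc = novae_per_year_within(1000.0)  # ~0.06 per yr, one every ~16 yr
```

With these assumptions the recurrence times evaluate to about 4 yr within 2 kpc and about 16 yr within 1 kpc, close to the figures quoted above; a sphere in place of the slab would overestimate the rate, since novae far above the disk plane are rare.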
\section*{Acknowledgements}
The support of the Authority for Research \& Development and of the chairman of the Department of Physics at Ariel University is gratefully acknowledged. Massimo Della Valle
thanks Ariel University for its hospitality during his visit.
\section{Introduction}
Biomembranes exert various types of repulsive and attractive forces on each other. The van der Waals forces are weakly attractive; they vary as $1/d^3$ at close separations and transition to $1/d^5$ at larger distances \cite{Helfrich84, Ninham70}. Here $d$ is the mean distance between the interacting membranes. The notable aspect of this attractive force is that it is long-ranged. The somewhat ambiguous term \emph{hydration forces} \cite{Rand89} is used to denote the repulsive force that mediates at very small inter-membrane distances. The underlying mechanisms of hydration forces are still under active research \cite{Lipowskychapter}; it suffices to note here that they are quite short-ranged and drop off exponentially with distance.
Helfrich, in a pioneering work \cite{Helfrich78}, showed that two fluctuating fluid membranes exert a repulsive force on each other. Biomembranes are generally quite flexible, and a single membrane fluctuates both freely and appreciably at physiological temperatures. As two membranes approach each other, they \emph{hinder} or diminish each other's out-of-plane fluctuations. This hindrance decreases the entropy; the ensuing increase of the free energy of the membrane system, which depends on the inter-membrane distance, leads to a repulsive force that tends to push the membranes apart. Helfrich \cite{Helfrich78}, using a variety of physical arguments and approximations, postulated that the entropic force varies as $1/d^3$. In contrast to the other known repulsive forces, this behavior is long-ranged and competes with the van der Waals attraction at all distances \cite{Helfrich84, Ninham70, Israel92, Milner92, Lipowsky86, Lipowsky89}. Since Helfrich's proposal \cite{Helfrich78}, biophysicists have used the existence of this repulsive force to explain and understand a variety of phenomena related to membrane interactions. Helfrich's work has been reexamined and extended in Refs. \cite{Janke86, Gouliaev98, Kleinert99} (among others) and most recently by Freund \cite{Freund13}. See also \cite{Sharma13} for an overview of Freund's work.
Freund clearly highlights some of the assumptions made in Helfrich's work and provides a fresh perspective on this problem \cite{Freund13}. Freund controversially finds that, within a range of $d$ values, the force law between two fluctuating membranes is proportional to $1/d$ rather than the well-accepted result of Helfrich: $1/d^3$. To settle this issue, we have reexamined this problem both analytically and through carefully conducted Monte Carlo simulations. As was initially pointed out by Helfrich \cite{Helfrich78}, due to reflective symmetry, the evaluation of the force between two membranes in a periodic stack may be replaced by that for a single membrane confined between two rigid walls (Figure 1). Throughout this article, we will emphasize the differences between our work and those of Refs. \cite{Freund13, Janke86, Helfrich78}.
\begin{figure}
\centering
\includegraphics[scale=0.45,clip]{fig1.eps}
\caption{\label{fig:1}
A pair of fluctuating membranes may be replaced by a single membrane confined between two walls separated by a distance $2d$.}
\end{figure}
\section{General Formalism and Asymptotic Limit}
Consider a membrane as depicted in Fig. \ref{fig:1}. Assume that the membrane occupies $S=[0, L]^2$ on the $x y $-plane and the thermodynamic state of the membrane is described by $u\equiv u_z(x ,y )$, where $u_z$ is membrane mid-plane deviation along the $z$-axis. In the Helfrich model, the Hamiltonian is
\begin{equation}
H[u] = \int_S \dd ^2\xv \frac{\kappa}{2}(\partial^2u)^2.
\label{eqn:1}
\end{equation}
To address the thermal fluctuations we make the following assumptions:
(i) the membrane consists of $N$ molecules located at $\xv\in {\cal L}=\{a \left(l_1, l_2\right): l_1, l_2=1,\cdots, m\}$, where $a=L/m$ is the molecule's size; in other words, $N=m^2$ is the total number of degrees of freedom of the system; and (ii) microscopically, the out-of-plane deviation $ u_{\xv}$ of each molecule is quantized and can only take values from $\{ n\delta d: n=-d/\delta d,\cdots, d/\delta d \}$, where $\delta d$ is a small spacing along the deviation direction. Then, from the definition \cite{Kittel80}, the partition function of the system can be written as a functional integral:
\begin{equation}
Z=\int_{-d}^{d}\prod_{\xv\in {\cal L}}C\dd u_{\xv} e^{-\beta\int_S \dd ^2\xv \frac{\kappa}{2}(\partial^2u)^2},
\label{eqn:2}
\end{equation}
where $\beta=1/k_B T$ and $u=u(\xv)$ is any differentiable function interpolating the discrete molecules' deviation $u_{\xv}$, $\xv\in {\cal L}$.
The point-wise hindrance condition that $|u_{\xv}|\le d$ is enforced throughout this work. This constraint is in fact the key obstacle in the closed-form evaluation of the partition function.
Freund \cite{Freund13} modified the partition function by introducing
adjustable integration limits, and then minimized the resultant free energy with respect to these limits to determine the change in free energy with inter-membrane separation (and hence the entropic force law). In the Conclusions section, we discuss Freund's approach further. Here, we adopt the form of the partition function in Eq.(\ref{eqn:2}) which, notwithstanding the analytical intractability of its integration limits (i.e., the pointwise constraint on $u$), is \emph{exact} within the present formulation. We remark here that Helfrich \cite{Helfrich78} avoids the point-wise \emph{hindrance} condition, $\left| u(x,y)\right|\leq d$, and instead imposes a weaker constraint in which
the mean square membrane displacement is required to be bounded by $d^2$, i.e. $\langle u^2 \rangle\leq d^2$.
\newcommand{\tilde{v}}{\tilde{v}}
\newcommand{\bar{Z}}{\bar{Z}}
\newcommand{\sqrt{\tau}}{\sqrt{\tau}}
To gain new insights into the partition function, two dimensionless quantities, $\yv$ and $v$, are introduced as
\begin{equation}
\yv = \frac{\xv}{L}\quad\text{and}\quad v(\yv)= \frac{u(\xv)}{d}.
\label{eqn:3}
\end{equation}
By changes of variables we may rewrite the partition function \eqref{eqn:2} as ($S_0=[0,1]^2$, $A=L^2$)
\begin{equation}
Z = (Cd)^N\int_{-1}^{1}\prod_{\yv \in \tilde{{\cal L}} } \dd v_{\yv}
e^{-\frac{\beta \kappa d^2 } {2 A } \int_{S_0}\dd ^2\yv (\partial^2_{\yv} v)^2} ,
\label{eqn:4}
\end{equation}
where $\yv\in \tilde{{\cal L}}=\{\left(l_1/m, l_2/m\right): l_1, l_2=1,\cdots, m\}$.
Let $\tau \equiv A/\beta\kappa d^2$ be a dimensionless variable. By a change of variable back and forth,
$v /\sqrt{\tau} \leftrightarrow\ \tilde{v} $, we obtain
\begin{equation}
Z = (Cd)^N\left(\sqrt{\frac{A}{\beta\kappa d^2}}\right)^N \bar{Z}(\tau),
\label{eqn:5}
\end{equation}
where
\begin{equation}
\bar{Z}(\tau) = \tau^{-N/2}\int_{-1}^{1}\prod_{\yv \in\tilde{{\cal L}}} \dd v_\yv e^{-\frac{1}{2\tau} \int_{S_0}\dd^2\yv(\partial^2_\yv v)^2}.
\label{eqn:6}
\end{equation}
As a consequence, the free energy density has the form
\begin{equation}
\Ff = -\frac{k_{B}T}{2 a^2 }\ln\left( \frac{k_B T A C^2}{\kappa }\right) -
\frac{k_{B}T}{ A } \ln \bar{Z}\left(\frac{k_B T A}{\kappa d^2}\right).
\label{eqn:7}
\end{equation}
It should be noted
here that our free energy density has a subtle difference from the one in Ref. \cite{Janke86}. The first term in the free energy does not depend on $d$ and vanishes upon differentiation with respect to $d$.
It follows that the steric pressure is
\begin{equation}
p = -\frac{1}{2}\frac{\partial \Ff}{\partial d} = \frac{(k_BT)^2}{ \kappa d^3 }g(\tau).
\label{eqn:8}
\end{equation}
\noindent
where $g(\tau) \equiv - \partial \ln \bar{Z}(\tau)/ \partial \tau$.
Janke and Kleinert \cite {Janke86} performed Monte Carlo calculations and found that $g(\tau)$ is a constant in the thermodynamic limit.
The value of this constant was also found by Kleinert using a variational approach \cite{Kleinert99}. We remark that past works performing Monte Carlo calculations of this problem have implicitly embraced the veracity of Helfrich's pressure law: the focus has not been on examining the dependence of the entropic pressure on the inter-membrane distance but rather on calculating $g(\tau)$ \emph{assuming} that Helfrich's inverse cube law is correct. This, we believe, is the reason that Freund's result (and now ours) has not been noted until now.
As evident from the definition in Eq.(\ref{eqn:8}), $g (\tau)$ is a
function of the separation between the rigid walls confining the membrane.
It is interesting to consider the energy variation, and consequently the pressure,
at small separations. As Freund has shown, this limit is analytically tractable.
Below we reproduce his result using a slightly different procedure.
Consider the scaled partition function in Eq.(\ref{eqn:6}), by Fourier transformation we introduce
\begin{equation}
\begin{split}
\vh_{\kv} =\frac{1}{m}\sum_{\yv\in \tilde{{\cal L}}}e^{-i\kv\cdot\yv}v_{\yv}, \qquad v_{\xv} =\frac{1}{m}\sum_{\kv\in \tilde{{\cal K} }} e^{i\kv\cdot\xv}\vh_{\kv},
\end{split}
\label{eqn:10}
\end{equation}
\noindent
where $ \tilde{{\cal K}}=\{2\pi\left(n_1, n_2\right): n_1, n_2=1,\cdots, m\}$ is the reciprocal lattice of $\tilde{{\cal L}}$. In matrix notations the above equations can be rewritten as
\begin{equation}
\vec{v}_\yv = \Uu^\dagger \vec{\vh}_\kv, \qquad \vec{\vh}_\kv = \Uu \vec{v}_\yv,
\label{eqn:11}
\end{equation}
where $\vec{v}_\yv$ (resp. $\vec{\vh}_\kv$) denotes the column vector formed by $v_{\yv}$, $\yv\in \tilde{{\cal L}}$ (resp. $\vh_{\kv}$, $\kv\in \tilde{{\cal K}} $), and $\Uu$ is a unitary matrix satisfying $\Uu^\dagger \Uu = \Uu\Uu^\dagger = 1$. Since $a\ll L$, we may convert an integral over $S$ into a summation over ${\cal L}$: $\int_S\dd^2\xv\Rightarrow a^2\sum_{\xv\in {\cal L}}$, and consequently $ A \int_{S_0}\dd^2\yv \Rightarrow a^2 \sum_{\yv \in \tilde{{\cal L}} }$. Applying Parseval's theorem,
we rewrite
\begin{equation}
\begin{split}
\int_{S_0}\dd^2\yv(\partial^2_y v )^2 & = \frac{1}{m^2}\sum_{\kv\in \tilde{{\cal K}} } |\vh_\kv|^2 |\kv|^4\\
&=\frac{1}{m^2}\vec{v}_\yv \cdot\Uu^{\dagger}\Dd(\kv) \Uu\vec{v}_\yv,
\end{split}
\label{eqn:12}
\end{equation}
where $\Dd(\kv) $ is the diagonal matrix with entries $|\kv|^4$, $\kv\in {\cal K}$.
Defining a dimensionless variable $ \pv \equiv \kv/m^4 $, the scaled partition function is
\begin{equation}
\begin{split}
\bar{Z}(\tau) = \tau^{-N/2} \int_{-1}^{1}\prod_{\yv\in \tilde{{\cal L}} } C\dd v_\yv
e^{-\frac{m^2}{2\tau} \vec{v}_\yv \cdot\Uu^{\dagger}\Dd(\pv) \Uu\vec{v}_\yv}
\end{split}
\label{eqn:13}
\end{equation}
For $\tau\rightarrow \infty$ i.e. $d/a\rightarrow 0$, one has
\begin{equation}
\begin{split}
\bar{Z}(\tau) =& \tau^{-N/2}\left[2^N - \frac{m^2}{2\tau} (\Uu^\dagger \Dd\Uu)\cdot\int_{-1}^{1} \prod_{\yv\in\tilde{{\cal L}}}\dd v_{\yv}
\vec{v}_\yv\otimes\vec{v}_{\yv} \right.\\
&\quad\quad \quad\quad\left.+\Oo(\frac{d^4}{a^4})
\right],
\end{split}
\label{eqn:14}
\end{equation}
where the identity $\vec{v}_\yv\cdot\Uu^\dagger\Dd\Uu\vec{v}_{\yv} = (\Uu^\dagger \Dd\Uu)
\cdot(\vec{v}_\yv\otimes \vec{v}_{\yv})$ was used.
Since $\int_{-1}^{1}\prod_{\yv\in \tilde{{\cal L}} } \dd v_{\yv}\vec{v}_\yv\otimes \vec{v}_{\yv}
= \frac{2}{3} 2^{(N-1)}\Ii = \frac{2^N}{3}\Ii$, and the inner product of $\Uu^\dagger\Dd\Uu $
with the identity matrix yields its trace, the reduced partition function in the asymptotic limit is
\begin{equation}
\begin{split}
\bar{Z}(\tau) =& \tau^{-N/2}\left[2^N- \frac{2^N}{3} \frac{m^2}{2\tau}
\sum_{m^4p\in \tilde{\Kk}} p^4 + \Oo(\frac{d^4}{a^4}) \right].
\end{split}
\label{eqn:15}
\end{equation}
It is clear from the definition $g(\tau) = -\partial \ln \bar{Z}/\partial \tau $ that the steric pressure
has the leading order term of $p\approx k_BT/2da^2$. The next correction term can be obtained
using the identity $\sum_{m^4 p\in\tilde{\Kk} } p^4 = (2\pi)^4 \left[ 23 m^2/45 - 26 m/15 + \Oo(1) \right]$, where $m^2 = N$.
The steric pressure in the limit $\tau\rightarrow \infty$ ($d/a\rightarrow 0$) is thus
\begin{equation}
p = -\frac{1}{2}\frac{\partial \Ff}{\partial d}
= \frac{k_BT}{2a^2d}\left[ 1 - (2\pi)^4 \frac{46}{45} \frac{\beta \kappa d^2}{6a^2} +
\Oo(\frac{d^4}{a^4}) \right].
\label{eqn:16}
\end{equation}
\noindent
This pressure law has a very different $d-$dependence
than any known theories or simulations to date
\cite{Helfrich78, Janke86, Kroll89, David90, Netz95, Gouliaev98, Kleinert99}
except of course the work by Freund \cite{Freund13}.
Another interesting point obtained from Eq.(\ref{eqn:16}) is that the first term on the right-hand side has the form of the pressure of an ideal gas. Physically, this implies that in the limit $d/a\rightarrow 0$ the membrane fluctuates like an ideal gas, with a correction term of order $\Oo(d)$ (the second term in Eq.(\ref{eqn:16})). This ideal-gas contribution does not depend on the bending modulus $\kappa$; hence every type of membrane should exhibit the $p \sim 1/d$ pressure law in the limit $d/a\rightarrow 0$.
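As a numerical illustration, the truncated pressure law of Eq.(\ref{eqn:16}) can be evaluated directly; note that the bending modulus $\kappa$ enters only through the correction term. This is a sketch of the asymptotic formula only, valid for $d/a\rightarrow 0$.

```python
from math import pi

def pressure_small_d(d, kT, kappa, a):
    """Eq.(16) truncated at O(d^4/a^4):
    p = kT/(2 a^2 d) * [1 - (2 pi)^4 (46/45) kappa d^2 / (6 kT a^2)]."""
    correction = (2.0 * pi)**4 * (46.0 / 45.0) * kappa * d**2 / (6.0 * kT * a**2)
    return kT / (2.0 * a**2 * d) * (1.0 - correction)
```

In the limit $d\rightarrow 0$ the product $p\,d$ tends to the ideal-gas value $k_BT/2a^2$, and setting $\kappa = 0$ removes the correction entirely.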
In order to elucidate the full $d-$dependence including the limit $d/a\rightarrow 0$, a natural step is to take recourse in numerical Monte Carlo simulations which are discussed in the next section.
\section{Monte Carlo Simulations}
In the Monte Carlo simulations, the spatial coordinates are replaced by a square grid $\{\xv\}$ with lattice constant $a$.
The membrane displacement along the $z-$direction is specified by $u_\xv \equiv u(\xv)$, which is likewise discretized on a grid of spacing $\delta d$. This scheme has been shown to be sufficient
in prior Monte Carlo calculations \cite{Janke86, Lipowsky89}. The Hamiltonian over these lattice points is
\begin{equation}
H = a^2\sum_{x\in{\cal L}}\frac{\kappa}{2}(\partial^2u)_\xv^2,
\label{eqn:17}
\end{equation}
\noindent
where the discretized Laplacian is
\begin{equation}
(\partial^2u)_\xv = \frac{1}{a^2}\left[\sum_{\hat{\rho}\in \text{nbr}} u_{\xv+\hat{\rho}}- 4u_{\xv}\right],
\label{eqn:18}
\end{equation}
and $\hat{\rho}$ runs over the displacement vectors to the four nearest neighbors of the site $\xv$.
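For concreteness, the lattice energy of Eqs.(\ref{eqn:17})-(\ref{eqn:18}) can be evaluated in a few lines. The sketch below assumes periodic boundaries in both directions for simplicity, whereas the production runs described next treat the $x-$direction by message passing.

```python
import numpy as np

def bending_energy(u, kappa=1.0, a=1.0):
    """Discrete Helfrich energy H = a^2 * sum_x (kappa/2) * (lap u)_x^2,
    with the five-point Laplacian of Eq.(18) and periodic boundaries."""
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
           np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u) / a**2
    return a**2 * float(np.sum(0.5 * kappa * lap**2))
```

A flat configuration costs nothing, while a single-site spike of height $h$ costs $10\,\kappa h^2$ (for $a=1$): the Laplacian is $-4h$ at the spike and $h$ at each of its four neighbors.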
\begin{figure}
\centering
$\begin{array}{c}
\includegraphics[scale=0.45,clip]{fig2a.eps} \\
\includegraphics[scale=0.45, clip]{fig2b.eps}\\
\end{array}
$
\caption{\label{fig:2} (a) Discretization of a strip geometry of a membrane. To update the central point (red dot), knowledge of the neighboring points inside the
dashed square is required. The $x-$direction of the strip is handled via message passing, while a periodic boundary
condition is employed along the $y-$direction. (b) A realization of a membrane of size 40 by 40 from an MC run. The energetic parameters
are $\kappa = 1.0$ and $k_BT = 2.0$, while the length scales are $a = 1.0$ and $d=5.0$.}
\end{figure}
Our simulation code was fully parallelized using spatial decomposition. The lattice was divided into strips so that each strip can
be updated independently using the usual Metropolis algorithm \cite{Nakano, Heermann91}. Since the strips are updated
in parallel, some care must be taken with the lattice sites at strip boundaries. Figure \ref{fig:2}(a) illustrates
a typical geometry in the simulations. Updating the central point requires knowledge of the values at all points inside the dashed
square; no other site in this neighborhood may be updated until the update of the central site has been completed. This is accomplished
by an appropriate choice of strip lengths along the $x-$axis and by performing a row-by-row update. In our simulations,
the minimal lattice length along the $x-$axis is 5, so that there is no overlap between the dashed
squares of Figure \ref{fig:2}(a). A message passing routine was employed in updating the top and bottom two rows of each strip,
while along the $y-$axis a simple periodic boundary condition was used. With these choices, our lattice sizes are 50 by 50, 100 by 100, 600 by 600, and 1000 by 1000.
An MC realization is shown in Figure \ref{fig:2}(b), where the parameters used for its generation are listed in the caption.
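A serial, single-site sketch of the update scheme described above is given below. The hard-wall constraint $|u_{\xv}|\le d$ is enforced by outright rejection, and only the part of the Hamiltonian affected by the move (the Laplacian at the updated site and at its four neighbors) is recomputed. The parameter defaults match the values used in our runs, but the code is illustrative rather than the parallel production implementation.

```python
import numpy as np

def _lap(u, i, j, a):
    """Five-point Laplacian at site (i, j), periodic boundaries."""
    m = u.shape[0]
    return (u[(i + 1) % m, j] + u[(i - 1) % m, j] +
            u[i, (j + 1) % m] + u[i, (j - 1) % m] - 4.0 * u[i, j]) / a**2

def _local_energy(u, i, j, kappa, a):
    """Part of H affected by u[i, j]: Laplacians at (i, j) and its neighbors."""
    m = u.shape[0]
    sites = [(i, j), ((i + 1) % m, j), ((i - 1) % m, j),
             (i, (j + 1) % m), (i, (j - 1) % m)]
    return a**2 * sum(0.5 * kappa * _lap(u, p, q, a)**2 for p, q in sites)

def metropolis_sweep(u, d, beta, kappa=1.0, a=1.0, delta_d=0.1, rng=None):
    """One sweep of single-site Metropolis updates with hard walls at +/- d."""
    rng = rng or np.random.default_rng()
    m = u.shape[0]
    for i in range(m):
        for j in range(m):
            step = delta_d * rng.choice([-1.0, 1.0])
            if abs(u[i, j] + step) > d:
                continue  # proposal hits a wall: automatic rejection
            e_old = _local_energy(u, i, j, kappa, a)
            u[i, j] += step
            d_e = _local_energy(u, i, j, kappa, a) - e_old
            if d_e > 0.0 and rng.random() >= np.exp(-beta * d_e):
                u[i, j] -= step  # Metropolis rejection
    return u
```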
In addition to the ensemble average of the energy $\bar{\Ee}$, the second physical quantity needed for this problem is the pressure. The pressure can be derived by differentiating the free energy with respect to the inter-membrane separation. Alternatively, and more conveniently, the pressure can be related
to the ensemble average of the Hamiltonian density $\Hh = H/ Na^2$.
Rewriting Eq.(\ref{eqn:4}) as
\begin{equation}
Z = (Cd)^N\int_{-1}^{1}\prod_{\yv\in\tilde{{\cal L}}} \dd v_{\yv} e^{-\frac{1}{ \tau} \tilde{H}(\{ v_\yv \}) }
\label{eqn:19}
\end{equation}
where $\tilde{H} = (1/2)\int_{S_0}\dd^2\yv (\partial^2_\yv v)^2 \equiv \tau \beta H $ is a scaled Hamiltonian. The membrane
configuration $\{u_\xv\}$ is parameterized by the set of variables $\{v_\yv = u_{\xv}/ d\}$.
Differentiation of the free energy can be carried out straightforwardly using these
rescaled parameters, a concept that also appears in the derivation of
the Hellmann-Feynman forces in quantum mechanics \cite{Feynman39}. It follows that
\begin{equation}
\begin{split}
\frac{\partial F}{\partial d} =\frac{-k_BT}{Z} &
\left[
(Cd)^N \int_{-1}^{1} \prod_{\yv \in \tilde{{\cal L}}} \dd v_{\yv} \left[-\frac{2 \tilde{H}(\{v_{\yv}\})}{d\tau} \right] e^{-\frac{1}{\tau} \tilde{H}(\{v_{\yv}\})}\right.\\
&\left.
+\frac{N}{d} (Cd)^N \int_{-1}^{1} \prod_{\yv\in\tilde{{\cal L}}} \dd v_{\yv} e^{-\frac{1}{\tau} \tilde{H}(\{v_\yv\} )} \right]
\end{split}
\label{eqn:20}
\end{equation}
Using the concept of ensemble average and Eq.(\ref{eqn:19}), the derivative
of the free energy is
\begin{equation}
\frac{\partial F}{\partial d} = \frac{2}{d}\ave{H} - \frac{N k_B T}{d}
\label{eqn:21}
\end{equation}
In terms of the energy densities $\Ff = F/ A$ and $\Hh = H/A$ for a continuous membrane, or $\Ff = F/Na^2$ and $\Hh = H/Na^2$ for a discretized membrane, where $N$ is the number of molecules and $a^2$ is the area of the square encompassing each of them,
\begin{equation}
\frac{\partial \Ff}{\partial d} = \frac{2}{d}\ave{\Hh} - \frac{Nk_BT}{L^2 d }
= \frac{2}{d}\ave{\Hh} - \frac{k_BT}{a^2d}.
\label{eqn:22}
\end{equation}
Consequently, for a membrane situated between two rigid plates with separation $2d$, the steric
pressure is
\begin{equation}
p = -\frac{\partial \Ff}{\partial (2d) } = \frac{k_BT}{2a^2d} - \frac{1}{d}\ave{\Hh} = \frac{k_BT}{2a^2d} - \frac{1}{d}\bar{\Ee},
\label{eqn:23}
\end{equation}
where $\bar{\Ee} = \ave{\Hh}$.
The above expression for the pressure is exact and makes its computation straightforward. Furthermore, the error on
the pressure $p$ stems only from the error on $\ave{\Hh}$, which for sufficiently many Monte Carlo
time steps becomes small compared to the value of $\ave{\Hh}$.
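As a minimal sketch, the estimator of Eq.(\ref{eqn:23}) can be applied directly to the Monte Carlo output; the numerical example below uses the parameter and energy values reported for Figure \ref{fig:3} ($d=5$, $a=1$, $k_BT \approx 20$, $\bar{\Ee}=9.080$).

```python
def steric_pressure(mean_energy_density, d, kT, a=1.0):
    """Eq.(23): p = kT/(2 a^2 d) - Ebar/d, with Ebar the Monte Carlo
    estimate of the Hamiltonian density <H>/(N a^2)."""
    return kT / (2.0 * a**2 * d) - mean_energy_density / d

# Figure 3 example: d = 5.0, a = 1.0, k_B T = 20.0, Ebar = 9.080
p_example = steric_pressure(9.080, d=5.0, kT=20.0)
```

The statistical error propagates linearly, $\sigma_p = \sigma_{\bar{\Ee}}/d$, which is why long runs that shrink $\sigma_{\bar{\Ee}}$ translate directly into a well-determined pressure.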
The validity of Eq.(\ref{eqn:23}) for both the asymptotic and general form of the pressure can
be easily checked. In the asymptotic limit $\tau\rightarrow\infty$ or $d/a\rightarrow 0$,
Eq.(\ref{eqn:15}) gives the free energy density $\Ff = -k_B T \ln Z/Na^2 $ of the form
\begin{equation}
\Ff(\tau) = -\frac{k_BT}{a^2}\left[ \ln(2dC) - (2\pi)^4\frac{23}{45}\frac{\beta\kappa d^2}{6 a^2} +\Oo(\frac{d^4}{a^4})
\right].
\label{eqn:24}
\end{equation}
It then follows that
\begin{equation}
\bar{\Ee} = \frac{\partial}{\partial \beta} \beta \Ff \approx (2\pi)^4 \frac{23}{45}\frac{\kappa d^2}{6 a^4}.
\label{eqn:25}
\end{equation}
\noindent
and
\begin{equation}
p \approx \frac{k_BT}{2a^2d} - (2\pi)^4 \frac{23}{45}\frac{\kappa d}{6 a^4},
\label{eqn:26}
\end{equation}
in agreement with Eq.(\ref{eqn:16}). In the general case, it can be deduced from Eq.(\ref{eqn:7}) that
\begin{equation}
\bar{\Ee}
= \frac{k_BT}{2a^2} - \frac{(k_BT)^2}{\kappa d^2}g(\tau).
\label{eqn:27}
\end{equation}
As a result, the pressure obtained from Eq.(\ref{eqn:23}) is
\begin{equation}
p = \frac{(k_BT)^2}{\kappa d^3}g(\tau),
\label{eqn:28}
\end{equation}
\noindent
which is again in agreement with Eq.(\ref{eqn:8}).
\begin{figure}
\includegraphics[scale=0.4, clip]{fig3.eps}
\caption{\label{fig:3} A typical heating curve of a confined membrane. Circles represent MC results, while
the dashed line corresponds to the heating of a one-dimensional ideal gas. The rigid wall separation
is $d=5.0$, the membrane discretization spacing is $a = 1.0$, and the bending modulus is $\kappa = 1.0$.
The membrane starts to experience the presence of the walls at about $k_B T > 10\kappa$.
The error on the energy is about $\pm 0.01$; for example, the energy
of the far right point is $\bar{\Ee} = 9.080\pm 0.009$. }
\end{figure}
\begin{figure}
$\begin{array}{c}
\includegraphics[scale=0.5, clip]{fig4a.eps}\\
\includegraphics[scale=0.5, clip]{fig4b.eps}\\
\includegraphics[scale=0.5, clip]{fig4c.eps}\\
\end{array}
$
\caption{\label{fig:4}Simulation results (symbols) for the pressure $p$ versus the distance $d$ between two rigid walls, in log-log format. The lines are drawn to guide the eye.
In subfigures (a), (b), and (c), bending moduli of $\kappa = 0.1 k_BT$, $5 k_BT$, and $20 k_BT$ are used, respectively, whereas in all subfigures $\delta d = 0.1$, $a=1.0$, and $k_BT =10.0$. For each bending modulus $\kappa$, the sizes of the membranes $L\times L$ are shown in the insets. The scaling pressure is $p_n = k_B T/a^3$. }
\end{figure}
\begin{figure}
$\begin{array}{c}
\includegraphics[scale=0.6, clip]{fig5.eps}
\end{array}
$
\caption{\label{fig:5} Simulation results (symbols) of the $p-d$ dependence at the longer range of $d$'s. The sizes of the membrane are shown in the labels.
The other simulation parameters are $k_BT = 10.0$, $\kappa = 1.0$, and $a = 1.0 $. The scaling pressure is $p_n = k_B T/a^3$. }
\end{figure}
In most simulation runs, at least $8\times 10^4$ MC time steps were performed. A Monte Carlo time step is defined as the number of Monte Carlo updates divided by the number of grid points of the membrane. The first 3000 time steps were discarded for thermalization. The calculated statistical errors of the energy estimators are a few percent. A typical simulation result for the heating of a membrane is shown in Figure \ref{fig:3}. The size of the membrane in this figure is 120 by 120, with $a = 1.0$, $d = 5.0$, and $\kappa =1.0$. To give a sense of the errors on the average energy, the measured internal energy density in Figure \ref{fig:3} at $k_B T/\kappa \approx 20$ is $\bar{\Ee} = 9.080\pm 0.009$.
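Since the precise update scheme is not spelled out above, the following Python sketch shows one plausible way a single Monte Carlo time step could be organised: single-site Metropolis moves of a discretized membrane $h(i,j)$, with the bending energy built from the standard 5-point Laplacian and the hard walls placed at $\pm d$ for illustration (both choices are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sweep(h, d, kappa, kBT, a, step=0.2):
    """One MC time step: L*L single-site Metropolis updates of a membrane
    h(i,j) confined between hard walls at +/- d (periodic boundaries)."""
    L = h.shape[0]
    def lap(f, i, j):  # 5-point discrete Laplacian
        return (f[(i+1)%L, j] + f[(i-1)%L, j]
                + f[i, (j+1)%L] + f[i, (j-1)%L] - 4*f[i, j]) / a**2
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        new = h[i, j] + step * rng.uniform(-1, 1)
        if abs(new) >= d:              # hard-wall constraint: reject outright
            continue
        old = h[i, j]
        # only the five Laplacians touching site (i,j) change
        sites = [(i, j), ((i+1)%L, j), ((i-1)%L, j), (i, (j+1)%L), (i, (j-1)%L)]
        e_old = sum(lap(h, *s)**2 for s in sites)
        h[i, j] = new
        e_new = sum(lap(h, *s)**2 for s in sites)
        dE = 0.5 * kappa * a**2 * (e_new - e_old)   # bending-energy change
        if dE > 0 and rng.random() >= np.exp(-dE / kBT):
            h[i, j] = old              # Metropolis rejection
    return h
```

Updating only the five affected Laplacians keeps each move $\Oo(1)$, so a time step costs $\Oo(L^2)$.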
The pressure-distance relation for separations ranging from $0.1$ to $7.0$ is shown in Figure \ref{fig:4}.
The simulation parameters are $k_BT = 10.0$, $\delta d = 0.1$, and $a = 1.0 $, while the bending moduli are $\kappa = 0.1 k_BT$, $5k_BT$, and $20k_B T$, respectively.
The sizes of the membranes shown are $L\times L = 50\times 50$, $600\times 600$, and $1000\times 1000$.
From these log-log plots, it is clear that the
relation $p\sim 1/d $ holds at small $d$, while there is a transition to $p\sim 1/d^3$ as the
separation distance increases. The crossover from the $1/d^3$
to the $1/d$ dependence can be estimated from Eq.(\ref{eqn:13}) as
\begin{equation}
d < d_{\text{tran}} = \sqrt{\frac{2 k_BT a^2}{\kappa}},
\label{eqn:29}
\end{equation}
which, for the parameter choice $k_B T = 10$, $\kappa = 1.0$, $a=1.0$, gives a transition length of
$d_{\text{tran}} \approx 4.5$, or about five times
the spacing between molecules ($ a= 1.0$). For more realistic values of $\kappa \approx 20 k_B T$ and $a \approx 8$ \AA$\,$ \cite{Goetz98}, the transition length
is $d_\text{tran} \approx 2.5\,$\AA.
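The two numerical estimates just quoted follow directly from Eq.(\ref{eqn:29}); a one-line helper (added here for illustration) reproduces both:

```python
import math

def transition_length(kBT, a, kappa):
    """Crossover separation of Eq. (29) between the 1/d (ideal-gas)
    and 1/d^3 (Helfrich) pressure regimes: d_tran = sqrt(2 kBT a^2 / kappa)."""
    return math.sqrt(2.0 * kBT * a**2 / kappa)
```

For the simulation parameters this gives $\sqrt{20}\approx 4.5$, and for $\kappa = 20 k_BT$, $a = 8\,$\AA\ it gives $\approx 2.5\,$\AA, matching the values in the text.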
Is the transition from $1/d$ to $1/d^3$-pressure law a result of finite size effects?
Our MC results in Figure \ref{fig:4} for membranes as large as $1000\times1000$ indicate otherwise.
As evident, the smaller membrane of size $L\times L = 50\times 50$ exhibits the same transition length $d_\text{tran}$ as the
much larger ones. Since in real biological membranes
the number of possible excitation modes is perhaps much larger than in our model, the transition from the ideal-gas $1/d$ to the $1/d^3$ pressure law could possibly
represent some new physics---the exploration of this is deferred to future work. Our results are easily interpreted within the context of the free energy of a tightly confined membrane. At small $d/a$, the bending
energy contribution to the free energy is negligible compared to the entropic contribution. The suppression of the elastic effects hence leads to the limiting pressure law
of the ideal gas.
Nevertheless, it should be noted that the transition length $d_\text{tran}$ depends
on the bending modulus, temperature, and intermolecular spacing as indicated in Eq.(\ref{eqn:29}).
The pressure-law transition from the $1/d$ to the $1/d^3$ dependence may have important implications for the interactions among living cells, potentially opening
a new avenue for reevaluating the conventional understanding of how biological cells mechanically interact.
A larger range of the $p-d$ dependence is shown in Figure \ref{fig:5}. The sizes of the membranes, which are $L$ by $L$, are shown in the labels. At large $d$ the pressure follows an exponential decay of the type $p \sim A \exp(- \lambda d)$, resulting in log-log plots of the form $y= D-\lambda \exp(x)$,
where $x\equiv \ln d$, $y \equiv \ln p$, and $D \equiv \ln A$. This exponentially decaying regime has not been confirmed
by any simulation or theory thus far, despite a speculation in Ref. \cite{Janke86}. An interesting conclusion of this result is that the Helfrich entropic force is not \emph{really} as long-ranged as previously believed.
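The decay constant $\lambda$ and amplitude $A$ of such a tail can be extracted by a linear fit of $\ln p$ against $d$, since $\ln p = \ln A - \lambda d$. A minimal sketch (an illustration added here, not the authors' analysis code):

```python
import numpy as np

def fit_exponential_tail(d, p):
    """Fit p ~ A exp(-lam * d) by linear regression of ln(p) on d.
    Returns (lam, A); the slope of ln(p) vs d is -lam."""
    lnp = np.log(p)
    slope, lnA = np.polyfit(d, lnp, 1)
    return -slope, np.exp(lnA)
```

Applied to the large-$d$ portion of the simulated $p(d)$ data, this yields the parameters of the exponential regime.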
\section{Conclusions}
In summary, we conclude that Freund's \cite{Freund13} conclusions are correct for short inter-membrane distances, and that in this regime his result is a major modification of the well-accepted entropic force law due to Helfrich \cite{Helfrich78}. However, Helfrich's law is correct for intermediate distances and, finally, for large separations the entropic force decays exponentially. At the time of writing this manuscript, we became aware of a preprint by T. Auth and G. Gompper who, at least for short and intermediate membrane separations, have reached conclusions similar to ours.
The physical consequences of the modification of the entropic force law between membranes remains an open problem and is expected to be an interesting avenue for future research.
\begin{acknowledgments}
P. Sharma gratefully acknowledges helpful discussions with Professor Ben Freund and his encouragement to pursue this work. Y. Hanlumyuang thanks Dr. Xu Liu and Professor Aiichiro Nakano for answering several questions regarding parallel computations, and Dr. Dengke Chen for countless insightful discussions.
\end{acknowledgments}
\section{Kriging}
In Universal Kriging, the trend term in relation (\ref{1})
is an unknown linear combination of known
functions $f_j(\cdot)$ with unknown coefficients $\beta_j$, that is
\begin{eqnarray*}
\mu(t) = \sum_{j=1}^{p+1}\beta_{j-1}f_{j-1}(t)
\end{eqnarray*}
where $\beta = (\beta_0, \dots ,\beta_p)^{\prime}\in R^{p+1}$ is
an unknown vector of parameters. Furthermore, the data {\bf Z} can
be written as
\begin{eqnarray*}
{\bf Z}= X \beta + \delta
\end{eqnarray*}
where $X$ is an $n \times (p+1)$ matrix whose $(i,j)$th element is
$f_{j-1}(t_i)$.
It is desired to predict $Z(t_0)$ linearly from data {\bf Z},
that is
\begin{eqnarray}
\label{3} \hat Z (t_0) = \lambda^{\prime} {\bf Z}, ~~~~~~~~\lambda
^{\prime} X = x^{\prime}
\end{eqnarray}
which is uniformly unbiased ($E[\hat Z (t_0)] = E[Z(t_0)]$) and
minimizes the mean squared error $\sigma^2_e = E[(\hat Z
(t_0) - Z(t_0))^2]$ over $\lambda=(\lambda_1, \dots, \lambda_n)$.
The assumption $\lambda ^{\prime} X = x^{\prime}$ in equation
(\ref{3}) is equivalent to the uniform unbiasedness condition, where
$x= (f_0(t_0),\dots,f_p(t_0))^{\prime}$. Then the optimal value of
$\lambda$ in relation (\ref{3}) is
\begin{eqnarray}
\label{5}
\lambda^{\prime} =[C+X(X'\Sigma^{-1} X)^{-1}(x - X'\Sigma^{-1}C)]^{\prime}~ \Sigma^{-1}
\end{eqnarray}
where $C = (c(t_0-t_1),...,c(t_0-t_n))'$ and $\Sigma$ is an $n
\times n$ matrix with $\it{(i, j)}$th element $c(t_i-t_j)$. The
Kriging variance can be written as
\begin{eqnarray}
\label{5'}
\sigma^2(t_0) = c(0) -C' \Sigma^{-1} C+(x
-X'\Sigma^{-1} C)' (X'\Sigma^{-1} X)^{-1} (x - X'\Sigma^{-1} C)
\end{eqnarray}
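The optimal weights of equation (\ref{5}) can be computed by a few linear solves; the following Python sketch (added here for illustration, with assumed function and variable names) builds $\lambda$ directly from $\Sigma$, $X$, $C$, and $x$:

```python
import numpy as np

def universal_kriging_weights(Sigma, X, C, x):
    """Optimal universal-kriging weights of Eq. (5):
    lambda = Sigma^{-1} [ C + X (X' Sigma^{-1} X)^{-1} (x - X' Sigma^{-1} C) ].
    By construction these weights satisfy the unbiasedness constraint
    lambda' X = x'."""
    Si_C = np.linalg.solve(Sigma, C)       # Sigma^{-1} C
    Si_X = np.linalg.solve(Sigma, X)       # Sigma^{-1} X
    M = X.T @ Si_X                         # X' Sigma^{-1} X
    corr = X @ np.linalg.solve(M, x - X.T @ Si_C)
    return np.linalg.solve(Sigma, C + corr)
```

The predictor is then $\hat Z(t_0) = \lambda'{\bf Z}$, and substituting $\lambda$ into the quadratic form gives the kriging variance of (\ref{5'}).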
When $p=0$ and $f_0 (t)=1$, universal kriging reduces to ordinary
kriging.
In universal kriging, the optimal value of
$\lambda$ (equation (\ref{5})) can be written as $\lambda_U =
\Sigma_U^{-1} C_U$, where $\lambda_U
=(\lambda_1,\dots,\lambda_n,-m_0,\dots,-m_p)'$ and the $m_i$'s are
Lagrange multipliers that ensure $\lambda ^{\prime} X =
x^{\prime}$, and $C_U =(c(t_0
-t_1),\dots,c(t_0-t_n),1,f_1(t_0),\dots,f_p(t_0))^{\prime}$. Then the
kriging predictor at $t_0$ is
kriging predictor at $t_0$ is
\begin{eqnarray}
\label{b1}
\hat{Z}(t_0) ={\bf Z'}_U \Sigma_U ^{-1} C_U =V'_U C_U
\end{eqnarray}
where $V_U =\Sigma_U^{-1} {\bf Z}_U$ , ${\bf Z}_U
=(Z(t_1),...,Z(t_n),0,...,0)'$ which is an $(n+p+1)\times 1$
vector. In equation (\ref {b1}) by writing $V'_U =(V'_1 ,V'_2)$ so
that $V_1$ is $n \times 1$ and $V_2$ is $(p+1) \times 1$, then
$V_U=\Sigma_U ^{-1} Z_U
=\left[\begin{array}{cc}
\Sigma & X \\
X' & O
\end{array}\right]^{-1} \left[\begin{array}{c}
Z \\
0
\end{array}\right]=\left[\begin{array}{c}
V_1 \\
V_2
\end{array}\right]
$ or $\left[\begin{array}{cc}
\Sigma & X \\
X' & o
\end{array}\right]\left[\begin{array}{c}
V_1 \\
V_2
\end{array}\right]=\left[\begin{array}{c}
Z \\
0
\end{array}\right]$
and dual kriging equations is obtained as
\begin{eqnarray}
\label{12}
\left\{
\begin{array}{ll}
\Sigma V_1 +X V_2 = Z \\
X'V_1= 0
\end{array}
\right.
\end{eqnarray}
By solving this system and replacing in relation (\ref{3}),
predictor of $Z(t_0)$ can be written as
\begin{eqnarray*}
\hat{Z} (t_0) =V'_1 C+V'_2{\bf x}
\end{eqnarray*}
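The advantage of the dual formulation is that the saddle system (\ref{12}) is solved once for $(V_1, V_2)$, after which prediction at any $t_0$ is a dot product. A hedged Python sketch (illustrative names; not from the original paper):

```python
import numpy as np

def dual_kriging_predict(Sigma, X, Z, C, x):
    """Solve the dual kriging system of Eq. (12),
        [Sigma X; X' 0] [V1; V2] = [Z; 0],
    then evaluate the predictor Zhat(t0) = V1' C + V2' x."""
    n, q = X.shape
    A = np.block([[Sigma, X], [X.T, np.zeros((q, q))]])
    rhs = np.concatenate([Z, np.zeros(q)])
    V = np.linalg.solve(A, rhs)
    V1, V2 = V[:n], V[n:]
    return V1 @ C + V2 @ x
```

Note that at a data site $t_0 = t_i$ (so $C = \Sigma$'s $i$th column and $x = X$'s $i$th row) the first block row of the system forces $\hat Z(t_i) = Z_i$: the dual predictor interpolates the data exactly.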
\section{Spline}
Data ${\bf Z}$ of a random field $Z(\cdot)$ are given at locations $\{t_i
\in D \subset R^d , d>1\}$. Consider the problem of estimating the
unknown function $g$ in the model
\begin{eqnarray}
\label{11} Z_i =g(t_i)+e_i ,\hspace{5mm} i=1,...,n
\end{eqnarray}
To fit $g$ properly, the penalized sum of squares criterion is
defined as
\begin{eqnarray}
\label{b2} S(g,\lambda) = \Sigma_{i=1}^n (Z_i -g(t_i))^2 + \alpha
J_{r+1}^d(g)
\end{eqnarray}
where $\alpha > 0$ is the smoothing parameter. A function $\hat g$
which minimizes the penalized sum of squares criterion is called a
spline. The second term in equation (\ref {b2}) is
\begin{eqnarray*}
J_{r+1}^d(g) &=& \int|\nabla^{r+1}g(t)|^2 dt \cr
&=& \sum_{|m|=r+1} {r+1\choose m}
\int\left(\frac{\partial^{r+1}g(t)} {\partial
t[1]^{m[1]}\cdots\partial t[d]^{m[d]}}\right)^2 dt
\end{eqnarray*}
where $\nabla^{r+1}$ is the $(r+1)$-fold iterated gradient of $g$, $t
=(t[1],\dots,t[d])$, $m=(m[1],\dots ,m[d])$ and $|m|=m[1]+\dots+m[d]$.
For $d=2$, a function $\hat{g}$ which minimizes the penalized sum of
squares (\ref {b2}) is called a thin plate spline. For determining a
proper value of $\alpha$, one can refer to
Gu (2002), Hart (2005) and Hardle (2006).
We now derive the dual equations for the spline in the case $d=2$
(since the dimension of our data is $d=2$). The smoothing spline of degree 2 is
\begin{eqnarray}
\label{13} \hat{Z} (t_0) =a_0 +a_1 x_0 +a_2 y_0 +\Sigma_{i=1}^n
b_i e(t_0 -t_i)
\end{eqnarray}
where
\begin{eqnarray*}
e({\bf h}) ={||{\bf h}||}^2\log({||{\bf h}||}^2)/16\pi
\end{eqnarray*}
In relation (\ref{13}), $a = (a_0,a_1,a_2)'$ and $b = (b_1,\dots,
b_n)'$ solve
\begin{eqnarray}
\label{15}
\left\{
\begin{array}{ll}
K_{\alpha} {\bf b} +X{\bf a} = Z \\
X'{\bf b} =0
\end{array}
\right.
\end{eqnarray}
where $K_{\alpha} = K + n \alpha I$, with $K$ an $n \times n$ matrix with $\it(i,j)$th
element
$e(t_i - t_j)$; $X$ is an $n \times 3$ matrix with $\it{i}$th row
$(1, x_i, y_i)$, $t_i = (x_i,y_i)'$, and $0\leq \alpha \leq \infty$
is the smoothing parameter.
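Equations (\ref{13})-(\ref{15}) translate directly into a small linear-algebra routine. The following Python sketch (an illustration added here, with assumed interface) assembles the kernel matrix, solves the saddle system for $(a, b)$, and evaluates the thin plate spline at a new point:

```python
import numpy as np

def tps_fit_predict(t, Z, alpha, t0):
    """Thin-plate smoothing spline of Eqs. (13)-(15).
    t: (n,2) sites, Z: (n,) data, alpha: smoothing parameter, t0: (2,) point."""
    n = len(Z)
    def e(h2):  # kernel e(h) = ||h||^2 log(||h||^2) / (16 pi), with e(0) = 0
        safe = np.where(h2 > 0, h2, 1.0)
        return np.where(h2 > 0, h2 * np.log(safe), 0.0) / (16.0 * np.pi)
    d2 = ((t[:, None, :] - t[None, :, :])**2).sum(-1)
    K = e(d2)
    X = np.hstack([np.ones((n, 1)), t])            # i-th row (1, x_i, y_i)
    A = np.block([[K + n * alpha * np.eye(n), X],  # K_alpha b + X a = Z
                  [X.T, np.zeros((3, 3))]])        # X' b = 0
    sol = np.linalg.solve(A, np.concatenate([Z, np.zeros(3)]))
    b, a = sol[:n], sol[n:]
    c2 = ((t - t0)**2).sum(-1)
    return a[0] + a[1] * t0[0] + a[2] * t0[1] + b @ e(c2)
```

A useful sanity check: data generated by an affine function (the null space of the penalty) are reproduced exactly, since ${\bf b}=0$ solves the system.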
\section{Application of Spline and Kriging to Prediction}
Dual equations (\ref{12}) and (\ref{15}) show that
these equations for universal kriging and for the spline have the same form;
the spline simply uses a generalized covariance instead of the
covariogram. In the kriging method, when the second-order stationarity
condition is not satisfied, or whenever IRF-$k$'s are used,
generalized covariances are applied. Therefore the dual equations of the
kriging and spline methods are equal. Consequently the
kriging and spline methods are theoretically similar, but they can be
different in practice. In the next section these two methods are
compared in an epidemiological problem.
\subsection{Data Set and Practical Comparison}
Here, data on tuberculosis infection prevalence in the cities of
Iran in the year 1999 are considered. The random field is
nonstationary and the data have a trend; therefore the data are detrended
by median polishing. To estimate the covariogram, the classical estimator
is applied, and a Gaussian model is chosen as the best covariogram model
for this data set.
To compare the methods' performances, a criterion should be
considered. Cross validation is a popular means of assessing
statistical estimation and prediction. If the variogram model
adequately described the spatial dependencies implicit in the data set,
then the predicted value $\hat{Z} (t_0)$
should be close to the true value $Z(t_0)$. Ideally, additional
observations on $Z(\cdot)$ would be taken to check this, or some of
the data might initially be set aside to validate the spatial predictor. More
likely, all of the data are used to fit the variogram and build the
spatial predictor, and there is no possibility of taking more
observations. In this case the cross-validation approach can be
used. Let $2\gamma(h,\hat{\theta})$ be the fitted variogram model
(obtained from the data); now delete a datum $Z(t_j)$ and predict
it with $\hat{Z}_{-j} (t_j)$ [based on the $2\gamma(h,\hat{\theta})$
variogram estimator and the data ${\bf Z}$ without $Z(t_j)$]. Its
associated mean-square prediction error is $\sigma _{-j}
^2(t_j)$, which depends on the fitted variogram model.
The closeness of the prediction values to the true values can be
characterized by the standardized mean square error of prediction
\begin{eqnarray*}
\label{14} MSP=\left[\frac{1}{n} \sum_{j=1}^n \left(\frac{Z(t_j) -
\hat{Z}_{-j}(t_j)}{\sigma_{-j} (t_j)}\right)^2\right]^{1/2}.
\end{eqnarray*}
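Once the leave-one-out predictions $\hat Z_{-j}(t_j)$ and their standard errors $\sigma_{-j}(t_j)$ are available, the criterion is a one-liner; the sketch below (added for illustration) computes it:

```python
import numpy as np

def msp(Z_true, Z_loo, sigma_loo):
    """Standardised mean-square error of prediction:
    MSP = [ (1/n) sum_j ((Z(t_j) - Zhat_{-j}(t_j)) / sigma_{-j}(t_j))^2 ]^{1/2}.
    A well-calibrated predictor gives MSP close to 1."""
    r = (np.asarray(Z_true) - np.asarray(Z_loo)) / np.asarray(sigma_loo)
    return np.sqrt(np.mean(r**2))
```

The smaller the MSP, the better the predictor tracks the held-out observations relative to its own stated uncertainty.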
In this paper, the spline and kriging methods are compared by this
criterion, and the better method, namely the one with the smaller MSP, is
determined. For this data set, a Gaussian model with nugget effect
equal to 39.8 is the best covariogram model for kriging
prediction. In the spline method the smoothing parameter should be
determined; for this data set, the best value, which minimizes the
penalized sum of squares criterion, equals $\alpha=208.6601$.
The cross-validation criterion is applied to compare the methods.
Programs for the computations are written in the R and SPLUS environments
for this two-dimensional data set.
The cross-validation criterion in the kriging method equals
0.0239, while in the spline method it equals 0.0461.
Consequently, the kriging method performs better than the spline
for this data set. This result is reasonable because in the
spline method a particular generalized covariance function is usually used,
whereas in kriging this function is characterized from the data.
Therefore, for some data sets, the kriging method can have better
performance than the spline.
\section {Conclusion}
Under certain conditions the kriging and spline methods are
equivalent, but in practice there
are differences between these methods. For instance, in the spline method
a particular generalized covariance function is usually used, whereas in kriging
this function is determined from the data; therefore it is
expected that kriging has a better performance in some situations.
In this paper these methods are applied to predict the rate of
tuberculosis infection prevalence, which is a notable problem in
medicine. The data were measured at two-dimensional sites and the
computations are carried out in the R and SPLUS environments. For this
data set, the computations show that the kriging method has a better
performance than the spline. Consequently, kriging can
be a preferable method of prediction.
\section{Introduction}
Scalar-tensor theories of gravity have attracted much attention since the pioneering example of Brans-Dicke theory~\cite{Brans:1961sx}. The physical relevance of such models could be tested, in particular, in strong gravity systems, namely black holes (BHs). On the one hand, as it turns out, the BH solutions in Brans-Dicke theory, as well as in a large class of models where the scalar field is non-minimally coupled to the Ricci scalar, are the same as in General Relativity (GR)~\cite{Hawking:1972qk,Sotiriou:2011dz}. On the other hand, BHs in extended scalar-tensor models, namely those with higher curvature corrections are, generically, different from those of GR~\cite{Herdeiro:2015waa}.
Within the class of scalar-tensor theories that possess higher curvature corrections, those including a real scalar field, $\phi$, with a canonical kinetic term, non-minimally coupled to the Gauss-Bonnet (GB) quadratic curvature invariant,
\begin{eqnarray}
R^2_{\rm GB} \equiv R_{\alpha\beta\mu\nu} R^{\alpha\beta\mu\nu} - 4 R_{\mu\nu} R^{\mu\nu} + R^2 \ ,
\end{eqnarray}
have attracted considerable interest. This is the class of \textit{Einstein-scalar-GB} (EsGB) models described by the action
\begin{eqnarray}
\label{action}
\mathcal{S}=
\int d^4x \sqrt{-g} \left[ R - \frac{1}{2}
\partial_\mu \phi\partial^\mu \phi
+ \alpha f(\phi) R^2_{\rm GB} \right],
\end{eqnarray}
where
$\alpha $ is a dimensionful coupling constant and
$f(\phi)$ is a dimensionless coupling function. In these models, the GB term becomes dynamical in four spacetime dimensions, and the equations of motion remain second order, which is typically not the case when higher curvature corrections are included in the action. Moreover, the GB term as a higher order correction is suggested from string theory~\cite{Zwiebach:1985uq}.
The status of BHs in the family of models~\eqref{action} depends on the properties of $f(\phi)$; its choice determines if $\phi=0$ is a consistent truncation of the equations of motion. There are two generic cases. Following the classification in~\cite{Astefanesei:2019pfq} for a cousin model, we call models where $\phi=0$ is \textit{not} a consistent truncation of the equations of motion {\it class I or dilatonic-type.}
In this class of EsGB models $\phi \equiv 0$ does $not$ solve the field equations.
Thus the Schwarzschild/Kerr BH is not a solution. In terms of the coupling function, this class of models obeys (from the scalar field equation (\ref{KG-eq}) below)
\begin{eqnarray}
\label{condx}
f_{,\phi}(0)\equiv \frac{d f (\phi)}{d \phi}\Big |_{\phi=0} \neq 0\ .
\end{eqnarray}
A representative example of coupling for this class is the standard dilatonic coupling,
$
f (\phi)=e^{\gamma \phi}$, which emerges in Kaluza-Klein theory, string theory and supergravity. In this case $\phi$ is often referred to as the \textit{dilaton} field. BHs in the Einstein-dilaton-GB model were constructed in~\cite{Kanti:1995vq,Kleihaus:2015aje,Kleihaus:2011tg}, where they were shown to have a qualitatively novel feature: a minimal BH size, determined by the coupling constant $\alpha$. Some of these BHs are perturbatively stable~\cite{Kanti:1997br} and aspects of their phenomenology has been considered in $e.g.$~\cite{Cunha:2016wzk,Zhang:2017unx,Blazquez-Salcedo:2017txk}.
Models where $\phi=0$ is a consistent truncation are called
{\it class II or scalarised-type}.
In this case $\phi \equiv 0$ solves the field equations
and thus
Schwarzschild and Kerr BHs are solutions of the full model.
This demands that
\begin{equation}
f_{,\phi}(0)\equiv \frac{d f (\phi)}{d \phi}\Big |_{\phi=0}= 0 \ .
\label{typeii}
\end{equation}
This condition holds, for instance, if one requires the model to be $\mathbb{Z}_2$-invariant under $\phi\rightarrow -\phi$. The Schwarzschild/Kerr BH solution is not, in general, unique.
These EsGB models may contain a second set of BH solutions
with a nontrivial scalar field profile -- {\it the scalarised BHs}.
Such a second set of BH solutions may, or may not, continuously connect with the GR BHs. Models within this class have been recently under scrutiny in relation to BH spontaneous scalarisation - see $e.g.$~\cite{Doneva:2017bvd,Silva:2017uqg,Antoniou:2017acq,Cunha:2019dwb,Collodel:2019kkx}.
Two reference examples of coupling functions in this case are $f_1(\phi)=\gamma \phi^2$ and
$
f _2(\phi)=e^{\gamma \phi^2} \ .
$
Although $f_1$ is the linearisation of $f_2$ (the constant term is irrelevant here) these two models have qualitatively different properties. Namely, the spherical scalarised BHs with the former coupling function are unstable against perturbations; but the ones with the latter coupling function can be stable~\cite{Blazquez-Salcedo:2018jnn}.
In this paper we are interested in a model of class I, the linear coupling or \textit{shift symmetric} model. The coupling function is
\begin{eqnarray}
f(\phi)= \phi~,
\end{eqnarray}
which implies the existence of a shift symmetry: the equations of motion are invariant under the transformation
\begin{eqnarray}
\label{shift-symm}
\phi \to \phi+\phi_0~,
\end{eqnarray}
with $\phi_0$ an arbitrary constant.
This invariance results from the fact that
in four spacetime dimensions
the GB term alone
is a total divergence. BHs in the model~\eqref{action} with~\eqref{shift-symm} were first discussed by Sotiriou and Zhou (SZ)~\cite{Sotiriou:2014pfa,Sotiriou:2013qea}. This model falls within the Horndeski class~\cite{Horndeski:1974wa}, for which a no-scalar-hair theorem had been established~\cite{Hui:2012qt}. However, the SZ solution circumvents this theorem, since one of its assumptions (the finiteness of a certain current) is violated. The SZ solution has a minimal size, such as the BHs in Einstein-dilaton-GB. In fact, the model~\eqref{action} with~\eqref{shift-symm} can be seen as a linearisation of the Einstein-dilaton-GB model, and thus one expects similar properties for the BH solutions of both models. However, as pointed out above, models with a certain coupling function and its linearisation may have different properties. It has also been argued that the SZ solution could emerge dynamically in a gravitational collapse scenario~\cite{Benkel:2016rlz}.
The goal of this paper is to construct and study the basic physical properties of the spinning generalisation of the SZ solution, which, up to now, has not been considered. Astrophysical BHs have angular momentum; thus, considering spinning BHs is fundamental to assess the physical plausibility of any BH model. This is, however, technically more challenging than for spherical BHs, in particular in the presence of higher curvature corrections, such as the GB invariant, as described below.
This paper is organised as follows. In Section~\ref{sec2} we briefly discuss the equations of motion and some relevant properties of the model. In Section~\ref{sec3} we provide a short review of the spherical SZ solutions, as a warm up for the spinning case. In Section~\ref{sec4} we introduce the framework for the construction of spinning BHs, discussing the ansatz, boundary conditions, the physical quantities of interest and the numerical procedure. In Section~\ref{sec5} we describe the spinning BH solutions, its domain of existence, and the behaviour of different physical quantities. In Section~\ref{sec6} we present conclusions and remarks. Two appendices give some technical details on the construction of perturbative and extremal solutions.
\section{The model }
\label{sec2}
We consider a general EsGB model with the action~\eqref{action}. We use units such that $c=1=16\pi G$. Observe that the coupling constant has physical dimension $[\alpha] \sim [L]^2$, where $L$ represents ``length".
Varying the action~ (\ref{action}) with respect to the metric tensor
$g_{\mu\nu}$,
we obtain
the Einstein field equations
\begin{eqnarray}
\label{EGB-eq}
E_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu} R -\frac{1}{2}T_{\mu\nu} =0\ .
\end{eqnarray}
The {\it effective} energy-momentum tensor has two distinct components,
\begin{eqnarray}
\label{Teff}
T_{\mu\nu} = T_{\mu\nu}^{(s)}-2\alpha T_{ \mu\nu}^{(GB)} \ .
\end{eqnarray}
The first one is due to the scalar kinetic term in (\ref{action})
\begin{eqnarray}
T_{\mu\nu}^{(s)}=\partial_{\mu} \phi\partial_{\nu} \phi -\frac12 g_{\mu\nu}\partial_\alpha\phi\partial^\alpha\phi \ ;
\end{eqnarray}
the second one is due to the scalar-GB term in (\ref{action}), and reads
\begin{eqnarray}
\label{Teff2}
T_{\mu\nu}^{(GB)}= P_{\mu\gamma \nu \alpha}\nabla^\alpha \nabla^\gamma f(\phi) \ ,
\end{eqnarray}
where we have defined
\begin{eqnarray}
P_{\alpha\beta\mu\nu} & \equiv & -\frac14 \varepsilon_{\alpha\beta\rho\sigma} R^{\rho\sigma\gamma\delta} \varepsilon_{\mu\nu\gamma\delta} \\
&= & R_{\alpha\beta\mu\nu}+ g_{\alpha\nu} R_{\beta\mu} - g_{\alpha\mu} R_{\beta\nu} + g_{\beta\mu} R_{\alpha\nu}-g_{\beta\nu} R_{\alpha\mu}
+\frac12 \left( g_{\alpha\mu}g_{\beta\nu} - g_{\alpha\nu}g_{\beta\mu}\right) R \ .
\nonumber
\end{eqnarray}
Here, $ \varepsilon_{\alpha\beta\rho\sigma}$ is the Levi-Civita tensor. The equation for the scalar field is
\begin{eqnarray}
\label{KG-eq}
\Box \phi +\alpha \frac{d f (\phi)}{d \phi} R^2_{\rm GB}
=0 \ .
\end{eqnarray}
As pointed out in the introduction, the GB term is a total divergence:
\begin{eqnarray}
\label{totder}
R^2_{\rm GB} =\nabla_\mu P^\mu \ ,
\end{eqnarray}
where the vector $P^\mu$ takes a particularly simple form
\cite{Yale:2010jy}
for a spacetime
possessing a Killing vector $\partial/\partial t$
($t$ is the time coordinate),
\begin{eqnarray}
P^\mu=4 P_{\nu}^{~\alpha \mu t} \Gamma^\nu_{t \alpha} \ .
\end{eqnarray}
Thus the transformation
(\ref{shift-symm})
does not change the equations of the model.
Moreover, (\ref{totder})
implies that the equation for the scalar field (\ref{KG-eq})
can be written as
\begin{eqnarray}
\label{relD}
\nabla_\mu J^\mu=0\ , \qquad {\rm with}~~
J^\mu= \partial^\mu \phi+\alpha P^\mu \ .
\end{eqnarray}
As we shall see,
a consequence of this relation is that
the scalar `charge'
(as read off from the asymptotically leading monopolar mode) is just the Hawking temperature
of the BH~\cite{Prabhu:2018aun}.
In this work we shall be interested in stationary, axially symmetric solutions. They possess two asymptotically measured global charges:
the mass $M$ and the angular momentum $J$.
There is also a scalar charge $Q_s$, but it is not an independent quantity; it depends on the BH mass and angular momentum.
Thus the scalar hair is of secondary type~\cite{Herdeiro:2015waa}.
Also,
note that
the shift symmetry
(\ref{shift-symm})
is broken
by imposing $\phi(\infty)=0$. Horizon quantities of physical interest, on the other hand, include
the Hawking temperature $T_H$,
the horizon area $A_H$
and the entropy $S$,
whose concrete expressions are given below.
Since the equations of the model are invariant under the transformation
\begin{eqnarray}
\label{scale}
r\to \lambda r\ , \qquad \alpha \to \lambda \alpha \ ,
\end{eqnarray}
where $\lambda>0$ is an arbitrary constant, the most meaningful physical quantities must be invariant under (\ref{scale}).
Considering how the
various global quantities transform under this scaling
($e.g.$ $M\to \lambda M$, $J\to \lambda^2 J$, $etc.$) we normalise the various quantities
$w.r.t.$ the mass of the solutions.
In this way, we define the \textit{reduced}
angular momentum $j$,
horizon area $a_H$,
entropy $s$
and
Hawking temperature $t_H$
as
\begin{eqnarray}
\label{scale2}
j\equiv \frac{J}{M^2}\ , \qquad
a_H\equiv \frac{A_H}{16\pi M^2}\ , \qquad s\equiv \frac{S}{4\pi M^2} \ , \qquad
t_H\equiv 8 \pi T_H M \ .
\end{eqnarray}
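The defining property of the reduced quantities in (\ref{scale2}) is their invariance under the scaling (\ref{scale}). As a quick numerical check (added here for illustration; units $16\pi G = c = 1$ as in the text, input values arbitrary):

```python
import math

def reduced_quantities(M, J, AH, S, TH):
    """Reduced quantities of Eq. (scale2):
    j = J/M^2, aH = AH/(16 pi M^2), s = S/(4 pi M^2), tH = 8 pi TH M.
    All four are invariant under r -> lam r, i.e. M -> lam M, J -> lam^2 J,
    AH -> lam^2 AH, S -> lam^2 S, TH -> TH / lam."""
    j = J / M**2
    aH = AH / (16.0 * math.pi * M**2)
    s = S / (4.0 * math.pi * M**2)
    tH = 8.0 * math.pi * TH * M
    return j, aH, s, tH
```

Evaluating on a configuration and on its rescaled copy returns identical tuples, confirming the scale invariance.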
Alternatively, one can
define dimensionless reduced variables $w.r.t.$ the coupling constant $\alpha$
(we recall that $[\alpha] \sim [L]^2$).
\section{Spherically symmetric black holes}
\label{sec3}
Before discussing the case of spinning BHs,
it is of interest to review the construction and basic properties
of the static, spherically symmetric BHs,
the SZ solutions~\cite{Sotiriou:2014pfa,Sotiriou:2013qea}.
As we shall see, they contain valuable information, and share some key properties
with their rotating counterparts, being easier to study since
they are found by solving a set of ordinary differential equations.
Moreover, a perturbative {\it exact} solution is available in the static case,
which is discussed in Appendix~\ref{a1}.
\subsection{The equations and boundary conditions}
The spherical BHs of~\eqref{action} with~\eqref{shift-symm} can be found using
Schwarzschild-like coordinates, with a metric ansatz containing two unknown functions,
\begin{equation}
\label{s1}
ds^2=-N(r) \sigma^2(r) dt^2+\frac{dr^2}{N(r)}+r^2 d\Omega_2^2\ , \qquad {\rm with} \ \ \ N(r)\equiv 1-\frac{2m(r)}{r}\ ,
\end{equation}
where $r$ and $t$ are the radial and time coordinate, respectively,
$d\Omega_2^2$ is the metric on the unit round $S^2$ and
$m(r)$ is the Misner-Sharp mass \cite{Misner:1964je},
which obeys $m(r)\to M$ as $r\to \infty$.
The scalar field $\phi$
is a function of $r$ only. The Schwarzschild
BH corresponds to $\phi=0$, $m(r)=r_H/2=$constant,
$\sigma(r)=1$. One can easily verify that for
$\alpha\neq 0$
this is not a solution of the model in this work.
The advantage of this metric gauge choice is the simple form of the Einstein equations (\ref{EGB-eq}), which yield the
generic relations
\begin{equation}
m'=-\frac{r^2}{4} T_t^{t }\ , \qquad \frac{\sigma'}{\sigma}=\frac{r}{4 N} ( T_r^{r }-T_t^{t }) \ .
\label{ss143}
\end{equation}
For the considered EsGB model,
the diagonal components of the effective energy-momentum tensor contain second derivatives of
the metric functions
$N,\sigma$.
However,
one can find a
suitable combination of the field equations
such that
the functions $m,\sigma$
still solve first order equations. These equations are
\begin{eqnarray}
\label{eq-N}
&&
\left[
1+2\alpha (1-3N)\frac{\phi'}{r}
\right] m'
-
\left\{
\frac{N}{8}r^2 \phi'^2+\alpha(1-N)\left[(1-3N)\frac{\phi'}{r}+2N\phi''\right]
\right\}
=0 \ ,
\\
&&
\label{eqs-spherical}
\frac{\sigma'}{\sigma}
\left[1+2 \alpha (1-3 N)\frac{\phi'}{r}\right]
-\frac{1}{4 r}
\left[ r^2\phi'^2 +8\alpha (1-N) \phi'' \right]
=0 \ .
\end{eqnarray}
The Einstein equations contain also a second order equation which provides a constraint, being a
linear combination of (\ref{eq-N}) and (\ref{eqs-spherical})
together with their first derivatives.
The scalar field $\phi$ is a solution of a 2nd order equation
in terms of $N$ and $\phi'$
only
\begin{eqnarray}
\nonumber
&&
\phi''
\bigg[
1+\frac{2\alpha}{r}(1-7N)\phi'
-\frac{24 \alpha^2}{r^4}
\left[
2(1-N)^2+r^2N(1-3N)\phi'^2
\right]
+\frac{8 \alpha^3 N\phi'}{r^5}
\big[
24(1-N)^2
\\
\nonumber
&&
{~~~}
+r^2\{1+3N(2-5N)\}\phi'^2
\big]
\bigg]
+\frac{1}{r}
\bigg[
\left(1+\frac{1}{N}\right)\phi'
+\frac{2\alpha}{r^3 N}
\bigg[
6(1-N)^2+r^2(1-N-12N^2)\phi'^2
\\
\nonumber
&&
{~~~}
-\frac{1}{8}r^4 N^2 \phi'^4
\bigg]
-\frac{8\alpha^2 \phi'}{r^4}
\bigg[
6(1+N^2)-r^2\phi'^2(1+21N^2)
-N\left(12-10r^2\phi'^2+\frac{1}{8}r^4 \phi'^4\right)
\bigg]
\\
\label{eq-phi}
&&
{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}
+\frac{8\alpha^3}{r^3}
(1-3N)^2(1-5N)\phi'^4
\bigg]=0 \ .
\end{eqnarray}
This approach leads to good accuracy of the numerical results,
and can easily be generalized to an arbitrary coupling function $f(\phi)$.
The approximate form of the solutions valid for large-$r$
reads
\begin{equation}
N(r)=1-\frac{2M}{r}+\frac{Q_s^2}{4r^2}+\dots\ , \qquad
\sigma(r)=1-\frac{Q_s^2}{8r^2}+\dots\ , \qquad
\phi(r)=-\frac{Q_s}{r}- \frac{Q_sM}{r^2}+\dots \ ,
\end{equation}
in terms of mass $M$ and a scalar ``charge" $Q_s$.
Close to the event horizon, located at $r=r_H$,
the solutions possess an approximate expression
as a power series in $r-r_H$, with
\begin{eqnarray}
\nonumber
N(r)&=&N_1(r-r_H)+\dots\ , \qquad
\sigma(r)=\sigma_H+\sigma_1(r-r_H)+\dots\ , \\
\phi(r)&=&\phi_H+\phi_1(r-r_H)+ \phi_2(r-r_H)^2+\dots\ ,
\end{eqnarray}
where
\begin{eqnarray}
N_1=\frac{1}{2\alpha \phi_1+r_H}\ , \qquad \sigma_1=\frac{(16\alpha \phi_2+\phi_1^2 r_H^2)\sigma_H}{4(2\alpha \phi_1+r_H)} \ ,
\end{eqnarray}
while $\phi_2$ is a complicated function of $\phi_1$, $r_H$ and $\alpha$.
The Hawking temperature, horizon area and entropy of the solutions,
as computed from the formalism in the next Section, are given by
\begin{eqnarray}
T_H=\frac{N_1 \sigma_H}{4\pi}\ , \qquad A_H=4\pi r_H^2\ , \qquad S=\pi r_H^2+4\pi \alpha \phi_H \ .
\end{eqnarray}
The field equations imply that
the first derivative of the scalar field, $\phi_1$, is a solution of the quadratic equation
\begin{eqnarray}
\label{eqphi1}
\phi_1^2+\frac{ r_H}{2\alpha }\phi_1+\frac{6}{r_H^2}=0\ ,
\end{eqnarray}
which implies the following condition for
the existence of a real root
\begin{eqnarray}
\label{condi}
\frac{\alpha}{r_H^2}<\frac{1}{4\sqrt{6}}\simeq 0.10206~.
\end{eqnarray}
This requirement translates into
the following coordinate-independent condition
between the horizon size and the coupling constant
$\alpha$
\begin{eqnarray}
\label{cond}
A_H>16\pi \sqrt{6}\alpha\ .
\end{eqnarray}
We remark that $A_H=4\pi r_H^2$ for the metric ansatz employed here.
Thus, for a theory with a given value of the input parameter $\alpha>0$,
the BHs are not smoothly connected with the Minkowski vacuum.
There is a minimal horizon size and a mass gap~\cite{Sotiriou:2014pfa,Sotiriou:2013qea}, just as for BHs in the Einstein-dilaton-GB model~\cite{Kanti:1995vq,Kleihaus:2015aje,Kleihaus:2011tg}.
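As a quick numerical illustration (not part of the original analysis), the existence condition can be checked by solving the quadratic (\ref{eqphi1}) directly; the horizon quantities $N_1$, $T_H$ and $S$ then follow from the expressions given above. The sketch below is a minimal one, assuming $\sigma_H=1$ and $\phi_H=0$ for simplicity:

```python
import math

def phi1_roots(alpha, rH):
    """Real roots of phi1^2 + (rH/(2 alpha)) phi1 + 6/rH^2 = 0, Eq. (eqphi1)."""
    b = rH / (2.0 * alpha)
    disc = b * b - 24.0 / rH**2
    if disc < 0.0:
        return None                      # no real root: no horizon for this (alpha, rH)
    sq = math.sqrt(disc)
    return (-b + sq) / 2.0, (-b - sq) / 2.0

def static_horizon_quantities(alpha, rH, phi1, sigmaH=1.0, phiH=0.0):
    """N_1, Hawking temperature and Wald entropy of the static solution."""
    N1 = 1.0 / (2.0 * alpha * phi1 + rH)
    TH = N1 * sigmaH / (4.0 * math.pi)
    S = math.pi * rH**2 + 4.0 * math.pi * alpha * phiH
    return N1, TH, S

# the critical ratio alpha/rH^2 = 1/(4 sqrt 6) ~ 0.10206 separates the two regimes
assert phi1_roots(0.05, 1.0) is not None   # below the bound: the BH exists
assert phi1_roots(0.11, 1.0) is None       # above the bound: no real phi_1
```

Both roots of (\ref{eqphi1}) are negative (their product is $6/r_H^2>0$ and their sum is $-r_H/(2\alpha)<0$), so $2\alpha\phi_1+r_H$ and hence $T_H$ must be checked for positivity in any such implementation.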
\subsection{The solutions}
The parameter space of solutions can be scanned by starting with the Schwarzschild
BH ($\alpha=0$) and increasing the value of $\alpha$ for fixed $r_H$.
When
appropriately scaled,
the solutions form a line, starting from the smooth GR limit
and ending at a \textit{critical} solution
where the condition
(\ref{cond})
is violated, and
where the maximal value of the ratio
$\alpha/M^2$ (around $0.32534$) is achieved.
Once the critical configuration is reached, the solutions cease to exist in the parameter space. Physically this means that the EsGB BHs have a minimal size and mass,
for given $\alpha$. A possible interpretation is that the GB term provides a repulsive contribution, becoming overwhelming for sufficiently small BHs, thus preventing the existence of an event horizon. The full set of static solutions will be shown below in Fig.~\ref{dom0} (the blue dotted line with $j=0$)
as a function of the dimensionless parameter $\alpha/M^2$.
As discussed in Appendix~\ref{a1}, a simple perturbative solution can be found
as a power series in the parameter
\begin{eqnarray}
\label{beta}
\beta\equiv \frac{\alpha}{r_H^2}=\frac{4\pi \alpha}{A_H} \ .
\end{eqnarray}
The results in Appendix~\ref{a1} imply the following expressions
\begin{eqnarray}
&&
a_H=\frac{A_H}{16\pi M^2}=
1-\frac{98 }{5 }\beta^2
+\frac{146378 }{1925 }\beta^4
-\frac{42468831605804 }{13266878625} \beta^6
+\dots\ ,
\\
\label{fin-res}
&&
t_H=8 \pi T_H M=1+\frac{146}{15 }\beta^2
+\frac{1410898 }{17325 }\beta^4
+\frac{72356439488}{57432375 }\beta^6
+\dots\ ,
\\
\nonumber
&&
s=\frac{S}{4 \pi M^2}=
1+\frac{146}{15 }\beta^2
-\frac{13451026 }{51975 }\beta^4
+\frac{25584053312 }{57432375}\beta^6
+\dots\ ,
\\
\nonumber
&&
q=\frac{Q_s}{M}=
8\beta
-\frac{1184 }{15 }\beta^3
-\frac{4614784}{17325 }\beta^5
+\dots\ ,
\\
\nonumber
&&
\phi(r_H)= \frac{22}{3} \beta
+\frac{40516 }{675 }\beta^3
-\frac{7057522938136377682 }{119373478599375}\beta^7
+\dots \ .
\end{eqnarray}
Interestingly, all corrections to
the reduced temperature $t_H$ are positive. That is, for the same mass,
the shift-symmetric Horndeski BH is `hotter'.
For the other quantities, no clear generic pattern emerges.
We have found that the perturbative solution provides a
very good approximation to the numerical results. This follows from the smallness of the parameter $\beta$. In fact, condition
(\ref{condi}) implies $\beta_{\rm max} \simeq 0.102062$.
As such, the contribution of the
higher order terms in $\beta$ quickly
becomes irrelevant.
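To get a feel for the size of these corrections, one may evaluate the truncated series numerically at the maximal allowed value of $\beta$. This is only an illustrative sketch of the expansions (\ref{fin-res}), truncated at order $\beta^6$:

```python
# truncated series (fin-res) for the reduced quantities, beta = alpha/rH^2
def t_H(beta):
    """Reduced temperature 8 pi T_H M, to order beta^6."""
    return (1 + (146 / 15) * beta**2 + (1410898 / 17325) * beta**4
            + (72356439488 / 57432375) * beta**6)

def a_H(beta):
    """Reduced horizon area A_H/(16 pi M^2), to order beta^6."""
    return (1 - (98 / 5) * beta**2 + (146378 / 1925) * beta**4
            - (42468831605804 / 13266878625) * beta**6)

beta_max = 0.102062            # largest beta allowed by the existence condition
print(t_H(beta_max))           # about 1.11: corrections stay at the 10% level
print(a_H(beta_max))           # about 0.80
```

Even at $\beta=\beta_{\rm max}$, the successive terms decrease rapidly, consistent with the good agreement between the perturbative and numerical results noted above.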
\section{Spinning black holes: the framework}
\label{sec4}
\subsection{Ansatz and boundary conditions}
To obtain stationary and
axi-symmetric BH spacetimes, possessing
two commuting Killing vector fields, $\xi$ and $\eta$, we use a coordinate system adapted to these symmetries.
Then
$
\xi = \partial_t,
$
$
\eta=\partial_\varphi,
$
and we
consider a metric ansatz which has been employed in the past
for the study of
Kerr BHs with scalar hair~\cite{Herdeiro:2014goa}.
In terms of the spheroidal coordinates $r,~\theta$ and $\varphi$ (with $t$ the time coordinate), the
metric line element reads:
\begin{equation}
\label{ansatz}
ds^2=-e^{2F_0} N dt^2+e^{2F_1}\left(\frac{dr^2}{N }+r^2 d\theta^2\right)+e^{2F_2}r^2 \sin^2\theta (d\varphi-W dt)^2\ , \ \ \ \ N\equiv 1-\frac{r_H}{r}\ ,
\end{equation}
%
where the metric functions
$F_i,W$, as well as the scalar field $\phi$,
depend on $r,\theta$ only and $r_H>0$ is an input parameter again describing the location of the event horizon.
The coordinates $\theta,\varphi$ and $t$
possess the usual range, while $r_H\leqslant r <\infty$.
The vacuum Kerr BH
can be written in this form, the corresponding expressions of
$F_0,F_1,F_2$ and $W$ being displayed in Appendix A of~\cite{Herdeiro:2015gia}.
Finding BH solutions with this ansatz requires defining boundary behaviours. We have made the following choices. For the solutions to approach at spatial infinity ($r\rightarrow \infty$) a Minkowski spacetime we require
\begin{equation}
\lim_{r\rightarrow \infty}{F_i}=\lim_{r\rightarrow \infty}{W}=\lim_{r\rightarrow \infty}{\phi}=0\ .
\end{equation}
Since the scalar field is massless, one can construct an approximate solution of the field equations
compatible with these asymptotics as a power series in $1/r$. The leading order terms of such an expansion
are:
\begin{eqnarray}
\label{r-infty}
\nonumber
&&
F_0(r,\theta)=\frac{c_t}{r}
+\dots\ , \qquad
F_1(r,\theta)=-\frac{c_t}{r}
+\dots\ , \qquad
F_2(r,\theta)=-\frac{c_t}{r}
+\dots\ , \nonumber \\
&&
W(r,\theta)=\frac{c_\varphi}{r^3}
+\dots\ , \qquad
\phi(r,\theta)=\frac{Q_s}{r }+\dots \ ,
\end{eqnarray}
where $c_t$, $c_\varphi$ and $Q_s$ are constant parameters to be fixed by the numerics.
Axial symmetry, together with regularity on the axis, imposes
the following boundary conditions on the symmetry axis, $i.e.$ at $\theta=0,\pi$:
\begin{equation}
\partial_\theta F_i = \partial_\theta W = \partial_\theta \phi = 0 \ .
\end{equation}
As before, an approximate expansion of the solution compatible with these boundary conditions can be constructed;
as an illustration, at $\theta=0$ one finds
\begin{eqnarray}
\label{t0}
{\cal F}_a(r,\theta)= {\cal F}_{a0}(r)+\theta^2 {\cal F}_{a2}(r)+\mathcal{O}(\theta^4)\ ,
\end{eqnarray}
where ${\cal F}_a =\{F_0, F_1, F_2, W; \phi\}$. The essential data, which is fixed by the numerics, is encoded in the
functions ${\cal F}_{a0}=\{F_{i0},W_{0},\phi_{0}\}$.
Moreover, the absence of conical singularities implies also that
$
F_1=F_2
$
on the symmetry axis.
Focusing on BHs with parity reflection symmetry,
we need to consider the solutions only
for $0 \leqslant \theta \leqslant \pi/2$.
Then, the functions
$F_i,~W$ and $\phi$
satisfy the following boundary conditions on the equatorial plane ($\theta=\pi/2$)
\begin{equation}
\partial_\theta F_i\big|_{\theta=\pi/2} = \partial_\theta W\big|_{\theta=\pi/2} =\partial_\theta \phi\big|_{\theta=\pi/2} = 0 \ .
\end{equation}
For the metric ansatz~\eqref{ansatz}, the event horizon is located at a surface with constant radial variable, $r=r_H>0$.
By introducing a new radial coordinate
\begin{equation}
x=\sqrt{r^2-r_H^2} \ ,
\label{x}
\end{equation}
the horizon boundary conditions and numerical treatment of the problem simplify. These boundary conditions are
\begin{equation}
\partial_x F_i \big|_{x=0}= \partial_x \phi \big|_{x=0} = 0\ , \qquad W \big|_{x=0}=\Omega_H\ ,
\label{bch1}
\end{equation}
where $\Omega_H $ is the horizon angular velocity, and
the Killing vector $\chi =\xi+\Omega_H \eta$ is null on and normal to the horizon.
These conditions are consistent with the near horizon solution
\begin{eqnarray}
\label{rh}
{\cal F}_a(r,\theta)= {\cal F}_{a0}(\theta)+x^2 {\cal F}_{a2}(\theta)+\mathcal{O}(x^4)\ ,
\end{eqnarray}
where the essential functions are
${\cal F}_{i0}$
(also $F_0\big |_{r_H}=F_1\big |_{r_H}$).
\subsection{Quantities of interest and a Smarr relation}
Many quantities of interest are
encoded in the metric functions at the horizon or at infinity.
Consider first the horizon quantities: the
Hawking temperature $T_H={\kappa}/({2\pi})$, where $\kappa$ is the surface gravity,
defined as $\kappa^2=-\frac{1}{2}(\nabla_a \chi_b)(\nabla^a \chi^b)|_{r_H}$,
and the event horizon area $A_H$.
These are computed as
\begin{eqnarray}
\label{THAH}
&&
T_H=\frac{1}{4\pi r_H}e^{F_0(r_H,\theta)-F_1(r_H,\theta)} \ ,
\qquad
A_H=2\pi r_H^2 \int_0^\pi d\theta \sin \theta~e^{F_1(r_H,\theta)+F_2(r_H,\theta)} \ .
\end{eqnarray}
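These horizon formulas are straightforward to evaluate numerically; the sketch below (with hypothetical profile functions supplied as callables, not the actual solver output) recovers the round-sphere results in the Schwarzschild limit $F_0=F_1=F_2=0$:

```python
import math

def horizon_area(rH, F1, F2, n=2000):
    """A_H = 2 pi rH^2 \\int_0^pi sin(theta) e^{F1+F2} dtheta, midpoint rule."""
    h = math.pi / n
    return 2.0 * math.pi * rH**2 * h * sum(
        math.sin((k + 0.5) * h) * math.exp(F1((k + 0.5) * h) + F2((k + 0.5) * h))
        for k in range(n))

def hawking_temperature(rH, F0, F1, theta=0.0):
    """T_H = e^{F0 - F1}/(4 pi rH); theta-independent by the zeroth law."""
    return math.exp(F0(theta) - F1(theta)) / (4.0 * math.pi * rH)

# Schwarzschild limit F_i = 0: A_H = 4 pi rH^2 and T_H = 1/(4 pi rH)
zero = lambda t: 0.0
assert abs(horizon_area(1.3, zero, zero) - 4.0 * math.pi * 1.3**2) < 1e-4
assert abs(hawking_temperature(1.3, zero, zero) - 1.0 / (4.0 * math.pi * 1.3)) < 1e-12
```

In practice the $\theta$-independence of $T_H$ provides a useful consistency check on the numerical horizon data.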
The horizon angular velocity $\Omega_H$ is fixed by the horizon value of the metric function $W$,
\begin{eqnarray}
\label{OmegaH}
\Omega_H=-\frac{g_{\varphi t}}{g_{tt}}\bigg|_{r_H}=W \bigg|_{r_H}.
\end{eqnarray}
The total (ADM) mass $M$ and angular momentum $J$ of the BHs
are read off from the asymptotics of $g_{tt}$ and $g_{\varphi t}$,
\begin{eqnarray}
\label{asym}
g_{tt} =-1+\frac{2GM}{r}+\dots \ , \qquad ~~g_{\varphi t}=-\frac{2GJ}{r}\sin^2\theta+\dots \ .
\end{eqnarray}
These global quantities can be split into the horizon and bulk contributions - see, $e.g.$,~\cite{Townsend:1997ku}.
These are, respectively, $M_H$ and $J_H$, computed as Komar integrals on the horizon, and $M_\phi$ and $J_\phi$,
computed as volume integrals of the appropriate {\it effective}
energy-momentum tensor components:
\begin{eqnarray}
\label{TotalMass}
&&
M = M_H+M_\phi\ , \qquad \qquad
M_\phi\equiv -2\int_\Sigma dS_\mu\bigg( T_{\nu}^{\ \mu} \xi^{\nu}-\frac{1}{2}T \xi^\mu \bigg)\ ,
\\
\label{TotalAngularMomentum}
&&
J = J_H +J_\phi\ , \qquad \qquad
J_\phi\equiv \int_\Sigma dS_\mu \left( T_{\nu}^{\mu} \eta^{\nu} -\frac{1}{2}T \eta^{\mu} \right) \ ,
\end{eqnarray}
where $\Sigma$ is a spacelike surface, bounded by the 2-sphere at infinity
$S^2_\infty$ and the spatial section of the horizon $H$. $M_\phi$ and $J_\phi$
encode the contribution of the \textit{effective} ``matter'' distribution to the total
mass and angular momentum.
For Kerr BHs, $M=M_H$ and $J=J_H$;
this is not so for EsGB BHs.
Moreover,
since $T_t^{t(\phi)}-\frac{1}{2}T^{(\phi)}=T_\varphi^{t(\phi)}=0$,
only the GB part of the \textit{effective} energy-momentum tensor (\ref{Teff})
contributes to the
energy and angular momentum ``matter'' densities.
The solutions can be shown to obey the Smarr-type law
\begin{eqnarray}
\label{smarr}
M +2\Omega_H J+ M_s=2 T_H S\ ,
\end{eqnarray}
where $S$ is the entropy as computed from Wald's formula
\cite{Wald:1993nt},
\begin{eqnarray}
\label{S}
S=S_E+S_{sGB}\ , \qquad
S_E=\frac{A_H}{4}\ , \qquad S_{sGB}=\frac{\alpha}{2} \int_{H} d^2 x \sqrt{h}\phi {\rm R} \ ,
\end{eqnarray}
and
${\rm R}$ is the Ricci scalar of the induced horizon metric $h$.
In the Smarr-type law, $M_s$ is a contribution of the scalar field
\begin{eqnarray}
\label{sup}
M_s= \frac{1}{2} \int_\Sigma d^3x \sqrt{-g} \partial_\mu \phi\partial^\mu \phi \ ,
\end{eqnarray}
which can also be expressed as an integral of the $\phi R^2_{\rm GB}$ term.
Also, by integrating (\ref{relD})
over a hypersurface bounded by the event horizon and
the sphere at infinity
one can prove the following
interesting relation
\begin{eqnarray}
\label{Qs}
Q_s=16 \pi \alpha T_H\ .
\end{eqnarray}
This proportionality between
the scalar charge
and the Hawking temperature is a unique feature of the
shift symmetric
EsGB model, see the discussion in \cite{Prabhu:2018aun}.
The EsGB BHs also satisfy
the first law
\begin{eqnarray}
\label{first-law}
dM=T_H dS +\Omega_H dJ \ .
\end{eqnarray}
\subsection{The numerical approach}
In our approach, the field equations reduce to a set of five
coupled non-linear elliptic partial differential equations for the functions
${\cal F}_a =(F_0, F_1, F_2, W; \phi)$,
which are found by plugging the ansatz
(\ref{ansatz}) together with $\phi=\phi(r,\theta)$
into the field eqs.~(\ref{EGB-eq}), (\ref{KG-eq}).
They consist of
the Klein-Gordon equation (\ref{KG-eq})
together with suitable combinations of the Einstein equations (\ref{EGB-eq})
$
\{
E_r^r+E_\theta^\theta=0;~
E_\varphi^\varphi=0;~
E_t^t=0;~
E_\varphi^t=0
\}.
$
The explicit form of the equations solved in practice is too complicated to display here;
each equation containing around 250 independent terms.
Also, the remaining equations
$E_\theta^r =0$
and
$E_r^r-E_\theta^\theta =0$
are not solved directly; they
yield two constraints which are monitored during the numerics. Typically, they are satisfied at the level of the overall numerical accuracy. One can also
verify that
the remaining equations,
$E_r^\varphi =E_r^t =E_\theta^\varphi =E_\theta^t =0$,
vanish identically, so the circularity condition is satisfied.
As such, the employed ansatz is consistent,
a fact which is not \textit{a priori} guaranteed (see~\cite{VanAelst:2019kku}
for a discussion in an Einstein-scalar field model which leads
to a non-circular metric form).
Our numerical treatment can be summarised as follows.
We restrict the domain of integration to the region outside the horizon. Then,
the first step is to introduce the new radial variable
$\bar x=x/(1+x)$
which maps the semi--infinite region $[0,\infty)$ to the finite region $[0,1]$, where $x$ is given by~\eqref{x} and $r$ is the radial variable in
the line element
(\ref{ansatz}).
Next, the equations for ${\cal F}_a$
are discretised on a grid in $\bar x$ and $\theta$.
Most of the results in this work have been found for
an equidistant grid with $300 \times 40$ points.
The grid covers the integration region
$0\leqslant \bar x \leqslant 1$ and $0\leqslant \theta \leqslant \pi/2$.
The equations for ${\cal F}_a$
have been solved subject to the boundary conditions
introduced above.
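The compactification and grid setup described above can be sketched as follows (a minimal stdlib-only illustration; the actual solver \cite{schoen} handles the discretisation internally):

```python
import math

def compactified_grid(nx=300, ntheta=40):
    """Equidistant grid in (xbar, theta) covering [0,1] x [0, pi/2]."""
    xbar = [i / (nx - 1) for i in range(nx)]
    theta = [j * (math.pi / 2.0) / (ntheta - 1) for j in range(ntheta)]
    return xbar, theta

def r_of_xbar(xbar, rH):
    """Invert xbar = x/(1+x), with x = sqrt(r^2 - rH^2)."""
    x = xbar / (1.0 - xbar)      # diverges as xbar -> 1 (spatial infinity)
    return math.sqrt(x * x + rH * rH)

xb, th = compactified_grid()
assert len(xb) == 300 and len(th) == 40
assert r_of_xbar(0.0, 1.0) == 1.0            # xbar = 0 is the horizon r = rH
assert r_of_xbar(0.9, 1.0) > 9.0             # xbar -> 1 approaches infinity
```

The point $\bar x=1$ itself must be treated via the asymptotic boundary conditions, since $r$ diverges there.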
All numerical calculations
are performed by using a professional package \cite{schoen},
which employs a Newton-Raphson method.
This code uses the finite difference method, providing also an error estimate for each unknown function.
For the solutions in this work,
the maximal numerical error
for the functions is estimated to be on the order of $10^{-3}$.
The Smarr relation (\ref{smarr})
provides a further test of the numerical accuracy, leading to error estimates of the same order.
In our numerical scheme, there are three input parameters:
${\bf i)}$ the event horizon radius $r_H$;
${\bf ii)}$
the event horizon angular velocity $\Omega_H$
in the metric ansatz (\ref{ansatz})
and
${\bf iii)}$
the coupling constant $\alpha$ in the action (\ref{action}).
The quantities of interest are computed from the numerical output.
For example, the mass $M$, and the angular momentum $J$
are extracted from the asymptotic expressions (\ref{asym}),
while the Hawking temperature, the entropy and the horizon area
are obtained from the event horizon data.
The results
reported in this work are obtained from around twenty thousand solution points.
For all these BHs we
have monitored the Ricci and the Kretschmann scalars,
and, at the level of the numerical accuracy, we have
not observed any sign of a singular behaviour on and outside the horizon
(see, however, the discussion below on the limiting solutions).
\section{Spinning black holes: numerical results}
\label{sec5}
\subsection{General properties and limiting behaviour}
In an approach based on the Newton-Raphson method
a good initial guess for the profile of the various functions is an essential condition for a successful implementation.
The spinning solutions in this work can be constructed following
two different
routes.
In the first approach, one uses the profile of a Kerr BH with given $r_H,\Omega_H$
as
an initial guess for EsGB solutions\footnote{We mention that, similar to the static limit, the
scalar field equation (\ref{KG-eq}) possesses a nontrivial solution in a fixed Kerr background,
which inherits most of the basic properties of the backreacting generalization.
In particular, the scalar charge-Hawking temperature relation (\ref{Qs}) holds also in this case,
while the scalar field
appears to diverge as the extremal Kerr limit is approached.} with a small value of the ratio
$\alpha/r_H^2$.
The iterations
converge and, repeating the procedure, one obtains in this way solutions with large $\alpha$.
In the second approach, one starts instead with spherically symmetric solutions of EsGB, either obtained numerically or from the perturbative expansion. These can also be studied within the ansatz (\ref{ansatz}),
with $W=0$, $F_i$ being functions of $r$ only and with
$F_1=F_2$. Then, starting from an EsGB spherical BH with a given $r_H$ and $\alpha \neq 0$, rotation is switched on by slowly increasing $\Omega_H$ from zero.
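The continuation strategy can be illustrated on a toy problem: instead of the full PDE system, the sketch below marches the horizon quadratic (\ref{eqphi1}) in $\alpha$, reusing the previous root as the Newton-Raphson initial guess. The branch tracked in this way is the one connected to the GR limit, $\phi_1\to 0$ as $\alpha\to 0$:

```python
def newton_continuation(rH=1.0, alpha_max=0.10, steps=100, iters=50):
    """March in alpha, reusing the previous root of
    phi1^2 + (rH/(2 alpha)) phi1 + 6/rH^2 = 0 as the Newton initial guess."""
    phi1 = 0.0                               # GR limit: phi1 -> 0 as alpha -> 0
    branch = []
    for k in range(1, steps + 1):
        alpha = alpha_max * k / steps
        b, c = rH / (2.0 * alpha), 6.0 / rH**2
        for _ in range(iters):               # Newton-Raphson on the quadratic
            f = phi1 * phi1 + b * phi1 + c
            phi1 -= f / (2.0 * phi1 + b)
        branch.append((alpha, phi1))
    return branch

branch = newton_continuation()
assert abs(branch[-1][1] + 2.0) < 1e-6   # at alpha = 0.10, rH = 1 the root is -2
```

The same idea applies to the full elliptic system, with the Newton iteration acting on the discretised functions ${\cal F}_a$ rather than on a single scalar.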
For all solutions we have found, the metric functions ${\cal F}_a$, together with their first and second derivatives with respect
to both $r$ and $\theta$ have smooth profiles. This leads to finite curvature invariants on the full domain of integration,
in particular at the event horizon.
The shape of the metric functions $F_0,F_1,F_2$ and $W$ is similar to those in the $\alpha = 0$ case.
The maximal deviation from the Einstein gravity profiles (with the same
input parameters $r_H,\Omega_H$)
is near the horizon.
At the same time, the scalar field may possess a complicated angular dependence,
in particular for fast spinning configurations.
The profile functions of a typical solution are exhibited in Figure~\ref{sol1}.
The insets show the same curves for Kerr with the same $r_H$, $\Omega_H$, for comparison.
The Ricci and the Kretschmann scalars, $R$ and $K\equiv R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}$, together with the components $T_t^t$ and $T_\varphi^t$
of the {\it effective} energy-momentum tensor are shown in
Figure \ref{sol2}.
In these plots, the corresponding functions are shown in terms of the compactified radial coordinate
$1-r_H/r$,
for three different values of the
angular coordinate $\theta$.
One observes, for instance, that $g_{tt}$ becomes positive along the equator, near the horizon, thus manifesting the existence of an ergo-region (see next subsection). One also notices that both $R$ and $K$
stay finite everywhere, in particular at the horizon.
From the components of the effective energy-momentum tensor one observes, in particular, that $-T_t^t<0$ for a region in the vicinity of the symmetry axis, manifesting a breakdown of the weak energy condition for the effective energy-momentum tensor.
\begin{figure}[t!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{F0.pdf}
\includegraphics[height=.255\textheight, angle =0]{F1.pdf} \ \
\includegraphics[height=.255\textheight, angle =0]{F2.pdf}
\includegraphics[height=.255\textheight, angle =0]{W.pdf} \ \
\includegraphics[height=.255\textheight, angle =0]{Z.pdf}
\includegraphics[height=.255\textheight, angle =0]{gtt.pdf} \ \
\end{center}
\vspace{-0.5cm}
\caption{Profile functions of a typical solution
with
$r_H=1.38$,
$\Omega_H=0.2$,
$\alpha=0.4$,
$vs.$ $1-r_H/r$, which compactifies the exterior region,
for three different polar angles $\theta$. The insets show the corresponding functions for a Kerr BH with the same $r_H,\Omega_H$. The behaviour is qualitatively similar for both cases, with small quantitative differences.
}
\label{sol1}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{R.pdf}
\includegraphics[height=.255\textheight, angle =0]{Kr.pdf}
\ \
\includegraphics[height=.255\textheight, angle =0]{T44.pdf}
\includegraphics[height=.255\textheight, angle =0]{T34.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ The Ricci $R$ and Kretschmann $K$ scalars
and the components $T_t^t$
and
$T_\varphi^t$
of the {\it effective}
energy-momentum tensor, $vs.$ $1-r_H/r$,
for three different polar angles $\theta$ and the same solution as in Figure
\ref{sol1}.
The inset of the bottom left panel shows the existence of a region of negative energy densities around the axis. The inset of the top left panel shows a zoom of the $\theta=0$ curve.
}
\label{sol2}
\end{figure}
Returning to the construction of the solutions,
we have noticed the existence
of a critical set of input parameters for which
the numerical process fails to converge.
Neither a singular behaviour nor a
deterioration of the numerical accuracy in the vicinity of this set was observed.
An explanation for this behaviour,
similar to that justifying the critical configurations found in the static case, is based on the analysis of
the field equations in the vicinity of the event horizon.
%
After some algebra, one finds that the second order term $\phi_2(\theta)$ in the expansion of the
scalar field $\phi(x,\theta)=\phi_0(\theta)+\phi_2(\theta) x^2+\dots$
is a solution of a quadratic equation,
\begin{eqnarray}
\label{eq1}
a \phi_2^2+b \phi_2+c=0\ ,
\end{eqnarray}
where the coefficients $a,b,c$ depend on the values of $F_i,W$ and their derivatives
at the horizon.
Then, a real solution to the above equation exists only if $\Delta=b^2-4 ac>0$.
In practice, we have monitored
this discriminant
and
observed that the numerical process fails to converge\footnote{The values of $a,b,c$ become very large
as the value of the reduced temperature decreases, which complicates
their accurate extraction and the evaluation of $\Delta$ in the vicinity of the extremal set.}
when $\Delta$ takes small values close to zero at $ \theta=0,\pi$.
As in the spherically symmetric case, we have found no evidence for the emergence
of a secondary branch of solutions in the vicinity
of the critical solutions.
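In practice, this monitoring amounts to evaluating $\Delta$ on the polar grid and tracking its minimum; a schematic stdlib-only sketch (with toy horizon data standing in for the actual coefficients $a,b,c$) could read:

```python
import math

def min_discriminant(a, b, c, n=40):
    """Minimum over the polar grid of Delta(theta) = b^2 - 4 a c for the
    phi_2 quadratic; a, b, c are callables built from the horizon data."""
    thetas = [j * (math.pi / 2.0) / (n - 1) for j in range(n)]
    return min(b(t)**2 - 4.0 * a(t) * c(t) for t in thetas)

# toy horizon data: Delta is smallest at the pole theta = 0, as in the text
dmin = min_discriminant(lambda t: 1.0, lambda t: 2.0 + math.sin(t), lambda t: 0.9)
assert dmin > 0.0          # a real phi_2 exists; convergence is expected
```

When $\Delta$ approaches zero at the poles, the quadratic develops a double root and the near-horizon expansion ceases to admit a real solution, signalling the critical set.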
A different limiting behaviour is found
when varying the value of the horizon
velocity $\Omega_H$ for fixed $(r_H,\alpha)$.
As for the vacuum Kerr family,
following this method
one finds two branches of solutions,
which join for a maximal value of $\Omega_H$.
The first branch emerges from the corresponding static configuration.
The second branch, on the other hand, ends, as for $\alpha=0$,
at {\it extremal configurations}. These have vanishing Hawking temperature
and nonvanishing global charges, horizon area and entropy. We must emphasise, however, that only near-extremal solutions, as opposed to exactly extremal BHs, can be constructed within the
framework proposed in this work.
As such, the results for the extremal solutions reported here
result from extrapolating the data found in the near-extremal case.
Moreover, unlike the extremal vacuum Kerr BH which yields a perfectly regular geometry~\cite{Bardeen:1999px},
the extremal EsGB solutions appear not to be regular, with
the Ricci scalar tending to diverge at the poles of the horizon.
A partial understanding of this behaviour is
given in Appendix~\ref{apb}, based on a perturbative construction of
the near-horizon configurations.
\subsection{The domain of existence}
Let us now address the domain of existence of the EsGB solutions.
There are two fundamental scales, the coupling constant $\alpha$,
and the BH mass of the solutions $M$.
In what follows we display various quantities of interest as a function of
the dimensionless coupling constant $\alpha/M^2$. This parameter measures the impact of non-GR features, due to the GB contribution. The analysis is also performed in terms of the dimensionless angular momentum $j=J/M^2$. This parameter measures the impact of non-staticity.
The link between these two quantities is provided by Figure \ref{dom0}, where we
plot the domain of existence
(shaded blue region)
in a $j$ $vs.$ $\alpha/M^2$ plot. Therein, all data points which were found numerically are also explicitly shown. The blue shaded region is the extrapolation of these points into the continuum. The figure shows that the domain of existence is delimited by:
\begin{itemize}
\item the set of static BHs ($j=0$, blue dotted line);
\item the set of extremal BHs (black dotted line);
\item the set of critical solutions (green line);
\item the set of GR solutions -- the Kerr/Schwarzschild BHs ($\alpha/M^2=0$, red line).
\end{itemize}
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=.28\textheight, angle =0]{domain.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ Domain of existence of EsGB spinning BHs in a $j$
$vs.$~$\alpha/M^2$ diagram.
Here and in Figure
\ref{dom1},
all quantities are normalised $w.r.t.$ the mass of the solutions.
The domain is obtained by extrapolating into the continuum
over twenty thousand numerical points.
Each such point corresponds to an individual BH solution, and is represented in this plot as a small orange circle.
}
\label{dom0}
\end{figure}
Two comments on Figure \ref{dom0} are in order. First, the Kerr bound $j\leqslant 1$
is violated for spinning EsGB BHs
in a small region of the domain of existence close to the extremal set.
However, this violation
is rather small, with
$j^{(max)}\sim 1.013$ for all (accurate enough) solutions studied so far.
Second, along lines of fixed $j$, the critical solution is attained at a smaller $\alpha/M^2$ as $j$ is increased. A possible interpretation is that both the GB contribution and the spin are repulsive effects. Thus, in the presence of rotation, BHs cease to exist for a smaller GB contribution.
In Figure \ref{dom1} (left panels)
the reduced horizon area $a_H\sim A_{\rm H}/M^2$, entropy $s\sim S/M^2$
and temperature $t_H\sim T_{\rm H} M$
of all solutions are
shown
as functions of the dimensionless coupling constant $\alpha/M^2$.
A complementary picture is found when exhibiting the same data as a function of
the reduced angular momentum $j$ - Figure \ref{dom1} (right panels).
\begin{figure}[t!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{aHalpha.pdf}
\includegraphics[height=.255\textheight, angle =0]{aHj.pdf} \ \
\includegraphics[height=.255\textheight, angle =0]{salpha.pdf}
\includegraphics[height=.255\textheight, angle =0]{sj.pdf} \ \
\includegraphics[height=.255\textheight, angle =0]{tHalpha.pdf}
\includegraphics[height=.255\textheight, angle =0]{tHj.pdf} \ \
\end{center}
\vspace{-0.5cm}
\caption{Domain of existence of spinning EsGB BHs in a reduced horizon area (top panels), entropy (middle panels) and
Hawking temperature (bottom panels) $vs.$ the dimensionless coupling $\alpha/M^2$ (left panels) or angular momentum $j$ (right panels).
}
\label{dom1}
\end{figure}
Let us comment on some features resulting from Figure \ref{dom1}. For fixed $j$, the BH area decreases as $\alpha/M^2$ increases; but the corresponding reduced BH entropy \textit{increases}. This provides a clear example of how BH entropy deviates from the Hawking-Bekenstein formula in this modified gravity: when the GB contribution becomes larger, the BH becomes smaller but it carries more entropy (for fixed $j$). On the other hand, fixing the EsGB dimensionless coupling constant $\alpha/M^2$, both the reduced area and the reduced entropy decrease as $j$ increases. Thus, for any fixed EsGB model, spin reduces the size and the entropy of BHs. The BH temperature, on the other hand, increases with $\alpha/M^2$ for fixed $j$ and decreases with $j$ for fixed $\alpha/M^2$.
\subsection{Other properties}
\subsubsection{Ergoregion and horizon properties}
All spinning EsGB BHs have an ergoregion, defined as the domain in which the norm of $\xi=\partial_t$ becomes positive outside the horizon.
This region is bounded by the event horizon and by the surface where
\begin{equation}
g_{tt}=-e^{2F_0} N+W^2e^{2F_2}r^2 \sin^2\theta =0 \ .
\end{equation}
For the Kerr BH, this surface has a spherical topology and touches the horizon at the poles.
As discussed in~\cite{Herdeiro:2014jaa},
the ergoregion can be more complicated for other models, notably for BHs with synchronised scalar hair, with the possible
existence of an additional $S^1\times S^1$ ergo-surface (ergo-torus) - see also~\cite{Kunz:2019bhm}.
We have found that this is not the
case for EsGB BHs, where all solutions are Kerr-like in the sense that they possess a single topologically $S^2$ ergosurface.
Let us now consider the horizon geometry. Similarly to the GR Kerr solution,
EsGB BHs have an event horizon of spherical topology. The metric of a spatial cross-section of the horizon is
\begin{eqnarray}
\label{horizon-metric}
d\Sigma^2=h_{ij} dx^i dx^j=r_{\rm H}^2\left [ e^{2F_1(r_H,\theta)} d\theta^2+e^{2F_2(r_H,\theta)}\sin^2\theta d\varphi^2\right ]\ .
\end{eqnarray}
Geometrically, however, the
horizon is a squashed, rather than round, sphere.
This is shown by computing the horizon circumference along the
equator, $L_e$, and along the poles, $L_p$:
\begin{equation}
L_e=2 \pi r_H e^{F_2(r_H,\pi/2)} \ , \qquad L_p=2 r_H \int_0^\pi d\theta e^{F_1(r_H,\theta)} \ .
\end{equation}
%
The ratio of these two circumferences defines the sphericity \cite{Delgado:2018khf}
\begin{equation}
\mathfrak{s} \equiv \frac{L_e}{L_p}~.
\end{equation}
In Figure \ref{Fig:Horndeski_vH} (left panel)
the sphericity is shown as a function of the dimensionless coupling constant $\alpha/M^2$.
An interesting feature there is that $\mathfrak{s} $
can exceed the maximal GR value for a set of EsGB solutions
close to extremality. Roughly speaking, EsGB BHs can become more oblate than Kerr.
Also, as expected, the squashing of the horizon produced by the rotation is
such that $\mathfrak{s}$ is always larger than unity. That is, the solutions are always deformed towards oblateness, rather than prolateness.
Another physical quantity of interest
is the horizon linear velocity $v_H$~\cite{Herdeiro:2015moa,Delgado:2018khf,Delgado:2019prc}.
$v_H$
measures how fast the null geodesic generators of the horizon rotate relative to a static observer at spatial infinity.
It is defined as the product between the perimetral radius of the circumference located at the equator,
$R_e \equiv L_e/2\pi$, and the horizon angular velocity $\Omega_H$,
\begin{equation}
v_H=\frac{L_e }{2\pi}\Omega_H\ .
\end{equation}
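Both horizon measures reduce to simple quadratures of the horizon metric functions; a minimal sketch (with the profile functions supplied as callables, and the round static horizon as a sanity check) is:

```python
import math

def sphericity_and_vH(rH, OmegaH, F1, F2, n=2000):
    """s = L_e/L_p and v_H = (L_e/2pi) Omega_H from the horizon metric data."""
    Le = 2.0 * math.pi * rH * math.exp(F2(math.pi / 2.0))
    h = math.pi / n                      # midpoint rule for L_p
    Lp = 2.0 * rH * h * sum(math.exp(F1((k + 0.5) * h)) for k in range(n))
    return Le / Lp, (Le / (2.0 * math.pi)) * OmegaH

# sanity check: a round, static horizon (F1 = F2 = 0) has s = 1 and v_H = 0
s, vH = sphericity_and_vH(1.0, 0.0, lambda t: 0.0, lambda t: 0.0)
assert abs(s - 1.0) < 1e-9 and vH == 0.0
```

For spinning solutions, $F_2>F_1$ near the equator produces $\mathfrak{s}>1$, in line with the oblateness discussed above.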
As seen in Figure \ref{Fig:Horndeski_vH} (right panel), all studied
EsGB solutions have $v_H<1$, just as for Kerr, despite the (small) violations of the Kerr bound. Thus, the null geodesic generators of the horizon rotate relative to the asymptotic observer at subluminal speeds.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{sphericity.pdf}
\includegraphics[height=.255\textheight, angle =0]{vH.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ The sphericity $\mathfrak{s}$ (left panel), and the horizon linear velocity $v_H$ (right panel)
$vs.$ $\alpha/M^2$ for the full set of EsGB BHs.
}
\label{Fig:Horndeski_vH}
\end{figure}
Further insight into the horizon geometry is obtained by considering the
isometric embedding of
the spatial sections of the horizon in an Euclidean 3-space $\mathbb{E}^3$.
A well-known feature of the Kerr horizon geometry is
that for a dimensionless spin $j > {\sqrt{3}}/{2} \equiv j^{\rm (S)}$
(dubbed \textit{Smarr point})
the Gaussian curvature of the horizon becomes negative in a vicinity of the poles \cite{Smarr:1973zz}.
In this regime, an isometric embedding of the Kerr horizon geometry in $\mathbb{E}^3$ is no longer possible.
As expected, this feature also occurs for the solutions in this work,
even though the position of the Smarr point now depends on the value of
the dimensionless coupling constant
$\alpha/M^2$.
Following~\cite{Delgado:2018khf,Delgado:2019prc}, the collection of Smarr points as $\alpha/M^2$ is varied is dubbed {\it the Smarr line}.
%
Figure \ref{Fig:Horndeski_vH}
displays also the position of the Smarr line as a function of $\alpha/M^2$.
One observes that, as for the Kerr limit,
an isometric embedding of the horizon geometry in $\mathbb{E}^3$
is possible only up to a maximal value of $\mathfrak{s} $ and $v_H$.
Also, notice that both the sphericity
$\mathfrak{s}$
and
$v_H$ are not constant
along the Smarr line, and slightly larger values of both these quantities are allowed for embeddable BHs when $\alpha/M^2$ is increased.
\subsubsection{Orbital Frequency at the ISCO and Light Rings}\label{ISCOLR}
A phenomenologically relevant aspect of any BH concerns the angular frequency at both the innermost stable circular orbit (ISCO) and the light ring (LR).
The former is associated with a cut-off frequency of the synchrotron radiation
emitted by accelerated charges in accretion disks.
The latter is related to the real part of the frequency of BH quasi-normal modes~\cite{Cardoso:2008bp}. The LRs are also key in determining the BH shadow~\cite{Cunha:2018acu}.
Following a standard method, one finds that the angular frequency of a test particle with energy $E$ and angular momentum $L$ on the equatorial plane, $\theta=\pi/2$, is,
\begin{equation}
\label{Eq:AngularFrequency}
\omega= \frac{\dot{\varphi}}{\dot{t}} = W - \frac{e^{2(F_0-F_2)} L}{r^2 (L\ W - E)}\left( 1 - \frac{r_H}{r}\right) \ .
\end{equation}
The radial coordinate, $r$, of such a particle obeys the equation,
\begin{equation}
\dot{r}^2 = V(r) \equiv e^{-2F_1} \left( 1 - \frac{r_H}{r} \right) \left[ \epsilon - e^{-2F_2} \frac{L^2}{r^2} + \frac{e^{-2F_0} (E - L\ W)^2}{1 - \frac{r_H}{r}} \right] \ ,
\end{equation}
where the `dot' denotes the derivative with respect to an affine parameter, and $\epsilon$ is a constant, with $\epsilon = 0$ for massless test particles and $\epsilon = -1$ for massive ones. The former are relevant for the LRs and the latter for the ISCO.
In the case of massive test particles, circular orbits require that both the potential $V(r)$ and its derivative vanish, $V(r) = V'(r) = 0$. This yields two algebraic equations for $E$ and $L$, which can be solved analytically. These have two distinct pairs of solutions, $(E_+, L_+)$ and $(E_-, L_-)$, corresponding, respectively, to co-rotating and counter-rotating orbits.
It is then possible to assess the stability of the circular orbits by computing the second derivative of the potential. The ISCO will correspond to the orbit in which the test particle has energy and angular momentum that solves $V(r) = V'(r) = 0$ and the radial coordinate that solves $V''(r)=0$. Having obtained the energy, angular momentum and radial coordinate of the ISCO, the corresponding angular frequency is computed using~\eqref{Eq:AngularFrequency}.
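In the EsGB case the metric functions $F_i$ and $W$ are only known numerically, but the recipe above can be illustrated in the Schwarzschild limit ($\alpha=0$, $j=0$), where the circular-orbit energy and angular momentum are known in closed form and the known answers $r_{\rm ISCO}=6M$, $\omega_{\rm ISCO}=6^{-3/2}/M$ provide a check. A minimal sketch, with our own function names and standard Schwarzschild coordinates rather than the coordinates used above:

```python
import math

M = 1.0  # geometrised units

def circ_EL(r):
    """Energy and angular momentum of a circular timelike geodesic at radius r
    (standard Schwarzschild results, obtained by solving V(r) = V'(r) = 0)."""
    E = (r - 2 * M) / math.sqrt(r * (r - 3 * M))
    L = math.sqrt(M) * r / math.sqrt(r - 3 * M)
    return E, L

def V(r, E, L):
    """Radial potential: rdot^2 = V(r) for a massive particle."""
    return E**2 - (1 - 2 * M / r) * (1 + L**2 / r**2)

def Vpp(r, h=1e-4):
    """Second derivative V''(r), holding (E, L) fixed at circular-orbit values."""
    E, L = circ_EL(r)
    return (V(r + h, E, L) - 2 * V(r, E, L) + V(r - h, E, L)) / h**2

# The ISCO is where V''(r) changes sign (stable orbits have V'' < 0): bisect.
lo, hi = 4.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Vpp(mid) > 0 else (lo, mid)
r_isco = 0.5 * (lo + hi)
omega_isco = math.sqrt(M) / r_isco**1.5   # dphi/dt on a circular orbit
print(r_isco, omega_isco)                 # ~6.0 and ~0.068
```

The EsGB results discussed next follow from the same steps, with the closed forms replaced by the numerically computed metric functions.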
In Fig.~\ref{Fig:Horndeski_wISCO}, we present the ratio of the angular frequency at the ISCO of EsGB BHs to that of Kerr BHs,
for both co-rotating, $\Delta \omega_{\rm ISCO}^{\text{co}}$, and counter-rotating orbits, $\Delta \omega_{\rm ISCO}^{\text{counter}}$,
at fixed $j$, as a function of the reduced coupling constant, $\alpha/M^2$:
\begin{equation}
\label{ratios}
\Delta \omega_{\rm ISCO}^{\text{co}}(j,\alpha/M^2)=\frac{\omega_{\rm ISCO}^{\text{co}}(j,\alpha/M^2)}{\omega_{\rm ISCO}^{\text{co}}(j,\alpha/M^2=0)} \ , \qquad \Delta \omega_{\rm ISCO}^{\text{counter}}(j,\alpha/M^2)=\frac{\omega_{\rm ISCO}^{\text{counter}}(j,\alpha/M^2)}{\omega_{\rm ISCO}^{\text{counter}}(j,\alpha/M^2=0)} \ .
\end{equation}
Several illustrative values of $j$ are exhibited.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{wISCO_alphaM2_Prograde.pdf}
\includegraphics[height=.255\textheight, angle =0]{wISCO_alphaM2_Retrograde.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ Ratio of the angular frequency at the ISCO of EsGB BHs to that of Kerr BHs, for co-rotating orbits (left panel)
and counter-rotating orbits (right panel).}
\label{Fig:Horndeski_wISCO}
\end{figure}
For both the co-rotating and counter-rotating cases, by definition, the ratio converges to unity in the Kerr limit. For all fixed $j$, and for both co- and counter-rotating orbits, the ratio departs monotonically from unity as $\alpha/M^2$ increases. How it deviates from unity depends, however, on $j$ and on the direction of the orbital motion.
For $j=0$ the distinction between co- and counter-rotating orbits is meaningless. The ratio grows away from unity as $\alpha/M^2$ increases -- solid blue line in Fig. \ref{Fig:Horndeski_wISCO}. Naively, this is related to the fact that the static BH size decreases with increasing $\alpha/M^2$, so the ISCO radius also decreases and hence its frequency increases. Introducing $j$ lifts the degeneracy between co- and counter-rotating orbits. For co-rotating (counter-rotating) orbits and small $j\neq 0$, the ratio is always larger (smaller) than that for the static BHs ($j=0$) -- dotted lines in Fig. \ref{Fig:Horndeski_wISCO} (left and right panels). One may interpret these behaviours as a consequence of frame dragging, which enhances (damps) motion along co-rotating (counter-rotating) orbits. In the counter-rotating case this trend remains for large $j$ -- dashed lines in Fig. \ref{Fig:Horndeski_wISCO} (right panel). In the co-rotating case, however, an unexpected behaviour emerges. For sufficiently large $j$ the ratio stops being enhanced with respect to the static case, and eventually becomes \textit{suppressed} with respect to it -- dashed lines in Fig. \ref{Fig:Horndeski_wISCO} (left panel).
A possible explanation for this unexpected behaviour is found by studying the angular velocity of the horizon, $\Omega_H$. This quantity is a better measure of dragging effects than the spacetime angular momentum. Indeed, the fact that a BH has a large $j$ does not imply that it has a large horizon angular velocity.\footnote{The relation between the two quantities should be determined by a moment of inertia. See~\cite{Herdeiro:2009qy} for an attempt to introduce this notion in BH physics.} Let us then consider the reduced horizon angular velocity, $\omega_H \equiv \Omega_H M$, and its difference between EsGB and Kerr BHs with the same $j$, defined as:
\begin{equation}
\delta \omega_H(j,\alpha/M^2) \equiv \omega_H(j,\alpha/M^2) - \omega_H(j,\alpha/M^2 = 0) \ .
\end{equation}
This quantity is plotted against the reduced angular momentum $j$ in Fig. \ref{Fig:Horndeski_deltaOmegaH}. One observes that, for small enough fixed $j$, the EsGB BHs have larger $\omega_H$ than the Kerr ones. This supports the thesis that dragging effects are stronger and should enhance the angular frequency at the ISCO. However, beyond a certain spin $j$, the EsGB BHs have smaller $\omega_H$ than Kerr BHs. That is, albeit having a larger spacetime angular momentum, large-$j$ EsGB BHs spin more slowly, and thus source weaker frame dragging, than Kerr BHs. Qualitatively, at least, this provides an explanation for the behaviour observed in Fig. \ref{Fig:Horndeski_wISCO} (left panel).
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{deltaOmegaH_vs_j.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ Reduced horizon angular velocity difference between EsGB BHs and Kerr BHs in a $\delta \omega_H$ $vs.$ $j$ plot. For small $j$ the difference is positive, meaning that EsGB BHs spin faster. But for large $j$ the difference is negative, meaning that EsGB BHs spin slower.}
\label{Fig:Horndeski_deltaOmegaH}
\end{figure}
Quantitatively, for co-rotating orbits, the maximal deviation from Kerr is $\Delta \omega_{\rm ISCO}^{\text{co}}\sim 8\%$ and occurs for $j \sim 0.5$ and the maximal value of $\alpha/M^2$. For counter-rotating orbits, on the other hand, the ratio is maximised, for any $\alpha/M^2$, by the static case.
In the case of massless particles, a similar analysis can be done. Now, solving $V(r) = 0$, we obtain an algebraic equation for the impact parameter, $b_p = L/E$, which yields two distinct solutions $b_p^+$ and $b_p^-$ corresponding to co-rotating and counter-rotating orbits, respectively. Using this result, and solving $V'(r) = 0$, yields the radial coordinate of the LR. Having computed the impact parameter and the radial coordinate of the LRs, one can again compute their angular frequency, using~\eqref{Eq:AngularFrequency}.
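As in the massive case, the procedure can be validated in the Schwarzschild limit, where $V(r)=0$ gives $b_p(r)=r/\sqrt{1-2M/r}$, the LR extremises the impact parameter at $r_{\rm LR}=3M$ with $b_p=3\sqrt{3}M$, and $\omega_{\rm LR}=1/b_p=1/(3\sqrt{3}M)$. A short sketch with our own naming:

```python
import math

M = 1.0

def impact_parameter(r):
    """b(r) = L/E of a circular null orbit at radius r, from V(r) = 0 (Schwarzschild)."""
    return r / math.sqrt(1 - 2 * M / r)

# The LR extremises b(r) (equivalent to V'(r) = 0): golden-section search.
g = (math.sqrt(5) - 1) / 2
lo, hi = 2.1, 6.0
while hi - lo > 1e-10:
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    if impact_parameter(c) < impact_parameter(d):
        hi = d
    else:
        lo = c
r_lr = 0.5 * (lo + hi)
omega_lr = 1.0 / impact_parameter(r_lr)   # omega = E/L = 1/b at the light ring
print(r_lr, omega_lr)                     # ~3.0 and ~0.192
```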
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=.255\textheight, angle =0]{wLR_alphaM2_Prograde.pdf}
\includegraphics[height=.255\textheight, angle =0]{wLR_alphaM2_Retrograde.pdf}
\end{center}
\vspace{-0.5cm}
\caption{ Ratio of the angular frequency at the LR of EsGB BHs to that of Kerr BHs, for co-rotating orbits (left panel)
and counter-rotating orbits (right panel).}
\label{Fig:Horndeski_wLR}
\end{figure}
Fig. \ref{Fig:Horndeski_wLR} shows the ratio of the angular frequency at the LR of EsGB BHs to that of Kerr BHs, for both co-rotating,
$\Delta \omega_{\rm LR}^{\text{co}}$, and counter-rotating orbits, $\Delta \omega_{\rm LR}^{\text{counter}}$, defined in an analogous way to~\eqref{ratios},
for different values of the spin, $j$, as a function of the reduced coupling constant, $\alpha/M^2$. The overall behaviour is very similar to the one discussed above for the ISCO frequency. The main difference in the LR case is that the maximal deviation, for both types of orbits, is smaller than for the corresponding orbits at the ISCO.
\section{Conclusions and remarks}
\label{sec6}
In this work we have constructed the spinning generalisations of the static
BHs in the shift symmetric Horndeski model. This is a family of asymptotically flat, stationary, axially symmetric BHs that are non-singular on and outside an event horizon. The domain of existence of these solutions is naturally described by two dimensionless parameters: the dimensionless coupling constant of the model, $\alpha/M^2$, and the dimensionless spin of the BHs, $j=J/M^2$.
The domain of existence is bounded by four special limiting behaviours: the GR limit (when $\alpha=0$), the static limit (when $j=0$), the extremal limit, when the surface gravity of the solutions vanishes, and a critical set of solutions for which a horizon ceases to exist. This last boundary has an important implication: for non-zero $\alpha$ there is a minimum mass (and hence a minimum size) for BHs. Thus there is a mass gap with respect to the Minkowski vacuum, which is also a solution of the theory.
This non-GR property also occurs for the Einstein-dilaton-GB model discussed $e.g.$
in~\cite{Kleihaus:2015aje,Kleihaus:2011tg}. Other properties of the BHs we have constructed and analysed in this paper also parallel the solutions found in the Einstein-dilaton-GB model. This similarity of properties was anticipated by the observation made in the introduction: the linearisation of the action of the Einstein-dilaton-GB model
\begin{eqnarray}
\label{actionEGBd}
S=
\int d^4x \sqrt{-g} \left[R - \frac{1}{2}
\partial_\mu \phi\partial^\mu \phi
+ \alpha e^{\phi} R^2_{\rm GB} \right] \ ,
\end{eqnarray}
reduces to (\ref{action}) in the limit of small $\phi$, $i.e.$ $e^{\phi}\simeq 1+\phi$, by virtue of~\eqref{totder}.
Since
the scalar field takes rather small values for typical Einstein-dilaton-GB BHs,
the shift symmetric EsGB BHs
with the same input parameters
provide a reasonable approximation; see, for instance, the bottom left panel of Figure~\ref{sol1} for the scalar field magnitude of a typical solution.
Thus, the domains of existence
of the Einstein-dilaton-GB and EsGB BHs
are indeed quite similar, as confirmed by the results in this work.
Yet, there are both qualitative and quantitative differences between the two models. An intriguing property of the model we have focused on, that does not occur for the Einstein-dilaton-GB model,
is the scalar charge-temperature relation
(\ref{Qs}). In fact, the Smarr law is also different in the two models.
Quantitatively, the correspondence between the two models
holds only for small enough values of $\alpha/M^2$ and $j$.
For example, the critical
value of the ratio $\alpha/M^2$ is
$0.3253 $
for the spherically symmetric solutions in this work
(being fixed by an algebraic condition between the horizon size and the coupling constant $\alpha$, Eq. (\ref{cond}))
and $0.1728 $
for
Einstein-dilaton-GB BHs (in which case the generalization of (\ref{cond}) includes, as well,
a dependence on the value of the scalar field at the horizon, see $e.g.$ Ref.\cite{Kanti:1995vq}).
Moreover, a specific feature of the Einstein-dilaton-GB model is the occurrence, near the critical configuration,
of a small secondary branch of BH solutions
\cite{Torii:1996yi,Alexeev:1996vs,Guo:2008hf}.
Along this
branch, the mass increases with decreasing horizon radius.
This secondary branch appears to be absent in the EsGB case.
Finally, let us remark that the way the SZ solutions circumvent the no-scalar-hair theorem also applies to the model herein~\cite{Hui:2012qt}. This occurs by violating the assumption that the current associated to the shift-symmetry should be finite at the horizon. For the static SZ BHs this current diverges on the horizon. This, however, does not induce any physical pathologies. We have checked that this current (squared) diverges at the horizon also in the spinning BHs reported in this work.
\section*{Acknowledgements}
J. D. is supported by the FCT grant SFRH/BD/130784/2017. This work is supported by the Center for Research and Development
in Mathematics and Applications (CIDMA) through the Portuguese
Foundation for Science and Technology
(FCT - Funda\c{c}\~ao para a Ci\^encia e a Tecnologia),
references UIDB/04106/2020 and UIDP/04106/2020 and by national funds (OE), through FCT, I.P., in the scope of the framework contract foreseen in the numbers 4, 5 and 6 of the article 23, of the Decree-Law 57/2016, of August 29,
changed by Law 57/2017, of July 19. We acknowledge support from the projects PTDC/FIS-OUT/28407/2017 and CERN/FIS-PAR/0027/2019. This work has further been supported by the European Union's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 Grant No.~FunFiCO-777740. The authors would like to acknowledge networking support by the
COST Action CA16104.
\section{Introduction}
\label{sec:intro}
The Fermi--LAT telescope has measured \cite{Ackermann:2014usa}
the extragalactic gamma--ray flux,
generated by the ensemble of all extragalactic sources,
in the energy range 0.1--820~GeV.
The total flux (commonly called Extragalactic Gamma Background or EGB)
can be decomposed into a component due to the ensemble of resolved
extragalactic point sources, and a second component (the Isotropic
Gamma Ray Background or IGRB) that accounts for all other
emissions, including unresolved faint point sources.
This decomposition obviously depends on the instrument
sensitivity and integration time.
Both the EGB and the IGRB are in good approximation isotropic
(reflecting the homogeneity and isotropy of the universe), and
their energy spectra have been fitted
in \cite{Ackermann:2014usa} with the functional form:
\begin{equation}
\phi_\gamma (E) = K_\gamma ~E^{-\alpha} ~e^{-E/E_{\rm cut}}
\label{eq:fit_form}
\end{equation}
that is a power--law with an exponential cutoff.
The Fermi--LAT Collaboration has presented
three estimates for the EGB and IGRB
obtained using different models for the Galactic foreground;
these estimates are, however, close to each other, with differences
that are negligible for the purposes of our work.
Averaging the results of the three fits one obtains:
$\alpha = 2.30\pm 0.02$, $E_{\rm cut} = 330\pm 70$~GeV for the EGB,
and
$\alpha = 2.29\pm 0.02$, $E_{\rm cut} = 239 \pm 50$~GeV for the IGRB.
One can note that the spectral indices
for the EGB and the IGRB are consistent with being equal,
suggesting a common origin.
It is also likely that, correcting for absorption effects,
the spectra are reasonably well
described by a simple power--law form in the entire energy range of
the Fermi--LAT observations. This is because
the observed spectral cutoffs are consistent with being
the distortions generated
by the absorption of high energy gamma--rays during propagation,
assuming an emission that is an unbroken power--law.
The main source of absorption is due to pair production interactions
($\gamma \gamma \to e^-e^+$) with the target photons that form
the intergalactic radiation fields.
The cutoff energy for the IGRB is smaller than the cutoff for the
EGB, but this can be naturally explained assuming that a large fraction
of the unresolved flux is due to faint, distant sources
that are more absorbed.
The Fermi--LAT observations also show that approximately
one half of the total (EGB) extragalactic flux is due to
an ensemble of point sources, most of them
Active Galactic Nuclei (AGN) of the blazar class.
The observations of the point sources allow
to model their luminosity function and cosmological evolution
and then to estimate the flux
of those that are too faint to be resolved.
These studies indicate that blazars also account for most of the
IGRB \cite{TheFermi-LAT:2015ykq}.
Other sources (such as normal
or starburst Galaxies or Dark Matter self--annihilation)
can contribute only a fraction of order 10\% or less.
Some questions emerge immediately from these results.
What are the astrophysical mechanisms
that generate the extragalactic gamma--ray emission?
What is the origin of the power--law form of the spectrum?
Why does the spectral index have the value $\alpha \simeq 2.30$?
Simple considerations suggest that if a spectrum
has a power--law form with slope $\alpha$,
and is formed by the sum of distinct components,
then the energy distributions of the components also
have a power--law form with the same spectral index.
This is because it appears difficult to combine
spectra of different shapes to form a sum
that has a featureless power--law form.
Also in the case where the individual components are all
of power--law form, but have different slopes,
the spectrum of the sum is not a simple power--law,
but hardens gradually, with an energy dependent slope.
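This gradual hardening is easy to verify numerically. In the sketch below (with toy spectral indices of our own choosing), the local slope $-\,d\ln\phi/d\ln E$ of the sum of two pure power laws drifts from the softer index at low energy to the harder one at high energy, so the sum is not itself a power law:

```python
import math

def local_slope(flux, E, h=1e-4):
    """Local spectral index alpha(E) = -dln(flux)/dlnE via central differences."""
    return -(math.log(flux(E * math.exp(h))) - math.log(flux(E * math.exp(-h)))) / (2 * h)

# Two pure power laws with indices 2.0 and 2.6, equal flux at E = 1 (arbitrary units).
flux = lambda E: E**-2.0 + E**-2.6

for E in (0.01, 1.0, 100.0):
    print(E, local_slope(flux, E))
# The slope drifts from ~2.6 at low energy towards ~2.0 at high energy:
# the sum is not a power law but hardens gradually.
```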
On the other hand, the spectra of the extragalactic point sources
resolved by Fermi--LAT have a broad range of spectral shapes,
that in many cases are not simple power--laws, but are ``curved''
(in a log--log representation) with an energy dependent spectral index.
Nonetheless, the sum of these contributions generates a total flux
of simple power--law form.
This surprising result indicates that the slope
$\alpha \approx 2.30$ of the average extragalactic gamma--ray emission
is related to the properties of the {\em ensemble of the sources}
and is not a ``universal slope'' that describes the emission
of the individual sources.
In this work we address this problem and investigate the origin of the
spectral index of the extragalactic gamma--ray flux.
The paper is organized as follows.
In the next section we study the spectra of the extragalactic
sources observed by Fermi--LAT, and discuss the properties
of the log--parabola form that is used to fit 18\% of the
sources (but accounts for over 60\% of the total resolved extragalactic
flux).
In Sec.~\ref{sec:combined-spectra} we discuss the
shape of a spectrum formed by components that have different
energy distributions, and show how an ensemble of log--parabola
components can combine to generate an average that has power--law form.
In Sec.~\ref{sec:statistical} we show how the conditions
required to form a power--law spectrum from log--parabola
components can be satisfied if the luminosity of the sources
has a power--law dependence of the hardness of their spectra.
In this case the spectral index of the extragalactic flux is related
to the exponent that describes the luminosity--hardness relation.
Sec.~\ref{sec:power-laws} discusses how
blazar emission can be described as a ``critical phenomenon''.
Section~\ref{sec:galactic} briefly comments on the Galactic cosmic ray sources
and on the possible relevance of the concepts developed here also
for Galactic accelerators. It is in fact puzzling that
most of the Supernova remnants observed by Fermi--LAT
have log--parabola spectra (that account for more than 90\% of the total
flux for this class of objects).
Sec.~\ref{sec:conclusions} contains some concluding remarks.
An appendix very briefly discusses how simple modifications
of the standard Fermi acceleration mechanism can result in
gradually softening spectra that are in good approximation
of log--parabola form.
\section{Gamma--ray point sources}
\label{sec:point-sources}
\subsection{Spectral shapes}
The Fermi--LAT telescope has measured the spectra of a large number
of point--like and quasi--point--like sources.
The recently released fourth source catalog (4FGL)
\cite{Fermi-LAT:2019yla} lists 5066 sources
with detection significance of more than 4 sigma, and
for each source provides a description of the spectral properties
in the form of an analytic fit. These results are of extraordinary value
to develop an understanding of the astrophysical acceleration mechanisms.
The 4FGL catalog uses three functional forms to
fit the spectra of the sources: ``Power--law'', ``Log--parabola''
and ``Cutoff''. The ``Cutoff'' form
(a power--law with a super--exponential cutoff)
is used in the 4FGL to fit the spectra of 220 sources
(218 Pulsars, the Small Magellanic Cloud, and the blazar 3C 454.3).
In the present work we are not interested in Pulsars,
and this spectral form will not be discussed further.
Most of the sources in the catalog (3543)
are fitted with the 2--parameter Power--Law form:
\begin{equation}
\phi_\gamma (E)= \phi_0~\left (\frac{E}{E_0} \right )^{-\alpha} ~.
\label{eq:power_form}
\end{equation}
In this expression $E_0$ is not a parameter but
a reference energy (called the ``pivot energy'')
chosen as the energy where the error on the absolute flux is minimum.
Approximately 25\% of the sources in the catalog (1303)
are fitted with the 3--parameter ``log--parabola'' form
\begin{equation}
\phi(E) = \phi_0 \; \left ( \frac{E}{E_0} \right )^{-(\alpha_0 + \beta \, \ln E/E_0)} ~.
\label{eq:log_parabola_form}
\end{equation}
As in the previous case $E_0$ is a source dependent pivot energy,
$\phi_0$ and $\alpha_0$ are the flux and
the spectral index at $E_0$, while $\beta$ gives the curvature of the spectrum.
For all sources in the catalog $\beta$ is positive,
and this corresponds to a gradually softening spectrum.
The name of this spectral shape expresses the fact
that in a log--log representation ($\log \phi(E)$ versus $\log E$)
the spectrum has the form of a parabola. This parabolic form
is conserved also when the spectrum is represented in the form
($\log E^n \phi(E) $ versus $\log E$) for any value of the exponent $n$.
The energy dependent slope $\alpha (E)$ of a log--parabola spectrum is:
\begin{equation}
\alpha(E)
= -\frac{d\log \phi(E)} {d\log E}
= \alpha_0 + 2 \beta \, \ln \frac{E}{E_0}
\label{eq:alpha_logparabola1}
\end{equation}
and grows linearly with $\ln E$ with coefficient $2 \beta$,
taking all real values, from $-\infty$ at very low energy to $+\infty$ at very high energy.
The log--parabola expression can also be rewritten in the form of a log--normal distribution as:
\begin{equation}
\phi(E)
= \phi_\dagger \; \left ( \frac{E}{E_\dagger } \right )^{-\beta \, \ln E/E_\dagger}
= \phi_\dagger
~e^{-\beta \; (\ln E -\ln E_\dagger)^2}
\label{eq:log_parabola_form2}
\end{equation}
In this expression we have eliminated the arbitrary reference energy $E_0$,
and introduced as a new parameter
$E_\dagger$ the energy where the flux has its maximum
(and where the spectral index vanishes: $\alpha (E_\dagger)=0$);
$\phi_\dagger$ is the value of the flux at $E = E_\dagger$.
The new parameters $E_\dagger$ and $\phi_\dagger$
can be obtained from $E_0$ and $\alpha_0$ as:
\begin{equation}
E_\dagger = E_0 \; e^{-\alpha_0/(2 \, \beta)}
\end{equation}
\begin{equation}
\phi_\dagger = \phi_0 \; e^{\alpha_0^2/(4 \, \beta)} ~.
\end{equation}
The value $E_\dagger$ is typically (for $\beta$ small)
much below the energy range where the spectra are measured,
and for this reason we find it more convenient
to parametrize the log--parabola form in terms of a different
quantity, the characteristic energy $E_*$ defined as
the energy where the Spectral Energy Distributions (SED)
$S(E) = E^2 \, \phi(E)$ has its maximum,
or equivalently where the spectral index takes the value $\alpha(E_*) = 2$.
The SED of a log--parabola spectrum is symmetric around the characteristic
energy $E_*$, so that the energy fluxes integrated
below and above $E_*$ are equal.
The log--parabola expression can be written in terms of $E_*$ in the form:
\begin{equation}
\phi(E)
= \phi_* \; \left ( \frac{E}{E_* } \right )^{-(2 +\beta \, \ln E/E_*)}
= \phi_*
~e^{-(2 \, \ln E/E_* + \beta \, \ln^2 E/E_*)}
\label{eq:log_parabola_form1}
\end{equation}
(where $\phi_*$ is the flux at $E = E_*$).
The parameters $E_*$ and $\phi_*$ can be calculated as:
\begin{equation}
E_*
= E_0 \; e^{(2-\alpha_0)/(2 \, \beta)}
= E_\dagger \; e^{1/ \beta}
\end{equation}
\begin{equation}
\phi_* = \phi_0 \; e^{(\alpha_0^2-4)/(4 \, \beta)}
= \phi_\dagger \; e^{-1/ \beta} ~.
\end{equation}
The spectral index $\alpha(E)$ of the log--parabola form
can be written using the parameter $E_*$ (or $E_\dagger$) as:
\begin{equation}
\alpha(E)
= 2 + 2 \, \beta \, \ln \frac{E}{E_*}
= 2 \, \beta \, \ln \frac{E}{E_\dagger} ~.
\label{eq:alpha_logparabola}
\end{equation}
It is important for the following discussion to note
that a log--parabola spectrum
(in contrast to a featureless, scale free power--law)
determines an energy scale
(that can be chosen has the energy $E_*$ where the SED has its maximum).
For a fixed value of the curvature parameter $\beta$
the shape of the spectrum then depends only on the ratio $E/E_*$.
It is also useful to note that (for $\beta >0$)
the log--parabola expression can in principle be extended
to all energies $E> 0$.
This is not possible for a power--law spectrum,
because it would result in divergences
in the number of particles and their total energy.
On the other hand for a log--parabola (log--normal)
spectrum (when $\beta > 0$) the momenta of arbitrary order $m$
are always finite:
\begin{equation}
\left \langle \phi(E) \; E^m \right \rangle =
\int_0^\infty dE~\phi(E)~E^m = \phi_* \; E_*^{m+1} ~\sqrt{\frac{\pi}{\beta}}
~e^{(m-1)^2/(4 \, \beta)} ~.
\label{eq:moment-m}
\end{equation}
This implies that the average energy of particles in a log--parabola
spectrum is proportional to the characteristic energy:
$\langle E \rangle = E_* \; e^{-1/(4 \beta)}$.
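The closed form for the moments is simple to verify numerically: with $u=\ln(E/E_*)$ the integrand becomes a Gaussian in $u$, so a wide trapezoid rule converges rapidly. A sketch with arbitrary illustrative values ($\phi_*=E_*=1$, $\beta=0.3$):

```python
import math

def moment_closed(m, phi_star, E_star, beta):
    """Closed form: phi_star E_star^(m+1) sqrt(pi/beta) exp((m-1)^2/(4 beta))."""
    return phi_star * E_star**(m + 1) * math.sqrt(math.pi / beta) * math.exp((m - 1)**2 / (4 * beta))

def moment_numeric(m, phi_star, E_star, beta, n=4000, width=40.0):
    """Trapezoid integration of phi(E) E^m dE with u = ln(E/E_star):
    the integrand is a Gaussian in u centred at u = (m-1)/(2 beta)."""
    centre = (m - 1) / (2 * beta)
    du = 2 * width / n
    total = 0.0
    for i in range(n + 1):
        u = centre - width + i * du
        phi = phi_star * math.exp(-(2 * u + beta * u * u))          # log-parabola in terms of E_star
        w = 0.5 if i in (0, n) else 1.0
        total += w * phi * (E_star * math.exp(u)) ** (m + 1) * du   # E^m dE = E^(m+1) du
    return total

for m in (0, 1, 2):
    print(m, moment_numeric(m, 1.0, 1.0, 0.3), moment_closed(m, 1.0, 1.0, 0.3))
```

The ratio of the $m=1$ to the $m=0$ moment reproduces $\langle E \rangle = E_* \, e^{-1/(4\beta)}$.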
\subsection{Extragalactic point sources}
To select a sample of extragalactic sources we have
chosen the sky region $|\sin b| > 0.25$ around the two Galactic poles.
In this part of the sky,
after the exclusion of a few objects that are classified as Pulsars
(and are therefore Galactic), the 4FGL catalog contains 3223 sources,
with a negligible contamination of Galactic objects.
The best fits to the spectra of the 30 brightest
objects in this selection are shown in Fig~\ref{fig:gamma_resolved}.
Only one of these sources (the blazar 3C 454.3,
the brightest object in this sky region) is fitted with the ``Cutoff'' form.
Most of the selected sources (2629 or 82\% of the total) are
fitted with the simple power--law form.
The spectral index for these objects
has a broad distribution
(shown in Fig.~\ref{fig:alphapower})
that takes values in an interval that extends from 1.2 to 3.5,
with average $\langle \alpha \rangle \simeq 2.21$
and width (r.m.s.) $\sigma_\alpha \simeq 0.31$.
The remaining 593 sources (18\% of the total) are fitted
with the ``log--parabola'' form.
Integrating in the 1--100~GeV energy interval,
these sources account for over 60\% of the total flux.
This is the consequence of the fact that
sources fitted with the log--parabola
form are on average more luminous than sources fitted
with the simple power--law form.
As an illustration, ranking the extragalactic sources by total flux,
after 3C 454.3 the next 86 objects are
all fitted with the log--parabola form, before
finding the first source fitted with a simple power--law form
(therefore in Fig.~\ref{fig:gamma_resolved} all the best fits lines
are curved).
This result can be (at least in part) attributed to the fact that
it is difficult to measure the curvature of a faint spectrum, and
one interesting hypothesis is that all (or nearly all) extragalactic sources
deviate from the simple power--law form, so that longer observation times
(or more sensitive telescopes) will result in the detection
of a significant curvature for a larger and larger fraction of the sources.
A scatter plot of the shape parameters $\{E_* ,\beta\}$
for the 593 extragalactic sources in the sky region $|\sin b| > 0.25$
that have been fitted with the log--parabola form
is shown in Fig.~\ref{fig:estar_beta}.
The projection that gives the distribution of the characteristic
energy $E_*$ is shown in Fig.~\ref{fig:estar},
and one can see that $E_*$ takes values in a broad interval
that extends from 10~MeV to 100~GeV. The projection for
the curvature parameter $\beta$ is shown in Fig.~\ref{fig:beta}.
The distribution has a peak at $\beta \simeq 0.1$, but extends
to values of order unity. Several sources are fitted to the
maximum value ($\beta = 1$) allowed in the Fermi--LAT fits.
The spectral index distributions of the extragalactic sources
that have been fitted with the log--parabola form
is shown in Fig.~\ref{fig:alpha1}.
The top panel of the figure shows the distribution
of $\alpha(E)$ for three values of the energy
$E= 0.1$, 1 and 100~GeV.
Inspecting this figure one can notice some interesting facts. \\
(i) The shape of the spectral index distribution
is energy dependent. This is because for
fluxes described by the log--parabola expression,
the spectral index (for $\beta > 0$)
grows with energy. Accordingly, the average $\langle \alpha (E) \rangle$
changes rapidly with energy, taking values of 0.61, 1.96 and 4.6
for $E = 0.1$, 1 and 100~GeV. \\
(ii) The distributions are very broad with r.m.s. values
$\sigma_\alpha (E)$ = 1.74, 0.79 and 2.38 for the same values of the energy.
The bottom panel of Fig.~\ref{fig:alpha1} shows the
distributions of $\alpha(E)$ calculated weighting the contribution
of each source with its (energy dependent) flux.
These flux weighted distributions
have a shape that depends weakly on energy and
is also much narrower.
For the same three values of the energy used before
($E = 0.1$, 1 and 100~GeV) the average spectral index is
$\langle \alpha (E) \rangle = 1.91$, 2.18 and 2.30,
and the r.m.s. width is $\sigma_\alpha (E) = 0.30$, 0.36 and 0.37.
The very different energy dependence of the average spectral
index calculated with and without weighting each source
by its flux is also evident in Fig.~\ref{fig:alphamed}.
In the figure the dashed line
shows (as a function of energy) the average
$\langle \alpha (E) \rangle_{\rm sources}$ of the index for all
extragalactic sources fitted with the log--parabola expression.
This quantity grows linearly with $\ln E$ with a coefficient
$2 \, \langle \beta \rangle \approx 0.59$.
The solid line shows the flux weighted average
$\langle \alpha (E) \rangle_{\rm flux}$
of all extragalactic sources (including those with a power law fit).
In this case, in the interval 0.1--100~GeV,
the average takes values in small interval
(between 2.21 and 2.30), and taking into account statistical uncertainties
is consistent with being constant.
This is equivalent to the result that the average of the fluxes
of the extragalactic sources can be described as a simple power--law
in the energy interval considered.
\subsection{Sum of the spectra of the extragalactic point sources}
The difference between the EGB (the total extragalactic flux)
and the IGRB (the isotropic component of the flux) is equal by definition
to the angle averaged contribution of all resolved point sources.
This relation can be written explicitly as:
\begin{equation}
\phi_{\rm resolved} (E) = \phi_{\rm EGB} (E) - \phi_{\rm IGRB} (E) = \frac{1}{\Delta \Omega} ~\sum_j \phi_j(E)
\label{eq:resolved}
\end{equation}
where the summation is over all extragalactic sources in the solid angle
$\Delta \Omega$. As already discussed,
in the energy range 0.1--100~GeV both the EGB and the IGRB
are well fitted by power--law with approximately equal spectral indices
(of order $\alpha \simeq 2.30$). This implies that the average spectrum
of the resolved extragalactic point sources, in reasonably good approximation,
can also be described by a simple power--law.
We have checked the validity of Eq.~(\ref{eq:resolved}),
obtaining the left--hand side of the equation from the measurements of the EGB and IGRB
in the Fermi--LAT publication \cite{Ackermann:2014usa}
and calculating the right--hand side summing the fits to
the 3223 extragalactic sources in the sky region $|\sin b| > 0.25$,
and dividing by the appropriate solid angle ($\Delta \Omega = 3 \, \pi$).
The result of this exercise is shown in Fig.~\ref{fig:gamma_extragal},
and exhibits good agreement in the energy range 0.1--100~GeV.
At higher energy the sum of the fits to the point sources
becomes larger than the measurement of the resolved extragalactic flux,
but this is expected because the form of the fits does not take
into account the effects of gamma--ray absorption
that become significant in this range.
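As a side check of the normalization used above, the solid angle of the sky region $|\sin b| > 0.25$ can be computed directly (a trivial numerical note, not part of the original analysis):

```python
import numpy as np

# Solid angle of the sky region |sin b| > 0.25 (b = Galactic latitude).
# The fraction of the sphere with |sin b| > s_min is (1 - s_min),
# so Omega = 4*pi*(1 - 0.25) = 3*pi, the value used in Eq. (resolved).
s_min = 0.25
omega = 4.0 * np.pi * (1.0 - s_min)
print(omega / np.pi)   # -> 3.0
```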
The result that the average of the fluxes of all extragalactic point sources
is well described by a simple power--law was in fact already evident
in Fig.~\ref{fig:alphamed} that shows the energy dependence
of $\langle \alpha(E) \rangle_{\rm flux}$, the (flux weighted)
average spectral index of the extragalactic sources.
This quantity is in fact identical to the spectral index
of the sum (or average) of all components, and therefore
the result that $\langle \alpha(E) \rangle_{\rm flux}$ is approximately
constant is equivalent to the statement that the spectrum of the
resolved extragalactic flux is a simple power law.
The result shown in Fig.~\ref{fig:gamma_extragal} is of course
only a consistency check, a demonstration that the fits performed
by the Fermi--LAT Collaboration for the spectra of the detected point sources
are reasonably accurate.
The interesting point is to understand the origin
of this result, that the sum of contributions that have
a large variety of spectral shapes
combine to form a simple power--law spectrum.
This could be regarded as a simple ``just so'' fact,
but in the following we will take the point of view that
it is something that requires a critical discussion
and the development of an understanding.
It is in fact not obvious how
a featureless, scale--free power--law spectrum can emerge from
the combination of components that have different shapes.
Even in the case where the components
have power--law form, with a distribution of different slopes,
the average is a convex, gradually hardening spectrum.
The next section will show under which conditions an ensemble
of curved, softening spectra (such as those described by the
log--parabola form) can combine to form an average of power--law form.
\section{Combination of components}
\label{sec:combined-spectra}
\subsection{Power--law components}
\label{sec:combined-power-laws}
It is straightforward to see that the sum of components that
have power--law form but different slopes results in a spectrum
that has a convex, gradually hardening form.
Let us consider a flux that is formed by the sum of many components:
\begin{equation}
\phi(E) = \sum_j \phi_j (E) = \sum_j K_j ~\left ( \frac{E}{E_0} \right )^{-\alpha_j}
\label{eq:flux-comb0}
\end{equation}
(with $E_0$ an arbitrary reference energy).
The spectral index $\overline{\alpha} (E)$ of the total flux
is simply the average of the spectral indices of the components:
\begin{equation}
\overline{\alpha}(E) = \langle \alpha (E) \rangle = \frac{1}{\phi(E)} ~\sum_j \phi_j (E) \, \alpha_j
= \frac{1}{\phi(E)} ~\sum_j K_j ~\left( \frac{E}{E_0} \right )^{-\alpha_j} \, \alpha_j ~.
\label{eq:alpha_pow}
\end{equation}
With growing $E$ the contributions of the hard components (with small $\alpha_j$)
increase in importance, because they are weighted with a larger
factor ($\propto E^{-\alpha_j}$), and the spectral index
decreases monotonically with $E$.
Eq.~(\ref{eq:flux-comb0}) can also be rewritten in the form:
\begin{equation}
\phi(E) = \int d\alpha ~K(\alpha, E_0) ~ \left ( \frac{E}{E_0} \right )^{-\alpha}
\label{eq:flux-comb}
\end{equation}
where $K(\alpha, E_0)$ gives the contribution to the total flux
at the energy $E_0$ of all sources that have spectral index $\alpha$.
An instructive case is when $K(\alpha, E_0)$ is a gaussian of width $\sigma_\alpha$
around the average value $\alpha_0$:
\begin{equation}
K (\alpha, E_0) = \frac{\phi(E_0)}{\sqrt{2 \, \pi} \, \sigma_\alpha}
~\exp \left [-
\frac{(\alpha - \alpha_0)^2}{2 \, \sigma_\alpha^2} \right ] ~.
\label{eq:alpha_dist1}
\end{equation}
In this case the integral in Eq.~(\ref{eq:flux-comb})
can be performed analytically
with the result:
\begin{equation}
\phi(E) = \phi(E_0) ~\left ( \frac{E}{E_0} \right )^{-\alpha_0 + \frac{1}{2} \sigma_\alpha^2 ~\ln (E/E_0)} ~.
\label{eq:power-comb}
\end{equation}
The spectral index $\overline{\alpha}(E)$ of the total flux is then:
\begin{equation}
\overline{\alpha} (E) = \alpha_0 - \sigma_\alpha^2 ~\ln \frac{E}{E_0} ~.
\label{eq:alpha-gaussian}
\end{equation}
Comparing Eqs.~(\ref{eq:power-comb}) and~(\ref{eq:log_parabola_form})
one can see that an ensemble of components of power--law form
with a (flux weighted) distribution of spectral indices that
is a gaussian of width $\sigma_\alpha$ combines to form
an average that has a log--parabola spectrum with a (negative)
curvature parameter $\beta = -\sigma_\alpha^2/2$.
In our discussion we have made the assumption that
the distribution of spectral index of the components
is a gaussian at the reference energy $E_0$. This assumption
however implies that the distribution of spectral index is a gaussian
with a constant (energy independent) width $\sigma_\alpha$
for {\em all} values of the energy $E$.
The average $\langle \alpha (E) \rangle = \overline{\alpha}(E)$
does however vary with $E$ following Eq.~(\ref{eq:alpha-gaussian}).
This corresponds to the fact that at different values of the energy,
different components are dominant in forming the total flux,
with harder components becoming important at higher energy.
At the energy $E$, only the subset of sources
with spectral index in an interval of width $\sigma_\alpha$
around the central value
$\langle \alpha (E)\rangle = \alpha_0 - \sigma_\alpha^2 \, \ln(E/E_0)$
give significant contributions to the total flux.
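The behaviour described by Eqs.~(\ref{eq:power-comb}) and~(\ref{eq:alpha-gaussian}) can be illustrated with a short numerical sketch (the parameter values $\alpha_0 = 2.3$ and $\sigma_\alpha = 0.3$ are arbitrary choices for illustration, not fits to the data):

```python
import numpy as np

# Illustrative check (arbitrary parameters, not from the data):
# sum power-law components whose normalizations at E0 follow a
# gaussian distribution of spectral indices, and compare the local
# slope of the total flux with alpha0 - sigma_a**2 * ln(E/E0).
alpha0, sigma_a, E0 = 2.3, 0.3, 1.0
alphas = np.linspace(alpha0 - 6 * sigma_a, alpha0 + 6 * sigma_a, 4001)
weights = np.exp(-(alphas - alpha0) ** 2 / (2 * sigma_a ** 2))

def flux(E):
    # total flux of the ensemble at energy E (arbitrary units)
    return np.sum(weights * (E / E0) ** (-alphas))

def slope(E, eps=1e-4):
    # local spectral index  -d ln(phi) / d ln(E)  by finite differences
    return -(np.log(flux(E * (1 + eps))) - np.log(flux(E))) / np.log(1 + eps)

for E in (1.0, 10.0, 100.0):
    print(slope(E), alpha0 - sigma_a ** 2 * np.log(E / E0))
```

The local slope of the summed flux decreases linearly in $\ln E$: the mix of pure power--laws is convex and gradually hardening, as stated in the text.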
\subsection{Log--parabola components}
\label{sec:combined-logparabola}
If a spectrum is formed by the combination of components that
have log--parabola form, the spectral shape is determined
by the combination of two effects that act in opposite directions.
When the energy increases:
(i) components with harder and harder spectral shape
become dominant, but
(ii) the spectra of all components (assuming $\beta > 0$) soften gradually.
It is therefore possible that the two effects cancel, resulting
in an average flux that is a simple power--law.
The spectral index of the sum of an ensemble of
components of log--parabola form around the
(arbitrary) energy $E_0$ can be estimated as:
\begin{equation}
\overline{\alpha} (E)
= \langle \alpha(E) \rangle
\simeq \langle \alpha(E_0) \rangle
+ \left (2 \, \langle \beta (E_0) \rangle - \sigma^2_{\alpha} (E_0) \right )
~\ln \left (
\frac{E}{E_0} \right )~.
\label{eq:cancellation0}
\end{equation}
In this equation
$\langle \alpha (E_0) \rangle$
is the (flux weighted) average spectral index
at the energy $E_0$, and the curvature of the spectrum
(that is the derivative $d\overline{\alpha}(E)/d\ln E$)
is determined by the sum of two terms of opposite sign
that describe the two effects discussed in the previous paragraph.
The first one, $2 \, \langle \beta(E_0)\rangle$,
is associated with the fact that the spectra of the individual
components are softening [see Eq.~(\ref{eq:alpha_logparabola1})],
and is simply the average of the curvature parameters of the components.
The second term, $-\sigma^2_{\alpha}(E_0)$,
is associated with the fact that
at higher energy the total flux is formed by harder components
[see Eq.~(\ref{eq:alpha_pow})]; $\sigma_{\alpha} (E_0)$
is the width of the spectral index distribution at $E_0$.
Inspecting Eq.~(\ref{eq:cancellation0}) one can see that the curvature of the
spectrum of the average flux at the energy $E$ can
vanish if the condition
\begin{equation}
2 \, \langle \beta(E) \rangle = \sigma_\alpha^2 (E)
\label{eq:cancellation}
\end{equation}
is satisfied.
If Eq.~(\ref{eq:cancellation}) is valid in a finite energy interval,
then in that range the spectrum is described by a simple power--law
of constant spectral index.
The spectrum of the resolved extragalactic gamma--ray flux
obtains its power law form because of this ``cancellation effect''.
At higher energy the flux is generated by harder components,
however the spectra of the individual components are also
gradually softening, and the spectral index of the total flux
remains approximately constant.
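The cancellation can be checked numerically; in the sketch below (arbitrary illustrative parameters) the components have a common curvature $\beta$ and a gaussian flux--weighted distribution of spectral indices at $E_0$ with $\sigma_\alpha^2 = 2\beta$:

```python
import numpy as np

# Illustrative sketch (arbitrary parameters): log-parabola components
# phi_j(E) ~ (E/E0)^-(a_j + beta*ln(E/E0)) with a common curvature
# beta and a gaussian (flux weighted at E0) distribution of a_j.
# When sigma_a**2 = 2*beta the hardening of the mix compensates the
# softening of the individual components exactly.
beta, alpha0, E0 = 0.05, 2.3, 1.0
sigma_a = np.sqrt(2 * beta)        # cancellation condition, Eq. (cancellation)
a = np.linspace(alpha0 - 8 * sigma_a, alpha0 + 8 * sigma_a, 8001)
w = np.exp(-(a - alpha0) ** 2 / (2 * sigma_a ** 2))

def flux(E):
    L = np.log(E / E0)
    return np.sum(w * np.exp(-(a + beta * L) * L))

def slope(E, eps=1e-4):
    # local spectral index of the total flux
    return -(np.log(flux(E * (1 + eps))) - np.log(flux(E))) / np.log(1 + eps)

print([round(slope(E), 4) for E in (0.1, 1.0, 10.0, 100.0)])
```

The total flux keeps a constant slope $\alpha_0$ over the whole range, even though every individual component is gradually softening.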
\section{Statistical properties of the ensemble of the gamma--ray sources}
\label{sec:statistical}
The fact that the extragalactic gamma--ray sources
emit spectra that have a variety of different shapes
is of course of great importance
for developing an understanding of the mechanisms that generate the spectra.
It is however not immediately clear what is the significance of the fact that
the average flux formed by the ensemble of all sources can be well
described by a simple power law in a broad energy range.
This result can be seen as an
``accident'' without any deep physical meaning.
In the following however we will start from the assumption that this
result is perhaps revealing some interesting property for the
{\em ensemble} of the extragalactic sources.
We will discuss this problem in terms of an ensemble of sources
that emit spectra $q_j(E)$ (with $j$ an index that runs over all the sources)
of different shape and normalization.
The combined emission, obtained summing all sources will be indicated by $Q(E)$.
A result of general validity is the following.
The combined emission $Q(E)$ has a simple power--law form if
these two conditions are satisfied:
\begin{itemize}
\item [(A)] The spectral shape of each component
is determined by a parameter $E_*$ (with the dimension of energy),
and is a function of the ratio $E/E_*$, so that
the emission from the $j$--th source has the form:
\begin{equation}
q_j (E) = \frac{q_{0,j}}{E_{*,j}} ~F\left ( \frac{E}{E_{*,j}} \right )
\label{eq:qj}
\end{equation}
where $q_{0,j}$ is a normalization factor,
$E_{*,j}$ is the (source dependent) characteristic energy,
and $F(x)$ is a function of arbitrary shape.
Without loss of generality one can impose the condition
that $F$ is normalized to unity,
so that the energy integrated emission from the $j$--th source
is $q_{{\rm tot},j} = q_{0,j}$.
\item[(B)] The (energy integrated) emission
of all sources characterized with characteristic energy
$E_*$ has the power law dependence:
\begin{equation}
\frac{dQ_{\rm tot}}{dE_*} (E_*) = Q_0 ~\left (\frac{E_*}{E_0} \right )^{-p}
\label{eq:qtot-estar}
\end{equation}
(with $E_0$ an arbitrary reference energy).
For $p > 0$ Eq.~(\ref{eq:qtot-estar})
states that the emission of the sources
decreases when their spectral hardness increases.
\end{itemize}
A demonstration of the theorem stated above is straightforward.
The total emission $Q(E)$ can be written as an integral
over the parameter $E_*$ as:
\begin{equation}
Q(E) = \sum_j q_j (E) = \int_0^\infty dE_*
~\frac{dQ_{\rm tot}}{dE_*}
~\frac{1}{E_*}
~F\left ( \frac{E}{E_{*}} \right )
\label{eq:q0}
\end{equation}
where the quantity $dQ_{\rm tot}/dE_*$
\begin{equation}
\frac{dQ_{\rm tot}}{dE_*} = \sum_j q_{0,j}~\delta[E_* - E_{*,j}]
\label{eq:dqstar}
\end{equation}
describes the contribution to the emission
of all sources with critical energy $E_*$.
If $dQ_{\rm tot}/dE_*$ has the power--law form of Eq.~(\ref{eq:qtot-estar})
the integration over $E_*$ in Eq.~(\ref{eq:q0})
can be performed analytically, and the combined emission
takes the form:
\begin{equation}
Q(E) = Q_0 \; k_p ~\left (\frac{E}{E_0} \right )^{-p}
\label{eq:q1}
\end{equation}
where $k_p$ is a dimensionless constant that depends
on the exponent $p$ and on the shape of the function $F(x)$:
\begin{equation}
k_p = \int_0^{\infty} \; dx ~x^{p-1} \; F(x) ~.
\label{eq:kk}
\end{equation}
This completes the demonstration of our general theorem.
The important point of Eq.~(\ref{eq:q1}) is that the
spectral index $p$ of the combined emission
is associated with the statistical properties of
the ensemble of the sources,
and describes how the luminosity of the sources
decreases with the hardness of their spectra.
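A minimal numerical consistency check of the theorem, with the arbitrary illustrative choice $F(x) = x\,e^{-x}$ (normalized to unity) for the shape function:

```python
import numpy as np

# Consistency check of the theorem (illustrative shape function):
# components q ~ (q0/Estar) F(E/Estar), summed with the scale-free
# weight dQtot/dEstar ~ Estar**-p, give Q(E) ~ E**-p.
p = 2.3
F = lambda x: x * np.exp(-x)          # normalized: integral of F(x) dx = 1

v = np.linspace(-25.0, 25.0, 20001)   # v = ln(Estar)
dv = v[1] - v[0]
Estar = np.exp(v)

def Q(E):
    # dEstar = Estar * dv, and the integrand carries a factor 1/Estar,
    # so the measure in ln(Estar) is simply Estar**-p * F(E/Estar)
    return np.sum(Estar ** (-p) * F(E / Estar)) * dv

# the ratio Q(E1)/Q(E2) must equal (E1/E2)**-p
print(Q(1.0) / Q(10.0), 10.0 ** p)
```

Any other normalized shape $F(x)$ gives the same power--law behaviour; only the constant $k_p$ changes.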
\subsection{Log--parabola spectra}
The general discussion that we have developed above can be applied
to spectra of log--parabola (or log--normal) form.
The only complication is that the log--parabola
spectra are defined not by a single parameter, but by two,
that can be chosen as the characteristic energy $E_*$
and the curvature parameter $\beta$
(see discussion in Sec.~\ref{sec:point-sources}).
If $\beta$ has a single value the results discussed above
are immediately applicable. In this case the (normalized) function $F(x)$
[see Eq.~(\ref{eq:log_parabola_form1})] has the form:
\begin{equation}
F(x) = \sqrt{\frac{\beta}{\pi}} ~e^{-1/(4 \, \beta)} ~x^{-(2+\beta \, \ln x)}
\label{eq:flog}
\end{equation}
and the constant $k_p$ of Eq.~(\ref{eq:kk}) becomes:
\begin{equation}
k_p = e^{(p^2-4 p + 3)/(4 \, \beta)} ~.
\label{eq:kk-log}
\end{equation}
The quantity $dQ(E)/d\ln E_*$ that describes the contribution
of sources characterized by the parameter $E_*$
to the total emission at the energy $E$
can be calculated using Eqs.~(\ref{eq:q0}) and~(\ref{eq:flog})
with the result:
\begin{equation}
\frac{1}{Q(E)} ~\frac{dQ(E)}{d\ln E_*} =
\frac{1}{\sqrt{2 \, \pi} \; \sigma_{\ln E_*}} ~
\exp \left [
- \frac{ (\ln E_* - \langle \ln E_* (E) \rangle)^2} {2 \, \sigma_{\ln E_*}^2}
\right ] ~.
\label{eq:q-logestar}
\end{equation}
This distribution is a gaussian in $\ln E_*$ with width
\begin{equation}
\sigma_{\ln E_*} = 1/\sqrt{2 \, \beta}
\end{equation}
and average
\begin{equation}
\langle \ln E_* (E) \rangle = \ln E - \left (\frac{p-2}{2 \, \beta} \right ) ~.
\end{equation}
In other words, the emission at the energy $E$ is generated by
sources that have the $E_*$ parameter in a relatively small range of
values centered at a value $E_* \simeq E \, e^{-(p-2)/(2 \, \beta)}$
that grows linearly with $E$, so that, with increasing energy, the emission is
dominated by harder and harder sources
(with larger and larger characteristic energy $E_*$).
The combined emission at energy $E$ has the spectral index $p$,
but the slopes of the contributions of the individual sources have a range
of values. It is easy to compute the shape of this distribution
using Eq.~(\ref{eq:q-logestar}) and the relation between the slope
$\alpha(E)$ and the critical energy $E_*$. The result is:
\begin{equation}
\frac{1}{Q(E)} ~\frac{dQ(E)}{d\alpha} =
\frac{1}{\sqrt{2 \, \pi} \; \sigma_{\alpha}} ~
\exp \left [
- \frac{ (\alpha - p)^2} {2 \, \sigma_{\alpha}^2}
\right ] ~.
\label{eq:q-alpha}
\end{equation}
This expression is again a gaussian, however in this case
both the width and the average are energy independent.
The average spectral index is
\begin{equation}
\langle \alpha \rangle = p
\end{equation}
(as it must be, because we have already demonstrated that the
combined emission is a power law of constant slope), and the
constant width is:
\begin{equation}
\sigma_\alpha = \sqrt{2 \, \beta} ~,
\end{equation}
so that the cancellation condition of Eq.~(\ref{eq:cancellation}),
$2 \, \langle \beta \rangle = \sigma_\alpha^2$, is automatically satisfied.
The results of Eqs.~(\ref{eq:q-logestar}) and~(\ref{eq:q-alpha})
have been obtained for a unique value of the curvature parameter $\beta$;
it is however straightforward to generalize to the case
where the sources have an arbitrary $\beta$ distribution
with a shape that is independent of the value of
the characteristic energy $E_*$,
as (in first approximation) is the case for the extragalactic gamma--ray
sources (see Fig.~\ref{fig:estar_beta}).
In this more general case the distributions of $\ln E_*$ and
of $\alpha(E)$ are the superposition of gaussians with
$\beta$ dependent width and average.
The distributions of $\alpha(E)$ of the extragalactic sources
shown in the bottom panel of Fig.~\ref{fig:alpha1}
have an energy independent shape
and are consistent with the results of Eq.~(\ref{eq:q-alpha}).
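The statistical relations derived above for a single value of $\beta$ can be checked numerically; the sketch below (with the arbitrary illustrative values $p=2.3$, $\beta=0.1$) computes the flux--weighted mean and dispersion of the slopes $\alpha(E) = 2 + 2\beta\ln(E/E_*)$ of the contributing sources:

```python
import numpy as np

# Illustrative check (arbitrary p and beta): for log-parabola
# components F(x) ~ x**-(2 + beta*ln(x)) weighted by Estar**-p, the
# contribution to the flux at energy E, as a function of v = ln(Estar),
# is a gaussian, and the corresponding slopes
# alpha(E) = 2 + 2*beta*(lnE - v) have a flux-weighted distribution
# whose mean and width do not depend on E.
p, beta = 2.3, 0.1
v = np.linspace(-40.0, 40.0, 40001)       # v = ln(Estar)

def alpha_moments(lnE):
    x = lnE - v                           # ln(E/Estar)
    w = np.exp(-p * v - (2 + beta * x) * x)   # Estar**-p * F(E/Estar)
    w = w / np.sum(w)
    alpha = 2 + 2 * beta * x
    mean = np.sum(w * alpha)
    std = np.sqrt(np.sum(w * (alpha - mean) ** 2))
    return mean, std

for lnE in (0.0, np.log(100.0)):
    print(alpha_moments(lnE))   # mean stays at p for every energy
```

The mean slope stays at $p$ and the dispersion is energy independent, in agreement with Eq.~(\ref{eq:q-alpha}).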
\section{The origin of power--law spectra}
\label{sec:power-laws}
Power--laws appear widely in
physics, biology, economics, social sciences and many other fields.
For instance they describe the distributions of the size of earthquakes,
moon craters, towns and cities, forest fires and
many other data sets \cite{newman_powerlaws}.
A list of real--world data sets from a range of different
disciplines that can be reasonably well described by
power--law distributions is for example presented in \cite{clauset_powerlaws},
and it is remarkable that the spectral indices
that describe these data sets have values between 1.7 and 3.1,
and in several cases the best fit to the spectral index
is of order 2.3--2.4.
The origin of these power--law distributions
has been a topic of debate for over a century.
A possible explanation for a number of these power--laws is
the intriguing concept of self--organized criticality, originally proposed
by Bak {\it et al.} \cite{Bak:1987xua,Bak:1988zz}
to describe dynamical systems (such as the paradigmatic sandpile model)
that evolve naturally toward a critical state that has no intrinsic time or length scale.
Well known ``toy model'' examples of this idea
have been developed with numerical simulations of cellular automata,
that describe approximations of sand piles \cite{Bak:1988zz}
or forest fires \cite{Bak:1990,Drossel:1992}.
The concept of self--organized criticality has been applied to a wide range
of fields from biophysics (evolution and extinctions, spread of diseases)
to social sciences (urban growth, traffic, internet),
and also in astrophysics \cite{Aschwanden:2014dna}.
It is not clear if the concept of self--organized criticality is
also relevant to understand the origin of the approximate power--law shape of
the cosmic ray spectra, a problem that has been of central importance
in high energy astrophysics for many decades.
Current models for Galactic cosmic rays
(see for example the textbook of
Gaisser, Engel and Resconi \cite{Gaisser:2016})
explain the power--law form of the spectrum with the existence
of a ``universal'' acceleration mechanism
that generates power--law spectra with a unique spectral index $\alpha$
(below a maximum energy that can be source dependent).
The source spectra are then distorted by propagation effects,
that for ultrarelativistic protons and nuclei have
a power--law energy dependence
characterized by the slope $\delta$, so that
the CR fluxes observable at the Earth have power--law spectra
with a slope $\gamma$ determined by both
(acceleration and propagation) mechanisms: $\gamma \simeq \alpha + \delta$.
The ``universal'' CR acceleration mechanism is commonly identified
as ``first order Fermi acceleration'' \cite{Blandford:1987pw}
(based on a modification of ideas originally proposed by Fermi \cite{Fermi:1949ee}),
where particles are accelerated while propagating in a magnetized
plasma in the presence of strong shock waves,
such as those generated by supernova explosions.
Simplified treatments of this mechanism
generate spectra with index $\alpha = 2 + 4/M^2$ where
$M$ is the Mach number of the shock wave in the upstream region.
For strong shocks ($M \gg 1$) this corresponds to an approximately universal
slope $\alpha \simeq 2 +\varepsilon$ (with $\varepsilon$ positive and small).
The ``standard'' model for the acceleration of Galactic cosmic rays
appears quite distant from the concepts of self--organized criticality.
On the other hand, in this work we have shown that, under certain conditions,
a power--law spectrum can also be formed by the combination of
components that have different shapes.
In particular, the discussion in Sec.~\ref{sec:statistical}
demonstrates that the power--law shape of the extragalactic gamma--ray flux
emerges from the relation between the luminosity of the sources
and the hardness of their spectra [see Eq.~(\ref{eq:qtot-estar})].
This luminosity--hardness relation is of power--law form, and its slope
determines the spectral index of the gamma--ray emission.
If the origin of the power--law form of the spectrum
is based on a mechanism of this type, and is related to the
statistical properties of the ensemble of the sources,
the concepts of self--organized criticality
become much more relevant.
For example, an intriguing possibility is that there is an analogy between
the flares that accelerate particles in blazars and earthquakes.
The distribution of the energy released during earthquakes has been found
to obey the well known Gutenberg--Richter law \cite{gutenberg-richter},
originally proposed on the basis of empirical observation.
The Gutenberg--Richter law is usually formulated
stating that the frequency of
earthquakes with magnitude greater than $m$ is given by the relation
$\log_{10} N = a - b\; m$, where $a$ and $b$ are dimensionless constants
with values that depend on the region of the Earth
(with the parameter $b$ close to 1.0).
The Gutenberg--Richter law can be reformulated
stating that the differential frequency distribution for the
release of energy $\mathcal{E}$ in an earthquake
has the power--law form: $dN/d\mathcal{E} \propto \mathcal{E}^{-(b+1)}$.
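The algebra behind this reformulation can be verified with a few lines (hypothetical values of $a$ and $b$; the step assumes, as implicit in the text, that the released energy scales as $\mathcal{E} \propto 10^{m}$):

```python
import numpy as np

# Check of the reformulation (hypothetical a, b; assumes, as implicit
# in the text, that the released energy scales as Ecal = 10**m):
# N(>m) = 10**(a - b*m)  =>  N(>Ecal) ~ Ecal**-b
#                        =>  dN/dEcal ~ Ecal**-(b+1)
a, b = 5.0, 1.0
m = np.linspace(0.0, 8.0, 4001)
Ecal = 10.0 ** m
N_gt = 10.0 ** (a - b * m)                 # cumulative frequency N(>m)
dN_dE = -np.gradient(N_gt, Ecal)           # differential distribution
# local log-log slope of dN/dE, expected to be -(b + 1)
slope = np.gradient(np.log(dN_dE), np.log(Ecal))
print(slope[2000], -(b + 1.0))
```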
The mechanisms that generate this frequency--magnitude relation
are not yet fully clarified, but have been interpreted as the
consequence of the fact that the Earth's crust is in a state of
self--organized criticality \cite{Bak:2002zz}.
One can speculate that the blazar emission (that dominates
the extragalactic gamma--ray flux) is generated by
``flares'' (presumably associated with the accretion
flow onto the central black hole) that are less frequent
(or less energetic) when they form harder spectra.
The spectral index of the average extragalactic
flux is then related to the exponent that describes
the frequency of events with high and low characteristic energy.
In this framework, the slope of the average extragalactic flux
is analogous to the parameter $(b+1)$ of the Gutenberg--Richter law.
A closer analogy for the origin of the extragalactic gamma--rays
is the acceleration of particles in solar flares.
The spectra of relativistic particles generated in
solar flares have a large variety of spectral shapes,
however the time averaged spectra
measured in the energy range from 10~keV to 100~MeV
\cite{mewaldt_2001,mewaldt_2007} have a smooth shape
that can be well approximated as a simple power--law.
This result can be explained with the same argument
outlined above, and assuming that the generation
of the flares is a critical phenomenon.
In fact, already in 1991, Lu and Hamilton \cite{Lu-Hamilton} argued that
solar flares can be seen as analogous to the avalanches of sand
in the models published by Bak and colleagues.
This naturally explains why the flares have a very broad size distribution
that is well described by a single power--law
that spans five orders of magnitude.
In this interpretation the classification of flares
into nanoflares, microflares,
giant flares and so on, is arbitrary, because they
are all generated by the same fundamental mechanism.
In order to form a featureless time integrated spectrum, it is also
necessary to assume that the spectral shape of the particles
accelerated in one flare is related to the total energy contained
in one event.
\section{Galactic cosmic rays}
\label{sec:galactic}
As discussed above, the study of Galactic cosmic rays indicates
that the average source spectrum released in interstellar space
by the Milky Way accelerators, in a broad energy range, has a power--law shape with
a spectral index of order $\alpha_0 \approx 2.2$--2.4.
In the ``standard model'' for the acceleration of Galactic cosmic rays
all CR sources (or at least those that are dominant)
generate spectra that have a unique, ``universal'' shape
(that is obviously identical to the shape
of the space and time averaged spectrum).
The prediction of the existence of this ``universal'' acceleration
spectrum has very important implications for high energy astrophysics,
but has not yet received a clear confirmation from the observations.
The alternative possibility is that the power--law form of the Galactic
CR source spectrum emerges as the average of components
of different spectral shape.
This question can be investigated experimentally,
by studying the energy distributions of freshly accelerated particles
inside or near the accelerators.
Information about these CR spectra can be obtained from the
observations of the emission of gamma--rays (and neutrinos) from these
astrophysical sources.
We have already used in this work the Fermi--LAT observations
of the gamma--ray sources summarized in the 4FGL catalog
\cite{Fermi-LAT:2019yla} to study the extragalactic sources.
The catalog also gives information about Galactic sources, and
in particular about the spectra of young Supernova remnants (SNR),
that are commonly considered as the most attractive class of objects
for the main accelerators of Galactic cosmic rays.
The 4FGL catalog contains 40 sources that are associated with SNRs.
The best fits for these 40 sources are shown in Fig.~\ref{fig:snr_spectra}.
Fifteen of these SNR sources have been fitted with
a simple power--law form, and the spectral index of these fits
takes values in a broad interval that goes from 0.96 to 2.49,
with average $\langle \alpha \rangle \simeq 1.98$,
and a width (r.m.s.) $\sigma_\alpha = 0.36$.
The other 25 sources have
curved spectra and have been fitted with the log--parabola
expression of Eq.~(\ref{eq:log_parabola_form}).
Inspecting the figure one can see that sources
fitted with the log--parabola form are typically
brighter than those fitted with the power--law form,
and summed together they account for 90.1\% of the flux of all SNR sources.
Fig.~\ref{fig:snr_parameters} shows the shape parameters of the fits
to the SNR gamma--ray spectra.
For the 15 sources fitted with a power--law form,
the lower part of the figure shows the best fit spectral index
and the 1--$\sigma$ error. For the 25 sources fitted
with the log--parabola form,
the figure shows the spectral index $\alpha(E)$ for two values
of the energy: $E = 200$~MeV and $E = 10$~GeV.
For each of these sources the spectral shape is represented by
two points (with error bars) in the plane $\{\alpha, \beta\}$
(showing the spectral index $\alpha$ at the two energies considered,
while the parameter $\beta$ is energy independent).
Figures~\ref{fig:snr_spectra} and~\ref{fig:snr_parameters}
show that the gamma--ray emission from SNRs has a very broad range of
spectral shapes, and this suggests that the CR populations
contained in the sources also have a broad range of energy distributions.
Can these results be reconciled with the idea that
all SNR generate spectra of universal shape?
Unfortunately the answer to this question is not trivial and requires
a detailed modeling of the sources. There are in fact
some significant difficulties for the
interpretation of the SNR data. \\
(i) The observations of each source are effectively only one
``snapshot'', taken at a single time, of an evolving object. \\
(ii) One needs a model for the space distributions of the
populations of relativistic particles and of the target
(gas and radiation fields) inside the sources.
Given these difficulties, it is perhaps possible that the large
variations in the spectral shape of the emission from SNRs
can be attributed to differences in the age and environment
of the supernovae, so that the time integrated spectra of different objects
have equal shape.
In this work we will not attempt a discussion of these problems
and a review of the large body of literature
on the modeling of particle acceleration
in Supernova Remnants. We can however note that the
alternative possibility, that different supernovae accelerate
cosmic--ray populations with different spectral shapes, is also consistent with
the observations. The hypothesis that the spectra of particles
accelerated in SNR do not have a unique spectral shape is
not necessarily inconsistent with the idea that
SNR are the dominant source of the Galactic cosmic rays,
if the contributions of a sufficiently large number of
objects combine to form an average spectrum of power--law form.
Recently, it has been observed that the Galactic CR spectra
do not have exactly a power--law form, but contain
features such as a hardening at a rigidity of order 300~GV, and a
softening at 10~TV \cite{Ahn:2010gv,Adriani:2011cu,Aguilar:2015ooa,An:2019wcw}.
A possible interpretation for these deviations of the spectrum
from a simple power--law form is that they are
the manifestation of the fact that the CR flux is
formed by components that do not have identical
energy distributions \cite{Lipari:2019jmk}.
\section{Outlook}
\label{sec:conclusions}
The observations of Fermi--LAT have shown that most of the extragalactic
gamma--ray flux is generated by blazars.
These sources emit spectra that have a broad range of shapes
that in most cases are ``curved'', that is, they do not have
a constant spectral index, but soften gradually when the energy increases.
This fact is clearly of great importance to develop an understanding
of the mechanisms that accelerate and confine particles in AGN jets.
The fact that the emissions from blazars do not
have a unique shape implies that the extragalactic
flux at different energies is generated by different objects.
This can be of great importance
for the study of the origin of the astrophysical neutrino flux
recently discovered by IceCube
\cite{Aartsen:2013jdh,Aartsen:2014gkd,Aartsen:2015rwa}.
This neutrino flux emerges as an approximately isotropic
(and therefore extragalactic) component
above the atmospheric foreground at very high energy ($E \gtrsim 100$~TeV).
Blazars, which are the dominant source of extragalactic gamma--rays
in the energy range 0.1--10$^3$~GeV, are the most
natural candidate for the class of objects that generates
the neutrino signal, because in all theoretical models
high energy gamma--ray and neutrino emissions are intimately related.
There is strong evidence \cite{IceCube:2018dnn,IceCube:2018cha}
that one blazar (TXS 0506+056) is a high energy neutrino emitter,
however studies of the correlation between the directions of the
high energy neutrinos detected by IceCube and the positions of the
blazars observed by Fermi--LAT have yielded only upper limits
\cite{Aartsen:2016lir} on the maximum contributions of these objects
to the astrophysical neutrino signal.
For these correlation studies it is important to take into account
the fact that the gamma--rays and the neutrinos are observed
in different energy ranges, and the result discussed in this paper
that different sources have different spectral shapes must
be taken carefully into account to obtain an estimate of the
contribution of the blazars to the neutrino flux.
This problem deserves a detailed discussion
that is postponed to a future paper.
It is remarkable that the average spectrum generated by the ensemble of
all extragalactic sources, in a broad energy interval,
can be well described by a simple, featureless power--law form,
with a spectral index of order 2.30.
This result emerges even if the spectra of the individual sources are
``curved'' and gradually softening, because with increasing energy
objects with harder spectra become dominant. For any $E$
in the range of the Fermi--LAT observations
the extragalactic flux is dominated by sources with
spectral index of order 2.30. The sources dominant at energy $E$
are less important both at lower energy
(when they have harder spectra and their
relative contribution is growing) and at higher energy
(when they have softer spectra and their contribution is decreasing).
The ``hardness'' of the spectrum of a source
can be parametrized by the value $E_*$ of the energy where
the spectral index has the value
$\alpha(E_*) = 2$. The result that the average spectrum of
all extragalactic sources has a simple power--law form is then
equivalent to the statement that the emission $dQ_{\rm tot}/dE_*$
of the sources that generate spectra characterized by the parameter $E_*$
has a power--law dependence on $E_*$: $dQ_{\rm tot}/dE_* \propto E_*^{-p}$.
The exponent $p$ is then also the spectral index of the average
emission spectrum.
The emission of gamma--rays from blazars is then analogous
to the production of solar energetic particles, which are generated by
solar flares that have a very broad range of sizes, with small
and frequent flares that generate soft spectra, and large and rare flares
that generate hard spectra. The time averaged spectrum of the flares
is also reasonably well described by a simple power--law.
In this scenario, the power--law shape of the extragalactic gamma--ray
emission emerges from the statistical properties of the
blazar flares, namely the power--law relation between
the frequency of the flares, their energy output, and the hardness of the
spectra of the particles that they accelerate.
The flaring of blazars can then be seen as one example of a critical
phenomenon, analogous for example to the generation of earthquakes in
the crust of the Earth.
It is natural to speculate if some of the results and
considerations developed here for the gamma--ray emission
from blazars can be relevant also for other classes of
high energy sources, in particular for the acceleration
of Galactic cosmic rays.
In this respect it is interesting to note
that the Galactic gamma--ray sources measured by Fermi--LAT
also have a broad range of spectral shapes, and
a large fraction of them are fitted with the
``curved'' log--parabola expression.
The sources with gradually softening spectra are
also bright and account for 73\% of the total flux
of all point sources (excluding Pulsars).
For Supernova remnants, objects fitted with the log--parabola spectrum
account for more than 90\% of the flux.
Since the curvature of the spectrum of a faint object is
difficult to observe, this suggests that perhaps most
Galactic and extragalactic sources have curved spectra.
These results appear to be in conflict with the simple idea
that astrophysical acceleration mechanisms always
generate power--law spectra,
and suggest investigating in depth alternative models,
in which only the average over many sources can be described by
a spectrum of constant slope.
\vspace{0.25 cm}
\noindent{\bf Acknowledgments.}
I am grateful to Tom Gaisser for drawing my attention to the
acceleration of particles in solar flares, and to Silvia Vernetto
for many discussions.
\section{Introduction}
The light-harvesting antenna complexes of purple non-sulphur photosynthetic bacteria provide prime examples for the importance of quantum effects for biological function~\cite{cogdell06_227,scholes17_647}. Fascination especially among non-biologists has been triggered by the publication of the high-resolution structure for the peripheral antenna LH2 of \textit{Rhodopseudomonas (Rps.) acidophila} by McDermott et al. in 1995~\cite{mcdermott95_517}. The modular design of rings, comprised of nine pairs of $\alpha\beta$-apoproteins, each pair binding three BChl $a$ molecules, facilitates a wealth of scenarios as far as exciton dynamics is concerned (for reviews, see e.g. Refs. \cite{pullerits96_381, kuhn97_213, valkunas00, renger01_137}).
From the dynamics and spectroscopy point of view, the LH2 consists of two different pigment pools, i.e.\ the strongly coupled 18 BChl $a$ molecules forming the B850 ring, whose bacteriochlorin planes are perpendicular to the transmembrane $\alpha$ helices, and the more weakly coupled 9 BChl $a$ molecules, whose bacteriochlorin planes are essentially perpendicular to the B850 ones. The two pigment pools give rise to two absorption features at about 800 and 850~nm as indicated by the labeling. It is commonly assumed that the electronic excitation of the B850 pool is rather delocalized and the transfer is of exciton relaxation type~\cite{kuhn97_4154,chachisvilis97_7275}, whereas the B800 pool is characterized by hopping-like incoherent transfer~\cite{pullerits97_10560}, although the latter view has been challenged by recent simulations~\cite{smyth15_30805,shibl17_184001}. Different proposals also exist for the inter-pool B800 to B850 transfer. Due to the weak coupling, modified F\"orster theory taking into account the excitonic delocalization seems to be appropriate~\cite{scholes03_57,sener11_518}. However, due to the overlap between the B800 and B850 band states inter-pool coherences could be operative to facilitate the rapid B800-B850 transfer~\cite{wu96_12022,pullerits97_10560,kuhn97_3432,renger01_137,smyth15_30805,shibl17_184001}. This view has been supported by recent investigations using two-dimensional spectroscopy~\cite{karki18_,tiwari18_4219,schroter18_114107}.
In terms of the absorption spectrum, \textit{Rps. acidophila} is rather typical for purple bacteria. Other commonly studied natural variants feature band shifts or suppression of one band~\cite{cogdell06_227}. An interesting exception in this respect is \textit{Alc. vinosum}, whose B800 absorption band has a double-peak structure~\cite{kereiche08_3650}. There are two hypotheses concerning the origin of this band splitting into a blue (B800b) and red (B800r) component. First, the two peaks could be due to two structurally slightly different LH2 complexes, similar to what has been found for \textit{Chromatium tepidum}~\cite{vandijk98_1269}.
A second hypothesis builds on the observation that there are two main $\alpha$-apoprotein types, suggesting that there could be a structural motif with alternating protein subunit types within a single LH2~\cite{carey14_1849}. This could lead to an excitonic dimerization of the B800 pool, i.e. due to alternating intermolecular distances and/or different monomeric transition energies. L\"ohner et al. have proposed an excitonic model with alternating distances but equal transition energies to simulate their circular dichroism and polarization-resolved single-molecule spectroscopy data taken at 1.2~K~\cite{lohner15_23}. However, in earlier transient absorption experiments the excitonic coupling of the B800 bands upon selective excitation of one sub-band was not observed as a simultaneous bleaching signal~\cite{niedzwiedzki12_1576}. Parallel to these findings, hole-burning experiments have been interpreted in terms of weakly and strongly hydrogen-bonded pigments giving rise to the two B800 sub-bands, including conformational changes due to proton transfer upon illumination~\cite{kell17_4435}. Dimerization has also been invoked in the transient absorption study reported in Ref.~\cite{luer15_1885}.
In a recent investigation of the dynamics of \textit{Alc. vinosum} at 77~K using two-dimensional electronic spectroscopy, Schr\"oter et al.~\cite{schroter18_1340} provided unambiguous evidence for the excitonic coupling between the two sub-bands. From the analysis of the diagonal- and cross-peak evolution, time scales for the inter-band transfers have been established as follows: 3.9~ps for B800b$\rightarrow$B800r, 1.0~ps for B800b$\rightarrow$B850, and 1.4~ps for B800r$\rightarrow$B850. Note that these time scales are rather similar to the ones previously reported in Ref.~\cite{luer15_1885}, although the B800 double-peak structure was not very pronounced in the absorption of the studied room-temperature case.
The analysis in Ref.~\cite{schroter18_1340} has been based on a global kinetic modeling of an effective three-state system. Thus, although the excitonic nature of the double-peak has been demonstrated, no information could be obtained about the underlying exciton Hamiltonian. This provides the motivation for the present study, which proposes a simple yet non-trivial system-bath model capable of reproducing the linear absorption spectrum and the inter-band exciton population relaxation times for \textit{Alc. vinosum} at 77~K.
The paper is organized as follows: In Section \ref{sec:methods} we first outline the spatial arrangement of the B800 and B850 molecules, thereby following earlier work by L\"ohner et al.~\cite{lohner15_23}. Next, the system-bath approach is briefly introduced, which leads to the identification of four different models to be investigated in Section~\ref{sec:results}. Results are presented for the absorption spectra of the four models as well as for the population dynamics of the model which best fits the experimental results. A summary is provided in Section \ref{sec:summary}.
\section{Theoretical Methods\label{sec:methods}}
\subsection{Model Systems}
In Ref.~\cite{lohner15_23} a model starting with a 12-fold symmetry for the arrangement of the B800 and B850 chromophores and in particular for the direction of the transition dipole moments had been developed using \textit{Rhodospirillum molischianum}~\cite{koepke96_581} as a template. In this model the 36 {BChl} $a$ molecules are arranged in two rings (radius 38.5~\AA{}) as shown in Fig.~\ref{fig:geometry}. The center-to-center distance between the B800 (upper) and B850 (lower) ring is 17.3~\AA. The transition dipole moments of the {BChl} $a$ molecules, $\vec{\mu}_m$, are characterized by two angles: $\varphi_m$ is the angle between the projection of the dipole moment and the local tangent $\vec{n}_m$ of the $m$th molecule in the plane of the ring. $\phi_m$ is the angle between the dipole moment and the direction of the cylinder axis $\vec{z}$. Further, the two rings are rotated with respect to each other by an angle $\psi$. Two motifs for the basic B800-B850 units are used, called B800A and B800B; they differ in the position of B800 with respect to B850 as shown in Fig.~\ref{fig:geometry}. (Note that in the following we will use the labels B800A and B800B to distinguish the two types of B800 molecules.) \textcolor{black}{This yields a dimerization of the B800 pool with intra-dimer distances of 15.4~\AA{} and inter-dimer distances of 24.4~\AA. The distance between two B850 molecules within a motif is 9~\AA{} and between the B800A and B800B motifs it is 11~\AA. } Such a dimerization is in accord with the observation of two main $\alpha$-apoprotein types with equal abundance~\cite{carey14_1849}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.1\columnwidth]{fig-geometry}
\caption{ Scheme of the geometry and set of angles for {BChl} $a$ molecules (B850 (lower) or B800 (upper)) in the LH2 model of L\"ohner et al. \cite{lohner15_23} (for the sake of presentation not all BChl $a$ molecules are shown). The projection of dipole moment $\vec{\mu}$ onto the ring plane and the tangent $\vec{n}$ of the ring makes the angle $\varphi$. The angle $\phi$ is between $\vec{\mu}$ and direction of the axis $\vec{z}$. Overall, there is a torsional angle $\psi$ between the two rings. In the left part the two structural motifs, with B800A and B800B molecules, are shown.}
\label{fig:geometry}
\end{center}
\end{figure}
The fitting of the set of angles by L\"ohner et al.~\cite{lohner15_23} has been performed using the fluorescence excitation spectrum for the complex embedded into a polymer matrix at 1.2 K. The following values have been obtained: $\varphi_{800}=0^{\circ}$ and $\phi_{800}=90^{\circ}$ for B800, $\varphi_{850}=10^{\circ}/170^{\circ}$ and $\phi_{850}=110^{\circ}/70^{\circ}$ for the two B850 molecules within one unit, and $\psi=10^{\circ}$. Based on these geometries, the Coulomb interaction between excitations at different sites has been calculated in the dipole approximation. For the monomeric dipoles, values of 8.25~D (B800) and 7.5~D (B850) have been used. Further, it was assumed that all site energies for B850 and B800 are equal to $E_m=$12900 cm$^{-1}$. As far as the line broadening is concerned, a simple model of constant linewidths for B800 and B850 was used. These parameters define \textbf{model 1} of the present study; its excitonic parameters are summarized in Tab.~\ref{tab:lh2parameters}. Notice that due to the tight packing of the overall LH2 structure the maximum couplings are rather large as compared with other LH2 systems. In principle, this also calls into question the validity of the dipole approximation, an issue which will not be addressed further here for simplicity.
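As an illustrative consistency check of the quoted coupling strengths, the intra-dimer B800--B800 interaction can be evaluated in the dipole approximation from the geometry above. The prefactor $5.04\times 10^{3}$~cm$^{-1}$\,\AA$^3$/D$^2$, for dipoles in Debye and distances in \AA, is an assumption of this sketch.

```python
import numpy as np

PREF = 5.04e3            # cm^-1 * Angstrom^3 / Debye^2 (assumed prefactor)

R = 38.5                 # ring radius in Angstrom (from the text)
d_intra = 15.4           # intra-dimer B800-B800 distance in Angstrom
mu = 8.25                # B800 transition dipole in Debye

# Two tangential, in-plane B800 dipoles (varphi = 0, phi = 90 deg) at chord d_intra
dtheta = 2.0*np.arcsin(d_intra/(2.0*R))
p1 = R*np.array([1.0, 0.0, 0.0])
p2 = R*np.array([np.cos(dtheta), np.sin(dtheta), 0.0])
m1 = np.array([0.0, 1.0, 0.0])                          # tangent at theta = 0
m2 = np.array([-np.sin(dtheta), np.cos(dtheta), 0.0])   # tangent at theta = dtheta

r = p2 - p1
rn = np.linalg.norm(r)
rhat = r/rn
kappa = m1 @ m2 - 3.0*(m1 @ rhat)*(m2 @ rhat)           # orientation factor
J = PREF*kappa*mu*mu/rn**3                              # coupling in cm^-1
print(round(kappa, 3), round(J, 1))                     # |J| close to the quoted 186 cm^-1
```

The nearly head-to-tail arrangement gives an orientation factor close to $-2$, and the resulting magnitude agrees with the maximum B800-B800 interaction listed in Tab.~\ref{tab:lh2parameters}.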
\begin{table*}[t]
\caption{Parameters of the different models used in this work. \textcolor{black}{Notice that only the maximum values of the Coulomb interactions are given. The complete Hamiltonian matrices are provided in the Supplementary Material.}}\label{tab:lh2parameters}
\centering
\begin{tabular}{l|c|c|c|c}
\hline
& {model 1} & {model 2} & {model 3} &{model 4} \\ \hline
B800 dipole moment (D)& 8.25 & 8.25 & 8.25 & 8.25 \\
max. B800-B800 interaction (cm$^{-1}$) & 186 & 186 & 186 & 186 \\
$\hat{\Gamma}_{800}$ (cm$^{-1}$) & 0 & 0 & 300 & 300 \\
$\Delta E_{\rm 800B}$ (cm$^{-1}$) & 0 & -250 & -250 &-250 \\
\hline
B850 dipole moment (D)& 7.5 & 8.25 & 8.25 & 8.25 \\
max. B850-B850 interaction (cm$^{-1}$) & 629 & 761 & 761 & 761 \\
$\hat{\Gamma}_{850}$ (cm$^{-1}$) & 0 & 0 & 1750 & 1750 \\
\hline
max. B850-B800 interaction (cm$^{-1}$) & 60 & 66 & 66 & 132 \\
SB scaling $a_{800}=a_{850}$ & 0.3 & 0.3 & 0.02 & 0.02 \\
\hline
\end{tabular}
\end{table*}
The absorption spectrum for {LH2} in a buffer/glycerol matrix also reported in Ref.~\cite{lohner15_23} looks rather different from the one obtained for the polymer matrix. In fact it is closer to the spectrum reported in Ref.~\cite{schroter18_1340}, also measured in glycerol, but at 77 K (see Fig.~\ref{fig:absorption}). In order to fit this spectrum and the dynamics reported in Ref.~\cite{schroter18_1340} we have designed three more models, which as far as the geometry is concerned build on model 1. In all three models we assume equal monomeric transition dipole moments (8.25~D) and introduce some heterogeneity by shifting the B800B monomeric site energies to 12650 cm$^{-1}$.
This accounts for the different pigment-binding pockets in the two $\alpha$-apoproteins. Further, in \textbf{model 4} the B850-B800 couplings have been uniformly scaled by a factor of two (see Tab.~\ref{tab:lh2parameters}).
For all models inhomogeneous broadening is accounted for using the model of diagonal static disorder, which assumes an independent Gaussian distribution of site energies with variance of 150~cm$^{-1}$. The results presented below have been obtained by averaging over 5000 realizations.
Other differences relate to the system-bath coupling, which is introduced in the following section.
\subsection{Exciton Dynamics}
The dynamics and spectroscopy of the LH2 models will be treated using the standard system-bath (SB) model~\cite{may11,valkunas13,kuhn18_259}, see in particular the implementation in Ref.~\cite{kuhn97_4154}. The system part consists of the Frenkel exciton Hamiltonian \textcolor{black}{(for the labeling see also Fig. S1 in the Supplementary Material)}
\begin{equation}
H_{\rm S}=\sum_{mn}(\delta_{mn}E_m + J_{mn}) |m\rangle\langle n|
\end{equation}
with site energies, $E_m$, and Coulomb couplings, $J_{mn}$, as specified in the previous section. The single exciton eigenstates with energies, $E_\alpha$, will be expressed as
\begin{equation}
\label{eq:eigen}
\vert \alpha\rangle=\sum_m c_{m,\alpha}\vert m\rangle\,.
\end{equation}
The exciton dynamics is driven by interaction with an external laser field $\vec E (t)$ via the coupling to the transition dipole moments $\vec{\mu}_{m}$, i.e.
\begin{equation}
H_{{\rm F}}(t)= - \sum_m \vec E (t)\vec{\mu}_{m} |m\rangle\langle 0| + {\rm h.c.}
\end{equation}
The transition dipole matrix elements in terms of the eigenstates are given by
\begin{equation}\label{eq:dipole}
\vec{\mu}_{\alpha}=\sum_{m}\vec{\mu}_{m}c_{m,\alpha} \,.
\end{equation}
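The construction of Eqs.~\eqref{eq:eigen} and \eqref{eq:dipole} can be sketched for a toy two-site system; all numbers below are illustrative, with only the B800A/B800B-like site-energy shift taken from the text.

```python
import numpy as np

# Toy two-site Frenkel Hamiltonian: diagonalize and transform the site dipoles.
E = np.array([12900.0, 12650.0])      # site energies in cm^-1 (B800A/B800B-like shift)
J = -186.0                            # coupling in cm^-1 (illustrative sign/value)
H = np.diag(E) + J*(np.ones((2, 2)) - np.eye(2))

Ea, C = np.linalg.eigh(H)             # columns of C hold the coefficients c_{m,alpha}
mu_site = np.array([[0.00, 1.00, 0.0],
                    [0.20, 0.98, 0.0]])*8.25   # site dipoles in Debye (illustrative)
mu_alpha = C.T @ mu_site              # eigenstate transition dipoles

# The unitary transformation conserves the total dipole strength
print(Ea, (mu_alpha**2).sum(), (mu_site**2).sum())
```

The exciton splitting is $2\sqrt{(\Delta E/2)^2+J^2}$ for this dimer, and the summed dipole strength of the eigenstates equals that of the sites, as required by the unitarity of the transformation.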
The exciton system is coupled to a thermal bath composed of harmonic oscillators with coordinates $q_\xi$ and frequencies $\omega_\xi$. The SB coupling is taken to be of the form
\begin{equation}
H_{\rm SB}= \sum_m \sum_\xi \hbar \omega_\xi (g_{m, \xi}^{(1)} q_\xi + g_{m, \xi}^{(2)} q_\xi^2)|m\rangle\langle m| \, .
\end{equation}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig-SD}
\caption{BChl $a$ spectral density used in this work. The data were obtained by fluorescence line-narrowing experiments in Ref.~\cite{ratsep11_024506}. The red vertical sticks are drawn at the energy gaps between the three peaks of the absorption spectrum of model 4 at 172, 864, and 1036 cm$^{-1}$.}
\label{fig:SD}
\end{figure}
Here, the coupling strengths for linear and quadratic coupling are $g_{m, \xi}^{(1)}$ and $g_{m, \xi}^{(2)}$, respectively.
\textcolor{black}{
The linear coupling constant is related to the Huang-Rhys factor via $S_{m, \xi}=(g_{m, \xi}^{(1)})^2/2$. Its effect is described by a spectral density $J_m(\omega) = \sum_\xi S_{m,\xi} \delta(\omega-\omega_\xi)$. Central to the phase and energy relaxation rates is the bath correlation function given by
\begin{equation}
C_m(\omega) = 2 \pi \omega^2[1+n(\omega)][J_m(\omega)-J_m(-\omega)]\, ,
\end{equation}
where $n(\omega)$ is the Bose-Einstein distribution function. In the following, the shape of the
spectral density, denoted $J^{\rm (exp)}(\omega)$, is taken from the fluorescence line-narrowing experiment on BChl $a$ in solution~\cite{ratsep11_024506}, see Fig.~\ref{fig:SD}. Its reorganization energy amounts to $\lambda^{\rm (exp)}=\hbar \int d\omega \omega J^{\rm (exp)}(\omega) =196$~cm$^{-1}$. This choice provides a spectral density whose form is typical for intramolecular modes of BChl $a$, but it does not capture effects due to the specific environment in LH2. In passing we note that it is the specific values of the spectral density at the relevant transition frequencies (see sticks in Fig.~\ref{fig:SD}) that determine the relaxation rates. This means that using a simple Debye fit while retaining the overall reorganization energy would change the ratio of the spectral density values between different regions and thus the pattern of relaxation rates. }
\textcolor{black}{In the simulations reported below,
to have some flexibility concerning the fitting of relaxation and dephasing rates we will introduce a site dependent SB scaling parameter $a_m$, i.e. the spectral density is assumed to have the form $J_m(\omega)=a_m J^{\rm (exp)}(\omega)$. In fact, the best agreement with experiment has been obtained using the same scaling parameter for all sites. Note that in view of the simplicity of the present exciton-vibrational model this empirical parameter should not be overinterpreted. In fact, of more relevance are the actual relaxation rates which also contain the effect of excitonic couplings, see below. }
As far as the quadratic coupling is concerned we will restrict ourselves to the pure dephasing contribution only (see also Ref.~\cite{kuhn97_4154}).
The dynamics of the reduced exciton density operator, $\rho$, will be treated using the Redfield model in Bloch approximation~\cite{may11}
\begin{equation}\label{eq:qmewithfield}
\dfrac{d}{dt}\rho(t)=-\frac{i}{\hbar}\left[H_{\rm{S}}+H_{\rm{F}}(t),\rho(t)\right]-R\rho(t)
\end{equation}
with the relaxation matrix $R$ having contributions for population relaxation
\begin{equation}
R_{\alpha\alpha,\beta\beta}=-k_{\beta\rightarrow\alpha}+\delta_{ \alpha\beta}\sum_{\gamma}k_{\alpha \rightarrow\gamma}
\end{equation}
and coherence dephasing
\begin{equation}
R_{\alpha\beta,\alpha\beta}=\hat{\Gamma}_{\alpha\beta}+\dfrac{1}{2}\sum_{ \gamma\neq\alpha}k_{\alpha\rightarrow\gamma}+\dfrac{1}{2}\sum_{ \gamma\neq\beta}k_{\beta\rightarrow\gamma} \,.
\end{equation}
Here, the energy relaxation rates between states $\alpha$ and $\beta$ are given by
\begin{equation}\label{eq:dampingab}
k_{\alpha\rightarrow \beta}=\sum_{m}C_{m}(\omega_{\alpha\beta})\vert c_{m,\alpha}\vert^2\vert c_{m,\beta}\vert^2
\end{equation}
and the pure dephasing rates are
\begin{align}
&\hat{\Gamma}_{\alpha \beta}=\sum_{m}\hat{\Gamma}_m (\vert c_{m,\alpha}\vert^2-\vert c_{m,\beta}\vert^2)^2 \, \\
&\hat{\Gamma}_{\alpha 0}=\sum_{m}\hat{\Gamma}_m\vert c_{m,\alpha}\vert^4 \, ,
\label{eq:pure}
\end{align}
with $\hat{\Gamma}_m \propto |g_{m,\xi}^{(2)}|^2$ being the pure dephasing rate for molecule $m$, which is treated as a parameter.
Thus, the phase relaxation rates for the excitonic transitions from the ground state read
\begin{equation}\label{eq:oerelaxrate}
\gamma_{\alpha}=\sum_{\beta\neq\alpha}k_{ \alpha\rightarrow\beta}+2\hat{\Gamma}_{\alpha 0} \,.
\end{equation}
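A minimal numerical sketch of the rate expression Eq.~\eqref{eq:dampingab} for a dimer illustrates the detailed-balance property of the bath correlation function $C_m(\omega)$. An Ohmic-like form stands in for the measured spectral density here (a simplifying assumption; the actual calculations use the scaled experimental $J^{\rm (exp)}(\omega)$).

```python
import numpy as np

kT = 53.5                              # k_B * 77 K in cm^-1 (approx.)

def n_be(w):                           # Bose-Einstein occupation
    return 1.0/np.expm1(w/kT)

def j_sd(w):                           # stand-in spectral density, defined for w > 0
    wc = 300.0
    return 1e-4*(w/wc)*np.exp(-w/wc)

def c_bath(w):                         # C(w) = 2*pi*w^2*(1+n(w))*[J(w) - J(-w)]
    if w > 0:
        return 2.0*np.pi*w**2*(1.0 + n_be(w))*j_sd(w)
    return 2.0*np.pi*w**2*n_be(-w)*j_sd(-w)

# Dimer eigenstates and the downhill/uphill rates
H = np.array([[12900.0, -186.0], [-186.0, 12650.0]])
Ea, C = np.linalg.eigh(H)
w21 = Ea[1] - Ea[0]
overlap = np.sum(np.abs(C[:, 1])**2 * np.abs(C[:, 0])**2)  # sum_m |c_ma|^2 |c_mb|^2
k_down = c_bath(w21)*overlap           # relaxation from upper to lower exciton
k_up = c_bath(-w21)*overlap            # thermally activated uphill transfer
print(k_down/k_up, np.exp(w21/kT))     # detailed balance: ratio equals exp(hbar*w/kT)
```

Downhill rates exceed uphill rates by the Boltzmann factor of the exciton splitting, which guarantees relaxation towards thermal equilibrium within the one-exciton manifold.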
\section{Results\label{sec:results}}
\textcolor{black}{
In the following we will present results obtained for four models; cf. Tab.~\ref{tab:lh2parameters}. Starting from a model similar to the one in Ref.~\cite{lohner15_23} (\textbf{model 1}) supplemented by energy and phase relaxation, by changing the model parameters we will eventually arrive at \textbf{model 4}, which reproduces both the absorption spectrum and the energy relaxation rates obtained in the experiment~\cite{schroter18_114107}. The other two models (2 and 3) merely serve to illustrate the effect of the different parameters on the absorption spectrum.}
\subsection{Absorption Spectra}
In the following we will discuss the absorption spectrum ($T=77$~K)
\begin{equation}\label{eq:absorptionlh2}
A(\omega)=\left\langle\sum_{\alpha}\dfrac{ \gamma_{\alpha}\vert \vec{\mu}_{\alpha}\vert^2}{(\omega-\omega_{\alpha})^2+\gamma^2_{\alpha}/4} \right\rangle_{\rm{disorder}}
\end{equation}
to fully specify the four different models according to Tab.~\ref{tab:lh2parameters}. In Fig.~\ref{fig:absorption}a the experimental absorption spectrum~\cite{schroter18_1340} is compared with a simulation using the original model of Ref.~\cite{lohner15_23}, supplemented by the excitonic phase relaxation. Here, we did not include pure dephasing and tuned the SB scaling parameters $a_m$ so as to give a reasonable fit to the experimental linewidths (\textbf{model 1}). It turns out that model 1, which was parameterized in Ref.~\cite{lohner15_23} to reproduce the 1.2~K matrix spectra, gives only a poor agreement with the 77~K glycerol spectra. First, the B800-B850 splitting is too small and, second, the B800 band splitting has a reversed order of intensities.
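The disorder average in Eq.~\eqref{eq:absorptionlh2} can be sketched for a toy dimer; the site energies, coupling, disorder width, and number of realizations follow the text, while the dipoles and the uniform linewidth $\gamma$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Disorder-averaged absorption for a toy dimer: Lorentzians at the exciton
# energies, weighted by |mu_alpha|^2, averaged over Gaussian site-energy disorder.
w = np.linspace(12000.0, 13600.0, 800)       # frequency grid in cm^-1
Jc, sigma, gamma = -186.0, 150.0, 60.0       # coupling, disorder width, linewidth
mu_site = np.array([[0.0, 8.25, 0.0], [1.6, 8.10, 0.0]])

A = np.zeros_like(w)
nreal = 5000
for _ in range(nreal):
    E = np.array([12900.0, 12650.0]) + rng.normal(0.0, sigma, 2)
    Ea, C = np.linalg.eigh(np.diag(E) + Jc*(1.0 - np.eye(2)))
    mu2 = ((C.T @ mu_site)**2).sum(axis=1)
    for Eax, m2 in zip(Ea, mu2):
        A += gamma*m2/((w - Eax)**2 + gamma**2/4.0)
A /= nreal
print(w[np.argmax(A)])                       # the lower (bright) exciton dominates
```

With a negative coupling and nearly parallel dipoles, most of the oscillator strength is carried by the lower exciton, so the averaged spectrum peaks near the lower exciton energy; the integrated intensity reflects the total site dipole strength.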
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig-abs}
\caption{ Absorption spectra (red full line) as obtained for the different models upon averaging over 5000 samples of an uncorrelated Gaussian distribution of local transition energies. Also shown is the experimental spectrum (black dashed line) from Ref.~\cite{schroter18_1340}. Model parameters are given in Tab.~\ref{tab:lh2parameters}.}
\label{fig:absorption}
\end{center}
\end{figure}
To improve the agreement, first, it was found that the dipole moment for the B850 {BChl} $a$ should be increased from 7.5 D to 8.25 D (the same as B800) to match the B800-B850 splitting. Further, once an energy shift $\Delta E_{\rm 800B}=-250$~cm$^{-1}$ is introduced, the ratio of the B800 peak heights is reversed, as shown in Fig.~\ref{fig:absorption}b. In fact, introducing an energy shift between the B800 and B850 subunits would also reproduce the splitting between B800 and B850; however, the ratio of the B800 peaks cannot then be matched due to the interdependence between this ratio and the energy shift. This set of parameters defines \textbf{model 2}; see Tab.~\ref{tab:lh2parameters}.
Comparing experimental and calculated absorption spectra, one notices that the ratio of the B800 to B850 peak heights does not match. In addition there is an extra peak in the calculation near 765 nm. The intensity ratios can be influenced by the linewidths, i.e. the SB coupling parameters. Including pure dephasing, the 765~nm peak can be suppressed and the widths of the B800 and B850 peaks can be reasonably matched (see Fig.~\ref{fig:absorption}c) with the parameters of \textbf{model 3} as given in Tab.~\ref{tab:lh2parameters}.
Inspecting Fig.~\ref{fig:absorption}c we notice that the B800 peak splitting is still not reproduced. In principle, two factors have a direct influence on this peak splitting, which are the energy shift $\Delta E_{\rm 800B}$ and the inter-pool coupling, $J_{800-850}$, between B800 and B850 monomers. Increasing $\Delta E_{\rm 800B}$ will also cause a decrease of the B800-B850 gap. However, it is found that a scaling of all couplings of type $J_{800-850}$ by a factor of two in \textbf{model 4} gives the best agreement with the experimental absorption spectrum as shown in Fig.~\ref{fig:absorption}d. Note that the actual values for $a_m$ and $\hat \Gamma_m$ have been fixed using the population dynamics (see below), i.e. while the spectrum is influenced by both parameters, the population flow depends on $a_m$ only.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\columnwidth]{fig-coeff}
\caption{Eigenvalues, state character as measured by the coefficient $c_{\alpha}(i)=\sum_{m\in i}\vert c_{m,\alpha}\vert^2$ (red: $i=$B850, green: $i=$B800A, blue: $i=$B800B), and oscillator strength (black triangles) for model 1 (a), models 2,3 (b), and model 4 (c). The calculations have been performed without disorder.}
\label{fig:lh2eigen}
\end{center}
\end{figure}
In order to unravel the changes in the spectra for the different models, the eigenstates, Eq.~\eqref{eq:eigen}, and oscillator strengths $|\mu_\alpha|^2$, Eq. \eqref{eq:dipole}, will be analyzed for the case of no disorder. Here, a decomposition of the eigenstates in terms of local B800 and B850 states has been performed according to $c_{\alpha}(i)=\sum_{m\in i}\vert c_{m,\alpha}\vert^2$ with $i=$(B800A, B800B, B850). The results are shown in Fig.~\ref{fig:lh2eigen}. For all models the total width of the eigenstate spectrum is determined by B850-like states. Further, due to the relatively strong coupling between the B800 monomers, the band structure related to these monomers is clearly discernible. The B800 band is approximately located at the overall band center.
Due to the high symmetry, oscillator strength is distributed over a few transitions only, most notably at the lower band edge (B850-like states). As far as the B800 double peak is concerned, the mixing between B800- and B850-like states is of prime importance. In the original model 1, the transition at the blue side of the double peak (B800b) is dominantly of B800 origin, while the red peak (B800r) is of B850 origin. Going to models 2 and 3, where the local transition dipoles are equal and where there is a shift $\Delta E_{\rm 800B}$ of the B800B monomer energies, reverses this assignment along with a reversed transition strength ratio. Increasing the B800-B850 coupling increases the splitting between the bright states. At the same time they become more B800-like. Here, B800b has about equal contributions from B800A and B800B, whereas B800r is dominated by B800B excitations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{fig-gamma}
\caption{Phase relaxation rates $\bar \gamma_{\alpha}$ and eigenenergies $\bar E_{\alpha}$ after averaging over an inhomogeneous ensemble (5000 realizations) for model 4.}
\label{fig:lh2gam}
\end{center}
\end{figure}
The effect of the linewidth on shaping the overall spectrum can be appreciated by inspecting Fig.~\ref{fig:lh2gam}. It shows the phase relaxation rates averaged with respect to the Gaussian distribution of site energies, $\bar \gamma_{\alpha}$, in dependence on the average energies $\bar E_{\alpha}$ for model 4. Similar to the model discussed for \textit{Rps. acidophila} in Ref.~\cite{kuhn02_15}, the relaxation rates increase with increasing energy, being largest at the upper edge of the exciton band. This fact is responsible for the suppression of the peak near 765~nm. The reason for this behavior is that with increasing energy the number of relaxation channels increases as well. The non-monotonic behavior is due to the change in delocalization when moving through the B800- and B850-dominated bands. Overall, we notice that the rather high values for the local pure dephasing constants should be taken with caution due to the influence of the eigenstate coefficients on the final dephasing rates, Eq.~\eqref{eq:pure}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\columnwidth]{fig-pop2D}
\caption{The population dynamics of model 4 in the range from 760 nm to 900 nm (step 4 nm) for excitation at the two B800 band maxima (contour values from 0.001 to 0.01 by 0.001). \textcolor{black}{For a plot of the populations scaled by the transition dipole strength of the respective states, see Fig. S2 in the Supplementary Material.}}
\label{fig:pop2D}
\end{center}
\end{figure}
\subsection{Population Dynamics}
In the following we will investigate whether the parametrization of model 4 is in accord with the time scales of energy relaxation obtained in Ref.~\cite{schroter18_1340}. To this end, the exciton dynamics driven by a Gaussian-shaped laser pulse is studied, i.e.
\begin{equation}\label{eq:lh2extfield}
\vec{E}(t)=\vec{e} E_{0}\cos(\omega t)\exp\left(- \dfrac{(t-t_0)^2}{2\sigma^2}\right)\, .
\end{equation}
Here, a field strength of ${E}_{0}=$1.1$\times 10^7$ V/m is chosen so as to give about 5\% excited state population for $t_0=200$ fs and $\sigma=42.5$ fs (i.e. the FWHM of the pulse is 100 fs).
To account for the averaging over disorder, population dynamics will be assigned to certain wavelength ranges as $P_{ab}=\sum_{\alpha}\rho_{\alpha\alpha}$ if $hc/E_{\alpha} \in [\lambda_a,\lambda_b]$ for a given sample. To focus on the time scales associated with the excitation of the two peaks of the B800 band, two excitation cases are introduced as follows: the cases of B800b and B800r excitation correspond to excitation within the wavelength ranges [788,800]~nm and [800,812]~nm, respectively. In other words, B800b and B800r match the lower- and higher-wavelength peaks, respectively. In both cases the direction, $\vec e$, and frequency, $\omega$, of the external field are taken to be the direction and eigenvalue, respectively, of the state with the largest dipole moment $\vec{\mu}_{\alpha}$ in the considered frequency range. The population dynamics will be analyzed in terms of contour plots, Fig.~\ref{fig:pop2D}, and integrated frequency intervals, Fig.~\ref{fig:pop1D}.
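The pulse parameters of Eq.~\eqref{eq:lh2extfield} and the wavelength binning can be checked with a few lines; the eigenenergies and populations used for the binning below are purely illustrative.

```python
import numpy as np

# Pulse envelope: sigma = 42.5 fs should give a 100 fs FWHM
t0, sigma = 200.0, 42.5
t = np.linspace(0.0, 400.0, 4001)           # time grid in fs
env = np.exp(-(t - t0)**2/(2.0*sigma**2))
above = t[env >= 0.5]
fwhm = above[-1] - above[0]

# Wavelength binning of exciton populations, lambda = hc/E_alpha
Ea = np.array([12410.0, 12520.0, 11650.0])  # eigenenergies in cm^-1 (illustrative)
pop = np.array([0.02, 0.01, 0.03])          # state populations (illustrative)
lam_nm = 1.0e7/Ea                           # wavelength in nm
P_b800b = pop[(lam_nm >= 788.0) & (lam_nm <= 800.0)].sum()
print(round(fwhm, 1), P_b800b)
```

Only the second state (at about 799~nm) falls into the B800b window; the first and third land in the B800r and B850 windows, respectively.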
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\columnwidth]{fig-pop1D}
\caption{The integrated population for the three peaks in the {LH2} absorption spectrum, i.e. B800b:[788,800] nm, B800r:[800,812] nm and B850:[852,876] nm and for the two excitation conditions of Fig.~\ref{fig:pop2D}.}
\label{fig:pop1D}
\end{center}
\end{figure}
The population dynamics for the two cases during 2000 fs is shown in Fig.~\ref{fig:pop2D}. First, let us consider excitation of the lower wavelength band (B800b), cf. upper panel of Fig.~\ref{fig:pop2D}. Here, the states in B800b are dominantly excited, while those in B800r are only weakly excited by the external pulse. After the pulse, the excitation energy quickly transfers from B800b to B850. However, it is found that there is no apparent reduction of the populations in the B800r range from 400 fs to 800 fs, and only after 1200 fs appreciable depopulation sets in. This is more clearly observed from the integrated populations in the upper panel of Fig.~\ref{fig:pop1D}. The reason is that shortly after the pulse the direct relaxation from B800b to B800r keeps the populations in the B800r range approximately unchanged, but after some time there is not enough population flow into the B800r range anymore to compensate for the transition from B800r to B850.
Next, we focus on the case where the higher wavelength band is excited (B800r), cf. lower panels of Figs.~\ref{fig:pop2D} and \ref{fig:pop1D}. Here, the states in B800b are only weakly excited, while those in B800r are strongly excited by the external pulse. There is no plateau-like behavior for the B800r population and both bands decay with different time scales.
The depopulation times of the B800r and B800b band states can be obtained from the integrated populations in Fig.~\ref{fig:pop1D}. To this end, a fit of the populations to a kinetic three-state model has been performed. This gives a time scale of 1.13~ps for the direct B800b to B850 relaxation, as well as 3.25~ps and 1.43~ps for the two-step relaxation B800b$\rightarrow$B800r$\rightarrow$B850.
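The kinetic three-state scheme underlying this fit can be propagated directly from the quoted time scales; the initial condition of all population in B800b is an illustrative choice of this sketch.

```python
import numpy as np

# Three-state kinetics with the fitted time scales from the text:
# B800b -> B850: 1.13 ps, B800b -> B800r: 3.25 ps, B800r -> B850: 1.43 ps.
k_b850, k_br, k_r850 = 1.0/1.13, 1.0/3.25, 1.0/1.43   # rates in ps^-1

def rhs(P):
    Pb, Pr, P850 = P
    return np.array([-(k_b850 + k_br)*Pb,
                     k_br*Pb - k_r850*Pr,
                     k_b850*Pb + k_r850*Pr])

P = np.array([1.0, 0.0, 0.0])       # all population initially in B800b
dt = 0.001                          # ps; fixed-step RK4 propagation to 4 ps
for _ in range(4000):
    k1 = rhs(P)
    k2 = rhs(P + 0.5*dt*k1)
    k3 = rhs(P + 0.5*dt*k2)
    k4 = rhs(P + dt*k3)
    P = P + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
print(P)                            # population is conserved and accumulates in B850
```

After 4 ps almost all population has reached B850, with the transient B800r population staying small because its decay to B850 is faster than its feeding from B800b.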
%
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig-rates}
\caption{Analysis of the relaxation rates, Eq.~\eqref{eq:dampingab}, in terms of the eigenfunction coefficients for particular (different) disorder realizations, chosen so as to resemble the values obtained for transitions between the B800b and B800r bands (upper panel), between the B800b and B850 bands (middle panel), and between the B800r and B850 bands (lower panel). The accepting B850 states are labeled by $x$. For $k_{bx}$ and $k_{rx}$ the initial/final states are at 789/849 nm and 804/861 nm, respectively. For $k_{br}$ the values are 792~nm and 805 nm. The labeling follows the sequence B850, B850, B800A, B850, B850, B800B etc. \textcolor{black}{Note that the site energies of the particular disorder realizations used here are given in Fig. S3 of the Supplementary Material.}
}
\label{fig:rates}
\end{center}
\end{figure}
The nature of these relaxation processes can be unraveled by analyzing the relaxation rates, Eq.~\eqref{eq:dampingab}. This has been done in Fig.~\ref{fig:rates} for particular disorder realizations, which have been chosen so as to resemble the values of the decay rates for the ensemble. Of course, the analysis of a single member of the ensemble should not be over-interpreted and at best provides a qualitative picture. The relaxation rates depend on the spectral density taken at the transition frequency as well as on the wave function overlap $|\langle\alpha|m\rangle\langle m | \beta \rangle|^2$. According to Fig.~\ref{fig:SD}, the spectral density changes by a factor of $\sim$5 when comparing its values at the transition frequencies between the B850 and the B800 bands. The eigenfunction coefficients are shown in Fig.~\ref{fig:rates}. Overall, comparing the B800b to B800r relaxation with the decay of B800b/B800r towards the B850 states, we notice that in the latter cases substantially more local states are involved (i.e.\ the eigenstates are more delocalized), which overcompensates the smaller value of the spectral density.
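Schematically, each rate in Eq.~\eqref{eq:dampingab} thus combines these two factors (a simplified structure implied by the discussion, not the full expression),
\begin{equation*}
  k_{\alpha\beta} \propto J\!\left(\omega_{\alpha\beta}\right) \sum_m \left|\langle\alpha|m\rangle\langle m | \beta \rangle\right|^2,
\end{equation*}
so that a smaller spectral density at the transition frequency can be overcompensated by a larger wave function overlap spread over many sites $m$, and vice versa.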
For the B800b to B800r relaxation we find large overlaps at sites 6, 30, 36, i.e. this relaxation is dominated by B800B local states. This is in accord with the fact that both absorption peaks of the B800 band are to a good extent of B800B origin (cf. Fig.~\ref{fig:lh2eigen}).
The decay of the B800b band states is due to the mixing between local B800A/B and B850 transitions. In some cases there is only an amplitude at the local B800A/B site (3, 18, 27, 30), but no such amplitude in the final state. The main contribution to the relaxation rate comes from state pairs which both have local B850 amplitudes. For this particular disorder realization, the exciton eigenfunctions responsible for relaxation are delocalized on the segment with $m=29, 31, 32$. The initial state for the relaxation of the B800r band states has local amplitudes on the B800B and B850 sites, and little involvement of the higher-energy B800A site. Responsible for the decay is a pair of states delocalized on the segment with $m=17, 19, 20, 22, 23$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\columnwidth]{fig-scheme}
\caption{Calculated absorption spectrum (left) and relaxation times for model 4 assuming an effective 3-level scheme (in parentheses: experimental results from Ref.~\cite{schroter18_1340}).}
\label{fig:scheme}
\end{center}
\end{figure}
\section{Summary}
\label{sec:summary}
The peculiar double-peak structure of the B800 band of \textit{Alc.\ vinosum} at 77~K has been investigated from the perspectives of absorption spectroscopy and exciton population dynamics. In doing so, it has been assumed that the key structural feature is a dimerization of the B800 pool, in accord with previous experimental and theoretical studies~\cite{lohner15_23,schroter18_1340}. The structural model, previously developed by L\"ohner et al.~\cite{lohner15_23} for 1.2~K polymer matrix conditions, has been adapted and extended to include dephasing and energy relaxation within a system-bath model. Simulations of absorption spectra and population dynamics have been performed for different models using Redfield relaxation theory.
A parametrization of a structural and system-bath model has been identified which reproduces the spectrum as well as the population dynamics in good agreement with experiment. The relevant results are compiled in Fig.~\ref{fig:scheme}. On the one hand, this can be viewed as advanced fitting of multiple sets of experimental data. On the other hand, the analysis of the results provided insight into the details which could be operative for this particular LH2 complex. In fact, the LH2 of \textit{Alc. vinosum} features an interesting interplay of two excitonic bands, which originate from different pigment pools. This involves states which, for the uncoupled pools, are essentially optically dark by symmetry. The particular double-peak structure of the B800 band emerges due to B800 dimerization, but also due to the coupling to the B850 pool, which causes a particular state mixing and thus a sensitivity to resonances and coupling strengths between the exciton manifolds of the separate pools. In terms of the population dynamics, this opens different relaxation channels upon excitation of the B800 band. In particular, excitation of the short-wavelength peak (B800b) leads to relaxation to the B850 band via two pathways: direct transfer to B850 with a time scale of $\sim$1.1~ps, which is the main pathway, and a slower, indirect two-step transfer via B800r to B850.
\textcolor{black}{The present model has to be viewed as a suggestion based on advanced fitting. The crucial point for obtaining the good agreement with experiment has been the \textit{ad hoc} change of the Coulomb coupling strength between the B800 and B850 pools. In terms of the structure proposed in Ref. \cite{lohner15_23} this would correspond to a decrease of the vertical distance between the B800 and B850 rings from 17.3 \AA{} to 14 \AA. Other factors which could influence the Coulomb coupling are the different screening of the inter- and intrapool interactions as well as the breakdown of the dipole approximation. Without further structural information it will be difficult to disentangle these effects in their influence on the simulation results.}
%
\section{Acknowledgments}
The authors thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through the Sfb 652.
\section{Introduction}
Understanding common difficulties students exhibit in learning
conceptual physics has been an important research strand in
physics education research (PER) since its inception. This work
was greatly advanced by the introduction of multiple-choice
conceptual instruments measuring students' understanding of
mechanics and electricity and magnetism: the Force Concept
Inventory (FCI) \cite{hestenes1992}, the Force and Motion
Conceptual Evaluation (FMCE) \cite{thornton1998}, the Conceptual
Survey of Electricity and Magnetism (CSEM) \cite{maloney2001}, and
the Brief Electricity and Magnetism Assessment (BEMA)
\cite{ding2006evaluating}. Studies involving these instruments
continue to be of central importance in PER. For an overview of
the history of these instruments and their use in PER, see Docktor
and Mestre's extensive synthesis of the field
\cite{docktor2014synthesis}.
Recently, substantial efforts have been made to apply quantitative
techniques to further understand these instruments including
factor analysis \cite{scott2012exploratory, semak2017,
eaton2018confirmatory}, cluster analysis
\cite{fazio2018conceptual}, and item response theory
\cite{wang2010, scott2015,stewart2018,zabriskie2019}. In 2016,
Brewe, Bruun, and Bearden \cite{brewe2016} introduced network
analytic methods \cite{newman, zweig}, a new class of quantitative
algorithms for analyzing incorrect answers. Network analysis is
a broad, flexible, and extremely productive field of quantitative
analysis that has been used to analyze systems as diverse as the
functional networks in the brain \cite{devico} and passing
patterns of soccer teams \cite{pena}.
A network is formed of nodes which are connected by edges. Network
analysis seeks to identify structure within the network; one
important class of structure is subsets of the network which are
more interconnected within themselves than they are connected to
the rest of the network. These subsets are called ``modules'' or
``communities'' interchangeably. In anticipation of the ``igraph''
package \cite{igraph} in the ``R'' software system
\cite{R-software} becoming the primary tool used within PER for
network analysis, we will call the subgroups ``communities.''
Wells {\it et al.} \cite{wells2019} attempted to replicate the
analysis of Brewe {\it et al.} \cite{brewe2016} for the FCI and found that
it did not scale to large datasets. They suggested a modified
algorithm called Modified Module Analysis (MMA); the details are
discussed below as Study 1. In the current study, the MMA
algorithm was applied to explore the community structure of the
FMCE; the results are then compared to the results of
Study 1.
This study sought to answer the following research questions:
\begin{description}
\item[RQ1] What incorrect answer communities are identified by Modified Module Analysis in the FMCE?
\item[RQ2] How are these communities different pre- and post-instruction? How is the community structure different for men and women?
\item[RQ3] How do the communities change as the parameters of the MMA algorithm are modified?
\item[RQ4] How do the communities detected compare to those detected in the FCI in Study 1?
\end{description}
\subsection{The FMCE Instrument}
\label{sec:fmce} The FMCE is a widely used mechanics conceptual
inventory that measures students' understanding of force and
motion. The instrument consists of 43 items examining student
understanding of Newton's laws of motion. The items are presented
in groups with each item having at least 6 possible responses,
some of which represent common misconceptions. Most items include
a ``none of the above'' response which is not the correct response
to any item; ``none of the above'' responses have been shown to
cause psychometric problems \cite{devore_examining_2016}. The FMCE
is available at PhysPort \cite{physport}.
The FMCE uses the practice of ``blocking'' or ``chaining'' items
where multiple items refer to a common stem. In an item block, a
physical system is introduced, then multiple items refer to that
system. Of the 43 items in the FMCE, all but one (item 39) are
included in item blocks. The FCI also employs item blocks with 13
of the 30 items included in blocks. Multiple studies have
suggested that blocking items introduces spurious correlations
that can make the instrument difficult to interpret statistically
\cite{stewart2018,wells2019}.
Since its introduction, the blocked structure of the FMCE has been
used to provide a compact description of the instrument in terms
of the qualitative features of the item blocks. This description
has been refined since the introduction of the instrument as will
be discussed in Sec. \ref{sec:gen}. The descriptive terms provide
an overview of the instrument. ``Force Sled'' items (items 1-7)
ask about the force that an individual would need to exert on a
sled on a low-friction surface to produce a set of accelerations;
students select from a number of textual responses. ``Cart on a
Ramp'' items (items 8-10) ask students to select the force on a
cart as it moves up and down an incline. ``Coin Toss - Force''
items (items 11-13) ask students to select the force on a coin
tossed in the air. ``Force Graph'' items (items 14-21) ask
students about the force on a toy car as it moves across a
low-friction surface; students select from a number of graphs.
``Acceleration Graph'' items (items 22-26) ask students to select
the graph which correctly represents the acceleration of a toy car
moving on a horizontal surface. ``Coin Toss - Acceleration'' items
(items 27-29) ask students to select the acceleration of a coin
tossed in the air. ``Newton III'' items (items 30-39) ask students
about the forces during a variety of interactions between cars and
trucks. ``Velocity Graph" items (items 40-43) ask students to
select the graph which correctly represents the velocity of a toy
car moving on a horizontal surface. The current version of the
FMCE has four multiple choice ``Energy'' items (items 44-47) and
one free response item (46a). These items were not present in the
original FMCE and will not be analyzed in this study.
\subsection{Prior Studies}
As this analysis was motivated by prior works, this research will
draw heavily from two previous studies which will be referenced as
Study 1 and Study 2 throughout the manuscript.
\subsubsection{Study 1: Modified Module Analysis \label{Study1}} In
Study 1, Wells {\it et al.} \cite{wells2019} introduced Modified
Module Analysis (MMA), a network analytic method to explore the
structure of the incorrect answers of a multiple-choice
instrument. Modified Module Analysis was introduced to adapt the
Module Analysis of Multiple-Choice Responses (MAMCR) method of
Brewe {\it et al.} \cite{brewe2016} for large datasets. In both
MMA and MAMCR, the incorrect responses to a conceptual inventory
are used to define a network with weighted edges. The responses
are the nodes of the network. In MAMCR, the number of times two
responses are selected by the same student defines the edge weight
of the network. For example, if FCI responses 1D and 2B were
selected together by 40 students, the network would contain 1D and
2B as nodes and have an edge between the nodes with weight 40. The
notation 1D represents response ``D'' to item 1. In MMA, the edge
weight is the correlation coefficient between the two responses.
To analyze this network, the correlation matrix was calculated and
a threshold applied. In Study 1, only edges which were correlated
at the $r>0.2$ level were retained where $r$ is the correlation
coefficient. The remaining correlated items define a network with
edge weight equal to the correlation. A community detection
algorithm was then applied to detect substructure in the network.
A community represents a set of nodes that are preferentially
selected together by many students. The MMA algorithm detects
incorrect answer communities, subsets of the network formed of
incorrect answers which are preferentially selected together.
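The pipeline described above can be sketched compactly (a minimal, self-contained Python illustration on synthetic data; for brevity, connected components of the thresholded correlation network stand in for the community detection step, which in Study 1 was performed with the igraph package):

```python
import numpy as np

def mma_communities(X, r_min=0.2):
    """Sketch of Modified Module Analysis (MMA).

    X: (students x responses) 0/1 matrix of incorrect-answer choices.
    Responses correlated above r_min are linked; communities are
    approximated here by connected components of that network.
    """
    r = np.corrcoef(X, rowvar=False)  # response-response correlations
    adj = (r > r_min) & ~np.eye(X.shape[1], dtype=bool)

    # Union-find to extract the connected components of the network.
    parent = list(range(X.shape[1]))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in zip(*np.nonzero(adj)):
        parent[find(i)] = find(j)
    groups = {}
    for i in range(X.shape[1]):
        groups.setdefault(find(i), []).append(i)
    # Keep only communities containing at least one edge.
    return sorted(tuple(g) for g in groups.values() if len(g) > 1)

# Synthetic data: responses 0/1 are always chosen together, as are 2/3;
# response 4 is unrelated noise, so it joins no community.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 500)
b = rng.integers(0, 2, 500)
X = np.column_stack([a, a, b, b, rng.integers(0, 2, 500)])
communities = mma_communities(X)
```

Note that in Study 1 the retained correlations additionally served as edge weights and a modularity-based community detection algorithm, rather than connected components, was applied; the sketch only illustrates the construction of the thresholded network.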
Modified Module Analysis identified 9 pretest communities and 11
post-test communities on the FCI. Three of the communities were
the result of blocked items. For these blocked items, the later
response was the correct response if an earlier response had been
correct. In most cases, the remaining communities could be related
to the misconceptions associated with the items in the original paper
introducing the FCI \cite{hestenes1992} and in the more detailed
taxonomy provided by Hestenes and Jackson \cite{fcitable}. For
eight of the communities, a dominant misconception was identified
and for two of the communities, two common misconceptions were
identified. For example, one FCI community included responses
\{4A, 15C, 28D\}, common incorrect answers to the Newton's 3rd law
items. Students were applying both the greater mass implies
greater force and the most active agent produces greater force
misconceptions for these items.
Study 1 found that the communities identified for men and women on both
the pretest and post-test, while not identical, were very similar.
\subsubsection{Study 2: Multidimensional Item Response Theory and
the FMCE}
Study 1 made extensive use of a prior study of the FCI applying
constrained Multidimensional Item Response Theory (MIRT) to
produce a detailed model of the physical reasoning required to
correctly solve the items in the instrument \cite{stewart2018}.
The incorrect communities not related to the blocking of items
often required similar physical reasoning for their solution. This
methodology has recently been extended to the FMCE and will be
referenced as Study 2. In Study 2, Yang {\it et al.} performed a
detailed analysis of the correct answers to the FMCE using
constrained MIRT \cite{fmce-mirt}. This technique produced a
detailed model of the instrument in terms of the fundamental
reasoning steps (principles) required for its solution. Results of
factor analysis and correlation analysis were also presented. All
analyses suggested the existence of subsets of items within the
instrument that shared a common solution structure. These item
groups included items 40-43 (definition of velocity), 22-26
(definition of acceleration), 30-39 (Newton's 3rd law), and 8-13
and 27-29 (motion under gravity). A fifth group of items, items
1-7 and 14-20, measured a combination of Newton's 1st and 2nd law
and corollaries of motion derived from these laws. These item
groups presented responses to students using different
representations with items 1-7 asking students to select textual
responses and items 14-20 asking students to choose between
two-dimensional graphs. The constrained MIRT analysis found that
this distinction between textual and graphical responses was
important to understanding student answers to the instrument.
The groups identified as requiring a common solution structure are
well aligned with the item groups identified by previous research
and described in Sec.~\ref{sec:fmce}, supporting the identification
of these groups as measuring distinct elements of Newtonian
thinking. Some of the groups suggested by MIRT combine groups
suggested by previous authors. For example, ``Cart on a Ramp,''
``Coin Toss - Force,'' and ``Coin Toss - Acceleration'' items all
require an understanding of the force or acceleration due to
gravity for their solution. Item groups with similar correct
solution structure will often also have responses that represent
consistently applied misconceptions in the analysis which follows.
In general, the FMCE had many more items requiring similar
reasoning for their solution than the FCI; this may make it a
productive instrument for the exploration of structure of
misconceptions about mechanics using MMA.
\section{Previous Studies of the FMCE}
\subsection{General Analyses}
\label{sec:gen}
Multiple subdivisions of the FMCE have been suggested. Thornton
and Sokoloff introduced four subgroups of items with the original
publication of the instrument: ``Force Sled'' items, ``Cart on a
Ramp'' items, ``Coin Toss'' items, and ``Force Graph'' items
\cite{thornton1998} as described above. Items 5, 6, and 15 were
identified as potentially problematic leading to modified
subgroups: ``Force Sled'' items (items 1-4 and 7) and ``Force
Graph'' items (items 14 and 16-21).
Using data collected after the instrument's publication, Thornton
\textit{et al.} proposed an alternate scoring scheme which
eliminated some items and scored some groups of items (clusters)
together \cite{thornton2009comparing}. The alternate scoring
scheme for the clusters suggested item groups 8-10, 11-13, and
27-29 be scored together because students had not mastered the
concept tested by the group unless they answered each item in the
group correctly. Each cluster received two points if all items
were answered correctly, zero points if not. They also suggested
the elimination of items 5, 15, 33, 35, 37, and 39 because
students without an understanding of Newtonian mechanics often
answered them correctly. They also suggested the elimination of
item 6 because content experts often answered it incorrectly.
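The cluster-scoring rules just described can be condensed into a short sketch (a hypothetical Python illustration of our reading of the scheme, not code from Ref.~\cite{thornton2009comparing}):

```python
# Sketch of the alternate FMCE scoring scheme described above:
# items 5, 6, 15, 33, 35, 37, and 39 are eliminated; clusters 8-10,
# 11-13, and 27-29 each earn 2 points only if every item in the
# cluster is correct; all other items earn 1 point each.
DROPPED = {5, 6, 15, 33, 35, 37, 39}
CLUSTERS = [range(8, 11), range(11, 14), range(27, 30)]

def alternate_score(correct):
    """correct: dict mapping item number (1-43) -> True/False."""
    clustered = {i for c in CLUSTERS for i in c}
    score = sum(2 for c in CLUSTERS
                if all(correct.get(i, False) for i in c))
    score += sum(1 for i, ok in correct.items()
                 if ok and i not in DROPPED and i not in clustered)
    return score

all_correct = {i: True for i in range(1, 44)}
```

Under this sketch, a student answering every item correctly would score $27 + 3 \times 2 = 33$ points, and missing any single item of a cluster forfeits both points for that cluster.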
Multiple authors proposed other revisions to the subgroups of
items initially introduced by Thornton and Sokoloff. Wittmann
identified five subgroups: ``Force (Newton I and II)'' (items 1-4,
7-14, 16-21), ``Acceleration'' (items 22-29), ``Newton III''
(items 30-32, 34, 36, 38), ``Velocity'' (items 40-43), and
``Energy'' (items 44-47) \cite{smith2008}. These subgroups were
further refined using a resource framework by Smith and Wittmann
who proposed a set of seven subgroups: ``Force Sled'' (items 1-4,
7), ``Reversing Direction'' (items 8-13, 27-29), ``Force Graphs''
(items 14, 16-21), ``Acceleration Graphs'' (items 22-26), ``Newton
III'' (items 30-32, 34, 36, 38), ``Velocity Graphs'' (items
40-43), and ``Energy'' (items 44-47) \cite{smith2008}. The
problematic items identified by Thornton \textit{et al.} were
eliminated from all subgroups in these two studies. More recently,
Smith, Wittmann, and Carter applied the revised subgroup
structure to understand the effect of instruction
\cite{smith2014}.
\subsection{Exploratory Analyses}
Many studies have applied quantitative analysis methods to explore
the structure of conceptual physics instruments. A substantial
number of studies have explored the factor structure of the FCI,
generally finding inconsistent or unintelligible results
\cite{huffman_what_1995,scott2012exploratory,scott2015,semak2017,stewart2018}.
Only two studies have performed factor analysis on the FMCE. Ramlo
examined the reliability of the FMCE using a sample of 146
students \cite{ramlo2008validity}, finding adequate reliability on
the pretest (Cronbach's $\alpha=0.742$) and excellent reliability
on the post-test (Cronbach's $\alpha=0.907$). While the pretest
factor structure was undefined, three conceptually coherent
factors were identified on the post-test.
In Study 2, exploratory factor analysis found that 5, 6, 9, and 10
factor models each optimized some fit statistics. Overall, the
model fit of the 5-factor model was superior. The factor loadings in
this model were very consistent with the groups of conceptually
similar items identified by the confirmatory MIRT analysis. These
groups also had adequate to excellent internal consistency as
measured by Cronbach's alpha, ranging from $\alpha=0.66$ to
$\alpha= 0.93$. There is also strong theoretical support for the
selection of either a 5 or 10 factor model as discussed in Study
2. Study 2 concluded that the 3-factor structure identified by
Ramlo probably resulted from the low sample size.
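As a concrete reference for the internal-consistency values quoted above, Cronbach's alpha can be computed directly from an examinee-by-item score matrix (a minimal Python sketch of the standard formula; the data are invented for illustration):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Perfectly parallel items give alpha = 1.
col = np.arange(10.0)
alpha_parallel = cronbach_alpha(np.column_stack([col, col, col]))

# A small two-item example with imperfect consistency.
alpha_two = cronbach_alpha(np.array([[1, 1], [2, 2], [3, 4]]))
```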
Recent studies of the FMCE have ranked incorrect responses to
examine conceptual development in introductory mechanics
\cite{smith2018showing} and produced a hierarchy of responses
\cite{Louis2019}.
\subsection{Gender and the FMCE}
On mechanics conceptual inventories (the FCI and the FMCE), men,
on average, outperform women by 13\% on the pretest and 12\% on
the post-test \cite{madsen2013}. The majority of research into the
``gender gap'' in PER analyzes differences between men and women
on the FCI; however, some studies have explored these differences
on the FMCE.
Researchers have explored various factors that could explain the
differences between men and women on the FMCE. For example,
differences in academic backgrounds and preparation, measured by
FMCE pretest and math placement exam scores, have been shown to
explain much of the gender gap on the FMCE post-test
\cite{kost2009,salehi2019}. Studies have also investigated the
impact of interactive-engagement on the overall gender gap.
Although some studies have shown a positive impact by reducing the
differences between men and women on conceptual inventory scores
\cite{lorenzo2006,kost2009,kohl2009introductory}, other
researchers have demonstrated that the gender gap for students
enrolled in an interactive-engagement classroom is unchanged
\cite{pollock2007}.
While many studies have focused on the overall average gender
differences on the FMCE, recently, researchers have explored the
fairness of the individual items on the FMCE \cite{henderson2018}.
An item is fair if men and women of equal overall ability with the
material score equally on the item. Applying the modified scoring
method proposed by Thornton \textit{et al.}
\cite{thornton2009comparing}, only item cluster 27-29 scored as a
single item consistently showed substantial unfairness in multiple
samples; this item was unfair to men. In one of the two samples,
item 40 demonstrated substantial gender unfairness; this item was
also unfair to men. These results were substantially different
from the analysis performed by Traxler \textit{et al.} which
identified a large number of unfair items on the FCI; most of the
items were unfair to women \cite{traxler2018}.
\subsection{The FCI and the FMCE}
While both the FCI and the FMCE measure an understanding of
Newtonian mechanics, the FCI includes a substantially broader
coverage of the topic. The FCI includes two-dimensional kinematics
and circular motion while the FMCE does not. Thornton \textit{et
al.} \cite{thornton2009comparing} quantified this difference in
coverage noting that 22 of the 30 FCI items were outside the
coverage of the FMCE.
The optimal model presented in Study 2 and a similar study of the
FCI \cite{stewart2018} provide further evidence for the difference
in coverage of the two instruments with the optimal model of the
FCI requiring 19 principles (fundamental reasoning steps) while
the optimal model of the FMCE required only 8 principles. The two
instruments also differed starkly in their re-use of principles
with the FCI rarely repeating the same set of principles on
multiple items and the FMCE often repeating the same principles.
Study 2 also provided partial support for the identification of
problematic items by Thornton \textit{et al.} \cite{thornton2009comparing},
with items 5, 6, 33, 35, and 37 having relatively small
discriminations and item 15 having negative discrimination. The
models in Study 2 also suggest items 20 and 21 may not be
appropriately grouped with the other items probing graphical
interpretation of forces.
\section{The Structure of Knowledge}
The MMA algorithm detects sets of incorrect answers that are
commonly selected together by multiple students. Study 1 showed
that, for the FCI, these incorrect answer communities were related
to either misconceptions proposed by the authors of the FCI or to
the practice of blocking items. The reason students answer physics
questions incorrectly is a broad area of research and multiple
frameworks have been developed to explain incorrect answering.
\subsection{Knowledge Frameworks}
Much of the early work in PER conceptualized patterns of incorrect
answers as ``misconceptions,'' coherently applied incorrect
reasoning often related to Aristotelian or medieval theories of
nature. Early research investigated common student difficulties in
applying Newtonian mechanics
\cite{viennot1979,trowbridge1981,caramazza1981,peters1982,mccloskey1983,gunstone1987,camp1994}.
As the field evolved, systematic studies were developed to explore
student understanding and epistemology
\cite{mcdermott1997,thornton1998,rosenblatt2011,erceg2014,waldrip2014}.
Eventually, alternate frameworks not involving misconceptions were
proposed. Two of the most prominent frameworks are
knowledge-in-pieces \cite{disessa1993,disessa1998} and ontological
categories \cite{chi1993,chi1994,slotta1995}. Knowledge-in-pieces
models student thinking as resulting from the application of a set
of granular pieces of reasoning which are used independently or
collectively to solve problems. Multiple authors have investigated
this model and these reasoning pieces have been called
phenomenological primitives (p-prims)
\cite{disessa1993,disessa1998}, resources
\cite{hammer1996misconceptions,hammer_more_1996,hammer2000student},
and facets of knowledge \cite{minstrell1992}. In the
knowledge-in-pieces model, misconceptions represent consistently
activated p-prims. Unlike the misconception view, the
knowledge-in-pieces model views p-prims as potentially positive
resources that can be activated as part of the process of
constructing knowledge.
For a careful and accessible exploration of the relation of and
differences between the misconception view and the
knowledge-in-pieces framework, see Scherr \cite{scherr2007}; the
current study applies the definitions from this work. The
misconception model is defined as ``a model of student thinking in
which student ideas are imagined to be determinant, coherent,
context-independent, stable, and rigid'' \cite{scherr2007}. The
knowledge-in-pieces framework models student ideas ``as being at
least potentially truth-indeterminate, independent of one another,
context-dependent, fluctuating, and pliable'' \cite{scherr2007}.
The ontological category framework differs substantially from
either the misconception view or the knowledge-in-pieces view. The
ontological category framework models incorrect reasoning as
resulting from an incorrect classification of a concept. For
example, a student might misclassify force as a quantity that can
be used up, which could lead the student to believe an object
would slow when the applied force was removed.
\subsection{Misconceptions}
The FCI was developed using the misconceptions model; Hestenes,
Wells and Swackhamer proposed a detailed taxonomy of the
misconceptions measured by the instrument \cite{hestenes1992}. The
taxonomy was developed from qualitative studies investigating
students' ``alternate view of the relationship between force and
acceleration'' where researchers interviewed students about their
difficulties while solving conceptual physics problems
\cite{clement1982,clement1989,clement1993}. The authors of the FCI
provided a detailed description of the misconceptions measured by
the instrument \cite{hestenes1992}; this taxonomy was later
refined by Hestenes and Jackson \cite{fcitable}. The analysis in
the current work demonstrates that the FMCE probes a limited
number of the misconceptions that were originally outlined by the
authors of the FCI; only these misconceptions are described below.
For more information about the other misconceptions probed by the
FCI, see Study 1.
\vspace{6pt}
\noindent{\textit{Velocity-Acceleration Undiscriminated.}} The
misconception of velocity-acceleration undiscriminated stems from
the concept of ``motion is vague'' \cite{hestenes1992}. This
misconception demonstrates the inability to differentiate the
concepts of position, velocity, and acceleration within
kinematics. For example, items 22-26 on the FMCE refer to a car
moving on a horizontal surface and ask for the acceleration as a
function of time. The velocity-acceleration undiscriminated
misconception would predict that when the car is speeding up or
slowing down at a constant rate, the graph would show a linear
trend of acceleration with respect to time and when the car is
traveling at a constant velocity, the graph would show a non-zero
constant acceleration.
\vspace{6pt}
\noindent{\textit{Motion Implies Active Forces.}} The motion
implies active forces misconception is one of the sub-categories
outlined under the ``Active Forces'' category of misconceptions
described by the authors of the FCI \cite{hestenes1992}. This
misconception implies that an object in motion, even if moving at
constant velocity, will experience a force in the direction of
motion; it demonstrates that Newton's 2nd law is not well
understood. For example, items 1-4 on the FMCE probe this
misconception; a sled is being pushed along the ice and students
are asked to describe the force which would keep the sled moving.
The motion implies active forces misconception would predict that
force is proportional to velocity rather than acceleration.
\vspace{6pt}
\noindent{\textit{Action/Reaction Pairs.}} The misconceptions of
greater mass implies greater force and the most active agent
produces the greatest force are the two sub-categories within the
``Action/Reaction Pairs'' group of student difficulties. This
group of misconceptions implies that Newton's 3rd law is not well
understood. For example, FMCE items 30-32 probe these
misconceptions by describing collisions between a heavy truck and
a small car. The greater mass implies a greater force
misconception would predict that the heavy truck would exert a
greater force on the small car than the small car would on the
heavy truck. The most active agent produces the greatest force
would predict that the object that is moving the fastest would
produce the greatest force.
\section{Methods}
\subsection{Sample}
\label{sec:samples}
The sample was collected at a large eastern land-grant university
serving approximately 30,000 students. The demographics of the
undergraduate population at the university were 80\% White, 6\%
International, 4\% African-American, 4\% Hispanic, 2\% Asian, 4\%
two or more races, and other groups less than 1\% \cite{usnews}.
The general undergraduate population had a range of ACT scores
from 21-26 (25th to 75th percentile).
The data were collected in the introductory calculus-based
mechanics course from Spring 2011 to Spring 2017. The majority of
the students enrolled in this course were physical science and
engineering majors. This sample was previously analyzed in
Henderson \textit{et al.} (Sample 3A \cite{henderson2018}) where
the instructional environment is described in detail. The course
was taught by multiple instructors and generally featured an
interactive pedagogy in lecture and laboratory.
Over the period studied, the FMCE was given at the beginning and
at the end of the class in each semester. The sample contains 3956
FMCE pretest responses and 3719 FMCE post-test responses (each
with 80\% men); only the students who completed the course for a
grade were included in the study. The overall pretest to post-test
gains for men and women were 28\% and 21\%, respectively. The
descriptive statistics for the FMCE pretest and the FMCE post-test
are presented in Table II in Henderson \textit{et al.} (Sample 3A)
\cite{henderson2018}.
\subsection{Analysis Methods}\label{sec:ModuleAn}
This work applies Modified Module Analysis (MMA) described in
Study 1 to the FMCE. Although the method is described in detail in
Study 1 \cite{wells2019}, we provide an overview of the method
here.
All responses to the FMCE were dichotomously coded: response
1D$_i$ was coded as one if student $i$ selected the response
and zero otherwise. The correct responses were eliminated; network
analysis is unproductive if the correct responses are included
because they form a single tightly connected community that hides
the structure of the incorrect answers. Responses that were
selected by fewer than 5\% of the students were eliminated as
statistically unreliable.
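As a concrete illustration, the coding and filtering steps above can be sketched in Python (the actual analysis was carried out in R; the responses, items, and answer keys below are hypothetical):

```python
from collections import Counter

def dichotomize(responses, correct, min_frac=0.05):
    """Convert raw answer choices to 0/1 indicator columns.

    responses: list of dicts mapping item -> selected choice, one per student
    correct:   dict mapping item -> correct choice (these columns are dropped)
    min_frac:  drop responses chosen by fewer than this fraction of students
    """
    n = len(responses)
    counts = Counter((item, choice) for r in responses
                     for item, choice in r.items())
    # Keep only incorrect responses selected by at least min_frac of students
    kept = [ic for ic, c in counts.items()
            if ic[1] != correct[ic[0]] and c / n >= min_frac]
    coded = [{f"{item}{choice}": int(r.get(item) == choice)
              for item, choice in kept} for r in responses]
    return sorted(coded[0]), coded

# Hypothetical three-student sample; suppose 1B and 2A are the correct answers
resp = [{1: "A", 2: "B"}, {1: "A", 2: "A"}, {1: "B", 2: "B"}]
cols, coded = dichotomize(resp, {1: "B", 2: "A"})
```

Each remaining column (e.g. ``1A'') is then a binary variable over students, suitable for the correlation analysis described next.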
The correlation matrix was calculated for the remaining incorrect
answers. This correlation matrix defines a network with nodes
representing the incorrect responses and weighted edges between
the nodes representing the strength of the correlation between the
two responses. Edges that represent correlations that were not
significant at the $\alpha = 0.05$ level with a Bonferroni
correction applied were eliminated. The network was further
simplified by eliminating any correlation where $r<0.2$; this was
the threshold applied in Study 1. This also served to remove the
large negative correlations between two responses to the same
item. Network analysis often uses methods to simplify the network
while retaining important structure; this process is called
``sparsification.''
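A minimal sketch of the network construction, assuming dichotomously coded response columns as input (the Bonferroni-corrected significance filter is omitted for brevity, and the data are illustrative):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_edges(columns, data, r_min=0.2):
    """Weighted edges between response columns with r >= r_min.

    data: dict mapping column name -> list of 0/1 codes, one per student.
    Dropping r < r_min also removes the large negative correlations
    between two responses to the same item.
    """
    edges = {}
    for i, u in enumerate(columns):
        for v in columns[i + 1:]:
            r = pearson_r(data[u], data[v])
            if r >= r_min:
                edges[(u, v)] = r
    return edges

# Toy data: "1A" and "2B" are always co-selected; "3C" is uncorrelated
data = {"1A": [1, 1, 0, 0], "2B": [1, 1, 0, 0], "3C": [0, 1, 1, 0]}
edges = build_edges(["1A", "2B", "3C"], data)
```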
A community detection algorithm was then applied to detect
structure in the network. Study 1 applied the ``fast-greedy''
algorithm \cite{newmangirvin:2004} included in the ``igraph''
package \cite{igraph} for R. Many community detection algorithms
exist; Study 1 reported that most produced similar results for the
correlation network. The fast-greedy algorithm is designed to
maximize the modularity of the division of the network into
unified subnetworks. Modularity measures the number of
intra-community edges in a particular division of the network
compared to the number expected in a random division.
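The quantity the fast-greedy algorithm maximizes can be made concrete; the sketch below computes the modularity of a given division of a toy graph (the study itself relied on igraph's implementation in R):

```python
def modularity(edges, partition):
    """Newman-Girvan modularity of a partition of an undirected graph.

    edges: list of (u, v) pairs; partition: dict node -> community label.
    Q = sum over communities c of [ m_c/m - (d_c / 2m)^2 ], where m_c is
    the number of intra-community edges and d_c the total degree in c.
    """
    m = len(edges)
    intra = {}       # intra-community edge counts
    degree_sum = {}  # sum of node degrees per community
    for u, v in edges:
        cu, cv = partition[u], partition[v]
        degree_sum[cu] = degree_sum.get(cu, 0) + 1
        degree_sum[cv] = degree_sum.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree_sum.items())

# Two triangles joined by a single bridge edge, split at the bridge
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
part = {1: "a", 2: "a", 3: "a", 4: "b", 5: "b", 6: "b"}
```

Splitting at the bridge gives a positive modularity, while placing all six nodes in one community gives zero, which is why the algorithm prefers the two-community division.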
To account for randomness in both the sample and the algorithm,
1000 bootstrapped replications were carried out. As a result, 1000
divisions of the network into communities were calculated sampling
the data with replacement. For each pair of incorrect responses, the
number of times the two responses appeared in the same community was
calculated. This number was divided by the number of bootstrap
replications to form the community fraction $C$. In this study, we
analyzed communities that were identified in 80\% of the 1000
bootstrapped samples.
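The bootstrap procedure can be sketched as follows; the `detect` argument stands in for the full network construction and fast-greedy pipeline, and the constant toy detector shown is purely illustrative:

```python
import random

def community_fraction(data, detect, n_boot=1000, seed=0):
    """Fraction of bootstrap replications in which each pair of nodes
    lands in the same community.

    data:   list of student response records, resampled with replacement
    detect: function mapping a bootstrap sample to {node: community label}
    """
    rng = random.Random(seed)
    pair_counts = {}
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        labels = detect(sample)
        nodes = sorted(labels)
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if labels[u] == labels[v]:
                    pair_counts[(u, v)] = pair_counts.get((u, v), 0) + 1
    return {pair: c / n_boot for pair, c in pair_counts.items()}

# Toy detector that always groups 1A with 2B and leaves 3C alone
C = community_fraction([0] * 10,
                       lambda s: {"1A": 0, "2B": 0, "3C": 1},
                       n_boot=100)
```

Pairs with $C > 0.8$ would then be treated as robustly co-membered, as in the analysis above.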
Because the incorrect answer communities of men and women are
compared and the number of men in the sample is significantly
larger than the number of women, care was taken to produce a
balanced sample. For men, the data were downsampled to the size of
the female dataset. For women, the dataset was sampled with
replacement preserving the size of the dataset.
\section{Results}
Modified Module Analysis was applied to the FMCE; the communities
identified are shown in the first table in the Supplemental
Materials \cite{supp}. Retaining nodes where at least 5\% of the
students selected the response (approximately the threshold used
in Study 1) produced 35 communities. These communities were often
formed of small subsets of item groups identified in previous
studies. This was dramatically different from the small number of
communities identified in the FCI by Study 1. The complex nature
of the communities identified made understanding their structure
difficult.
To produce a simpler structure more open to interpretation, the
network was further sparsified retaining only nodes selected by
20\% of the students. The community structure of this network is
shown in Table \ref{tab:commat}. In nearly every case, the
communities form complete graphs that are disconnected from one another. The
intra-community density measures the connectivity of a community
and is defined as $\gamma = 2m/n(n-1)$, where $n$ is the number of
nodes and $m$ is the number of realized edges. A fully connected
community has an intra-community density of one.
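A short helper makes the definition concrete (the node and edge counts below are illustrative, not taken from the tables):

```python
def intra_community_density(n_nodes, n_edges):
    """Intra-community density gamma = 2m / (n(n-1)).

    Equals 1.0 when every possible edge between the n nodes is realized,
    i.e. the community is a complete graph.
    """
    return 2 * n_edges / (n_nodes * (n_nodes - 1))

# A complete triangle, and a 5-node community missing one of its 10 edges
g_triangle = intra_community_density(3, 3)   # gamma = 1.0
g_partial = intra_community_density(5, 9)    # gamma = 0.9
```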
\begin{table*}[!htb]
\caption{Communities identified in the pretest and post-test incorrect answers at $r>0.2$ and community fraction, $C>0.8$. Only nodes selected by $20\%$ of the students are included. The number in parenthesis is the
intra-community density, $\gamma$,
for communities where the intra-community density is not one. Newton III* denotes that this community does not contain 31F. \label{tab:commat} }
\centering
\begin{tabular}{|l|cc|cc|c|}
\hline
\multirow{2}{*}{Community}&\multicolumn{2}{|c}{Pretest}&\multicolumn{2}{|c|}{Post-test}&Item\\
&Men&Women&Men&Women&Group\\\hline
1A, 2B, 3C, 4G, 5B, 6C, 7E&X&X&&X&Force Sled\\\hline
\multirow{2}{*}{1A, 2B, 3C, 4G, 5B, 6C, 7E, 14A, 16C, 17B, 18H, 19D, 20F}&&&\multirow{2}{*}{X($\gamma = 0.88$)}&&Force Sled\\
&&&&&Force Graph\\\hline
\multirow{2}{*}{8G, 9D, 10B, 11G, 12D, 13B}&\multirow{2}{*}{X}&&\multirow{2}{*}{X}&&Cart on a Ramp\\
&&&&&Coin Toss - Force\\\hline
\multirow{3}{*}{8G, 9D, 10B, 11G, 12D, 13B, 27G, 28D, 29B} &&\multirow{3}{*}{X}&&\multirow{3}{*}{X}&Cart on a Ramp\\
&&&&&Coin Toss - Force\\
&&&&&Coin Toss - Acceleration\\\hline
14A, 16C, 17B, 18H, 19D, 20F, 21H &X&&&&Force Graph\\\hline
14A, 16C, 17B, 18H, 19D&&X&&X&Force Graph\\\hline
\multirow{2}{*}{22E, 23G, 24B, 25F, 26A, 27G, 28D, 29B}&\multirow{2}{*}{X}&&&&Acceleration Graphs\\
&&&&&Coin Toss - Acceleration\\\hline
22E, 23G, 24B, 25F, 26A&&X&X&X&Acceleration Graphs\\\hline
27G, 28D, 29B&&&X&&Coin Toss - Acceleration\\\hline
30A, 31F, 32B, 34B, 36C, 38B, 39D&X&&&&Newton III\\\hline
30A, 31F, 32B, 34B, 36C, 38B&&X&&&Newton III\\\hline
30A, 32B, 34B, 35B, 36C, 38B, 39D&&&X&X&Newton III*
\\\hline
\end{tabular}
\end{table*}
Table \ref{tab:commat} offers partial support for the
identification of items 5, 6, 15, 33, 35, 37, and 39 as
problematic in Thornton \textit{et al.}
\cite{thornton2009comparing}. Items 20 and 21 were modeled as
having a different solution structure from other items in the
``Force Graph'' group in Study 2; these items are inconsistently
connected to the other items in this group in Table
\ref{tab:commat}. Incorrect answers to items 15, 33, and 37 were
never identified as part of a community. Incorrect answers to
items 20, 21, 35, and 39 were inconsistently identified as parts
of the communities associated with the items in the group. As
such, some of the complexity in Table \ref{tab:commat} results
from these items. If items 5, 6, 15, 20, 21, 33, 35, 37, and 39
are eliminated from the analysis, the structure of Table
\ref{tab:commat} simplifies substantially to produce Table
\ref{tab:commat2}. The communities in Table \ref{tab:commat2} are
shown graphically in Fig. \ref{fig:network}.
\begin{table*}[!htb]
\caption{Communities identified in the pretest and post-test incorrect answers at $r>0.2$ and community fraction, $C>0.8$. Only nodes selected by $20\%$ of the students are included. Problematic items identified
in Study 1 and 2 have been eliminated. The number in parenthesis is the
intra-community density, $\gamma$,
for communities where the intra-community density is not one. \label{tab:commat2} }
\centering
\begin{tabular}{|l|cc|cc|c|}
\hline
\multirow{2}{*}{Community}&\multicolumn{2}{|c}{Pretest}&\multicolumn{2}{|c|}{Post-test}&Item\\
&Men&Women&Men&Women&Group\\\hline
1A, 2B, 3C, 4G, 7E&X&X&&X&Force Sled\\\hline
\multirow{2}{*}{1A, 2B, 3C, 4G, 7E, 14A, 16C, 17B, 18H, 19D}&&&\multirow{2}{*}{X($\gamma = 0.88$)}&&Force Sled\\
&&&&&Force Graph\\\hline
\multirow{2}{*}{8G, 9D, 10B, 11G, 12D, 13B}&\multirow{2}{*}{X}&&\multirow{2}{*}{X}&&Cart on a Ramp\\
&&&&&Coin Toss - Force\\\hline
\multirow{3}{*}{8G, 9D, 10B, 11G, 12D, 13B, 27G, 28D, 29B} &&\multirow{3}{*}{X}&&\multirow{3}{*}{X}&Cart on a Ramp\\
&&&&&Coin Toss - Force\\
&&&&&Coin Toss - Acceleration\\\hline
14A, 16C, 17B, 18H, 19D&X&X&&X&Force Graph\\\hline
\multirow{2}{*}{22E, 23G, 24B, 25F, 26A, 27G, 28D, 29B}&\multirow{2}{*}{X}&&&&Acceleration Graphs\\
&&&&&Coin Toss - Acceleration\\\hline
22E, 23G, 24B, 25F, 26A&&X&X&X&Acceleration Graphs\\\hline
27G, 28D, 29B&&&X&&Coin Toss - Acceleration\\\hline
30A, 31F, 32B, 34B, 36C, 38B&X&X&X&X&Newton III\\\hline
\end{tabular}
\end{table*}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{fig1.eps}
\caption{Communities identified in the FMCE pretest and post-test for men and women.\label{fig:network}}
\end{figure*}
The sets of items in Table \ref{tab:commat} and \ref{tab:commat2}
generally conform to the item groups identified in previous works
and discussed in Sec. \ref{sec:fmce}. Table \ref{tab:commat2}
suggests items 27-29 should be treated as an independent group; we
propose this group be called ``Coin Toss - Acceleration'' to
distinguish it from items 11-13, which become ``Coin Toss -
Force.'' Both sets of items ask about a coin tossed in the air;
items 11-13 ask about the force on the coin, items 27-29 about the
acceleration. Smith and Wittmann combined these items into a
``Reversing Direction'' (items 8-13, 27-29) group; MMA suggests
this grouping may not be appropriate for all students. We also
note that Smith and Wittmann's ``Velocity Graphs'' (items 40-43)
group does not appear. This group had a relatively poor Cronbach's
alpha when used as a subscale in Study 2.
At this level of sparsification, for each item only a single
response appeared in each community, indicating that there is a
single, dominant incorrect answer that students tend to select.
This was consistent between the pretest and the post-test and by
gender.
\begin{table*}[t!]
\caption{Item groups, the physical principle tested by the group, and the common misconception selected by the students. \label{tab:summary} }
\centering
\begin{tabular}{|l|c|l|l|}
\hline
Item Group & Community & Physical Principle &
Misconception\\\hline
Force Sled&1A, 2B, 3C, 4G, 7E&Newton's 1st and 2nd law&Motion implies active forces\\\hline
Cart on a Ramp&8G, 9D, 10B&Motion under gravity&Motion implies active forces\\\hline
Coin Toss - Force&11G, 12D, 13B&Motion under gravity&Motion implies active forces\\\hline
Force Graph&14A, 16C, 17B, 18H, 19D&Newton's 1st and 2nd law&Motion implies active forces\\\hline
Acceleration Graphs&22E, 23G, 24B, 25F, 26A&Definition of acceleration&Velocity-acceleration undiscriminated\\\hline
Coin Toss - Acceleration&27G, 28D, 29B&Motion under gravity&Velocity-acceleration undiscriminated\\\hline
\multirow{2}{*}{Newton III}&\multirow{2}{*}{30A, 31F, 32B, 34B, 36C, 38B}&\multirow{2}{*}{Newton's 3rd law}&Greater mass implies greater force\\
&&&Most active agent produces greatest force\\\hline
\end{tabular}
\end{table*}
\subsection{The Structure of Incorrect FMCE Responses}
Study 2 allows the description of the physical principles tested
by each item group. Both ``Force Sled'' and ``Force Graph'' test
a combination of Newton's 1st and 2nd law and the definition of
acceleration. The ``Force Graph'' items also require the use of
graphical reasoning. The ``Cart on a Ramp,'' ``Coin Toss -
Force,'' and ``Coin Toss - Acceleration'' groups each require the
law of gravitation, that the gravitational force is downward and
constant. The ``Acceleration Graphs'' group requires the
definition of acceleration and reading a graph. The ``Newton III''
group requires Newton's 3rd law.
In addition to the communities being strongly related to the
item groups, often multiple item groups
testing the same physical principles were part of the same
community. Much of the complexity of Table \ref{tab:commat2}
results from the inconsistent joining of incorrect answers to
items testing the same concept. Table \ref{tab:summary} summarizes
the item groups, the physical principle tested by the group, and
the common misconception selected for the group.
The misconceptions represented by the items in the incorrect
communities are quite consistent. As in Study 1, we use Hestenes
and Jackson's extensive taxonomy of misconceptions measured by the
FCI to classify the misconceptions \cite{fcitable}. The ``Force
Sled,'' ``Force Graph,'' ``Coin Toss - Force,'' and ``Cart on a
Ramp'' responses all represent the motion implies active forces
misconception; all select a force proportional to the velocity.
The ``Acceleration Graphs'' and ``Coin Toss - Acceleration''
groups both represent the velocity-acceleration undiscriminated
misconception; all select an acceleration proportional to
velocity.
Study 1 found that the FCI presented the students with two
misconceptions related to Newton's 3rd law: greater mass implies
greater force and most active agent produces greatest force. MMA
was unable to disentangle the application of these two
misconceptions for the FCI. Both misconceptions are also in the
same community for the FMCE. Item 30A represents the greater mass
implies greater force misconception. Items 32B, 34B, 36C, 38B
apply the most active agent produces greatest force misconception.
Interestingly, item 31 gives the student a situation where both
misconceptions apply, a head-on collision between a large truck
and a faster moving car. Response 31F indicates the student does
not believe they have enough information to solve the item,
suggesting they are indeed trying to apply both misconceptions
simultaneously.
\subsection{Gender Differences in Community Structure}
Both men and women consistently answer the ``Force Sled'' and
``Force Graph'' items incorrectly on the pretest. The physical principles
needed to solve these items are very similar, but the responses to
the ``Force Sled'' items are textual whereas the responses to the
``Force Graph'' items are graphical. This seems to indicate that the
representation chosen for the answer affects the application of
the misconception on the pretest for both men and women. These item
groups continue to be different communities for women on the
post-test; for men, they have generally merged ($\gamma=0.88$)
into a single community on the post-test.
Men and women also differ in their application of misconceptions
to items involving motion under gravity: ``Cart on a Ramp'' items,
``Coin Toss - Force'' items, and ``Coin Toss - Acceleration''
items. These items form a single community on both the pretest and
post-test for women. For men, the Coin Toss - Acceleration items
are in a different community on both the pretest and post-test.
These three groups do apply different misconceptions with ``Cart
on a Ramp'' and ``Coin Toss - Force'' items applying a force
proportional to velocity misconception while the ``Coin Toss -
Acceleration'' items apply an acceleration proportional to
velocity misconception. If a student understands that force and
acceleration are proportional, then these two misconceptions
should produce the same results. The pattern of community
membership seems to indicate women apply both misconceptions
consistently, while men do not.
While most communities make theoretical sense, both in terms of
the item group suggested for the instrument and the physical
principles required to solve items in the group identified in
Study 2, one does not. For men, one pretest community combines
``Acceleration Graphs'' with ``Coin Toss - Acceleration.'' These
items require very different physical reasoning for their correct
solution, but apply the same misconception, velocity-acceleration
undiscriminated. For these items, the misconception is more
important in determining the community than the correct answer
structure.
\subsection{The Strength of Common Misconceptions}
\begin{table*}[t]
\caption{\label{tab:misc} Percentage of students selecting each incorrect community for the FMCE post-test; mean,
1st quartile (1Q), median (med), and 3rd quartile (3Q). A Mann-Whitney $U$ test was performed to
determine if the differences between men and women were significant, the $p$-value is presented.
The effect size is given as Vargha and Delaney's $A$ \cite{vargha2000},
the probability that a randomly selected woman will score higher than a randomly selected man.
}
\centering
\begin{tabular}{|l|cc|cc|cc|c|}\hline
\multirow{2}{*}{Community}&\multicolumn{2}{|c}{Men}&\multicolumn{2}{|c|}{Women}&\multirow{2}{*}{$p$}&\multirow{2}{*}{$A (\%)$}&\multirow{2}{*}{Misconception}\\
&Mean (\%)&1Q, Med, 3Q (\%)&Mean (\%)&1Q, Med, 3Q (\%)&&&\\\hline
Force Sled, Force Graph & 48 &$10, 50, 80$ &59&$40, 70, 80$&$<0.001$&59&Motion implies active forces\\\hline
Cart on a Ramp &\multirow{2}{*}{48} & \multirow{2}{*}{$0, 50, 83$} &\multirow{2}{*}{59}& \multirow{2}{*}{$33, 67, 83$}&\multirow{2}{*}{$<0.001$}&\multirow{2}{*}{61}&\multirow{2}{*}{Motion implies active forces}\\
Coin Toss - Force &&&&&&&\\\hline
Acceleration Graphs & 27 &$0, 0, 60$ &35& $0, 20, 60$&$<0.001$&56&Velocity-acceleration undiscriminated\\\hline
Coin Toss - Acceleration & 30 &$0, 0, 67$ &44& $0, 33, 67$&$<0.001$&62&Velocity-acceleration undiscriminated\\\hline
\multirow{2}{*}{Newton III} & \multirow{2}{*}{43} &\multirow{2}{*}{$0, 40, 80$} &\multirow{2}{*}{46}& \multirow{2}{*}{$0, 40, 80$}&\multirow{2}{*}{$0.07$}&\multirow{2}{*}{52}&Greater mass implies greater force\\
&&&&&&&Most active agent produces largest force\\
\hline
\end{tabular}
\end{table*}
One potential application of these results is to provide classroom
instructors with a measurement of how strongly a misconception is
held by their students. The instructor could then tailor his or
her instruction to emphasize material on those subjects. The
strength of a misconception community, called the ``misconception
score,'' is defined as the fraction of items within the community
that are selected by the student. For example, if a community
contains \{22E, 23G, 24B, 25F, 26A\}, a student who selected 22E,
24B, and 26A would have a misconception score of sixty percent,
while a student who selected all five answer choices would have a
score of one-hundred percent. A higher score indicates a more
strongly held misconception. A student who answered items 22, 23,
24, 25, and 26 correctly would have a misconception score of zero
percent.
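The score is straightforward to compute; the sketch below uses the ``Acceleration Graphs'' community and the worked example from the text:

```python
def misconception_score(community, selected):
    """Fraction of a community's responses that a student selected.

    community: list of response labels forming one incorrect answer community
    selected:  set of response labels the student actually chose
    """
    return sum(1 for resp in community if resp in selected) / len(community)

community = ["22E", "23G", "24B", "25F", "26A"]
score = misconception_score(community, {"22E", "24B", "26A"})  # 3 of 5 = 0.6
zero = misconception_score(community, set())                   # all correct
```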
The Mann-Whitney $U$ test \cite{mann1947} was used to determine if
the misconception scores were significantly different for men and
women on the post-test because the data were highly non-normal and
discontinuous. The Mann-Whitney $U$ test is a non-parametric test
that may be used instead of the unpaired t-test. In this sample,
the overall post-test score was higher for men than women: the
median number of incorrect responses was 20 for men and 26 for
women. The effect size of this difference, measured using Vargha
and Delaney's $A$ statistic \cite{vargha2000}, was small: 0.63.
This indicates that a randomly selected female student will have
more incorrect answers than a randomly selected male student 63\%
of the time. If there were no effect, $A$ would be 0.50,
reflecting a 50-50 chance of a score from either group being
higher. The small, medium, and large effect sizes for Cohen's $d$
correspond to values of Vargha and Delaney's $A$ greater than
0.56, greater than 0.64, and greater than 0.71, respectively.
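Vargha and Delaney's $A$ can be computed directly from its probabilistic definition (a naive pairwise sketch with illustrative scores; ties count half):

```python
def vargha_delaney_A(group1, group2):
    """P(random draw from group1 > random draw from group2), ties half.

    In this context, group1 would hold women's scores and group2 men's,
    so A is the probability a randomly selected woman scores higher
    than a randomly selected man.
    """
    wins = ties = 0
    for x in group1:
        for y in group2:
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    return (wins + 0.5 * ties) / (len(group1) * len(group2))

A = vargha_delaney_A([3, 4, 5], [1, 2, 3])        # mostly higher scores
A_null = vargha_delaney_A([1, 2], [1, 2])         # identical groups -> 0.5
```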
Table \ref{tab:misc} presents the $A$ statistic, the mean, 1st
quartile (1Q), median (Med.), and third quartile (3Q) for men and
women for the misconception scores for each incorrect answer
community. While the Mann-Whitney $U$ test found a significant
difference in each case, all of the $A$ values were in the small
or negligible effect size range. Furthermore, all of the $A$
values were lower than the overall probability that a randomly
selected female student has more incorrect answers than a randomly
selected male student. This is consistent with the finding in
Study 1 that, while significant differences exist between the
misconception scores of men and women, these differences are
largely explained by overall differences in the post-test scores
of men and women.
For the class studied, students hold the motion implies active forces and the Newton's
3rd law misconceptions more strongly than the velocity-acceleration undiscriminated
misconception.
\subsection{Reducing Sparsification}
Sparsification is a network analytic term for removing edges from
a network to reduce its density. In MMA, sparsification is
accomplished by removing nodes selected by a small number of
students and edges correlated below some threshold ($r<0.2$ in
this study). Sparsification allows important structure to be
identified in the network. Table \ref{tab:commat2} presents the
community structure identified after sparsifying the network by
removing all nodes selected by fewer than $20\%$ of the students.
This sparsification results in a community structure very similar
to that identified in Study 1 with a small number of communities
each associated with a misconception discussed in Hestenes and
Jackson's \cite{fcitable} taxonomy.
This sparsification threshold is far stricter than that applied
in Study 1, which removed only nodes selected by fewer than 30
students (about 4\% of the sample). When a similar threshold,
$5\%$, was applied to the FMCE, 35 communities were found across
the pretests and post-tests of men and women. These results are presented in the
Supplemental Material \cite{supp}. Most of these communities were
very similar to one another, differing by only a single response
in some cases. These differences may have resulted from the very
different manner in which the two instruments treat incorrect
responses. The FCI presents the student with a number of responses
developed from student interviews, most designed to test a
specific misconception. Most students select only one or two of
the available incorrect answers. The FMCE presents the students
with many possible options that come close to exhausting the
available responses.
This greater scope of possible answers produces a more complex
community structure that offers the possibility of identifying
misconceptions not explicitly used to construct the instrument.
The communities identified for men and women on the pretest and
post-test for responses selected by a minimum of $10\%$ of the
students are also presented in the Supplemental Material
\cite{supp}. The misconceptions represented by communities not
identified at $20\%$ sparsification are shown in Table
\ref{tab:commat10misc}. While some responses do not have an
obvious relation to the general misconception tested by the
community (marked with an *), most responses in the communities
can be associated with a single misconception. Often these
misconceptions are outside the taxonomy \cite{fcitable} developed
for the FCI, suggesting students have a much richer set of
misconceptions than is measured by the FCI. In Table
\ref{tab:commat10misc}, misconceptions identified by Hestenes and
Jackson \cite{fcitable} are bolded. Many of the items represent
combinations of misconceptions in this taxonomy involving the
failure to discriminate force, acceleration, velocity, and
position in varying combinations. The items mix the
position-velocity undiscriminated, the velocity-acceleration
undiscriminated, and the velocity proportional to applied force
misconceptions identified by Hestenes and Jackson \cite{fcitable}.
\begin{table*}[t]
\caption{Misconceptions represented by communities identified in items selected by at least 10\% of the students which were not identified in items selected
by at least 20\% of the students. Items marked * do not have an obvious relation to the misconception. Misconceptions identified by Hestenes
and Jackson \cite{fcitable} are bolded.\label{tab:commat10misc} }
\centering
\begin{tabular}{|l|l|}
\hline
Community & Misconception\\\hline
3D, 7D &No force is required to slow an object.\\\hline
\multirow{2}{*}{3E, 7C} &To slow an object at a constant rate, a decreasing force \\
&opposite motion must be applied.\\\hline
\multirow{2}{*}{3G, 7A} &To slow an object at a constant rate, an increasing force \\
&opposite motion must be applied.\\\hline
8E, 11E, 27E&Gravity exerts a constant force in the direction of motion.\\\hline
8F, 11F, 27F&Gravity exerts an increasing force in the direction of motion.\\\hline
\multirow{2}{*}{8F, 10C, 11F, 13C, 27F, 29C} &Gravity exerts an increasing force as an object travels upward \\
&and a decreasing force as it travels downward.\\\hline
\multirow{2}{*}{8F, 10C, 11F}&Gravity exerts an increasing force as an object travels upward \\
&and a decreasing force as it travels downward.\\\hline
11E, 27E&Gravity exerts a constant force in the direction of motion.\\\hline
14C, 17H, 24G, 26E, 40D, 42C, 43A* &Force-acceleration-velocity undiscriminated from position.\\\hline
14C, 17D, 17H, 23D*, 24G, 26E, 40D, 42C, 43A* &Force-acceleration-velocity undiscriminated from position.\\\hline
14C, 17D, 40D, 42C &Force-velocity undiscriminated from position.\\\hline
14C, 17D, 17H, 40D, 42C, 42H*&Force-velocity undiscriminated from position.\\\hline
17A, 18D, 19C, 19H, 23F, 24A, 25E, 25G &{\bf Velocity proportional to applied force.}\\\hline
17A, 19C, 24A, 25E, 42A* &{\bf Velocity proportional to applied force.}\\\hline
18D, 19H, 23F, 25G&{\bf Velocity proportional to applied force.}\\\hline
19C, 25E&{\bf Velocity proportional to applied force.}\\\hline
24F, 26E&\textbf{Velocity-acceleration undiscriminated.}\\\hline
\multirow{2}{*}{27B, 27C, 29F}&Gravitational acceleration not constant and in the opposite \\
&direction of motion. \\\hline
\multirow{2}{*}{27C, 29F} &Gravitational acceleration proportional to velocity and in the opposite\\
&direction of motion.\\\hline
\end{tabular}
\end{table*}
\section{Discussion}
\subsection{Research Questions}
This study sought to answer four research questions; the first
three will be addressed in the order proposed. The fourth research
question compares the results of Study 1 for the FCI to the
results of this study. The differences of the FCI and FMCE will be
discussed as part of the answer to each of the first three research
questions.
\textit{RQ1: What incorrect answer communities are identified by
Modified Module Analysis in the FMCE? } The communities of
incorrect responses identified on the FMCE generally conformed to
the block structure of the instrument and were associated with
item groups identified in previous work. This discussion will
focus on the analysis retaining nodes selected by 20\% of the
students; results retaining nodes selected by 5\% and 10\% of the
students are discussed in RQ3. Modified Module Analysis showed the
item groups proposed by Smith and Wittmann were being consistently
answered using a common misconception: the ``Force Sled'' (items
1-4, 7), the ``Force Graph'' (items 14, 16-19), ``Acceleration
Graphs'' (items 22-26) and ``Newton III'' (items 30-32, 34, 36,
38) \cite{smith2008}. The ``Reversing Direction'' subgroup of
items (items 8-10, 11-13, 27-29) \cite{smith2008} was not
consistently identified as an incorrect answer community. The
subgroup of items 27-29 sometimes formed its own community and was
sometimes grouped with the other items. We proposed renaming the
subgroups: ``Cart on a Ramp'' (items 8-10), ``Coin Toss-Force''
(items 11-13), and ``Coin Toss-Acceleration'' (items 27-29).
``Cart on a Ramp'' and ``Coin Toss - Force'' items were identified
in the same community both pre- and post-instruction and for men
and women; ``Coin Toss - Acceleration'' items were inconsistently
identified as part of this community.
Only four misconceptions were identified retaining nodes selected
by 20\% of the students: motion implies active forces,
velocity-acceleration undiscriminated, and two Newton's 3rd law
misconceptions. The Newton's 3rd law misconceptions, greater mass
implies greater force and most active agent produces largest
force, were not identified as independent incorrect answer
communities. This is consistent with Study 1 which also failed to
distinguish the two misconceptions in the FCI. Also consistent
with Study 1, the incorrect answer communities contained items
testing the same physical principles as identified in Study 2. The
physical principle tested by the item, rather than the
misconception, was the most important factor in determining the
incorrect answer community. In this study, four separate item
groups were associated with the motion implies active forces
misconception (Table \ref{tab:summary}): ``Force Sled,'' ``Force
Graph,'' ``Cart on a Ramp,'' and ``Coin Toss - Force.'' Study 2
showed that the first two groups required Newton's 1st and 2nd law
for their solution while the last two required the law of
gravitation. While testing the same misconception, the first two
groups were never detected in the same community as the last two
groups. This is consistent with Study 1 which also identified
multiple incorrect answer communities in the FCI measuring the
motion implies active forces misconception; these communities also
had similar correct solution structure \cite{stewart2018}.
Study 2 demonstrated that the FMCE has substantially less complete
coverage of mechanics than the FCI, which was consistent with
previous work by Thornton {\it et al.}
\cite{thornton2009comparing}. The FCI also measures a broader set
of misconceptions than the FMCE. Communities associated with 9
different misconceptions were identified in the FCI, while only 4
were identified in the FMCE. While covering fewer misconceptions,
the FMCE does measure the critical velocity-acceleration
undiscriminated misconception more thoroughly than the FCI.
Responses 19A, 20B, and 20C in the FCI are reported to measure
this misconception in Hestenes and Jackson \cite{fcitable}, but
were not detected as an incorrect answer community in Study 1.
Study 1 also identified 3 communities in the FCI that directly
resulted from the blocked structure of the instrument. In these
communities, the response to the second item in a block would have
been correct if the response to the first item had been correct. No such
communities were identified in the FMCE. While extensively
blocked, the items in the FMCE do not directly refer to the
results of previous items.
The communities identified in the FMCE were generally
substantially larger than those identified in the FCI. The FCI
contained 13 distinct communities for a 30-item instrument while
the FMCE contained 9 communities for a 43-item instrument. In the
FMCE, some of the distinct communities resulted from joining other
communities. All communities in the FMCE can be formed of 6 groups
of items: ``Force Sled,'' ``Force Graph,'' ``Acceleration
Graphs,'' ``Coin Toss - Acceleration,'' ``Newton III,'' and a
community that combines ``Cart on a Ramp'' and ``Coin Toss -
Force.'' As such, substantially fewer distinct groups of
misconceptions are identified in the FMCE; however, the groups
were often substantially larger in the FMCE than the FCI. For the
FMCE, the fundamental groups have sizes ranging from 3 to 6 with
all but one group containing at least 5 items. Only 2 of the 13
groups in the FCI contain as many as 3 items with 11 groups
containing only two items. Because the incorrect answer
communities contain more items, the FMCE may provide a
substantially more accurate characterization of the strength of
the misconception (Table \ref{tab:misc}) than the FCI.
The MMA method also provided support for eliminating the
problematic items which were identified by Thornton \textit{et
al.} \cite{thornton2009comparing}. With items 5, 6, 15, 20, 21,
33, 35, 37, and 39 included in the analysis, the community
structure was complex and rather difficult to interpret, because
some of these items were inconsistently associated with a
misconception community.
\textit{RQ2: How are these communities different pre- and
post-instruction? How is the community structure different for men
and women?} The pre- and post-instruction changes in the community
structure differed substantially between men and women, and as
such, these two questions will be addressed together. The
communities identified for men and women were often different; on
the FMCE pretest, only three out of the nine communities were the
same, while on the FMCE post-test, two out of the nine were the
same. The differences were generally the result of joining two
communities with similar correct solution structure as identified
in Study 2. Men integrated the ``Force Sled'' and ``Force Graph''
item groups on the post-test while women did not; however, women
integrated the ``Coin Toss - Acceleration'' item group with the
``Cart on a Ramp'' and ``Coin Toss - Force'' item groups on the
post-test while men did not. As such, neither men nor women were
more likely to form more integrated misconceptions with
instruction. The same physical reasoning is required to solve the
items in the larger integrated misconception groups and,
therefore, more consistency in selecting a misconception may
represent progress in recognizing the same reasoning is required
by the items.
The difference between men and women both pre- and
post-instruction was dramatically different from the results of
Study 1 for the FCI. Generally, the incorrect answer community
structure was very similar for men and women on both the pretest
and the post-test for the FCI.
The change in misconception structure between the pretest and the
post-test was dramatically different for men and women. For women,
the misconception communities identified were completely
consistent from the pretest to the post-test. For men, of the five
communities identified pre-instruction, only two were identified
post-instruction. The differences resulted from the ``Force
Graph'' and ``Force Sled'' communities merging post-instruction,
possibly indicating that men developed more facility with working
with the same type of problem in multiple representations with
instruction. Pre-instruction, the ``Acceleration Graphs'' and
``Coin Toss - Acceleration'' item groups were combined; these were
separate post-instruction. These groups require different physical
principles for their solution; however, both apply the same
misconception. This may indicate that men differentiate
the ideas of force and acceleration in an inconsistent manner
pre-instruction.
These results also help to explain the unfairness that was
identified in items 27--29 by Henderson \textit{et al.}
\cite{henderson2018}. Women consistently integrated this item
group (``Coin Toss - Acceleration'') with the other item groups
measuring motion under gravity (``Cart on a Ramp'' and ``Coin Toss
- Force''); men did not. ``Coin Toss - Force'' and ``Coin Toss -
Acceleration'' items differ only by asking about the force and
acceleration on a coin moving under the force of gravity; failing
to integrate the misconceptions about force and acceleration seems
to indicate either that the student does not understand that force
and acceleration are proportional or indicate some error in
interpreting the items.
The strength of the misconception, measured by the misconception
score in Table \ref{tab:misc}, shows how strongly students hold a
particular misconception. The gender differences in misconception
score were smaller than the overall difference in FMCE score
between men and women, showing that no particular misconception
was more strongly held by men or by women. No gender difference in
misconception score was larger
than a small effect.
\textit{RQ3: How do the communities change as the parameters of
the MMA algorithm are modified?}
Study 1 investigated variations in two network building
parameters: the correlation threshold $r$ and the community
fraction $C$. These parameters were adjusted to produce productive
community structure using the model of the correct solution
structure provided in Study 2 and the taxonomy of misconceptions
provided by Hestenes and Jackson \cite{fcitable}. The threshold of
the minimum number of students who could select a response was not
investigated because productive structure was identified retaining
only responses selected by at least 30 students, the minimum
statistically viable threshold. The FMCE behaved differently; the
misconception structure changed dramatically as the threshold for
the minimum percentage of students selecting a response was
modified.
Retaining nodes selected by at least 5\% of the students, MMA
identified 35 incorrect response communities; many of these
communities were similar, with some differing by only a single
response. Retaining responses selected by at least 10\% of the
students, the structure of the communities was still complex
(Table \ref{tab:commat10misc}) but, in general, a single coherent
misconception could be identified for each community. Some, but
not all, of these misconceptions were described in the taxonomy
proposed by Hestenes, Wells, and Swackhamer \cite{hestenes1992,
fci-revised} and refined by Hestenes and Jackson \cite{fcitable}.
If responses selected by a minimum of 20\% of the students were
retained, the community structure simplified substantially (Table
\ref{tab:commat}). Examination of the community structure showed
that much of the remaining complexity involved the sporadic
inclusion of items identified as problematic by Thornton
\textit{et al.} \cite{thornton2009comparing}. Removal of these
items produced the relatively simple community structure in Table
\ref{tab:commat2}. With the exception of one male pretest
community, these communities all measured a misconception
described in Hestenes and Jackson's taxonomy \cite{fcitable} as
well as requiring the same physical reasoning described in Study
2. The male pretest community applied the same misconception, but
required different physical reasoning for its correct solution.
The FCI and the FMCE community structures were dramatically
different if responses selected by 5\% of the students were
retained. At this threshold, the FCI had only 13 small communities
and the FMCE 35 often fairly large communities, even though the
coverage of the FCI is substantially broader than that of the FMCE.
These differences likely resulted from two sources: students in
the FCI sample scored substantially higher on the instrument than
the students in the FMCE sample and the unusual distractor
structure of the FMCE. The FCI uses only 5 responses for each
question and the incorrect responses were developed from student
interviews and include common student incorrect views. The FMCE
uses items with more than 5 responses that often effectively
exhaust the possible responses. This offers far greater latitude
for students to express uncommon misconceptions, which are
therefore selected by only a small fraction of the students.
The broad set of misconception communities identified when
retaining responses selected by at least 10\% of the students
suggests that the state of
student incorrect reasoning may be substantially more complex than
the structure measured by the FCI.
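The network-building and thresholding procedure discussed above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the response matrix, the node-retention fraction, and the correlation threshold $r$ mirror the parameters described in the text, while the community-detection step uses networkx's greedy modularity routine as a stand-in for the algorithm actually used in MMA.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def response_network(responses, min_frac=0.20, r_threshold=0.2):
    """Build a network whose nodes are (item, answer) pairs.

    responses: (n_students, n_items) array of chosen answer labels.
    A node is retained only if selected by at least `min_frac` of the
    students; edges join pairs of responses whose indicator variables
    correlate above `r_threshold`.
    """
    cols, labels = [], []
    for item in range(responses.shape[1]):
        for ans in np.unique(responses[:, item]):
            ind = (responses[:, item] == ans).astype(float)
            if ind.mean() >= min_frac:
                cols.append(ind)
                labels.append((item, int(ans)))
    corr = np.corrcoef(np.array(cols))
    G = nx.Graph()
    G.add_nodes_from(labels)
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if corr[i, j] >= r_threshold:
                G.add_edge(labels[i], labels[j], weight=corr[i, j])
    G.remove_nodes_from(list(nx.isolates(G)))  # drop uncorrelated responses
    return G

# Toy data: 200 students, 3 four-option items; 40% of the students hold
# a simulated misconception driving answer 1 on both items 0 and 1.
rng = np.random.default_rng(0)
responses = rng.integers(0, 4, size=(200, 3))
misc = rng.random(200) < 0.4
responses[misc, 0] = 1
responses[misc, 1] = 1

G = response_network(responses)
communities = list(greedy_modularity_communities(G))
```

On the toy data, responses $(0,1)$ and $(1,1)$, which are driven by a common simulated misconception, end up in the same community.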
\section{Implications}
The responses to the FCI
were constructed to measure common misconceptions, allowing
Hestenes and Jackson to provide a detailed taxonomy of the misconceptions
measured by each item \cite{fcitable}. While common misconceptions
were certainly considered in the construction of the instrument,
the FMCE presents students with many possible incorrect answers.
These answers largely exhaust the possible responses. As such, the
FMCE may be a much better instrument for a purely exploratory
analysis of student incorrect thinking less tied to the
misconception view.
The identification of incorrect answer communities testing the
same misconception allows the calculation of a misconception score
as a quantitative measure of how strongly the misconception is
held. This should allow instructors to determine which
misconceptions are most prevalent in their classes and to target
instruction to eliminate these misconceptions.
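The text does not spell out the formula for the misconception score here; the sketch below is one plausible per-student proxy, purely illustrative and with hypothetical names: the fraction of a community's items on which a student selects a response belonging to that community.

```python
import numpy as np

def misconception_score(responses, community):
    """Per-student misconception score for one incorrect answer community.

    responses: (n_students, n_items) array of answer labels.
    community: set of (item, answer) pairs forming the community.
    Returns, for each student, the fraction of the community's items on
    which the student chose a response belonging to the community.
    """
    items = sorted({item for item, _ in community})
    hits = np.zeros(responses.shape[0])
    for item in items:
        answers = [a for i, a in community if i == item]
        hits += np.isin(responses[:, item], answers)
    return hits / len(items)

responses = np.array([[1, 1, 3],
                      [0, 2, 1],
                      [1, 2, 0]])
scores = misconception_score(responses, {(0, 1), (1, 1)})
# scores -> [1.0, 0.0, 0.5]
```

Averaging such per-student scores over a class would give an instructor a rough gauge of how strongly each misconception community is held.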
\section{Limitations}
The MAMCR and MMA algorithms require a number of choices to be
made by the researcher to produce network structure that is
productive in furthering the understanding of a conceptual
instrument. As the use of network analysis matures in PER,
quantitative criteria for optimally selecting network parameters
should be developed.
\section{Conclusion}
Physics conceptual inventories have played an important role in
quantitative physics education research and understanding
students' difficulties with conceptual physics continues to be a
central research area within PER. Network analysis, specifically
Modified Module Analysis (MMA), has recently been used as a tool
to investigate the common misconceptions on the FCI
\cite{wells2019}. The current study replicated this work for the
FMCE.
In general, retaining responses selected by 20\% of the students,
the community structure for the FMCE was consistent with the item
groups identified in previous studies
\cite{thornton1998,smith2008}. The misconceptions represented by
these communities were limited: motion implies active forces,
velocity-acceleration undiscriminated, greater mass implies
greater force, and most active agent produces greatest force.
Three of these incorrect answer communities were previously
identified in the FCI \cite{wells2019}; however, the
velocity-acceleration undiscriminated misconception was only
detected as an incorrect answer community in the FMCE. The FCI was
found to measure nine misconceptions in the previous study.
The FCI and the FMCE behaved dramatically differently as network
parameters were adjusted. For the FCI, including responses
selected by 4\% of the students, only 13 communities were
detected, most with only two responses. Retaining responses
selected by a similar percentage of students, 35 communities were
detected in the FMCE with up to 15 members.
The evolution of the communities identified was dramatically
different for men and women. The communities identified for women
did not change from pretest to post-test, while only 2 of the 5
communities identified for men remained consistent. Unlike the
FCI, there was little consistency in the communities identified
for men and women either pre-instruction or post-instruction.
Overall, Modified Module Analysis was productive in understanding
the misconception structure of both the FCI and the FMCE and
allowing the comparison of the instruments.
\begin{acknowledgments}
Data collection for this work was supported by National Science
Foundation grants EPS-1003907 and ECR-1561517.
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
Topological actions\footnote{Unfortunately, the word `action' has very different meanings in mathematics and physics and both meanings feature in this work; we hope that no confusion results.} have come to play an important r\^{o}le in physics. Examples include the Aharonov--Bohm term and the Dirac monopole in quantum mechanics, Chern--Simons terms and theta terms in gauge theories, and the Wess--Zumino--Novikov--Witten (WZNW) terms occurring in hadronic physics and elsewhere. Such actions have hitherto mostly been described in an {\em ad hoc} fashion. We will show how all of the above examples, and many more besides, can be described using differential cohomology, a mathematical gadget which is a diffeomorphism invariant of a manifold that refines integral cohomology by information about differential forms, thus merging topological data about the manifold (or rather its homotopy type) with geometrical information in an intricate way.
As well as allowing us to describe topological actions in a systematic way, differential cohomology has a number of other advantages over {\em ad hoc} approaches. One is that the action obtained is manifestly `topological' in the loose, physicist's sense, in that it is invariant under orientation-preserving diffeomorphisms of the spacetime manifold.
A second advantage is that a basic necessary requirement of locality is satisfied, namely that the action can be defined on any orientable, compact manifold without boundary, representing spacetime in the euclidean picture. Ideally of course, one would like to go further and show that the theory can be defined on a manifold with boundary and corners of arbitrary codimension, but we will see that this first baby step already yields dividends. Moreover, it is believed that the remaining steps can be carried out \cite{Freed:2699265}.
A third advantage is that the interplay of topological actions with symmetries, be they local or global, can be discussed in a systematic way using equivariant or invariant versions of differential cohomology. Moreover, by studying the surjectivity of a natural map from the equivariant version to the invariant version, one can address the question of whether a global symmetry can be gauged. This is certainly not always the case and indeed there exist counterexamples in which the obstructions are consistent with known anomalies in quantum field theory. The prototypical example is the topological WZNW term in the low-energy effective action describing the strong interactions, where the obstruction to gauging reproduces anomalies in the underlying high energy description via quantum chromodynamics, consistent with the non-renormalization of the anomaly. With a systematic understanding of locally- and globally-symmetric topological actions in hand, 't Hooft's idea of using anomaly matching to understand strongly-coupled dynamics~\cite{tHooft:1979rat} acquires new power, because we can track anomalies even in cases where no fermions are present due to confinement.
A rather trivial, but nonetheless satisfying, version of this phenomenon occurs in theories in one spacetime dimension, where we often
have the luxury of being able to compare with exact quantum mechanical solutions.
For example, we will see that one cannot gauge the $SO(3)$ rotation symmetry of a rigid body in the presence of a topological term that endows it with the properties of a fermion, in the sense that the exponentiated action corresponding to a whole rotation about any axis equals minus one (a similar conclusion was reached by different arguments in~\cite{Gaiotto:2017yup}). This is consistent with the quantum mechanical solution (for a recent treatment, see \cite{Davighi:2019ffp}), which shows that the energy eigenstates of the system have even degeneracy, corresponding to states of half-integer spin. The states thus carry a projective representation
of $SO(3)$, which leads to an anomaly when we try to gauge it. So, by means of a classical computation, we obtain a result which is an avatar of the spin-statistics theorem in quantum field theory in 4 dimensions. A similar result obtains for a charged particle coupled to a magnetic monopole of odd charge, whose quantum mechanical energy eigenstates also have half-integer spin.
Moving up to two spacetime dimensions, we find another satisfying result: one cannot gauge the $O(n) \times O(n)$ symmetry of the WZNW term of the $O(n) \times O(n)/O(n)$ sigma model. This result was anticipated in Ref.~\cite{Witten:1983ar}, where it was shown that, for suitable values of the couplings of this term and the usual kinetic term, the bosonic sigma model is dual to a free theory of $n$ Majorana fermions, with $O(n) \times O(n)$ corresponding to the anomalous chiral symmetries.
A fourth advantage of using differential cohomology is that, because its building blocks are familiar objects in algebraic topology, namely integral cohomology and differential forms (or their invariant/equivariant siblings), the formidable apparatus of that subject can be brought to bear in their elucidation. Though we only treat simple examples in this work, the reader will hopefully see that the procedure of constructing actions and identifying the set of possible associated coupling constants is generally straightforward, if one knows enough tricks.
A fifth and final advantage is that the definition of differential cohomology, together with our definition of the physics action, can be extended from the category of smooth manifolds to a larger category whose objects include spaces of smooth maps \cite{bar2014differential}. This implies that the resulting physics action has a notion of smoothness with respect to the degrees of freedom of the field theory. This is not only desirable from the physics point of view, but becomes a necessity if we wish to play the game of classifying field theories and actions. After all, two actions which differ by arbitrarily small amounts cannot be distinguished by experimental measurements of limited precision, so it would be wrong to distinguish them in the classification.
The outline is as follows. By way of invitation, we describe in \S \ref{sec:dirac} the obstruction to gauging the rotation symmetry of an electrically-charged particle coupled to a monopole of odd charge, by means of an {\em ad hoc} construction that slavishly follows the usual physicist's approach. By doing so, we hope to convince readers that not only is there interesting physics going on in such systems, but also that there ought to be a better way of figuring out what it is. To this end, in \S\S \ref{sec:ord}-\ref{sec:inv} we give axiomatic definitions of ordinary, equivariant, and invariant cohomology theories, describe their connections to topological physics actions with local or global symmetries, and make some preliminary remarks about their mathematical properties and physical consequences thereof. To go further requires us to delve deeper into the mathematical structure of the various differential cohomology theories, which we do in \S\ref{sec:top}. In particular, we show, following \cite{Becker:2014tla}, that equivariant ({\em ergo} ordinary) differential cohomology can be endowed with a smooth structure, in the form of an abelian Lie--Fr{\'e}chet group and describe various smooth exactness and splitting properties of the sequences of maps defining it. In particular, we show that the two short exact sequences in which equivariant differential cohomology sits split smoothly. These results, which may also be of interest to mathematicians, are technically useful to physicists because they enable a concrete characterization of invariant differential cohomology (at least in favourable cases), as we show in \S\ref{sec:char}. In \S\ref{sec:gau} we describe a map from equivariant to invariant differential cohomology, which corresponds on the physics side to the fact that every locally-symmetric physics action defines a globally-symmetric one, and discuss its features. Along the way, we describe a number of simple examples relevant for physics.
Finally we take pains to point out that the application of differential cohomology to physics, be it implicit or explicit, is certainly not new; see {\em e.g.} \cite{Wu1976, Alvarez:1984es, gawedzki1988topological, dijkgraaf1990,Freed:1992vw,Freed2002classical, Freed:2004yc ,Freed:2006ya,Freed:2006yc, Freed:2016rqq,Freed:2699265} and references therein. In particular, many of the ideas appearing here have precursors in \cite{Freed:2006mx}, which studied the particular case of the WZNW term in the strong interactions using differential cohomology, albeit in the presence of an additional structure, in the form of a spin structure on spacetime.
\section{An invitation: gauging Dirac's monopole}\label{sec:dirac}
By way of invitation, let us consider the physics of an electrically-charged particle moving in the presence of a magnetic monopole. To simplify things, we suppose that the particle is constrained to move on the surface of a 2-sphere $X=S^2$, with the monopole at the centre. Dirac \cite{Dirac:1931kp} showed that consistency requires that the monopole has an integer quantized charge, a condition which was given an elegant interpretation in terms of topological actions by Witten \cite{Witten1983a}, as follows. Let the path of the particle be given by a map $f:S^1 \to X$ from the (euclidean) worldline to the 2-sphere. Since $S^1$ bounds a disk $D^2$ (with an orientation induced by that on $S^1$) and since any map $f$ extends to a map $\overline{f}: D^2 \to X$, one can define a rotationally-invariant topological action by $\int_{D^2} \overline{f}^* \omega$, where $\omega$ is a rotationally-invariant 2-form on $X=S^2$, which is unique up to a scalar. But since there is also an extension in which $D^2$ is mapped to the complement of $\overline{f}(D^2)$ in $X$, we must take care to ensure that the (exponentiated) action be independent of the choice of lift; we therefore must require that $\omega$ has integral periods, restricting the possible actions to $\mathbb{Z} \subset \mathbb{R}$, which we interpret as the allowed monopole charges.
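For completeness, the independence of the choice of extension can be displayed explicitly: if $\overline{f}_1$ and $\overline{f}_2$ are two extensions of $f$, they glue along their common boundary to a map $F: S^2 \to X$, and the two candidate actions differ by
$$\int_{D^2} \overline{f}_1^{\,*}\omega - \int_{D^2} \overline{f}_2^{\,*}\omega = \int_{S^2} F^{*}\omega,$$
which is a period of $\omega$; demanding that $e^{2\pi i S}$ be single-valued for all such comparisons thus forces the periods of $\omega$ to be integral.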
From here, Witten went on to study an analogous term arising in the low-energy effective lagrangian describing the strong interactions. Here the target manifold is diffeomorphic to $SU(3)$ (where 3 corresponds to the number of light quarks) and there is an obvious $SU(3)\times SU(3)$ global symmetry corresponding to the action by left and right translations. Witten showed that it is not possible to gauge this symmetry and showed how this could be linked to chiral anomalies in the underlying high energy theory of QCD.
Our point of departure here is to complete the circle of ideas by returning to the Dirac monopole and asking whether its global symmetry, namely the $SO(3)$ group of rotations, can be gauged. The answer is that it can, but only if the monopole charge is an even multiple of the minimal charge. This result is, on the one hand, surprising, because it cannot be seen by passing to a local description and attempting a brute-force gauging, as Witten did in \cite{Witten1983a}; nor can it be seen by a study of equivariant differential forms, as in \cite{witten1992}. But on the other hand, it should come as no surprise at all, because the quantum mechanics of this problem can be solved exactly (see \cite{Davighi:2019ffp} for a recent treatment) and shows that the energy eigenstates carry a projective representation of $SO(3)$, which is known to lead to problems upon gauging \cite{Nelson:1984gu}.
In fact, there is an obstruction to gauging any connected subgroup of $SO(3)$, so let us try to gauge an $SO(2) \cong U(1)$ subgroup of $SO(3)$ instead. Any such subgroup corresponds to rotations about some axis on $X=S^2$ and leaves two points fixed, which we might as well poetically call the poles. The data of the $U(1)$ gauge theory then consist of a principal $U(1)$-bundle $P$ with connection $\Theta$ over the worldline $S^1$, together with a section of the associated bundle $P\times_{U(1)} X$, or equivalently a $U(1)$-equivariant map $f: P \to X$. From this data, we may try to define an action as follows. The form $\omega$ has a unique closed $U(1)$-equivariant extension $\overline{\omega}$, and given lifts $\overline{P},\overline{\Theta},$ and $\overline{f}$ of $P, \Theta,$ and $f$, respectively, to the disk bounding $S^1$, we can pull $\overline{\omega}$ back to obtain an equivariant 2-form $\overline{f}^*\overline{\omega}$ on $P$. To finish the construction of the action, we use the so-called Cartan map \cite{GuilleminSternberg} together with our connection $\overline{\Theta}$ to obtain a
2-form on the base $D^2$, which we integrate over the base to get our action. We will give the details of the Cartan map later; for now it is enough to know that it is a homotopy inverse to the chain map
from forms on the base to equivariant forms on the bundle given by pullback along the bundle map.
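Schematically, in the abelian case needed below (and writing $u$ for the degree-2 generator of the Cartan complex), the map substitutes the curvature for the generator and projects onto horizontal forms:
$$\alpha + \mu\, u \;\longmapsto\; \mathrm{hor}_{\overline{\Theta}}(\alpha) + \mu\, F_{\overline{\Theta}},$$
where $\mathrm{hor}_{\overline{\Theta}}$ denotes horizontal projection with respect to the connection $\overline{\Theta}$ and $F_{\overline{\Theta}}$ its curvature; this is the familiar Chern--Weil prescription.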
Our definition of the action involves many choices. We must check not only that it is possible to make such choices, but also that our definition is independent of how this is done. Existence is easily established: since every $U(1)$-bundle $P$ over $S^1$ is trivializable (that is, isomorphic as a principal bundle to the trivial bundle $U(1)\times S^1$), we can always find an extension $\overline{P}$, which is itself trivializable (a similar story holds for the connection and the equivariant map). But now comes the crucial observation. Even though all such extensions are trivializable, when we compare the result of two different extensions, it is not the case that the difference in the action can be expressed in terms of a trivial $U(1)$-bundle over $M= S^2$. This is perhaps most easily seen by starting from a non-trivial $U(1)$-bundle over $M$ (together with some connection and some equivariant map) and cutting along an $S^1$ in the base; by doing so, we obtain two $U(1)$-bundles over $D^2$ (after pulling back along the respective base inclusions), which are perforce trivializable, and so {\em kosher} lifts.
To see how this leads to a problem, consider the non-trivial $U(1)$-bundle over $M=S^2$ given by the Hopf bundle $Hf:S^3 \to S^2$. This admits $U(1)$-equivariant maps to the target $X=S^2$, namely the constant maps sending all of $S^3$ to one of the poles. We wish to compute the action (or rather difference in actions) corresponding to such a bundle and equivariant map (the choice of connection will turn out to be irrelevant). To do so, we must first pull back our equivariant form $\overline{\omega}$ along our constant polar map.
Now, in the Cartan complex $\overline{\omega}$ is a sum of the original 2-form $\omega$ together with a linear map from the Lie algebra $\mathfrak{u}(1)$ of $U(1)$ to the space of $0$-forms on $X=S^2$.
To determine these pieces, we resort to a dirty calculation.\footnote{More fastidious readers may prefer to appeal to the Atiyah--Bott localization formula \cite{ATIYAH19841}, which determines the values at the two poles of the linear map in terms of the integral of $\omega$ over the sphere. Since the reflection in the equator is equivariant but reverses orientation, the values are equal and opposite.} Let $X=S^2$ be the unit sphere in $\mathbb{R}^3$, let the $U(1)$ act by rotation in the ($x_2$--$x_3$)-plane, and take $U(1)$ as $\exp(2\pi i t)$ for $t \in [0,1]$, with $d/dt \in \mathfrak{u}(1)$ as the generator. Then the unit-normalized volume form is $\omega = (x_1 dx_2 dx_3 - x_2 dx_1 dx_3 + x_3 dx_1 dx_2)/4\pi$ and a simple calculation shows that $\overline{\omega} = \omega - \frac{x_1}{2} dt$ is an equivariant extension. On pulling back along the constant map to (say) the South pole at $x_1=-1$, this gives $\frac{1}{2} dt$. But under the Cartan map, $dt$ represents the first Chern class of the principal $U(1)$-bundle we started with (this is the Chern--Weil correspondence), so $\frac{1}{2} dt$ yields a 2-form whose integral over the base equals $\frac{1}{2}$, such that the exponentiated actions differ by a sign, so are ill-defined. The argument generalizes immediately to a monopole of arbitrary odd charge.
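The `simple calculation' above can be checked symbolically. The sketch below (ours, not part of the original argument) works in spherical coordinates, $x_1 = \cos\theta$, in which $\omega = \sin\theta\, d\theta\, d\phi/4\pi$ and the generating vector field is $V = 2\pi\, \partial_\phi$; with the sign convention $d_{\mathrm{eq}} = d + u\,\iota_V$ for the Cartan differential, equivariant closedness of $\omega + \mu\, dt$ amounts to $d\mu = -\iota_V\omega$.

```python
import sympy as sp

theta = sp.symbols('theta')

# Unit-normalized volume form in spherical coordinates:
# omega = sin(theta)/(4*pi) dtheta ^ dphi
omega = sp.sin(theta) / (4 * sp.pi)

# V = 2*pi d/dphi, and iota_V(dtheta ^ dphi) = -dtheta, so the
# contraction iota_V(omega) has dtheta-coefficient:
iota_V_omega = -2 * sp.pi * omega

# dt-coefficient of the claimed extension omega - (x_1/2) dt:
mu = -sp.cos(theta) / 2          # -x_1/2 with x_1 = cos(theta)

# equivariant closedness: d(mu) = -iota_V(omega)
assert sp.simplify(sp.diff(mu, theta) + iota_V_omega) == 0

# pull back to the South pole (theta = pi, i.e. x_1 = -1):
print(mu.subs(theta, sp.pi))     # 1/2
```

The value $\tfrac{1}{2}$ at the pole reproduces the $\tfrac{1}{2}\, dt$ quoted in the text, and hence the half-integral pairing with the Hopf bundle's first Chern class.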
As we remarked in the Introduction, the obstruction to gauging is fully consistent with what we find from an exact solution of the system in quantum-mechanics (see, {\em e.g.} \cite{Davighi:2019ffp}): for a monopole of charge $g$ in units of the minimal charge, the energy eigenstates have spin $\frac{g}{2}, \frac{g}{2}+1, \frac{g}{2}+2,\dots$, so carry a projective representation of $SO(3)$ when $g$ is odd, so lead to an anomaly upon gauging.
Whilst the problem with our {\em ad hoc} construction of the action can be seen easily enough in this simple case, it is hopefully obvious to the reader that it will become a nightmare for anything but the simplest field theories. This on its own motivates the search for a more systematic construction, which differential cohomology will provide. There, the obstruction to gauging in the case of the monopole can be seen straightforwardly either using the algebraic definition of differential cohomology directly, or by using a geometric interpretation thereof that is available in low degrees. Algebraically, the obstruction corresponds to the fact that the map $H^2_{SO(3)}(S^2;\mathbb{Z}) \cong \mathbb{Z} \to H^2(S^2;\mathbb{Z})^{SO(3)} \cong \mathbb{Z}$ in integral equivariant cohomology is multiplication by two, which can be deduced from the Serre exact sequence; geometrically, it can be seen from the fact that the Hopf bundle $SU(2) \to S^2$ (which corresponds to $1 \in \mathbb{Z} \cong H^2(S^2;\mathbb{Z})$) does not admit an equivariant action of $SO(3)$, while the bundle $SO(3) \to S^2$ (which corresponds to $2 \in \mathbb{Z} \cong H^2(S^2;\mathbb{Z})$) obviously does. We will give more details once we have developed the necessary formalism.
\section{Ordinary differential cohomology}\label{sec:ord}
Let us begin with a definition of differential cohomology, which we sometimes prefix with the adjective `ordinary' to distinguish it from the invariant and equivariant versions that follow. There are, by now, many equivalent definitions extant in the literature \cite{10.1007/BFb0075216,d6b9f1ee45804fcc88d4fb4c171ed7f7,brylinski2007loop,2005math12251R,Hopkins:2002rd,bunke2016differential}. We prefer an axiomatic one \cite{simons2008axiomatic}, which has the advantage of involving only basic notions of cohomology and differential forms that should be familiar to physicists. By way of preamble, let us recall the basic notions. For $A$ an abelian group, let $H^{\ast}(\cdot;A)$ be the usual cohomology with coefficients in $A$, considered as a (contravariant) functor from the category of smooth manifolds to the category of graded abelian groups. Similarly, let $\Omega^{\ast}(\cdot)$ (respectively $\Omega^{\ast}(\cdot)_\mathbb{Z}$) be the functors representing differential forms (respectively differential forms with integral periods, henceforth referred to simply as `integral forms'). Let
\begin{equation} \label{eq:lesbock}
\dots \to H^{n-1}(\cdot;\mathbb{R}) \to H^{n-1}(\cdot;\mathbb{R}/\mathbb{Z}) \xrightarrow{b} H^{n}(\cdot;\mathbb{Z}) \to H^{n}(\cdot;\mathbb{R}) \to \dots
\end{equation}
be the long exact sequence in cohomology associated to the short exact sequence of coefficients $ \mathbb{Z} \hookrightarrow \mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z}$, with Bockstein map $b$, and let
\begin{equation} \label{eq:lesdr}
\dots \to H^{n-1}(\cdot;\mathbb{R}) \to \Omega^{n-1}(\cdot)/\Omega^{n-1}(\cdot)_{\mathbb{Z}} \xrightarrow{d} \Omega^{n}(\cdot)_{\mathbb{Z}} \to H^{n}(\cdot;\mathbb{R}) \to \dots
\end{equation}
be the obvious long exact sequence associated to de Rham's theorem, with exterior derivative $d$.
\begin{defn}[\cite{simons2008axiomatic}, \S 1]
An {\em ordinary differential cohomology theory} is a functor $\widehat{H}^{\ast}(\cdot)$ together with four natural transformations $i$, $j$, $\mathrm{curv}$, and $\mathrm{char}$, such that for any manifold $X$, the diagram
\begin{equation} \label{eq:character diagram DC}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & H^{n-1}(X;\mathbb{R}/\mathbb{Z}) \arrow[rr,"b" description] \arrow[dr, hookrightarrow, "j" description] & {} & H^{n}(X;\mathbb{Z}) \arrow[dr] & {} & {} \\
{} & H^{n-1}(X;\mathbb{R}) \arrow[ur] \arrow[dr] & {} & \widehat{H}^{n}(X) \arrow[ur, twoheadrightarrow, "\mathrm{char}" description] \arrow[dr, twoheadrightarrow, "\mathrm{curv}" description] & {} & H^{n}(X;\mathbb{R}) & {} \\
{} & {} & \Omega^{n-1}(X)/\Omega^{n-1}(X)_{\mathbb{Z}} \arrow[rr,"d" description] \arrow[ur, hookrightarrow, "i" description] & {} & \Omega^{n}(X)_{\mathbb{Z}} \arrow[ur] & {} & {}
\end{tikzcd}
\end{equation}
commutes, with the 2 diagonals in the centre being exact at $\widehat{H}^{n}(X)$.
\end{defn}
\begin{theorem}[\cite{simons2008axiomatic}, Thm. 1.1]
Ordinary differential cohomology theories exist and are unique up to unique isomorphism.
\end{theorem}
Thus, to refer to {\em the} differential cohomology, as we frequently do in the sequel, is but a {\em peccadillo}.
Let us now make some remarks about differential cohomology. The diagram (\ref{eq:character diagram DC}) formalises our earlier assertion that differential cohomology refines the integral cohomology of a manifold with information about differential forms: the map $\text{char}$ surjects onto $H^{n}(X;\mathbb{Z})$, with kernel given by equivalence classes of $(n-1)$-forms. A key property of differential cohomology is that given a fibre bundle $E \to B$ with closed oriented fibre $F$ of dimension $m$, there exists a notion of fibre integration ({\em cf. e.g.} \cite{bar2014differential}), namely a map $\int_{F}: \widehat{H}^\ast (E) \to \widehat{H}^{\ast - m}(B)$, which is compatible with the corresponding maps on cohomology and differential forms. This map can be extended to fibres with boundary and an important special case is the homotopy formula: given $h \in \widehat{H}^\ast (X)$ and a smooth homotopy $F:[0,1] \times Y \to X$ of maps $F_{0}, F_1:Y \to X$, we have
$$F_1^*h - F_0^*h = i\int_{[0,1]} F^* \text{curv} \; h$$
where the integral denotes the usual fibre integration of differential forms. This formula not only shows that differential cohomology is not a homotopy invariant, but also encodes its variation under homotopies in an explicit way, making differential cohomology a powerful diffeomorphism invariant of manifolds. We make use of the homotopy formula in \S \ref{sec:char}.
To see how to define a physics action using differential cohomology, suppose we have a
physical system where spacetime is oriented and has dimension $p$ and where we have a fixed target manifold $X$. Given a spacetime, in the form of an oriented, closed $p$-manifold $M$, the degrees of freedom of the theory are then smooth maps $f:M \to X$. Given an element $h \in \widehat{H}^{p+1}(X)$, we define the physics action (or rather its exponential $e^{2\pi i S}$) as follows. Using the map $f$, we form the pullback $f^*h \in \widehat{H}^{p+1}(M)$. Since $M$ is a $p$-manifold, $\Omega^{p+1}(M)_{\mathbb{Z}}$ vanishes, so the diagram (\ref{eq:character diagram DC}) (with the obvious replacements $X \leadsto M$ and $n \leadsto p+1$) shows that the map $j$ is in fact an isomorphism. We may thus form $j^{-1} f^*h \in H^{p}(M;\mathbb{R}/\mathbb{Z})$. Since $M$ has an orientation, it has a fundamental class $[M]$, and so we obtain an element in $\mathbb{R}/\mathbb{Z}$ by evaluating $j^{-1} f^*h$ on $[M]$ using the canonical pairing of homology and cohomology. Exponentiating this element leads to a well-defined value for $e^{2\pi i S}$.
Equivalently, since $H^{p+1}(M;\mathbb{Z})$ vanishes as well, the map $i$ is also an isomorphism, and we can obtain our exponentiated action by integrating a representative $p$-form of $i^{-1}f^* h \in \Omega^{p}(M)/\Omega^{p}(M)_{\mathbb{Z}}$ over $M$, noting that this is well-defined on classes once we reduce modulo $\mathbb{Z}$.
Evidently, our construction is valid on any closed, oriented spacetime manifold $M$. Moreover, it is clear that the construction requires only these structures, together with the map to $X$. The action is thus `topological', in the sense commonly used by physicists.
Our construction shows that, far from being rare, there are many such actions associated to a given physical system. One way to see this is to give an explicit geometrical interpretation of differential cohomology in low degrees. In degree one, for example, the abelian group of differential cohomology on $X$ is isomorphic to the abelian group of smooth functions $g:X \to U(1)$ (the map $\text{char}$ sends $g$ to its homotopy class, while the map $\text{curv}$ sends $g$ to its derivative). Physically, this corresponds to the rather boring case of a 0-dimensional field theory, in which spacetime is a finite disjoint union of points. Each of these is sent by $f$ to a point in $X$ and the action is given by summing the values of $gf$ over the points.
Things are somewhat more interesting in degree two, where differential cohomology on $X$ is isomorphic to the abelian group of isomorphism classes of principal $U(1)$-bundles on $X$ with connection. Physically, this corresponds to the quantum mechanics of a particle whose worldline traces out a loop in the target space $X$. The $U(1)$-bundle with connection represent a background magnetic field on the space $X$ and the action corresponding to a worldline is given by the holonomy of the connection.
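In this degree-two picture, the homotopy formula quoted above becomes a familiar statement (we spell it out as an illustration): if two loops $f_0, f_1: S^1 \to X$ are connected by a smooth homotopy $F:[0,1] \times S^1 \to X$, then the corresponding holonomies differ by the flux of the curvature 2-form through the swept cylinder,
$$e^{2\pi i S[f_1]} / e^{2\pi i S[f_0]} = \exp \left( 2\pi i \int_{[0,1] \times S^1} F^* \, \text{curv} \; h \right),$$
recovering the fact that a charged particle transported around two homotopic loops picks up a relative phase given by the magnetic flux through the surface swept out between them.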
More generally, in spacetime dimension $p$, it is obvious that one way to get a topological action is to take a $p$-form on $X$, pull it back to $M$ using $f$, and integrate it over $M$ (this corresponds to the inclusion $i$ in (\ref{eq:character diagram DC})). The integral $p$-forms yield a trivial exponentiated action, but since a generic $X$ has many non-closed ({\em ergo} non-integral) $p$-forms, we see that there will be many topological actions of this kind. One way to model differential cohomology is as a generalization of such globally-defined forms to locally-defined forms that patch together consistently~\cite{Davighi2018}.
As we remarked in the Introduction, our definition of differential cohomology can be extended to a larger category whose objects include spaces of smooth maps \cite{bar2014differential}, as can the notion of fibre integration. This allows the following alternative definition of the physics action. Given a closed, oriented spacetime $M$ of dimension $n$ and a target $X$, let $X^M$ denote the space of smooth maps $M \to X$ and let $\text{ev}:X^M \times M \to X$ denote the evaluation map. Then, given an element $h \in \widehat{H}^{n+1}(X)$, we can pull back along $\text{ev}$ and integrate along the fibre $M$ of the trivial bundle $X^M \times M \to X^M$ to obtain an element $\int_M \text{ev}^*h \in \widehat{H}^{1}(X^M)$. According to the geometric interpretation of differential cohomology in degree one just given, this is a smooth map (in the generalized sense) from $X^M$ to $U(1)$, giving an equivalent definition of the action that is manifestly smooth with respect to variations of the degrees of freedom, namely the fields $f \in X^M$.
\section{Equivariant differential cohomology and local symmetry}\label{sec:equ}
Let us now begin the discussion of the interplay between topological actions and symmetries. Such symmetries may be local or global. While there are arguments that suggest that there can be no global symmetries in a fundamental theory of quantum gravity, this is certainly not true for effective descriptions of Nature, where global symmetries (albeit often approximate) abound and indeed play a dominant r\^{o}le in determining the low-energy dynamics.
Given that local symmetries are somehow more fundamental than global ones, it is a reasonable guess that they admit a more straightforward (or at least more natural) mathematical description and indeed this turns out to be the case here. As we will see, the right mathematical gadget is the generalization of ordinary differential cohomology to the equivariant setting.
As before, we suppose that we have a
physical system where spacetime is oriented and has dimension $p$, with a fixed target manifold $X$. But now we suppose that we have, in addition, a smooth action of a Lie group $G$ on $X$. Here we shall assume that $G$ is compact, as is commonly the case in gauge theory (this assumption will be relaxed when we discuss global symmetries in the next Section). Given a spacetime, to wit a closed $p$-manifold $M$, the degrees of freedom of a gauge theory with gauge group $G$ are a principal $G$-bundle $P$ over $M$ with connection $\Theta$, together with a section $f$ of the associated bundle $P \times_G X$ (usually referred to by physicists as a `matter field'). Such sections are in 1-1 correspondence with equivariant maps $P \to X$.
Now let us define equivariant differential cohomology and give a prescription for constructing topological actions with symmetry therefrom. As with ordinary differential cohomology, a variety of equivalent definitions are now available \cite{kubel1510equivariant,redden2016differential} (see also \cite{gomi2005}). As in that case, we find it most convenient to choose an axiomatic definition \cite{redden2016differential}, whose basic ingredients we now describe.
Ringing the changes, for an abelian group $A$ and a compact Lie group $G$, let $H^{\ast}_G(\cdot;A) := H^{\ast}(EG \times_G \cdot;A)$ be the usual Borel construction of equivariant cohomology considered as a (contravariant) functor from the category of smooth $G$-manifolds to the category of graded abelian groups and let
\begin{equation} \label{eq:elesbock}
\dots \to H_G^{n-1}(\cdot;\mathbb{R}) \to H_G^{n-1}(\cdot;\mathbb{R}/\mathbb{Z}) \xrightarrow{b_G} H^{n}_G(\cdot;\mathbb{Z}) \to H^{n}_G(\cdot;\mathbb{R}) \to \dots
\end{equation}
be the long exact sequence in cohomology associated to the short exact sequence of coefficients $ \mathbb{Z} \hookrightarrow \mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z}$, with Bockstein map $b_G$.
For the equivariant version of the de Rham sequence, we need the Cartan complex\footnote{One may equivalently use the Weil complex, which Mathai and Quillen have shown \cite{MATHAI198685} to be isomorphic.} of equivariant differential forms on a manifold $X$, $\Omega_G^\ast (X) := [S^\ast(\mathfrak{g}^\vee) \otimes \Omega^\ast(X)]^G$ (equivalently, the Cartan complex consists of the $G$-equivariant polynomial maps $\mathfrak{g} \to \Omega^\ast (X)$), also considered as a functor $\Omega_G^\ast (\cdot)$ from the category of smooth $G$-manifolds to the category of graded abelian groups, with grading given by the differential form degree plus twice the polynomial degree. The differential is given by $d_G \omega (v) := d\omega (v) + \iota_v \omega (v)$, where $v$ denotes either an element of the Lie algebra $\mathfrak{g}$ of $G$ or the corresponding fundamental vector field on $X$ and $\iota$ denotes the contraction of a form with a vector field.
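It is perhaps worth pausing to check that $d_G$ squares to zero, since the check makes essential use of equivariance: for $\omega \in \Omega_G^\ast(X)$ we have
$$d_G^2 \, \omega(v) = (d \iota_v + \iota_v d) \, \omega(v) = \mathcal{L}_v \, \omega(v) = 0,$$
where the first equality uses $d^2 = 0$ and $\iota_v^2 = 0$, the second is Cartan's magic formula, and the third holds because $\mathrm{Ad}_{\exp(tv)} v = v$, so that equivariance of $\omega$ forces the Lie derivative of $\omega(v)$ along the fundamental vector field of $v$ to vanish.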
The equivariant de Rham theorem asserts that the cohomology of the complex $\Omega_G^\ast(X)$ under $d_G$ is isomorphic to $H^{\ast}_G(X;\mathbb{R})$. Letting
$\Omega_G^{\ast}(\cdot)_\mathbb{Z}$ denote the subfunctor of $\Omega_G^{\ast}(\cdot)$ that assigns the
subgroup of equivariant forms whose image in $H^{\ast}_G(\cdot;\mathbb{R})$ via the equivariant de Rham theorem intersects the image of $H^{\ast}_G(\cdot;\mathbb{Z})$ under the obvious inclusion, we have the long exact sequence
\begin{equation} \label{eq:elesdr}
\dots \to H_G^{n-1}(\cdot;\mathbb{R}) \to \Omega_G^{n-1}(\cdot)/\Omega_G^{n-1}(\cdot)_{\mathbb{Z}} \xrightarrow{d_G} \Omega_G^{n}(\cdot)_{\mathbb{Z}} \to H_G^{n}(\cdot;\mathbb{R}) \to \dots
\end{equation}
Just as for ordinary differential cohomology, our definition of equivariant differential cohomology will make use of these two exact sequences. But we will need a further ingredient.
Given a principal $G$-bundle $P$ on a manifold $M$, we have an isomorphism $H^\ast_G(P;A) \cong H^\ast(M;A)$, since the $G$-action on $P$ is free. Given, furthermore, a connection $\Theta$ on $P$, we have the Cartan map \cite{GuilleminSternberg} $\Theta^*: \Omega^\ast_G(P) \to \Omega^\ast(M)$, which may be constructed as follows. By evaluating a polynomial in $\Omega^\ast_G(P)$ on the curvature of the connection $\Theta$, we obtain a differential form on $P$ of the same degree. This form is basic, meaning that it is both $G$-invariant and horizontal, {\em i.e.} when evaluated on tangent vectors, it yields zero if any of those vectors are tangent to a fibre. But such basic forms are isomorphic to forms on the base, with the isomorphism given by pullback along the bundle map. In this way, we obtain a chain map $\Theta^*: \Omega^\ast_G(P) \to \Omega^\ast(M)$ which turns out to be a homotopy inverse to the composite map $\Omega^\ast(M) \to \Omega^\ast_{\text{basic}}(P) \to \Omega^\ast_G(P)$, where the first map is pullback along the bundle map and the second map is inclusion. Thus, given the data $(M,P,\Theta)$ we can construct maps for all of the objects in the outer hexagon of the equivariant version of the diagram (\ref{eq:character diagram DC}). It is thus natural to ask that a corresponding map exist for equivariant differential cohomology and this will form the second part of the definition.
We thus arrive at the following
\begin{defn}[\cite{redden2016differential}, Prop. 4.18]
Let $M$ be a smooth manifold, let $P$ be a principal $G$-bundle over $M$ with connection $\Theta$, and let $X$ be a $G$-manifold.
An {\em equivariant differential cohomology theory} is a functor $\widehat{H}_G^{\ast}(\cdot)$, together with four natural transformations $i_G$, $j_G$, $\mathrm{curv}_G$, and $\mathrm{char}_G$, such that
\begin{enumerate}
\item[a.] for any $G$-manifold $X$, the diagram
\begin{equation} \label{eq:character diagram EDC}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & H_G^{n-1}(X;\mathbb{R}/\mathbb{Z}) \arrow[rr,"b_G" description] \arrow[dr, hookrightarrow, "j_G" description] & {} & H_G^{n}(X;\mathbb{Z}) \arrow[dr] & {} & {} \\
{} & H_G^{n-1}(X;\mathbb{R}) \arrow[ur] \arrow[dr] & {} & \widehat{H}_G^{n}(X) \arrow[ur, twoheadrightarrow, "\mathrm{char}_G" description] \arrow[dr, twoheadrightarrow, "\mathrm{curv}_G" description] & {} & H_G^{n}(X;\mathbb{R}) & {} \\
{} & {} & \Omega_G^{n-1}(X)/\Omega_G^{n-1}(X)_{\mathbb{Z}} \arrow[rr,"d_G" description] \arrow[ur, hookrightarrow, "i_G" description] & {} & \Omega_G^{n}(X)_{\mathbb{Z}} \arrow[ur] & {} & {}
\end{tikzcd}
\end{equation}
commutes, with the two diagonals in the centre being exact at $\widehat{H}_G^{n}(X)$, and
\item[b.] for any manifold $M$ and principal $G$-bundle $P \to M$ with connection $\Theta$, there exists a map $\Theta^*: \widehat{H}_G^{n}(P) \to \widehat{H}^{n}(M)$ compatible with the diagrams (\ref{eq:character diagram EDC}) and (\ref{eq:character diagram DC}) and the maps induced by the Cartan map $\Omega^\ast_G(P) \to \Omega^\ast(M)$ and the isomorphism $H^\ast_G(P;\mathbb{Z})\to H^\ast (M;\mathbb{Z})$.
\end{enumerate}
\end{defn}
\begin{theorem}[\cite{redden2016differential}, Prop. 4.18]
Equivariant differential cohomology theories exist, and are unique up to unique isomorphism.
\end{theorem}
With the definition complete, we now describe the construction of the physics action. Recall that our gauge theory data consist of
a closed, oriented $p$-manifold $M$, a principal $G$-bundle $P \to M$ with connection $\Theta$, and a $G$-equivariant map $f:P \to X$. Given an element $h_G \in \widehat{H}^{p+1}_G(X)$, we first form the pullback $f^*h_G \in \widehat{H}^{p+1}_G(P)$. But now we can use the map $\Theta^*: \widehat{H}^{p+1}_G(P) \to \widehat{H}^{p+1}(M)$ used in the definition to get an element in ordinary differential cohomology of degree $p+1$. From here, we can use exactly the same arguments that we made in the previous section to obtain the physics action. In summary, the action is $\langle j^{-1} \Theta^* f^* h_G , [M] \rangle \in \mathbb{R}/\mathbb{Z}$, where the angled brackets denote the canonical pairing between homology and cohomology.
In fact, we can make a much more explicit construction of the action, which will be useful in discussing practical examples. Namely, consider $f^*h_G \in \widehat{H}^{p+1}_G(P)$. Because $P$ is a principal $G$-bundle, the $G$-action on it is free, and so the equivariant integral cohomology is given by $H^{p+1}_G(P) \cong H^{p+1}(P/G) \cong H^{p+1}(M) = 0$, since $M$ is a $p$-manifold. Thus, the map $i_G$ has an inverse and we can form $i_G^{-1} f^*h_G \in \Omega_G^{p}(P)/\Omega_{G}^{p}(P)_{\mathbb{Z}}$. From here, the connection $\Theta$ furnishes the Cartan map $\Omega_G^\ast (P) \to \Omega^\ast (M)$, which preserves degree. In the case at hand, it sends a representative of $i_G^{-1} f^*h_G \in \Omega_G^{p}(P)/\Omega_{G}^{p}(P)_{\mathbb{Z}}$ to a $p$-form on $M$, which can be integrated over $M$ to obtain an exponentiated action that is independent of the choice of representative.
An important feature of equivariant differential cohomology is the following. The unique map that sends all of $X$ to a point is $G$-equivariant and so provides a map
\begin{equation}\label{eq:MapToPt}
\widehat{H}^*_G(\text{pt}) \to \widehat{H}^*_G(X).
\end{equation}
In the special case of ordinary differential cohomology (with $G$ the trivial group), we have $\widehat{H}^*(\text{pt}) \cong \mathbb{Z}$, so nothing new results. But, as we shall soon see, the equivariant differential cohomology of a point is non-trivial (indeed, it is responsible for all topological terms in pure gauge theory, such as Chern--Simons and theta terms) and the map \eqref{eq:MapToPt} can be nontrivial. Indeed, this map can fail to be injective, meaning there is no sense in which the locally symmetric topological actions for a $G$-manifold $X$ `contain' the pure gauge theory actions, as Example \ref{sec:TranslationAct} below shows.
As for ordinary differential cohomology, it is perhaps helpful to give a geometric description of equivariant differential cohomology in low degrees. In degree one, it is isomorphic to the abelian group of $G$-invariant maps from $X$ to $U(1)$, while in degree two it is isomorphic to the abelian group of isomorphism classes of $G$-equivariant principal $U(1)$-bundles on $X$ equipped with a $G$-invariant connection.
\begin{example}[Pure gauge theory]
When $X$ is a point, ordinary differential cohomology is trivial, but equivariant differential cohomology is not. Indeed, the Cartan complex reduces to (invariant) polynomial maps $\mathfrak{g} \to \mathbb{R}$ where the variable has even degree. Thus, for $n$ even (corresponding to odd spacetime dimension), one diagonal in the diagram (\ref{eq:character diagram EDC}) yields
$$\widehat{H}^n_G(\mathrm{pt}) \cong H^n_G(\mathrm{pt};\mathbb{Z}) \cong H^{n}(BG;\mathbb{Z})$$
coinciding with the celebrated classification of Dijkgraaf and Witten \cite{dijkgraaf1990} of Chern--Simons terms in pure gauge theory. Moreover, reading along the other diagonal we have a short exact sequence
$ H^{n-1} (BG;\mathbb{R}/\mathbb{Z}) \hookrightarrow H^{n}(BG;\mathbb{Z}) \twoheadrightarrow \Omega^{n}_G(\mathrm{pt})_{\mathbb{Z}} \subset S^{n/2}(\mathfrak{g}^\vee)^G,$
which, along with the observation that $H_{n-1}(BG;\mathbb{Z})$ is torsion as $n$ is even, provides the starting point for their construction of the action.
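For instance, for $G = SU(2)$ and $n=4$ (corresponding to three spacetime dimensions), $BSU(2) = \mathbb{H}P^\infty$ is 3-connected, so $H^{3}(BSU(2);\mathbb{R}/\mathbb{Z}) = 0$ and the diagram yields
$$\widehat{H}^4_{SU(2)}(\mathrm{pt}) \cong H^4(BSU(2);\mathbb{Z}) \cong \mathbb{Z},$$
recovering the integer quantization of the Chern--Simons level; the corresponding curvatures form a lattice in $S^2(\mathfrak{su}(2)^\vee)^{SU(2)} \cong \mathbb{R}$, generated by a suitably normalized invariant quadratic form (whose normalization we do not fix here).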
Similarly,
for $n$ odd (corresponding to even spacetime dimension), the diagram (\ref{eq:character diagram EDC}) yields
$$\widehat{H}^n_G(\mathrm{pt}) \cong H^{n-1}_G(\mathrm{pt};\mathbb{R}/\mathbb{Z}) \cong H^{n-1}(BG;\mathbb{R}/\mathbb{Z}),$$
characterizing so-called theta terms in pure gauge theory. Here, a rather more straightforward construction of the action is available: we can simply forget the connection and note that isomorphism classes of principal $G$-bundles on $M$ are in 1-1 correspondence with homotopy classes of maps from $M$ to $BG$. Thus, the action can be obtained by taking a representative $g \in [M,BG]$ and evaluating $g_* [M] \in H_{n-1}(BG;\mathbb{Z})$ against the desired class in $H^{n-1}(BG;\mathbb{R}/\mathbb{Z})$ using the canonical pairing between homology and cohomology. \qed
\end{example}
Let us now discuss some particular cases of this example.
\begin{subexample}[Finite groups]\label{sec:exgamma}
When $G=\Gamma$ is a finite group, the Cartan complex is trivial in positive degrees and we get, for all $n \geq 2$,
$$\widehat{H}^n_\Gamma (\mathrm{pt}) \cong H^{n}(B\Gamma;\mathbb{Z}) \cong H^{n-1}(B\Gamma;\mathbb{R}/\mathbb{Z}),$$
corresponding to the group cohomology. Since connections on such bundles are unique, it comes as no surprise that the action for all $n$ may be obtained using the construction for odd $n$ just given. \qed
\end{subexample}
\begin{subexample}[Tori]\label{sec:exu1}
For $G=U(1)$, $\mathbb{C}P^\infty$ is a model for $BU(1)$ and so we get
$$\widehat{H}^\ast_{U(1)}(\mathrm{pt}) = \mathbb{Z} \oplus \mathbb{R}/\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{R}/\mathbb{Z} \oplus \dots $$
Thus, in odd spacetime dimensions we get a Chern--Simons term with integer coupling.
To see why this `quantization of the coupling' is necessary, consider a trivial $U(1)$-principal bundle over an $M$ containing a non-contractible $S^1$. Since the bundle is trivial, every connection can be pulled back to a 1-form $A_\sigma$ on $M$ along a global section $\sigma$ and from this one may construct a form $A_\sigma \wedge d A_\sigma \wedge d A_\sigma \wedge \dots$ on $M$ of top degree and integrate it over $M$. {\em A priori} any real multiple of the integral (reduced modulo $\mathbb{Z}$) yields an exponentiated action, but we must ensure that the result is independent of the choice of global section (which was not part of the given data).
Choosing two sections which differ in their winding around the fibre as they wind around the $S^1$ shows that the coupling must be integer quantized.
In even spacetime dimensions, we have already given a general construction of the action without using the connection, but one can also give a construction which uses it. To wit, one takes an appropriate wedge power of the curvature 2-form on $M$ to obtain a top-degree form on $M$ and integrates over $M$. By the Chern--Weil correspondence, the integral is an integer (and, moreover, is independent of the connection) and coincides with our earlier construction.
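As a concrete illustration (with the curvature 2-form $F$ normalized so as to have integral periods), take two spacetime dimensions, so $n=3$. A class $\theta \in \widehat{H}^3_{U(1)}(\mathrm{pt}) \cong \mathbb{R}/\mathbb{Z}$ then yields the theta term
$$e^{2\pi i S} = \exp \left( 2\pi i \, \theta \int_M F \right),$$
which is well-defined because $\int_M F$ is an integer (the first Chern number of the bundle) and which is manifestly independent of the choice of connection.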
For multiple $U(1)$ factors, we may use the fact that $BG \times BH$ is a model for $B(G \times H)$, together with the K\"{u}nneth formula. \qed
\end{subexample}
\begin{example}[Actions by translations]\label{sec:TranslationAct}
When $G$ acts on itself by translations, we have that $H^\ast_G(G;A) = H^\ast(G/G;A) = H^\ast(\mathrm{pt};A) \cong A$, concentrated in degree zero. From the diagram, we then read off
$$\widehat{H}^0_G(G) \cong \mathbb{Z},\quad \widehat{H}^1_G(G) \cong \mathbb{R}/\mathbb{Z},\quad \widehat{H}^{n \geq 2}_G(G) \cong d_G \Omega_G^{n-1}(G) \cong \Omega_G^{n-1}(G)/d_G \Omega_G^{n-2}(G).$$
So let us examine the Cartan complex $\Omega^\ast_G(G)$. Evaluation of differential forms at the identity $e \in G$ gives a map
$$\text{ev}: \Omega^\ast_G(G) = [S^\ast(\mathfrak{g}^\vee) \otimes \Omega^\ast(G)]^G \to S^\ast(\mathfrak{g}^\vee) \otimes \Lambda^\ast(\mathfrak{g}^\vee).$$
This is an isomorphism, since if $F : \mathfrak{g} \to \Omega^p(G)$ is a $G$-equivariant polynomial map then for any $v \in \mathfrak{g}$ and $h \in G$ we have $F(v)(h) = (h \cdot F(v))(e) = F(h \cdot v)(e) = \text{ev}(F)(h \cdot v)$, and we can use this formula to define a corresponding $F$ given any polynomial map $f : \mathfrak{g} \to \Lambda^p(\mathfrak{g}^\vee)$. Under this isomorphism the differential $d_G$ is the sum of the Lie algebra cohomology differential on $C^\ast(\mathfrak{g} ; S^\ast(\mathfrak{g}^\vee)) \cong S^*(\mathfrak{g}^\vee) \otimes \Lambda^*(\mathfrak{g}^\vee)$ and the Koszul differential on $S^*(\mathfrak{g}^\vee) \otimes \Lambda^*(\mathfrak{g}^\vee)$ (determined by $1 \otimes v \mapsto v \otimes 1$ and $v \otimes 1 \mapsto 0$ for $v \in \mathfrak{g}^\vee$, and the fact that it is a derivation).\footnote{Indeed, this gives an explanation for the chain complex $(\Omega^\ast_G(G), d_G)$ having trivial cohomology in strictly positive degrees: filtering this chain complex by degree of the $\Lambda^*(\mathfrak{g}^\vee)$ factor reduces the differential to the Koszul differential, which has a $G$-equivariant chain contraction given by $v \otimes 1 \mapsto 1 \otimes v$ and $1 \otimes v \mapsto 0$.} In fact, $\Omega^\ast_G(G)$ is nothing but the Weil algebra, introduced by Cartan (reprinted in \cite{GuilleminSternberg}). This can be used to show that
$$\widehat{H}^{2}_G(G) \cong \mathfrak{g}^\vee, \quad\quad \widehat{H}^{3}_G(G) \cong \Lambda^2(\mathfrak{g}^\vee);$$
beyond this the answer depends on the Lie algebra structure, so is somewhat more complicated. Nevertheless, it is always finite-dimensional.
\qed
\end{example}
\begin{subexample}[Tori]\label{sec:exu11}
By way of example, consider the action of $U(1)$ on itself by left translations. Because $U(1)$ is abelian the Lie algebra cohomology differential on $C^*(\mathfrak{u}(1) ; S^*(\mathfrak{u}(1)^\vee))$ is zero, so $\Omega^*_{U(1)}(U(1))$ is identified with the Koszul complex. Thus $\widehat{H}^{n \geq 2}_{U(1)}(U(1))$ vanishes in odd degrees and is $\mathbb{R}$ in even degrees.
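Explicitly (in conventions chosen purely for illustration), write $\Omega^\ast_{U(1)}(U(1)) \cong S^\ast(\mathfrak{u}(1)^\vee) \otimes \Lambda^\ast(\mathfrak{u}(1)^\vee)$ with generators $x$ of degree two and $y$ of degree one, so that the Koszul differential is determined by $d_G y = x$ and $d_G x = 0$. Then
$$\Omega^{2k}_{U(1)}(U(1)) = \langle x^k \rangle, \qquad \Omega^{2k+1}_{U(1)}(U(1)) = \langle x^k y \rangle, \qquad d_G(x^k y) = x^{k+1},$$
so that for $n \geq 2$ the group $\widehat{H}^{n}_{U(1)}(U(1)) \cong d_G \Omega^{n-1}_{U(1)}(U(1))$ is spanned by $x^{n/2}$ when $n$ is even and vanishes when $n$ is odd.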
Thus we obtain
$$\widehat{H}^\ast_{U(1)}(U(1)) \cong \mathbb{Z} \oplus \mathbb{R}/\mathbb{Z} \oplus \mathbb{R} \oplus 0 \oplus \mathbb{R} \oplus \dots $$
and we find that $\widehat{H}^\ast_{U(1)}(\mathrm{pt}) \to \widehat{H}^\ast_{U(1)}(U(1))$ is not injective. Indeed, the theta terms have disappeared compared to the pure gauge theory case. Moreover, the quantization condition on Chern--Simons terms has been removed. These apparently odd results become obvious once one considers the geometric picture. The insistence on having an equivariant map $f:P \to X$ in the data, which corresponds to a section of $P \times_{U(1)} U(1) \cong P$, forces $P$ to be a trivial bundle, so the action corresponding to theta terms becomes trivial. Moreover, we now have a privileged section of $P$, so the requirement that the Chern--Simons action be independent of the section is rendered obsolete. \qed
\end{subexample}
\begin{subexample}[$G=SO(3)$]\label{sec:exso3}
Let us consider the case of degree two, which corresponds to the quantum mechanics of a rigid body, from the geometric viewpoint. The $U(1)$-principal bundles on $SO(3) \cong \mathbb{R} P^3$ are classified by $H^2(\mathbb{R} P^3;\mathbb{Z}) \cong \mathbb{Z}/2$. Suitable representatives are the homogeneous spaces $SO(3)\times U(1) \twoheadrightarrow SO(3)$ and $U(2) \twoheadrightarrow SO(3)$. Since $H^2_{SO(3)}(SO(3);\mathbb{Z}) \cong 0$, we see that only the first of these can admit an $SO(3)$-equivariant action. Because this bundle is trivial, the $SO(3)$-invariant connections descend to $SO(3)$-invariant 1-forms on $SO(3)$, in one-to-one correspondence with $\widehat{H}_{SO(3)}^2(SO(3)) \cong \Omega^2_{SO(3)}(SO(3))_\mathbb{Z} \cong \mathfrak{so}(3)^\vee \cong \mathbb{R}^3$.
The fact that the non-trivial bundle $U(2) \twoheadrightarrow SO(3)$ does not have an $SO(3)$-equivariant action leads us to the conclusion that the global rotational symmetry possessed by a rigid body that is a fermion cannot be gauged. This result, which is similar to the result that we obtained by {\em ad hoc} arguments for the Dirac monopole in \S \ref{sec:dirac}, is similarly consistent with an exact quantum mechanical solution (see, {\em e.g.} \cite{Davighi:2019ffp}), which shows that the energy eigenstates have half-integer spin and so carry projective representations of $SO(3)$, which leads to an anomaly on gauging.\qed
\end{subexample}
\begin{example}[Transitive actions]\label{sec:TransitiveAct}
When $G$ acts transitively on $X$, we have that $X$ is diffeomorphic to $G/H$ for some $H \subset G$. Since $EG$ is a model for $EH$, we immediately obtain $H^\ast_G(G/H;A)\cong H^\ast(BH;A)$. Thus we are reduced to finding a description of $\Omega_G^*(G/H)$.
Analogously to Example \ref{sec:TranslationAct}, evaluation at the identity gives an isomorphism
$$\text{ev} : \Omega_G^*(G/H) = [S^*(\mathfrak{g}^\vee) \otimes \Omega^*(G/H)]^G \to [S^*(\mathfrak{g}^\vee) \otimes \Lambda^* ((\mathfrak{g}/\mathfrak{h})^\vee)]^H;$$
the differential $d_G$ is identified with the sum of the relative Lie algebra cohomology differential on $C^*(\mathfrak{g}, H ; S^*(\mathfrak{g}^\vee))\cong [S^*(\mathfrak{g}^\vee) \otimes \Lambda^* ((\mathfrak{g}/\mathfrak{h})^\vee)]^H$ and the analogous Koszul-like differential on $[S^*(\mathfrak{g}^\vee) \otimes \Lambda^* ((\mathfrak{g}/\mathfrak{h})^\vee)]^H$. This is nothing but Cartan's relative Weil algebra (reprinted in \cite{GuilleminSternberg}).
\qed
\end{example}
\begin{subexample}[$SO(3)/SO(2)$]\label{sec:exso3so2}
Here too we can see that there will be an obstruction to gauging the $SO(3)$ symmetry in quantum mechanics. Indeed, in degree two we have that $H_{SO(3)}^2(S^2;\mathbb{Z})=H^2(BSO(2);\mathbb{Z})\cong \mathbb{Z}$. But nevertheless we can see that the forgetful map to invariant cohomology (which, as we will later show, is isomorphic to $H^2(S^2;\mathbb{Z}) \cong \mathbb{Z}$) does not surject, but rather corresponds to multiplication by 2.
Let us first give an algebraic argument using the Serre exact sequence.
We have a fibration $S^2 \to ESO(3) \times_{SO(3)} S^2 \to BSO(3)$. Since $BSO(3)$ and $S^2$ are both 1-connected, part of the Serre sequence reads
$$ H^2(BSO(3);\mathbb{Z}) \to H_{SO(3)}^2(S^2;\mathbb{Z}) \to H^2(S^2;\mathbb{Z}) \to H^3(BSO(3);\mathbb{Z}) \to H_{SO(3)}^3(S^2;\mathbb{Z}) $$
or
$$ 0 \to \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z}/2 \to 0,$$ so that the map of interest (which is the one induced by inclusion of the fibre) is indeed multiplication by two.
The algebraic result is hardly surprising from the geometric point of view. Indeed,
the principal $U(1)$-bundles over $S^2$ have lens spaces as their total spaces and are classified by an integer $m$. The trivial bundle with $m=0$ evidently admits an $SO(3)$-equivariant action, as does the bundle with $m=2$, being isomorphic to $SO(3) \twoheadrightarrow S^2$. But it seems highly improbable that the bundle with $m=1$, {\em viz.} the Hopf bundle $S^3 \twoheadrightarrow S^2$, admits an equivariant action by $SO(3)$, given that it admits an obvious $SU(2)$-equivariant action in which the centre acts non-trivially. A purely geometric proof that no such action exists can be found easily enough, but we spare the reader the details.
The physics of this example is the following. We imagine an electrically-charged particle moving in the background of a magnetic monopole. There is a rotation symmetry, and we learn that it can only be gauged when the magnetic charge is even. As for the example of the rigid body, this is consistent with the fact that the quantum mechanical energy eigenstates carry a projective representation of the rotation group.
To finish the calculation of equivariant differential cohomology in degree two, we need to compute $\Omega^2_{SO(3)}(S^2)_\mathbb{Z}$. The dual of the chain complex $[S^*(\mathfrak{so}(3)^\vee) \otimes \Lambda^* ((\mathfrak{so}(3)/\mathfrak{so}(2))^\vee)]^{SO(2)}$ in degrees one, two, and three has the form
$$[\mathfrak{so}(3)/\mathfrak{so}(2)]_{SO(2)} \overset{d_G^\vee}\leftarrow [\Lambda^2(\mathfrak{so}(3)/\mathfrak{so}(2)) \oplus \mathfrak{so}(3) \otimes \mathbb{R}]_{SO(2)} \overset{d_G^\vee}\leftarrow [\mathfrak{so}(3) \otimes \mathfrak{so}(3)/\mathfrak{so}(2)]_{SO(2)},$$
where $[\cdot]_G$ denotes the $G$-coinvariants. Letting $\mathfrak{so}(3) = \langle X, Y, Z \rangle$ with $[X,Y]=Z$, $[Z,X]=Y$, and $[Y,Z]=X$ and $\mathfrak{so}(2)=\langle X \rangle$ we easily calculate
$$[\mathfrak{so}(3)/\mathfrak{so}(2)]_{SO(2)}=0 \quad\quad [\Lambda^2(\mathfrak{so}(3)/\mathfrak{so}(2))]_{SO(2)} = \langle Y \wedge Z \rangle \quad\quad [\mathfrak{so}(3)]_{SO(2)} = \langle X\rangle$$
and $[\mathfrak{so}(3) \otimes \mathfrak{so}(3)/\mathfrak{so}(2)]_{SO(2)} = \langle Y \otimes Y, Y \otimes Z\rangle$. In these terms the differential is given by $d_G^\vee(Y \otimes Y)=0$ and $d_G^\vee (Y \otimes Z) = Y \wedge Z + X$. Dualising again, we see that the closed forms in $\Omega^2_{SO(3)}(S^2)$ are 1-dimensional, spanned by an equivariant volume form of $S^2$.
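To spell out the final dualization step: a degree-two cochain $\alpha$ is determined by its values on $Y \wedge Z$ and $X$, and the differential dual to the one computed above gives $(d_G \alpha)(Y \otimes Y) = 0$ and
$$(d_G \alpha)(Y \otimes Z) = \alpha(Y \wedge Z) + \alpha(X),$$
so that the closed degree-two forms are cut out by the single linear condition $\alpha(Y \wedge Z) = -\alpha(X)$, confirming that they form a 1-dimensional space.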
But, as we have seen, the integrality condition picks out the forms corresponding to de Rham classes $2\mathbb{Z} \subset \mathbb{R}$. Nevertheless, equivariant differential cohomology in degree two is isomorphic to $\mathbb{Z}$. In the geometric picture, there is a unique $SO(3)$-invariant connection on the $SO(3)$-equivariant bundle of each even first Chern class $2n$, given by pulling back such a connection on the $SO(3)$-equivariant bundle $SO(3) \to S^2$ (of Chern class 2) along a degree $n$ map $S^2 \to S^2$.\qed
\end{subexample}
\section{Invariant differential cohomology and global symmetry}\label{sec:inv}
Now we wish to consider the case of physics actions that are invariant under a global symmetry. Here, we have a target $X$ with a smooth action by a Lie group $G$ (no longer necessarily compact). An invariant action can be constructed straightforwardly as follows. The $G$-action on $X$ induces an action on the abelian group $\widehat{H}^\ast(X)$ (as well as on all the other objects appearing in (\ref{eq:character diagram IDC})). We define the invariant differential cohomology of $X$, denoted $\widehat{H}^\ast(X)^G$, to be the subgroup of elements of $\widehat{H}^\ast(X)$ that are fixed by the induced $G$-action. Clearly, taking an element $h^G$ in $\widehat{H}^\ast(X)^G$ and performing the construction described in \S\ref{sec:ord} results in a physics action that is $G$-invariant.
Let us give a geometric description in low degrees, as we did in the ordinary and equivariant cases. In degree one, invariant differential cohomology is isomorphic to the $G$-invariant maps from $X$ to $U(1)$, so is in fact isomorphic to equivariant differential cohomology. In degree two, it is isomorphic to the isomorphism classes of principal $U(1)$-bundles with connection whose holonomies are $G$-invariant, which differs from what we found in the equivariant case. In \S \ref{sec:gau}, we will see that there is a natural map from equivariant to invariant differential cohomology, which neither injects nor surjects in general in degree two or higher. The failure to surject leads to the possibility of topological physics actions with global symmetries which cannot be gauged.
Whilst invariant differential cohomology is straightforward to define, it is less easy to give an algebraic characterization. A first observation is that, while taking $G$-invariants is functorial, the functor is only left exact, in general. Thus, whilst it is the case that we do have a commutative diagram
\begin{equation} \label{eq:character diagram IDC}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & H^{n-1}(X;\mathbb{R}/\mathbb{Z})^G \arrow[rr,"b^G" description] \arrow[dr, hookrightarrow, "j^G" description] & {} & H^{n}(X;\mathbb{Z})^G \arrow[dr] & {} & {} \\
{} & H^{n-1}(X;\mathbb{R})^G \arrow[ur] \arrow[dr] & {} & \widehat{H}^{n}(X)^G \arrow[ur, "\text{char}^G" description] \arrow[dr, "\text{curv}^G" description] & {} & H^{n}(X;\mathbb{R})^G & {} \\
{} & {} & \left[\Omega^{n-1}(X)/\Omega^{n-1}(X)_{\mathbb{Z}} \right]^G \arrow[rr,"d^G" description] \arrow[ur, hookrightarrow, "i^G" description] & {} & \Omega^{n}(X)_{\mathbb{Z}}^G \arrow[ur] & {} & {}
\end{tikzcd}
\end{equation}
(where the superscript $^G$ on a map denotes the restriction to the invariant subgroups) in which the two diagonals in the centre are exact at $\widehat{H}^\ast(X)^G$, it is no longer always the case that the outer parts of the diagram make up long exact sequences, nor is it the case that the maps $\text{curv}^G$ and $\text{char}^G$ necessarily surject.
\begin{example}[Circle action by translations]
For a counterexample, it suffices to consider the action of the group $U(1)$ on itself by left translations. In degree one, elements of invariant differential cohomology correspond to $U(1)$-equivariant maps $U(1) \to U(1)$ where the action is trivial on the target and by translation in the source: in other words, constant maps. Being nullhomotopic, such maps do not surject onto $H^1(U(1);\mathbb{Z})^{U(1)} \cong \pi_1(U(1))^{U(1)} \cong \mathbb{Z}$, which includes the classes of maps with non-vanishing winding; though not invariant themselves, such maps are nevertheless homotopic to their translates. Similarly, the map $\text{curv}^G$ corresponds to the derivative, and does not surject, since the derivative of a constant map vanishes, whilst there are non-vanishing invariant integral 1-forms on the circle, namely those forms that are integer multiples of the unit volume form. \qed
\end{example}
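To make the failure of surjectivity in this example concrete (in the angular coordinate $\theta$ on the source circle, and with the action convention $(g \cdot f)(x) = f(g^{-1}x)$, for illustration): the winding-$n$ map $f_n : e^{i\theta} \mapsto e^{in\theta}$ satisfies
$$(e^{i\alpha} \cdot f_n)(e^{i\theta}) = f_n(e^{-i\alpha}e^{i\theta}) = e^{-in\alpha} f_n(e^{i\theta}),$$
so each translate of $f_n$ differs from it by a constant phase: the translates are homotopic to $f_n$, but equal to it only when $n=0$. Correspondingly, $f_n^*\big(\tfrac{d\theta}{2\pi}\big) = n \, \tfrac{d\theta}{2\pi}$, so the invariant integral forms $n \, \tfrac{d\theta}{2\pi}$ with $n \neq 0$ arise from winding maps but not from invariant ones.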
To get a better handle on the maps $\text{curv}^G$ and $\text{char}^G$, it is natural to consider the derived functors of the invariants functor $\cdot^G$, which enable us to extend a left-exact sequence to a long exact sequence. Before doing that, it is desirable to endow differential cohomology with extra structure, namely a topology. Doing so is not only motivated on physical grounds (after all, we expect that physics actions which are close enough to each other should be indistinguishable in experiments), but also allows us to give a more concrete characterization of invariant differential cohomology.
\section{A smooth structure on equivariant differential cohomology}\label{sec:top}
Becker, Schenkel, and Szabo \cite[Appendix A]{Becker:2014tla} have explained how, for manifolds $X$ having finite type\footnote{That is, which admit a finite good cover. In fact it suffices for the integral homology groups of $X$ to be finitely-generated, and this is what we shall assume.}, the terms in the diagram \eqref{eq:character diagram DC} may be given the structure of abelian Fr{\'e}chet--Lie groups such that all the homomorphisms involved are smooth. Here we outline how their construction extends to equivariant differential cohomology i.e.\ the diagram \eqref{eq:character diagram EDC}, and also explain how this makes the rows and diagonals of \eqref{eq:character diagram EDC} smoothly exact and the diagonals smoothly split.
We adopt the notation $H^n_G(X;\mathbb{R})_\mathbb{Z} := \im(H^n_G(X;\mathbb{Z}) \to H^n_G(X;\mathbb{R}))$, and write $\Omega_G^{n}(X)_{\mathrm{cl}}$ for the subspace of $\Omega_G^{n}(X)$ consisting of equivariantly-closed forms.
\vspace{1ex}
\noindent\textbf{The Bockstein sequence}. We begin with the top row of \eqref{eq:character diagram EDC}, given by the Bockstein sequence
$$\cdots \to H_G^{n-1}(X;\mathbb{R}) \to H_G^{n-1}(X;\mathbb{R}/\mathbb{Z}) \overset{b_G}\to H_G^{n}(X;\mathbb{Z}) \to H_G^{n}(X;\mathbb{R}) \to \cdots$$
in equivariant cohomology. We give $H_G^{n}(X;\mathbb{Z})$ the discrete topology, with which it is trivially an abelian Fr{\'e}chet--Lie group. As $X$ has finite type the cohomology groups $H^n_G(X ; \mathbb{R})$ are finite dimensional real vector spaces, so have a unique Lie group structure. The Bockstein sequence provides a short exact sequence
$$0 \to H_G^{n}(X;\mathbb{R})/H_G^{n}(X;\mathbb{R})_\mathbb{Z} \to H_G^{n}(X;\mathbb{R}/\mathbb{Z}) \to \mathrm{tors~} H_G^{n+1}(X;\mathbb{Z}) \to 0.$$
We give $H_G^{n}(X;\mathbb{R})/H_G^{n}(X;\mathbb{R})_\mathbb{Z}$ its standard Lie group structure, and as $\mathrm{tors}(H_G^{n+1}(X;\mathbb{Z}))$ is discrete the group $H_G^{n}(X;\mathbb{R}/\mathbb{Z})$ then has a unique Lie group structure as a disjoint union of cosets of the torus $H_G^{n}(X;\mathbb{R})/H_G^{n}(X;\mathbb{R})_\mathbb{Z}$.
With these choices the Bockstein sequence consists of abelian Fr{\'e}chet--Lie groups and smooth homomorphisms.
\vspace{1ex}
\noindent\textbf{The de~Rham sequence}. We now consider the bottom row of \eqref{eq:character diagram EDC}, given by the de~Rham sequence
$$\cdots \to H_G^{n-1}(X;\mathbb{R}) \to \Omega_G^{n-1}(X)/\Omega^{n-1}_G(X)_{\mathbb{Z}} \xrightarrow{d_G} \Omega_G^{n}(X)_{\mathbb{Z}} \to H_G^{n}(X;\mathbb{R}) \to \cdots.$$
Recall that $\Omega_G^{*}(X) := [S^\ast\mathfrak{g}^\vee \otimes \Omega^*(X)]^G$ is the Cartan model for $G$-equivariant de~Rham forms, with differential $d_G$, and $\Omega_G^{*}(X)_\mathbb{Z}$ denotes the $d_G$-closed forms which, under the equivariant de~Rham isomorphism $H^*(\Omega_G^{*}(X), d_G) \cong H_G^*(X;\mathbb{R})$, represent classes in $H_G^{*}(X;\mathbb{R})_\mathbb{Z}$.
We equip $\Omega^n(X)$ with the weak Whitney $C^\infty$-topology---with which it is a Fr{\'e}chet space---give $\mathfrak{g}$ its usual topology, and take the induced topology on $S^*\mathfrak{g}^\vee \otimes \Omega^*(X)$, with which it is also a (graded) Fr{\'e}chet space. As such it is Hausdorff and so the $G$-fixed points $[S^*\mathfrak{g}^\vee \otimes \Omega^*(X)]^G = \Omega_G^*(X)$ form a closed subspace, and so are also a (graded) Fr{\'e}chet space. The differential $d_G$ is bounded. By the equivariant de Rham theorem there is a short exact sequence
$$0 \to d \Omega_G^{n-1}(X) \to \Omega_G^{n}(X)_{\mathrm{cl}} \to H^n_G(X ; \mathbb{R}) \to 0.$$
As $X$ has finite type $H^n_G(X ; \mathbb{R})$ is a finite-dimensional vector space and so this sequence has a continuous splitting: it follows that $d \Omega_G^{n-1}(X)$ is a closed subspace of $\Omega_G^{n}(X)_{\mathrm{cl}}$, and as $d_G$ is bounded $\Omega_G^{n}(X)_{\mathrm{cl}}$ is a closed subspace of $\Omega_G^{n}(X)$. Thus the exact forms $d \Omega_G^{n-1}(X)$ are a closed subspace of all forms, and so are again a Fr{\'e}chet space. The short exact sequence
$$0 \to d \Omega_G^{n-1}(X) \to \Omega_G^{n}(X)_\mathbb{Z} \to H^n_G(X;\mathbb{R})_\mathbb{Z} \to 0$$
and the fact that $H^n_G(X;\mathbb{R})_\mathbb{Z}$ is discrete thus endows $\Omega_G^{n}(X)_\mathbb{Z}$ with the structure of an abelian Fr{\'e}chet--Lie group. Similarly, considering the short exact sequence
$$0 \to H^{n-1}_G(X;\mathbb{R})_{\mathbb{Z}}\to \Omega_G^{n-1}(X)/d\Omega^{n-2}_G(X) \to \Omega_G^{n-1}(X)/\Omega^{n-1}_G(X)_{\mathbb{Z}} \to 0$$
and using that $d\Omega^{n-2}_G(X)$ is a closed subspace of $\Omega_G^{n-1}(X)$ so that $\Omega_G^{n-1}(X)/d\Omega^{n-2}_G(X)$ is a Fr{\'e}chet space, we obtain an abelian Fr{\'e}chet--Lie group on $\Omega_G^{n-1}(X)/\Omega^{n-1}_G(X)_{\mathbb{Z}}$.
With these choices the de~Rham sequence consists of abelian Fr{\'e}chet--Lie groups and smooth homomorphisms.
\vspace{1ex}
\noindent\textbf{Equivariant differential cohomology}. Consider the diagonal short exact sequence
\begin{equation}\label{eq:char}
0 \to \Omega_G^{n-1}(X)/\Omega_G^{n-1}(X)_\mathbb{Z} \overset{i_G}\to \widehat{H}^n_G(X) \overset{\text{char}_G}\to H_G^n(X; \mathbb{Z}) \to 0
\end{equation}
from \eqref{eq:character diagram EDC}. As we have given $H_G^n(X; \mathbb{Z})$ the discrete topology, this expresses $\widehat{H}^n_G(X)$ as a disjoint union of cosets of the abelian Fr{\'e}chet--Lie group $\Omega_G^{n-1}(X)/\Omega_G^{n-1}(X)_\mathbb{Z}$ and we therefore give each coset a Fr{\'e}chet manifold structure using an identification with $\Omega_G^{n-1}(X)/\Omega_G^{n-1}(X)_\mathbb{Z}$. This defines an abelian Fr{\'e}chet--Lie group structure on $\widehat{H}^n_G(X)$, making the homomorphisms in this short exact sequence smooth.
It remains to see that the other short exact sequence
\begin{equation}\label{eq:curv}
0 \to H_G^{n-1}(X;\mathbb{R}/\mathbb{Z}) \overset{j_G}\to \widehat{H}^n_G(X) \overset{\text{curv}_G}\to \Omega_G^n(X)_\mathbb{Z} \to 0
\end{equation}
now consists of smooth homomorphisms. It suffices to check this when restricted to the path component of the identity.
For $\text{curv}_G$ it follows from the fact that $d_G = \text{curv}_G \circ i_G$ is smooth. For $j_G$ we may use that the identity component of $H_G^{n-1}(X; \mathbb{R}/\mathbb{Z})$ is a quotient space of $H_G^{n-1}(X; \mathbb{R})$, and that the homomorphism $H_G^{n-1}(X; \mathbb{R}) \to \Omega_G^{n-1}(X)/\Omega_G^{n-1}(X)_\mathbb{Z}$ in the de Rham sequence is smooth.
\vspace{1ex}
\noindent\textbf{Smooth exactness and splitness}. Above we have shown that there are various sequences of abelian Fr{\'e}chet--Lie groups and smooth homomorphisms which are exact in the algebraic sense, i.e.\ after neglecting the Fr{\'e}chet manifold structure. But a stronger notion of exactness is available for abelian Fr{\'e}chet--Lie groups:
\begin{defn}
Say that a short exact sequence $0 \to A \to B \to C \to 0$ of abelian Fr{\'e}chet--Lie groups and smooth homomorphisms is \emph{smoothly exact} if
\begin{enumerate}
\item[a.] $A \to B$ is a diffeomorphism onto a submanifold, and
\item[b.] $B \to C$ admits a smooth section on a neighbourhood of the identity.
\end{enumerate}
Alternatively, condition b.\ is equivalent to
\begin{enumerate}
\setcounter{enumi}{1}
\item[b$^\prime$.] $B \to C$ is a smooth principal $A$-bundle.
\end{enumerate}
Say that it is \emph{smoothly split} if there is in addition a smooth homomorphism $C \to B$ right inverse to $B \to C$; equivalently, a smooth homomorphism $B \to A$ left inverse to $A \to B$.
Say that a long exact sequence $\cdots \to A_i \overset{d_i}\to A_{i+1} \overset{d_{i+1}}\to A_{i+2} \to \cdots$ is smoothly exact if each of the short exact sequences $0 \to \ker(d_i) \to A_i \overset{d_i}\to \im(d_i) \to 0$ is (condition a.\ is automatic in this case, as $\ker(d_i)$ is a submanifold of $A_i$ by definition).
\end{defn}
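A familiar example separating these two notions is the sequence
$$0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{R}/\mathbb{Z} \to 0.$$
It is smoothly exact: $\mathbb{Z}$ is an embedded discrete subgroup, and the quotient map admits a smooth section near the identity. It is not smoothly split: a smooth homomorphism $\mathbb{R}/\mathbb{Z} \to \mathbb{R}$ has compact image, and the only compact subgroup of $\mathbb{R}$ is trivial.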
\begin{lemma}\label{lem:TopExact}
With the abelian Fr{\'e}chet--Lie group structures we have described, in the diagram \eqref{eq:character diagram EDC} the rows and diagonals are smoothly exact. Moreover, the diagonals are smoothly split.
\end{lemma}
\begin{proof}
It is easy to see that the top row of \eqref{eq:character diagram EDC} is smoothly exact (the fact that $H^{n}_G(X;\mathbb{Z})$ is discrete makes this especially easy).
For the bottom row we use the argument of \cite[Appendix A.1]{Becker:2014tla}, adapted to the equivariant case. For smooth exactness of
$$0 \to H_G^{n-1}(X;\mathbb{R})/H_G^{n-1}(X;\mathbb{R})_\mathbb{Z} \to \Omega_G^{n-1}(X)/\Omega^{n-1}_G(X)_{\mathbb{Z}} \xrightarrow{d_G} d\Omega_G^{n-1}(X) \to 0$$
it suffices to show that the short exact sequence of Fr{\'e}chet spaces
$$0 \to H_G^{n-1}(X;\mathbb{R}) \to \Omega_G^{n-1}(X)/d\Omega^{n-2}_G(X) \xrightarrow{d_G} d\Omega_G^{n-1}(X) \to 0$$
has a continuous linear splitting. As $H_G^{n-1}(X;\mathbb{R})$ is finite-dimensional by our assumption that $X$ has finite type, this has a continuous linear splitting by an application of the Hahn--Banach theorem for locally convex topological vector spaces. For smooth exactness at $\Omega^n_G(X)_\mathbb{Z}$ we use that its image in $H^n_G(X;\mathbb{R})$ is the lattice $H^n_G(X;\mathbb{R})_\mathbb{Z}$ and so is discrete, hence there is nothing to check. For smooth exactness at $H^n_G(X;\mathbb{R})$ we use that this is a finite-dimensional vector space whose image in $\Omega^n_G(X)/\Omega^n_G(X)_\mathbb{Z}$ is $H^n_G(X;\mathbb{R})/H^n_G(X;\mathbb{R})_\mathbb{Z}$, and $H^n_G(X;\mathbb{R}) \to H^n_G(X;\mathbb{R})/H^n_G(X;\mathbb{R})_\mathbb{Z}$ certainly has a smooth inverse on a neighbourhood of the identity.
Our definition of the abelian Fr{\'e}chet--Lie group structure on $\widehat{H}^n_G(X)$ makes
\begin{equation}\label{eq:ordchar}
0 \to \Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z} \overset{i}\to \widehat{H}^n_G(X) \overset{\text{char}_G}\to H^n_G(X; \mathbb{Z}) \to 0
\end{equation}
smoothly exact by definition. For
\begin{equation}\label{eq:ordcurv}
0 \to H^{n-1}_G(X;\mathbb{R}/\mathbb{Z}) \overset{j}\to \widehat{H}^n_G(X) \overset{\text{curv}_G}\to \Omega^n_G(X)_\mathbb{Z} \to 0,
\end{equation}
we observe that the homomorphisms
$$H^{n-1}_G(X;\mathbb{R}) \to H^{n-1}_G(X;\mathbb{R}/\mathbb{Z}) \quad \text{ and } \quad H^{n-1}_G(X;\mathbb{R}) \to \Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z}$$
have the same kernel, $H^{n-1}_G(X;\mathbb{R})_\mathbb{Z}$, so the identity component of $H^{n-1}_G(X;\mathbb{R}/\mathbb{Z})$ (which we denote with a subscript $0$) may be identified with a subspace of $\Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z}$ and hence of $\widehat{H}^n_G(X)$, verifying condition {\em a}. For condition {\em b}, note that the identity component of $\Omega^n_G(X)_\mathbb{Z}$ is the space $d\Omega^{n-1}_G(X)$ of exact forms, and use the Hahn--Banach argument above to say that the composition
$$\Omega^{n-1}_G(X)/d\Omega^{n-2}_G(X) \to \Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z} = \widehat{H}^n_G(X)_0 \overset{d_G}\to d\Omega^{n-1}_G(X)$$
has a continuous linear, and hence smooth, right inverse, so the right-hand homomorphism does too.
To see that \eqref{eq:ordchar} is smoothly split, observe that as $H^n_G(X;\mathbb{Z})$ is discrete it suffices to show that it splits as discrete groups. Firstly, the Bockstein sequence provides exact sequences
$$0 \to \mathrm{tors~} H^n_G(X; \mathbb{Z}) \to H^n_G(X; \mathbb{Z}) \to H^n_G(X ; \mathbb{R})_\mathbb{Z} \to 0,$$
$$0 \to H^{n-1}_G(X;\mathbb{R})/H^{n-1}_G(X;\mathbb{R})_\mathbb{Z} \to H^{n-1}_G(X;\mathbb{R}/\mathbb{Z}) \to \mathrm{tors~} H^n_G(X; \mathbb{Z}) \to 0.$$
As $H^n_G(X ; \mathbb{R})_\mathbb{Z}$ is free abelian, the first sequence is split and we may choose a splitting of \eqref{eq:ordchar} over the corresponding free abelian group, so it remains to show that \eqref{eq:ordchar} may be split over $\mathrm{tors}(H^n_G(X; \mathbb{Z}))$. For this we use that the second sequence is split because the torus $H^{n-1}_G(X;\mathbb{R})/H^{n-1}_G(X;\mathbb{R})_\mathbb{Z}$ is a divisible abelian group and so injective. Combining a splitting of the second sequence with the map $j : H^{n-1}_G(X;\mathbb{R}/\mathbb{Z}) \to \widehat{H}^n_G(X)$ gives the required splitting of \eqref{eq:ordchar} over $\mathrm{tors}(H^n_G(X; \mathbb{Z}))$.
To see that \eqref{eq:ordcurv} is smoothly split, observe that the de Rham sequence gives an exact sequence
$$0 \to d\Omega^{n-1}_G(X) \to \Omega^n_G(X)_\mathbb{Z} \to H^n_G(X ; \mathbb{R})_\mathbb{Z} \to 0,$$
and, as above, because $H^n_G(X ; \mathbb{R})_\mathbb{Z}$ is free abelian this is (smoothly) split and we may furthermore choose a splitting of \eqref{eq:ordcurv} over the corresponding free abelian group; it remains to show that \eqref{eq:ordcurv} is smoothly split over $d\Omega^{n-1}_G(X)$. But as we have explained above the homomorphism $d_G : \Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z} \to d\Omega^{n-1}_G(X)$ has a smooth right inverse, and composing this with $i : \Omega^{n-1}_G(X)/\Omega^{n-1}_G(X)_\mathbb{Z} \to \widehat{H}^n_G(X)$ gives the required smooth splitting of \eqref{eq:ordcurv} over $d\Omega^{n-1}_G(X)$.
\end{proof}
\section{Characterizing invariant differential cohomology}\label{sec:char}
The operation of forming $G$-invariants is only left-exact, so, applied to the curvature sequence in \eqref{eq:character diagram DC}, it gives an exact sequence
\begin{align*}
0 \to H^{n-1}(X; \mathbb{R}/\mathbb{Z})^G \to &\widehat{H}^n(X)^G \overset{\text{curv}^G}\to \Omega^n(X)_\mathbb{Z}^G
\end{align*}
which need not be surjective on the right. Group cohomology gives a way of extending this to long exact sequences, in particular providing a connecting homomorphism
\begin{align*}
\partial : \Omega^n(X)_\mathbb{Z}^G &\to H^1(G ; H^{n-1}(X; \mathbb{R}/\mathbb{Z}))
\end{align*}
so that the image of $\widehat{H}^n(X)^G$ in $\Omega^n(X)_\mathbb{Z}^G$ is given by the kernel of this homomorphism.
As $G$ is a Lie group which acts smoothly on the terms in \eqref{eq:character diagram DC}, and the diagonals in that diagram are short \emph{smoothly} exact sequences, we may replace the targets of the maps $\partial$ with the corresponding smooth cohomology groups, which we denote by $H^1_{sm}(G;M)$ for a smooth $G$-module $M$. (Specifically, we can take the cohomology of the complex of locally smooth cochains from \cite{WagemannWockel}, denoted $H^*_{loc, s}(G;M)$ there.) This is given by the smooth crossed homomorphisms $ \phi : G \to M$ modulo principal ones.
For a smoothly exact sequence $0 \to M \to M' \to M'' \to 0$ of $G$-modules and $G$-equivariant maps the connecting map $\partial : [M'']^G \to H^1_{sm}(G;M)$ is given as follows. Choose a section $s : M'' \to M'$ which is smooth near the identity: this is possible as the sequence was smoothly exact. Then, given $m'' \in [M'']^G$ let $\partial(m'') : G \to M$ be given by $g \mapsto g \cdot s(m'') - s(m'') \in M$. This is smooth on a neighbourhood of the identity element of $G$ but is also a crossed homomorphism, so is smooth everywhere.
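For completeness, let us check that $\partial(m'')$ is indeed a crossed homomorphism valued in $M$, and that its class is independent of choices. The element $g \cdot s(m'') - s(m'')$ maps to $g \cdot m'' - m'' = 0$ in $M''$, as $m''$ is invariant, so it lies in $M$; and for $g, h \in G$ we have
$$\partial(m'')(gh) = gh \cdot s(m'') - s(m'') = g \cdot \big(h \cdot s(m'') - s(m'')\big) + \big(g \cdot s(m'') - s(m'')\big) = g \cdot \partial(m'')(h) + \partial(m'')(g).$$
A different choice of section $s'$ changes $s(m'')$ by $m_0 := s'(m'') - s(m'') \in M$, hence changes $\partial(m'')$ by the principal crossed homomorphism $g \mapsto g \cdot m_0 - m_0$, so the class in $H^1_{sm}(G;M)$ is well-defined.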
\vspace{1ex}
\noindent\textbf{Actions reachable by flows}. We apply the previous discussion in the case where the action on $X$ of each $g \in G$ (or, more generally, of each element of a generating set) is reachable by the flow of a vector field on $X$. Then $G$ acts trivially on $H^*(X;A)$, so we have
$$H^1_{sm}(G ; H^{n-1}(X; \mathbb{R}/\mathbb{Z})) = \mathrm{Hom}_{sm}(G, H^{n-1}(X; \mathbb{R}/\mathbb{Z})) = \mathrm{Hom}_{sm}(G/[G,G], H^{n-1}(X; \mathbb{R}/\mathbb{Z})),$$
where we have used the fact that any smooth homomorphism to an abelian Lie group factors uniquely through the abelianization $G/[G,G]$ (which is an abelian Lie group with the quotient topology).
The connecting homomorphism $\partial$ is given as follows. Let $\omega \in \Omega^n(X)_\mathbb{Z}^G$ be a $G$-invariant integral form and $g \in G$. Choose a section $s : \Omega^n(X)_\mathbb{Z} \to \widehat{H}^n(X)$ smooth near the identity, and let $\hat{\omega} := s(\omega) \in \widehat{H}^n(X)$; then by definition we have
$$\partial (\omega)(g) = g \cdot \hat{\omega} - \hat{\omega} \in \iota(H^{n-1}(X; \mathbb{R}/\mathbb{Z})) \subset \widehat{H}^n(X).$$
Let $v$ be a vector field on $X$ such that $\exp(v)$ coincides with the action of $g$. As $\exp(v) \cdot -$ is homotopic to the identity via $F(t, x) = \exp(t v) \cdot x : [0,1] \times X \to X$, we may express this using the homotopy formula in differential cohomology as
$$\partial (\omega)(\exp(v)) = \left[\int_{[0,1]} F^*( \text{curv}(\hat{\omega})) \right]= \left[\int_{[0,1]} F^*(\omega)\right] \in H^{n-1}(X;\mathbb{R}/\mathbb{Z}).$$
Now $F^*\omega = \pi_2^*(F(t,-)^* \omega) + dt \wedge \pi_2^*(\iota_v(\omega))$ by a direct calculation, which is $\pi_2^*\omega + dt \wedge \pi_2^*(\iota_v(\omega))$ as $\omega$ is $G$-invariant, and so $\int_{[0,1]} F^*(\omega) = \iota_v(\omega)$.
Hence we find that $\omega$ is in the kernel of $\partial$ only if $\iota_v(\omega)$ is an integral form. (As $\omega$ is closed and $G$-invariant, $\iota_v(\omega)$ is already closed by Cartan's formula.) One easily shows that if this holds for one $v$ such that $g$ acts as $\exp(v)$ then it holds for any other, so the kernel of $\partial$ consists of those forms $\omega$ for which, for each $g \in G$ (or each $g$ in a generating set), there exists a $v$ with $g$ acting as $\exp(v)$ and with $\iota_v(\omega)$ an integral form.
\vspace{1ex}
\noindent\textbf{Connected groups}. When $G$ is connected, an even stronger result holds.
The identity component of $H^{n-1}(X; \mathbb{R}/\mathbb{Z})$ is a torus with Lie algebra $H^{n-1}(X; \mathbb{R})$ so, using that $G$ is connected, taking derivatives at the identity identifies the above group $\mathrm{Hom}_{sm}(G/[G,G], H^{n-1}(X; \mathbb{R}/\mathbb{Z}))$ with a subgroup of $\mathrm{Hom}_\mathbb{R}(\mathfrak{g}/[\mathfrak{g}, \mathfrak{g}], H^{n-1}(X; \mathbb{R}))$. We therefore have an exact sequence
$$0 \to H^{n-1}(X; \mathbb{R}/\mathbb{Z}) \to \widehat{H}^n(X)^G \overset{\text{curv}^G}\to \Omega^n(X)_\mathbb{Z}^G \overset{\partial'}\to \mathrm{Hom}_\mathbb{R}(\mathfrak{g}/[\mathfrak{g}, \mathfrak{g}], H^{n-1}(X; \mathbb{R}))$$
and we wish to describe $\partial'$.
As before, we have that
$$\partial (\omega)(g) = g \cdot \hat{\omega} - \hat{\omega} \in \iota(H^{n-1}(X; \mathbb{R}/\mathbb{Z})) \subset \widehat{H}^n(X).$$
Applying this to $g = \exp(v)$ for $v \in \mathfrak{g}$, using the homotopy formula as above, and taking derivatives we find that
$$\partial' (\omega)([v]) = [\iota_v(\omega)] \in H^{n-1}(X;\mathbb{R})$$
(where $v$ on the right denotes the fundamental vector field on $X$ corresponding to $v \in \mathfrak{g}$). In particular $\ker(\partial)$ consists of those $G$-invariant integral forms $\omega$ whose contraction $\iota_v(\omega)$ is exact for every $v \in \mathfrak{g}$.
This is precisely the so-called Manton condition derived in \cite{Davighi2018}. It shows that a consistent definition of the topological action requires that the curvature form be not just invariant (which for connected $G$ equates to vanishing of the Lie derivative $L_v = \iota_v d + d \iota_v$ and thus implies that $\iota_v \omega$ be closed, since $\omega$ is closed), but rather the stronger condition that $\iota_v \omega$ be exact.
We now make two remarks regarding this Manton condition. The first remark is that it invalidates the classification of invariant WZNW actions given in \cite{DHoker:1994rdl}, because the {\em ad hoc} construction of the action given there involves choices that are manifestly not invariant (the example of quantum mechanics on the torus described below provides a simple counterexample). The second remark is that the condition has an intriguing relation to equivariant differential cohomology which, as we have seen, describes the actions with local symmetry. Indeed, when $G$ is connected, the condition that $\iota_v(\omega)$ be exact is a necessary but not sufficient condition for the closed form $\omega$ to have an equivariantly-closed extension, which is itself a necessary but not sufficient condition for $\omega$ to be the curvature of an element in equivariant differential cohomology. Thus, insisting that the exponentiated action be globally-invariant\footnote{At the purely classical level, the Manton condition is not required for covariance of the Euler-Lagrange equations of motion, though it is required for conservation of the Noether current \cite{Davighi2018}.} already guarantees that one of the conditions required to promote the symmetry to a local one is satisfied.
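Let us also record why $\partial'$ factors through $\mathfrak{g}/[\mathfrak{g},\mathfrak{g}]$, as this gives a useful sufficient condition. For $v = [u,w]$ (up to the sign convention for fundamental vector fields), invariance and closedness of $\omega$ give $L_u \omega = 0$ and $d\iota_w \omega = L_w \omega - \iota_w d\omega = 0$, whence
$$\iota_{[u,w]} \omega = L_u \iota_w \omega - \iota_w L_u \omega = (d\iota_u + \iota_u d)\, \iota_w \omega = d(\iota_u \iota_w \omega).$$
Thus $\iota_v(\omega)$ is automatically exact for $v \in [\mathfrak{g},\mathfrak{g}]$; in particular, when $\mathfrak{g}$ is perfect (for instance when $G$ is connected and semisimple) the Manton condition is automatic and $\text{curv}^G$ surjects onto $\Omega^n(X)_\mathbb{Z}^G$.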
\vspace{1ex}
\noindent\textbf{Transitive actions}. Now let us further suppose that the connected group $G$ acts transitively on $X$, so that $X=G/H$ for some closed subgroup $H$. As usual there is an identification $\Omega^*(G/H)^G = C^*(\mathfrak{g}, H;\mathbb{R})$ with the relative Lie algebra cochains, and so an identification $\Omega^n(G/H)_\mathbb{Z}^G = Z^n(\mathfrak{g}, H;\mathbb{R})_\mathbb{Z}$ with the integral Lie algebra cocycles. This gives a map $H^*(\mathfrak{g}, H;\mathbb{R}) \to H^*(G/H;\mathbb{R})$ (which is well-known to be an isomorphism if $G$ is compact).
In this case the preceding discussion identifies
$$\partial' : \Omega^n(G/H)_\mathbb{Z}^G \to \mathrm{Hom}_\mathbb{R}(\mathfrak{g}/[\mathfrak{g}, \mathfrak{g}], H^{n-1}(G/H; \mathbb{R}))$$
with the map
$$\psi \mapsto ([v] \mapsto [\psi(v \wedge -)]) : Z^n(\mathfrak{g}, H;\mathbb{R})_\mathbb{Z} \to \mathrm{Hom}_\mathbb{R}(\mathfrak{g}/[\mathfrak{g}, \mathfrak{g}], H^{n-1}(\mathfrak{g}, H, \mathbb{R}))$$
followed by $H^{n-1}(\mathfrak{g}, H, \mathbb{R}) \to H^{n-1}(G/H; \mathbb{R})$.
\vspace{1ex}
\noindent\textbf{Splitting invariant differential cohomology}. In the situation above, of a transitive $G$-action, we have a short exact sequence
\begin{equation}\label{eq:GInvSES}
0 \to H^{n-1}(G/H; \mathbb{R}/\mathbb{Z}) \to \widehat{H}^n(G/H)^G \overset{\text{curv}^G}\to \ker(\partial) \to 0,
\end{equation}
with $\ker(\partial) \subset Z^n(\mathfrak{g}, H;\mathbb{R})_\mathbb{Z}$.
\begin{lemma}
The topology on $\widehat{H}^n(G/H)^G$ induced from $\widehat{H}^n(G/H)$ makes it into an abelian Lie group, and with this structure the sequence \eqref{eq:GInvSES} splits as abelian Lie groups.
\end{lemma}
\begin{proof}
We first claim that with this induced topology \eqref{eq:GInvSES} is a principal $H^{n-1}(G/H; \mathbb{R}/\mathbb{Z})$-bundle. By Lemma \ref{lem:TopExact} the curvature sequence is smoothly exact, so is a smooth principal $H^{n-1}(G/H; \mathbb{R}/\mathbb{Z})$-bundle. It therefore remains a principal bundle when restricted to the subspace $\ker(\partial) \subset \Omega^n(G/H)_\mathbb{Z}$, which is \eqref{eq:GInvSES}.
Now $\ker(\partial)$ is an abelian Lie group, as is $H^{n-1}(G/H; \mathbb{R}/\mathbb{Z})$, so as \eqref{eq:GInvSES} is a principal bundle it follows that $\widehat{H}^n(G/H)^G$ admits a unique smooth structure (induced by a local trivialisation) making this sequence an extension of abelian Lie groups. The group $\ker(\partial)$ is isomorphic to $\mathbb{Z}^a \times \mathbb{R}^b$ for some $a$ and $b$. As $\mathbb{Z}^a$ is free abelian we can split the extension \eqref{eq:GInvSES} over this factor, and it remains to show that we can split it over the identity component $\mathbb{R}^b$ of $\ker(\partial)$. But this may be done by choosing a splitting at the level of Lie algebras and then exponentiating in the abelian Lie group $\widehat{H}^n(G/H)^G$.
\end{proof}
\begin{example}[Torus action by translations and a particle moving in a crystal]
On the torus, the volume form is invariant under translations. In local coordinates $(x_1,x_2)$ we have $\omega \propto dx_1 \wedge dx_2$. But $\iota_{\partial/\partial x_1} (dx_1 \wedge dx_2) = dx_2$ and, patching things together, we see that $\iota_{\partial/\partial x_1} \omega$ is a closed, but not exact, 1-form on the torus for all non-zero $\omega$. Thus $\text{curv}^{T^2}$, whose target is isomorphic to $\mathbb{Z}$, is the zero map. The invariant differential cohomology of the torus in degree two is therefore $\widehat{H}^2(T^2)^{T^2} \cong H^1(T^2;\mathbb{R}/\mathbb{Z}) \cong \mathbb{R}/\mathbb{Z} \oplus \mathbb{R}/\mathbb{Z}$, with action given by two `theta terms' corresponding to the two independent non-trivial cycles on the torus.
If we instead consider the action of, say, $\mathbb{Z}/n \oplus \mathbb{Z}/m \subset T^2$, then we find that, for suitable $\omega$, $\iota_v \omega$ is integral (but not exact) for the vector fields $v$ whose flows reach the elements of $\mathbb{Z}/n \oplus \mathbb{Z}/m$. Thus the image of $\text{curv}^{\mathbb{Z}/n \oplus \mathbb{Z}/m}$ consists of the forms $\omega$ whose integral over $T^2$ is a multiple of both $n$ and $m$.
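To make the divisibility explicit (normalising $T^2 = (\mathbb{R}/\mathbb{Z})^2$, for illustration): take $\omega = N \, dx_1 \wedge dx_2$ with $N \in \mathbb{Z}$. The generator of the $\mathbb{Z}/n$ factor acts by $x_1 \mapsto x_1 + \tfrac{1}{n}$, which is the time-one flow of $v = \tfrac{1}{n} \, \partial/\partial x_1$, and
$$\iota_v \omega = \tfrac{N}{n} \, dx_2$$
is an integral form precisely when $n \mid N$; likewise the generator of the $\mathbb{Z}/m$ factor requires $m \mid N$.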
This set-up is realised physically by the quantum mechanics of a particle moving in a square crystal lattice in the presence of a uniform magnetic field. The failure of translation invariance was first observed by Manton \cite{Manton:1983mq}, via an explicit calculation of the wavefunctions corresponding to energy eigenstates, and the connection to topological actions was made in \cite{Davighi2018}. The putative construction of an invariant action described in \cite{DHoker:1994rdl} fails here because it requires an explicit choice of generators of homology $1$-cycles, which cannot be done in a way which is invariant under translations.
\qed
\end{example}
Ref.~\cite{Davighi:2018xwn} discusses a number of other examples arising in physical theories in which the Higgs boson is composite, including one put forward in \cite{Gripaios:2016mmi} which fails to have the desired invariance properties.
\section{Gauging global symmetries}\label{sec:gau}
Now let us turn to the issue of gauging global symmetries. Again we suppose that $G$ is compact. There are obvious natural maps
from all of the equivariant objects in the diagram (\ref{eq:character diagram EDC}) to the ordinary objects in the diagram (\ref{eq:character diagram DC}), obtained by forgetting the $G$-action. These maps are compatible with the maps in the diagram and moreover they factor through the invariant objects in the diagram (\ref{eq:character diagram IDC}).\footnote{More details will be given in an updated version of \cite{redden2016differential} to appear.} In this way, we obtain a natural map from $\widehat{H}^{\ast}_G(X)$, which classifies locally-symmetric physics actions on $X$, to $\widehat{H}^{\ast}(X)^G$, which classifies globally-symmetric physics actions on $X$. In simple terms, given a locally-symmetric action on $X$, we can obtain a globally-symmetric action, defined on each spacetime $M$ by evaluating it on a trivial bundle over $M$ with trivial connection.
Now, this map is neither injective nor surjective, in general. As per our earlier comments regarding the lack of injectivity of the map $\widehat{H}_G^{\ast}(\mathrm{pt}) \to \widehat{H}^{\ast}_G(X)$ in (\ref{eq:MapToPt}), the lack of injectivity here does not admit an interpretation as a `pure gauge theory' contribution; rather, actions in the kernel are simply actions that vanish when both the bundle and connection are taken to be trivial. But the lack of surjectivity has a direct interpretation in terms of actions with global symmetries that cannot be gauged. As we have already remarked, this phenomenon has been observed more than once before. But the ability to compute it systematically using differential cohomology brings a new power to studying 't Hooft anomaly matching in quantum field theory.
Thus, it is of interest to try to characterize both the kernel and cokernel of the map from equivariant to invariant differential cohomology in terms of the corresponding forgetful maps on the other objects in (\ref{eq:character diagram EDC}) and (\ref{eq:character diagram IDC}). Here we find that the characterization is even more difficult than the characterization of invariant differential cohomology. To wit, the fact that the forgetful maps are compatible with the maps in the diagrams (\ref{eq:character diagram EDC}) and (\ref{eq:character diagram IDC}) means that the kernels and cokernels also fit into analogous commutative diagrams, but now the exactness properties are weakened yet further. Indeed, the only tool we have available is the snake lemma. Applying this to, say, the diagonal sequence featuring the curv map (an analogous sequence is obtained for the char map) allows us to conclude only that the sequence
\begin{multline}
0 \to \text{ker} \; U_GH^{n-1}(X;\mathbb{R}/\mathbb{Z}) \to \text{ker} \; U_G\widehat{H}^n(X)\to \text{ker} \; U_G\Omega^n(X)_\mathbb{Z} \to \\ \text{coker} \; U_GH^{n-1}(X;\mathbb{R}/\mathbb{Z}) \to \text{coker} \; U_G\widehat{H}^n(X) \to \text{coker} \; U_G\Omega^n(X)_\mathbb{Z}
\end{multline}
is exact, where we denote the forgetful map at object $A$ by $U_GA$.
Evidently, this sequence constrains the kernel of the map $U_G\widehat{H}(X)$ rather more than it does the cokernel. However, if we are able to characterize the image of the map $\text{curv}^G$, as we did for special $G$-actions in the previous Section, then we are able to strengthen the snake lemma to
\begin{multline}
0 \to \text{ker}\; U_GH^{n-1}(X;\mathbb{R}/\mathbb{Z}) \to \text{ker} \; U_G\widehat{H}^n(X)\to \text{ker} \; U_G\Omega^n(X)_\mathbb{Z} \to \\ \text{coker} \; U_GH^{n-1}(X;\mathbb{R}/\mathbb{Z}) \to \text{coker} \; U_G\widehat{H}^n(X) \to \text{coker}\; U_G\Omega^n(X)_\mathbb{Z}|_{\im \text{curv}^G} \to 0
\end{multline}
so that the cokernel of the map $U_G\widehat{H}^n(X)$ is similarly constrained.
The next example shows, however, that the connecting homomorphism does not vanish, in general.
\begin{example}[Translations on the circle]\label{sec:ugu12}
In the case of $U(1)$ acting on itself by left translation, the diagram for equivariant differential cohomology in degree two reads
\begin{equation} \label{eq:character diagram u1edc}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & 0 \arrow[rr] \arrow[dr, hookrightarrow] & {} & 0 \arrow[dr] & {} & {} \\
{} & 0 \arrow[ur] \arrow[dr] & {} & \mathbb{R} \arrow[ur,twoheadrightarrow] \arrow[dr,twoheadrightarrow] & {} & 0 & {} \\
{} & {} & \mathbb{R} \arrow[rr] \arrow[ur, hookrightarrow] & {} & \mathbb{R} \arrow[ur] & {} & {}
\end{tikzcd}
\end{equation}
while the diagram for invariant differential cohomology reads
\begin{equation} \label{eq:character diagram u1idc}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & \mathbb{R}/\mathbb{Z} \arrow[rr] \arrow[dr, hookrightarrow] & {} & 0 \arrow[dr] & {} & {} \\
{} & \mathbb{R} \arrow[ur] \arrow[dr] & {} & \mathbb{R}/\mathbb{Z} \arrow[ur] \arrow[dr] & {} & 0 & {} \\
{} & {} & \mathbb{R}/\mathbb{Z} \arrow[rr] \arrow[ur, hookrightarrow] & {} & 0 \arrow[ur] & {} & {}
\end{tikzcd}.
\end{equation}
Here we get lucky in that the commutativity properties fix all of the forgetful maps between the diagrams. We find that the kernels thus fit into the commutative diagram
\begin{equation} \label{eq:character diagram u1ker}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & 0 \arrow[rr] \arrow[dr] & {} & 0 \arrow[dr] & {} & {} \\
{} & 0 \arrow[ur] \arrow[dr] & {} & \mathbb{Z} \arrow[ur] \arrow[dr] & {} & 0 & {} \\
{} & {} & \mathbb{Z} \arrow[rr] \arrow[ur] & {} & \mathbb{R} \arrow[ur] & {} & {}
\end{tikzcd}
\end{equation}
while the cokernels fit into the commutative diagram
\begin{equation} \label{eq:character diagram u1coker}
\begin{tikzcd}[row sep=scriptsize,column sep=tiny]
{} & {} & \mathbb{R}/\mathbb{Z} \arrow[rr] \arrow[dr] & {} & 0 \arrow[dr] & {} & {} \\
{} & \mathbb{R} \arrow[ur] \arrow[dr] & {} & 0 \arrow[ur] \arrow[dr] & {} & 0 & {} \\
{} & {} & 0 \arrow[rr] \arrow[ur] & {} & 0 \arrow[ur] & {} & {}
\end{tikzcd}.
\end{equation}
In particular, the snake lemma for the $\text{curv}$ sequence reduces to the statement that
\begin{equation}
0 \to 0 \to \mathbb{Z} \to \mathbb{R} \to \mathbb{R}/\mathbb{Z} \to 0 \to 0
\end{equation}
is exact. Whilst this is certainly true, one sees that the connecting homomorphism does not vanish, so we cannot expect a decoupling of the kernels from the cokernels, in general. \qed
\end{example}
In physics terms, we see that there is no sense in which one can think of the possible topological actions for a gauge theory with matter fields living in $X$ as a `sum' of the topological actions for the ungauged theory and the pure gauge theory, as one's physical intuition might suggest. In some cases (such as $U(1)$ acting on itself by left translation), this intuition is nearly correct, in that these contributions fit into a non-split short exact sequence, but more generally we have neither a surjection on the right nor an injection on the left.
Let us now compare with earlier results \cite{Jack:1989ne,Hull:1989jk,HULL1991379,witten1992,1993JGP10381W,Figueroa-OFarrill:1994vwl}, which linked the obstruction to gauging global symmetries to the obstruction to finding a closed equivariant extension of a given integral form in $\Omega^\ast(X)^G_\mathbb{Z}$. Our example shows that the true situation is rather more subtle. Indeed, since the map $\text{curv}^G$ does not surject in general, we cannot even conclude that $\text{coker} \; U_G\widehat{H}^\ast(X) \to \text{coker} \; U_G\Omega^\ast(X)_\mathbb{Z}$ surjects, let alone that it is an isomorphism. By replacing $\Omega^\ast(X)^G_\mathbb{Z}$ by the image of $ \text{curv}^G$ (which we have characterized in special cases), we do obtain a surjection in the snake lemma. To establish injectivity or otherwise, we must also consider the effect of the map from $H^{\ast-1}_G(X;\mathbb{R}/\mathbb{Z})$ to $H^{\ast-1}(X;\mathbb{R}/\mathbb{Z})^G$, along with the kernels of the other maps. Finally, even when the map $\text{coker} \; U_G\widehat{H}^\ast(X) \to \text{coker} \; U_G\Omega^\ast(X)_\mathbb{Z}$ is an isomorphism, we should note that it is not sufficient to seek closed equivariant extensions of the forms in $\Omega^\ast(X)^G_\mathbb{Z}$. Indeed, there is a further `integrality' condition: the closed equivariant extensions must lie in classes whose image in real equivariant cohomology lies in the image of $H^\ast_G(X;\mathbb{Z})$.
\begin{example}[$SO(3)$ rotation symmetry in quantum mechanics]
We have already discussed the topological terms arising for rigid bodies and the charge-monopole system with a gauged $SO(3)$ rotation symmetry. To see that there is an obstruction to gauging global symmetries using differential cohomology, it remains to compute the invariant differential cohomology in degree two. But since $G=SO(3)$ is connected and has simple Lie algebra, it follows that $\text{curv}^G$ surjects. Moreover, since $G$ acts transitively, the short exact sequence of Lie groups involving $\text{curv}^G$ splits smoothly. Thus we find that for the rigid body we have $\widehat{H}^2(SO(3))^{SO(3)} \cong H^1(SO(3);\mathbb{R}/\mathbb{Z}) \times \Omega^2(SO(3))_\mathbb{Z}^{SO(3)} \cong \mathbb{Z}/2 \times \mathbb{R}^3$, while for the charge-monopole system $\widehat{H}^2(S^2)^{SO(3)} \cong \Omega^2(S^2)_\mathbb{Z}^{SO(3)} = \mathbb{Z}$. Comparing with $\widehat{H}_{SO(3)}^2(SO(3)) \cong \mathbb{R}^3$ and $\widehat{H}_{SO(3)}^2(S^2) = 2\mathbb{Z}$, we see that in both cases there is an obstruction to gauging, despite the fact that there is no difficulty in finding closed equivariant extensions of the corresponding invariant curvature forms.
Since the energy eigenstates in quantum mechanics in both cases carry genuine representations of the universal cover $SU(2)$ of $SO(3)$, we expect the obstructions to gauging to disappear if we consider instead the actions of $SU(2)$ on either $X=SO(3)$ or $X=S^2$. Because $H^2(BSU(2);\mathbb{Z}) \cong H^3(BSU(2);\mathbb{Z}) \cong 0$, the Serre exact sequence yields isomorphisms $H^2_{SU(2)}(X;\mathbb{Z}) \cong H^2(X;\mathbb{Z}) \cong H^2(X;\mathbb{Z})^{SU(2)}$, so there is indeed no obstruction to gauging $SU(2)$.
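For convenience, we record the exact sequence being invoked here. It is the analogue, for the Borel fibration $X \to ESU(2)\times_{SU(2)} X \to BSU(2)$, of the Serre sequence displayed explicitly in the $SO(2)$ example below (a sketch of a standard fact, using $H^1(X;\mathbb{Z})=0$ for both $X=SO(3)$ and $X=S^2$):

```latex
H^1(X;\mathbb{Z}) \to H^2(BSU(2);\mathbb{Z}) \to H^2_{SU(2)}(X;\mathbb{Z})
  \to H^2(X;\mathbb{Z}) \to H^3(BSU(2);\mathbb{Z}).
```

Since the outer groups all vanish, exactness forces the restriction map $H^2_{SU(2)}(X;\mathbb{Z}) \to H^2(X;\mathbb{Z})$ to be an isomorphism.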
\qed
\end{example}
\begin{example}[$SO(2)$ rotation symmetry in quantum mechanics]
In our {\em ad hoc} study of the Dirac monopole in \S \ref{sec:dirac}, we considered only an $SO(2) \cong S^1$ subgroup of the rotation symmetry. Let us check that using differential cohomology yields the same result obtained there. This turns out to be somewhat trickier than for $SO(3)$, because the map $U_{S^1} H^2(S^2;\mathbb{Z})$ does in fact surject (for $SO(3)$ it was multiplication by 2). We must thus work slightly harder to identify the equivariant forms in $\Omega_{S^1}^2(S^2)_{\mathbb{Z}}$.
We begin by computing the equivariant cohomology $H^2_{S^1}(S^2;\mathbb{Z})$ via the Serre spectral sequence for the fibration $S^2 \to ES^1 \times_{S^1} S^2 \to BS^1$, which reads
$$ H^1 (S^2;\mathbb{Z}) \to H^2(BS^1;\mathbb{Z}) \to H^2_{S^1} (S^2;\mathbb{Z}) \to H^2(S^2;\mathbb{Z}) \to H^3(BS^1;\mathbb{Z}),$$
that is,
$$ 0 \to \mathbb{Z} \to H^2_{S^1} (S^2;\mathbb{Z}) \to \mathbb{Z} \to 0.$$
This tells us not only that $H^2_{S^1}(S^2;\mathbb{Z}) \cong \mathbb{Z} \oplus \mathbb{Z}$ but also that it is generated by a class $\bar{u}$ restricting to a generator $u$ of $H^2(S^2;\mathbb{Z})$ and the first Chern class $c_1$ pulled back from $BS^1$. Now $c_1$ is canonical but $\bar{u}$ involves a choice, which needs to be normalised, since $\bar{u} + k c_1$ also restricts to $u \in H^2(S^2;\mathbb{Z})$ for any $k \in \mathbb{Z}$. We fix the normalisation by insisting that $s^\ast \bar{u}=0,$
where $s:\text{pt} \to S^2$ is the equivariant map given by inclusion of the South pole. These equivariant integral cohomology classes $\bar{u}$ and $c_1$ determine real cohomology classes, which we denote by the same symbols.
Now consider the equivariant volume form $\bar{\omega} = \omega - x_1 dt/2$ introduced in \S \ref{sec:dirac}. This restricts to $u \in H^2(S^2 ; \mathbb{R})$, and pulls back to $dt/2 = c_1/2$ under $s^*$. Thus with our choice of normalisation we have
$$[\bar{\omega}] = \bar{u} + c_1/2 \in H^2_{S^1}(S^2 ; \mathbb{R}).$$
In particular this is not in the lattice generated by $\bar{u}$ and $c_1$, so it is not in the image of integral equivariant cohomology in real equivariant cohomology. (Note that a different normalisation of $\bar{u}$ changes it by an {\em integral} multiple of $c_1$, so this fact does not depend on the choice of normalisation.)
\qed
\end{example}
\begin{example}[WZNW models in dimension 2 and non-abelian bosonization]
In Ref.~\cite{Witten:1983ar}, Witten showed how the bosonic sigma model based on the homogeneous space $X=O(n)\times O(n)/O(n)$ with the usual kinetic term and a WZNW term is dual, for suitable values of the couplings, to the theory of $n$ free Majorana fermions, with $O(n)\times O(n)$ representing the fermionic chiral symmetries. As such, Witten remarked, there ought to be an obstruction to gauging $O(n)\times O(n)$.
This is easily seen
directly using equivariant differential cohomology. Indeed, the gaugeable WZNW terms are those whose equivariant extension defines a class in real equivariant cohomology lying in the image of integral equivariant cohomology. This image is the zero element, because $H^3_{O(n)\times O(n)} (O(n)\times O(n)/O(n);\mathbb{Z}) = H^3 (BO(n);\mathbb{Z})$. But the cohomology in odd degrees of $BG$ is pure torsion for any compact Lie group $G$, an old result of Borel which can be seen more directly via de Rham's theorem from the Cartan complex $\Omega^\ast_G(\mathrm{pt})$, which is concentrated in even degrees.
\qed
\end{example}
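The Borel/Cartan fact just used can be recorded explicitly. For a compact Lie group $G$ acting on a point, the Cartan complex reduces to invariant polynomials on the Lie algebra with vanishing differential (a standard statement, in the usual conventions):

```latex
\Omega^\ast_G(\mathrm{pt}) \;=\; (\mathrm{Sym}\,\mathfrak{g}^\ast)^G, \qquad d_G = 0,
```

with an invariant polynomial of polynomial degree $k$ placed in cohomological degree $2k$. Hence $H^\ast(BG;\mathbb{R}) \cong H^\ast_G(\mathrm{pt};\mathbb{R})$ is concentrated in even degrees, and the odd-degree integral cohomology of $BG$ can contain only torsion.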
\subsection*{Partial gauging}
It is common in physics to study situations in which only a proper subgroup $K$ of a global symmetry $G$ acting on $X$ is gauged, a classic example being the gauged electromagnetic symmetry of the chiral lagrangian describing hadrons at low energies.
The corresponding topological actions are given by $U_K^{-1} (\widehat{H}^{n+1}(X)^G)$, which turns out to be difficult to characterize in general. When $K$ is a normal subgroup of $G$ (or more generally when $G$ acts on $K$) then the notion of $G$-invariants of $K$-equivariant objects makes sense and we have that $\widehat{H}_K^\ast (X)^G \subset U_K^{-1} (\widehat{H}^\ast (X)^G)$ (and similarly for any of the corresponding objects in the diagram (\ref{eq:character diagram EDC})). This offers us the chance of characterizing at least some of the possible topological actions. In particular, in the case where $G$ is connected, we have that every element $g \in G$ (or at least in a generating set) can be reached from the identity by a $K$-equivariant homotopy, so that $H_K^*(X)^G \cong H_K^*(X)$; furthermore the homotopy formula in $K$-equivariant differential cohomology \cite{kubel1510equivariant} leads to the conclusion that the image of the map $\text{curv}_K^G$ consists of the $K$-equivariant differential forms $\omega \in \Omega_K^*(X)_\mathbb{Z}^G$ such that $\iota_v \omega$ is $K$-equivariantly exact for all $v \in \mathfrak{g}$. We thus obtain a simple short exact sequence characterizing $\widehat{H}_K^\ast (X)^G \subset U_K^{-1}(\widehat{H}^\ast (X)^G)$.
Once again, we find that the situation for topological actions is somewhat more complicated than expected. For the usual non-topological action of a sigma model, gauging a subgroup $K$ of a global symmetry $G$ yields a residual symmetry given by the normalizer $N$ of $K$ in $G$ \cite{Gripaios:2015qya}. Were this to carry over to the topological case, we would expect an isomorphism between $U_K^{-1}(\widehat{H}^\ast(X)^G)$ and $\widehat{H}_K^\ast (X)^N$, but in fact there is no suitable natural map in either direction.
Despite the lack of general theorems, one may still proceed in an {\em ad hoc} fashion, as our final example shows.
\begin{example}[WZNW models in dimension 2 and the connection to anomalies]
Ref.~\cite{witten1992} shows how the obstruction to gauging topological actions for a class of sigma models in dimension 2 is related to perturbative anomalies arising from one-loop diagrams involving fermions in putative ultraviolet descriptions. In that set-up, $X$ is taken to be a compact, simple, and 2-connected Lie group, while $K$ is a simple, 1-connected\footnote{Ref. \cite{witten1992} does not explicitly state that $K$ (written there as $F$) must be connected, but this is needed for the arguments that follow.} subgroup of the product $G=X \times X$ which acts on $X$ by left and right translations.
Let us now recover the results obtained in \cite{witten1992} using differential cohomology. Focussing first on $K$-equivariance, because $K$ is 0-connected, we have $H^2(X;\mathbb{R}/\mathbb{Z})^K = H^2(X;\mathbb{R}/\mathbb{Z})$, which in turn vanishes because $X$ is 2-connected. Hence $\widehat{H}^3_K(X) \cong \Omega^3_K(X)_\mathbb{Z}$. We similarly have that $H^2(X;\mathbb{R}/\mathbb{Z})^G =0$ and, because
$G$ is connected and semi-simple, the map $\text{curv}^G$ surjects, yielding $\widehat{H}^3(X)^G \cong \Omega^3(X)^G_\mathbb{Z}$. So the problem reduces to studying ($K$-equivariant and $G$-invariant) integral forms.
To see the connection to anomalies, it suffices to consider the integral forms on $X$ that are not exact. Each of these corresponds to a class in $H^3(X;\mathbb{Z}) \cong \mathbb{Z}$, for which a generator may be constructed as follows \cite{pressleysegal}. Because $X$ is simple, there is a unique (up to normalisation) $G=X\times X$-invariant inner product $\langle , \rangle$ on the Lie algebra $\mathfrak{x}$ of $X$ and from this we construct a canonical $G$-invariant integral 3-form $\phi = \frac{1}{24\pi} \langle \theta^L, [\theta^L,\theta^L] \rangle $ on $X$ from the (left, say) Maurer-Cartan form $\theta^L$.
Now, the extension $\phi_G(v)= \phi + \frac{1}{4\pi} \langle \theta^L + \theta^R, v \rangle $ is $K$-equivariantly closed if and only if $\langle v_L , v_L \rangle =\langle v_R , v_R \rangle $ for all $v \in \mathfrak{k}$, where $v_{L,R}$ are the projections onto the left and right factors of $v \in \mathfrak{k} \subset \mathfrak{x} \oplus \mathfrak{x}$. This coincides with the condition that the local (a.k.a.\ perturbative) anomalies in the fermionic high-energy description cancel, as explained in \cite{witten1992}.
But now we know that one must go beyond \cite{witten1992} and check the integrality condition, namely that the image of $\phi_G$ in $H^3_K(X;\mathbb{R})$ is in the image of $H^3_K(X;\mathbb{Z})$. We leave this for future work.
\qed
\end{example}
\subsection*{Acknowledgments}
We thank Daniel Freed, Corbett Redden, Yuji Tachikawa, and Edward Witten for correspondence and Christian B\"ar, Avner Karasik, Nakarin Lohitsiri, David Tong, and Carl Turner for discussions. JD and BG are supported by STFC consolidated grant ST/P000681/1, and BG is supported by King’s College, Cambridge.
\section{Introduction}
\label{intro}
Dielectrons were proposed several decades ago~\cite{Shuryak1978} as an important source of information about the hot and
dense medium that can be created in heavy-ion collisions. Since dielectrons are emitted throughout the collision process and do not
interact via the strong interaction, they are ideal probes for all stages of the collision. Moreover, the measurement of virtual
photons, i.e.\ photons which convert internally into dileptons, makes it possible to reduce systematic uncertainties significantly
compared to the measurement of real photons, since the main sources of the background, photons and dielectrons from $\pi^0$ decays,
can be rejected at finite mass.
However, to access information about the medium in heavy-ion collisions, the dielectron production in vacuum and possible cold
nuclear matter effects need to be evaluated. Therefore, reference measurements from proton--proton (pp) and proton--nucleus (p--A)
collisions are necessary.
During Run~1, the LHC provided three different collision systems, i.e.\ pp, p--Pb and Pb--Pb. In these proceedings, preliminary results from pp
collisions at $\ensuremath{\sqrt{s}}\xspace=7$~TeV on the dielectron invariant mass continuum and on direct photons measured via virtual photons are summarized. The
dielectron continuum as a function of invariant mass and dielectron transverse momentum is compared to the expected hadronic sources of
dielectrons in p--Pb collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 5.02$~TeV. In addition, the status of the analysis of Pb--Pb collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 2.76$~TeV
is discussed.
\section{Data analysis}
\label{analysis}
ALICE has capabilities for particle identification in the low transverse momentum ($\ensuremath{p_{\rm T}}\xspace$) regime that are unique at the LHC. Electrons with
transverse momentum $\ensuremath{p_{\rm T}}\xspace^e > 0.2$~GeV/$c$ are identified using combined energy-loss information from the Time Projection Chamber (TPC)
and, in the case of p--Pb and Pb--Pb, the outermost four layers of the Inner Tracking System (ITS). Additionally, the Time-Of-Flight
detector (TOF) is used in the range $ 0.4 < \ensuremath{p_{\rm T}}\xspace^e < 5.0$~GeV/$c$ to reject kaons and protons. The remaining hadron contamination is at most
$1$~\% in pp collisions and up to $10$~\% in Pb--Pb collisions.
When measuring unlike-sign dielectron pairs, $N_{\rm US}$, one of the main challenges is the estimation of the combinatorial background,
which arises from random dielectron combinations and is superimposed on the physics signal. The signal-to-background ratio is of
the order of $10^{-2}$ for pp and p--Pb collisions and about a factor of $10$ lower in central Pb--Pb collisions at $\mee \approx
0.5$~GeV/$c^2$. The combinatorial background is estimated with the same-event like-sign method. This method holds under the assumption that
the physics signal consists only of unlike-sign pairs. The like-sign spectra are normalized via $N_{\rm LS} = 2 \cdot
R \cdot \sqrt{N_{++}N_{--}}$, where $R$ is a correction factor for the difference between the acceptance of unlike-sign pairs and like-sign
pairs and $N_{++}$ and $N_{--}$ are positive and negative like-sign dielectron pairs, respectively. The acceptance correction is calculated
as $R = B_{+-}/(2 \cdot \sqrt{B_{++}B_{--}})$, where $B$ denotes mixed-event distributions. $R$ depends on the minimum single
electron $\ensuremath{p_{\rm T}}\xspace^e$ and is consistent with unity within its statistical uncertainties for the pp and the p--Pb analysis for $\ensuremath{p_{\rm T}}\xspace^e >
0.2$~GeV/$c$. For $\ensuremath{p_{\rm T}}\xspace^e > 0.4$~GeV/$c$ in Pb--Pb collisions, the deviation from unity is of the order of $5$~\% for $\mee < 0.1$~GeV/$c^2$
and approaches unity for increasing mass. The raw signal is calculated as $ S = N_{\rm US} - N_{\rm LS}$.
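The bookkeeping above can be illustrated with a minimal Python sketch (this is not ALICE analysis code; the pair counts below are hypothetical placeholder numbers chosen only to exercise the formulas):

```python
import math

def acceptance_factor(b_pm, b_pp, b_mm):
    # R = B+- / (2 * sqrt(B++ * B--)), from mixed-event pair counts
    return b_pm / (2.0 * math.sqrt(b_pp * b_mm))

def raw_signal(n_us, n_pp, n_mm, r):
    # like-sign background N_LS = 2 * R * sqrt(N++ * N--),
    # raw signal S = N_US - N_LS
    n_ls = 2.0 * r * math.sqrt(n_pp * n_mm)
    return n_us - n_ls

# hypothetical same-event and mixed-event pair counts
r = acceptance_factor(b_pm=2000.0, b_pp=1000.0, b_mm=1000.0)  # -> 1.0
s = raw_signal(n_us=10200.0, n_pp=2500.0, n_mm=2500.0, r=r)   # -> 5200.0
```

In the actual analysis, $R$ and the like-sign counts are of course evaluated differentially, e.g.\ as a function of $\mee$, as described above.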
The data are corrected for detector and reconstruction efficiency via Monte Carlo (MC) simulations. Single electron efficiencies are
calculated as a function of $(\ensuremath{p_{\rm T}}\xspace,\eta,\phi)$. Every electron is weighted with its efficiency in a dielectron generator with realistic
electron and dielectron kinematics.
\begin{figure}[t]
\begin{minipage}{14pc}
\includegraphics[scale = 0.38]{./2012-Oct-11-CocktailComparisonRatio.pdf}
\end{minipage}\hspace{4pc}%
\begin{minipage}{14pc}
\includegraphics[scale = 0.38]{./2014-May-13-ExBin.pdf}
\end{minipage}
\begin{minipage}{14pc}
\includegraphics[scale = 0.38]{./2014-May-13-RvsPt.pdf}
\end{minipage}\hspace{4pc}%
\begin{minipage}{14pc}
\includegraphics[scale = 0.35]{./2014-May-13-DirPhotCross.pdf}
\end{minipage}
\caption{Upper left: Dielectron invariant mass distribution together with the cocktail calculations for pp collisions at $\ensuremath{\sqrt{s}}\xspace = 7$~TeV. Upper
right: Result for the two-component fit of the \mee distribution. Lower left: Fit parameter $r$ as a function of photon \ensuremath{p_{\rm T}}\xspace. Lower right:
Direct photon cross section as a function of transverse momentum together with NLO pQCD
calculations. At low \ensuremath{p_{\rm T}}\xspace only upper limits (95~\% confidence level) can be determined, as indicated by the arrows. The inclusive photon
cross section measured via the photon conversion method (PCM) is also shown.}\label{fig:pp_plots}
\end{figure}
The expected hadronic sources of dielectrons at the moment of freeze-out, the so-called hadronic cocktail, are calculated based on
measured differential cross sections for $\pi^0,\eta,\phi$ and $\ensuremath{{J/\psi}}\xspace$ in pp collisions, and on the charged pion spectrum in p--Pb
collisions. The mass shape of resonances is based on~\cite{Gounaris} and the Dalitz pair mass distributions are following~\cite{KrollWada}.
To estimate the expected yield of correlated dielectrons from semi-leptonic decays of open heavy-flavor mesons, the PYTHIA
event generator~\cite{pythia}, tuned to NLO calculations~\cite{mangano}, is used. The pair distributions are normalized to the cross
sections measured in pp collisions and scaled by the number of binary collisions for p--Pb collisions. See~\cite{ALICE_cocktail} for a
compilation of references. Contributions from hadrons that have not been measured are estimated by $\ensuremath{m_{\rm T}}\xspace$ scaling of the $\pi^0$ cross
section.
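One common implementation of $\ensuremath{m_{\rm T}}\xspace$ scaling evaluates the measured $\pi^0$ spectrum at the $\ensuremath{p_{\rm T}}\xspace$ corresponding to the same transverse mass and applies a per-species normalization. The sketch below is purely illustrative (the exponential spectrum, the normalization factor and the function names are hypothetical, not taken from the analysis):

```python
import math

M_PI0 = 0.135  # pi0 mass in GeV/c^2

def mt_scaled_spectrum(pi0_spectrum, m_h, c_h, pt):
    # evaluate the pi0 spectrum at the pi0 p_T that gives the same
    # transverse mass m_T = sqrt(p_T^2 + m^2) as the hadron h,
    # scaled by a per-species normalization c_h
    mt_sq = pt**2 + m_h**2
    pt_equiv = math.sqrt(max(mt_sq - M_PI0**2, 0.0))
    return c_h * pi0_spectrum(pt_equiv)

# hypothetical exponential pi0 spectrum; eta meson (m = 0.548 GeV/c^2)
pi0 = lambda pt: math.exp(-2.0 * pt)
eta_yield = mt_scaled_spectrum(pi0, m_h=0.548, c_h=0.5, pt=1.0)
```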
\section{Results}
\label{results}
The preliminary results for pp collisions at $\ensuremath{\sqrt{s}}\xspace = 7$~TeV are shown in Figure~\ref{fig:pp_plots}. In the upper left panel, the dielectron
data are compared to the cocktail as a function of invariant mass for integrated pair \ensuremath{p_{\rm T}}\xspace. The hadronic cocktail is consistent with the
dielectron data. Virtual photon production is also studied in pp collisions: virtual photons convert internally into dielectrons, and the relation
between the dielectron invariant mass distribution and the virtual photon yield is given for $\ensuremath{p_{\rm T}}\xspace^{ee} \gg \mee$ by the Kroll-Wada
equation~\cite{KrollWada}.
In the upper right panel of Figure~\ref{fig:pp_plots}, pp data are compared to the cocktail for $2.4 < \ensuremath{p_{\rm T}}\xspace^{ee} < 3.2$~GeV/$c$.
The different sources are indicated as dashed lines. The function $f_{\rm comb} = (1-r)f_{c} + rf_{\rm \gamma,dir}$ is fitted to the data in
the region $0.1 < \mee < 0.4$~GeV/$c^2$, where $f_{c}$ is the cocktail contribution, $f_{\rm \gamma,dir}$ is the photon input from the
Kroll-Wada equation and $r$ is the only fitting parameter. $r$ reflects the ratio of direct over inclusive photons. The result for $r$ as a
function of the photon \ensuremath{p_{\rm T}}\xspace is shown in the lower left panel of Figure~\ref{fig:pp_plots}. Under the assumption that the ratio of
direct over inclusive photons is the same for real and virtual photons, the direct photon cross section can be calculated by $\gamma_{\rm
dir} = r \times \gamma_{\rm incl}$, where $\gamma_{\rm dir}$ and $\gamma_{\rm incl}$ are the direct and inclusive photon yields. The
inclusive photon cross section has been measured via photon conversions, see e.g.~\cite{Wilde}. In the lower right panel of
Figure~\ref{fig:pp_plots}, the direct photon cross section is shown as a function of the photon \ensuremath{p_{\rm T}}\xspace. NLO pQCD
calculations~\cite{Vogelsang} are consistent with the data.
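Because $f_{\rm comb}$ is linear in the single parameter $r$, a least-squares fit admits a closed-form solution, which the following sketch illustrates on synthetic inputs (schematic only; this is not the fitting procedure actually used in the analysis, and the bin contents are invented):

```python
def fit_photon_fraction(data, f_cocktail, f_gamma):
    # least-squares solution of data ~ (1-r)*f_cocktail + r*f_gamma,
    # linear in r: minimize sum_i ((data_i - f_c_i) - r*(f_g_i - f_c_i))^2
    num = sum((d - fc) * (fg - fc)
              for d, fc, fg in zip(data, f_cocktail, f_gamma))
    den = sum((fg - fc) ** 2 for fc, fg in zip(f_cocktail, f_gamma))
    return num / den

# synthetic mass bins: cocktail and virtual-photon shapes, data built with r = 0.1
f_c = [10.0, 6.0, 3.0, 1.5]
f_g = [4.0, 3.0, 2.0, 1.0]
data = [0.9 * fc + 0.1 * fg for fc, fg in zip(f_c, f_g)]
r_fit = fit_photon_fraction(data, f_c, f_g)  # -> 0.1
```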
In the upper left panel of Figure~\ref{fig:pPb_plots}, the dielectron invariant mass spectrum is compared to the cocktail for p--Pb
collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 5.02$~TeV. The cocktail is in good agreement with the data. In the upper right and lower left panels, data are
compared to the cocktail distributions as a function of dielectron transverse momentum for $0.14 < \mee < 0.75$~GeV/$c^2$ and $1.1 < \mee <
3.0$~GeV/$c^2$, respectively. The data are well described by the cocktail. These two mass regions are of special interest. The mass
region $0.14 < \mee < 0.75$~GeV/$c^2$ is sensitive to hot hadronic medium effects. The region $1.1 < \mee < 3.0$~GeV/$c^2$ is dominated by
semi-leptonic decays of heavy-flavour mesons. In this mass region, heavy quark pair correlations can be studied.
In the lower right panel of Figure~\ref{fig:pPb_plots}, the raw yield as a function of invariant mass is shown for Pb--Pb collisions in
$0-10$~\% centrality at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 2.76$~TeV. Further analysis of this spectrum will allow the study of the virtual photon yield and
the exploration of a possible low-mass enhancement in Pb--Pb collisions.
\begin{figure}[t]
\begin{minipage}{14pc}
\includegraphics[scale = 0.34]{./2014-May-13-mee_pt200.pdf}
\end{minipage}\hspace{4pc}%
\begin{minipage}{14pc}
\includegraphics[scale = 0.34]{./2014-May-13-pt2_pt200.pdf}
\end{minipage}
\begin{minipage}{14pc}
\includegraphics[scale = 0.34]{./2014-May-13-pt4_pt200.pdf}
\end{minipage}\hspace{4pc}%
\begin{minipage}{14pc}
\includegraphics[scale = 0.34]{./2014-May-13-hSubtractedGeV-0010-ptee1_0to2_0.pdf}
\end{minipage}
\caption{The dielectron mass distribution is compared to the cocktail calculations for p--Pb collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 5.02$~TeV as a
function of invariant mass (upper left panel) and as a function of pair transverse momentum for the mass intervals $0.14 < \mee <
0.75$~GeV/$c^2$ (upper right panel) and $1.1 < \mee < 3.0$~GeV/$c^2$ (lower left panel). In the lower right panel, the raw dielectron yield
is shown for Pb--Pb collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 2.76$~TeV.}\label{fig:pPb_plots}
\end{figure}
\section{Summary and outlook}
\label{conclusions}
The dielectron invariant mass spectrum measured in pp collisions at $\ensuremath{\sqrt{s}}\xspace = 7$ TeV is consistent with the expectation from hadronic sources.
The same is observed for the invariant mass and transverse momentum distributions of dielectrons in p--Pb collisions at 5.02 TeV. The direct
photon yield extracted from dielectron data in pp collisions at 7 TeV is consistent with NLO pQCD calculations. The study of dielectron
production in Pb--Pb collisions at $\ensuremath{\sqrt{s_{\rm NN}}}\xspace = 2.76$~TeV is ongoing. \\
At the end of Run~$2$, statistical uncertainties will be reduced significantly. After the second long shutdown at the LHC, which is expected
to end in $2019$, ALICE will run with upgraded detector components~\cite{ALICE_upgrade}. The upgrade of the ITS will allow high precision
vertexing to measure and reject dielectrons from correlated heavy-flavour decays, and the continuous read-out of the TPC will make it
possible to take full advantage of the high luminosity of the upgraded LHC. Hence, detailed studies of dielectron production will become
feasible in Pb--Pb collisions.
\section{Introduction}
One of the approaches to proving integrality of the $(q,t)$-Kostka
coef\/f\/icients is the idea, due to Kirillov--Noumi \cite{kin,kin1} and
Lapointe--Vinet \cite{lv,lv1}, of using raising operators for Macdonald
polynomials. (See also \cite{gr,gt,kn,sa1} for other
approaches.) In their proof Kirillov and Noumi give an explicit construction
of such raising operators for the Macdonald polynomials $J_{\lambda}\left(
x;q,t\right) $ for the root system of type $A_{n-1}$. They also pose the
problem of f\/inding analogous operators for the six-parameter Koornwinder
polynomials corresponding to the $BC_{n}$ root system.
This question was also raised by Tom Koornwinder at the Edinburgh conference
on symmetric functions organized by Vadim Kuznetsov. The case $n=1$
corresponds to the celebrated Askey--Wilson polynomials and Koornwinder's paper
\cite{ko1} from that conference contains partial results in this direction as
well as a survey of earlier results.
In this paper we construct such raising/lowering operators for Askey--Wilson
polynomials. In fact we describe \emph{two} such pairs of operators, which
result from constructions involving very dif\/ferent techniques.
The f\/irst technique is quite elementary, and depends only on the ``classical''
properties of these polynomials, \textit{viz.} the $q$-dif\/ference equation and
the three term recurrence. Therefore it can be applied to all the polynomials
in the Askey scheme. After this work was completed, we obtained a recent
preprint by T.~Koornwinder \cite{ko2}, the main result of which is very close
to this approach. Also through \cite{ko2} we discovered still earlier work
of G.~Bangerezako \cite{b} which obtains similar operators based on an
\textit{ad-hoc} factorization of the Askey--Wilson operator. Our approach
however is more direct and quite short.
The second technique is less elementary and involves the one-variable version
of the powerful Hecke algebra method as described in \cite{m,n,
ns,sa2,sa3,st}. This approach is related to a
fairly remarkable mathematical object~-- the double af\/f\/ine Hecke algebra
(see \cite{c1, c2,e,sa2}). The calculations, while
non-trivial to carry out, are conceptually rather straightforward. The
raising/lowering operators so obtained are dif\/ferent from those coming from
the ``classical'' method. This method also provides a new factorization of the
Askey--Wilson operator described in Lemma~\ref{factor}, which is much simpler
than that of Bangerezako.
In subsequent work, we hope to extend these methods to construct raising
operators for Koornwinder polynomials \cite{ko,v,sa2,sa3}.
\section{The classical approach}
\subsection{Askey--Wilson polynomials}
The $q$-hypergeometric series is given by the formula%
\[
{}_{r}\phi_{s}\left( \left.
\genfrac{}{}{0pt}{}{a_{1},\ldots,a_{r}}{b_{1},\ldots,b_{s}}%
\right| q;y\right) =\sum_{k\geq0}\frac{\left( a_{1},\ldots,a_{r}\right)
_{k}}{\left( b_{1},\ldots,b_{s}\right) _{k}}\left( -1\right) ^{\left(
1+s-r\right) k}q^{\left( 1+s-r\right) \binom{k}{2}}\frac{y^{k}}{\left(
q\right) _{k}},
\]
where the ``$q$-Pochhammer symbols'' are def\/ined by
\begin{gather*}
\left( a,b,c,\ldots\right) _{k} :=\left( a\right) _{k}\left(
b\right) _{k}\left( c\right) _{k}\cdots,\\
\left( a\right) _{k} :=\left( 1-a\right) \left( 1-aq\right)
\cdots\big( 1-aq^{k-1}\big).
\end{gather*}
The Askey--Wilson polynomials \cite{aw} are def\/ined by the formula%
\[
P_{n}\left( z;a,b,c,d|q\right) =\,\frac{\left( ab,ac,ad\right) _{n}}%
{a^{n}\left( abcdq^{n-1}\right) _{n}}\,_{4}\phi_{3}\left( \left.
\genfrac{}{}{0pt}{}{q^{-n},abcdq^{n-1},az,az^{-1}}{ab,ac,ad}%
\right| q;q\right).
\]
Since $\left( q^{-n}\right) _{k}$ vanishes for $k>n$, we have%
\[
_{4}\phi_{3}\left( \cdots\right) =\sum_{k=0}^{n}\left[ \frac{\left(
abcdq^{n-1}\right) _{k}}{\left( ab,ac,ad\right) _{k}}\right] \left[
\frac{\left( q^{-n}\right) _{k}q^{k}}{\left( q\right) _{k}}\right]
\left( az\right) _{k}\left( az^{-1}\right) _{k}.
\]
It follows that $P_{n}$ is a Laurent polynomial of degree $n,$ which is
moreover symmetric in $z$ and~$z^{-1}$ and is of the form
\[
P_{n}\left( z;a,b,c,d|q\right) =\left( z^{n}+z^{-n}\right) +\ \text{lower
terms}.
\]
It is also symmetric in $\left\{ a,b,c,d\right\} $ although this is not
entirely obvious from the formula above.
We have chosen to normalize $P_{n}$ in order to make it monic. Of course there
are several other possible normalizations, and we discuss some of these below.
First of all, we remark that formula (3.1.7) of \cite{ks} considers the
polynomial
\[
\frac{\left( ab,ac,ad\right) _{n}}{a^{n}}\,_{4}\phi_{3}\left( \left.
\genfrac{}{}{0pt}{}{q^{-n},abcdq^{n-1},az,az^{-1}}{ab,ac,ad}%
\right| q;q\right)
\]
which is $\left( abcdq^{n-1}\right) _{n}$ times our $P_{n}.$
Next, since the Askey--Wilson polynomial is symmetric in $z$, $z^{-1},$ it can be
expressed as an (ordinary) polynomial of degree $n$ in%
\[
x=\left( z+z^{-1}\right) /2.
\]
The function $p_{n}\left( x\right) $ considered in (3.1.5) of \cite{ks} is
monic in $x$, and hence is related to our normalization $P_{n}$ by the formula%
\[
p_{n}\left( \frac{z+z^{-1}}{2}\right) =2^{-n}P_{n}\left( z\right).
\]
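As a numerical sanity check on this normalization, the def\/ining ${}_{4}\phi_{3}$ sum can be evaluated directly. The Python sketch below (helper names and parameter values are ours, chosen only for illustration) conf\/irms both the $z\leftrightarrow z^{-1}$ symmetry and the monic leading coef\/f\/icient:

```python
def qpoch(x, q, k):
    """q-Pochhammer symbol (x)_k."""
    out = 1.0
    for j in range(k):
        out *= 1.0 - x * q**j
    return out

def aw_P(n, z, a, b, c, d, q):
    """Monic Askey-Wilson Laurent polynomial P_n(z; a,b,c,d | q),
    evaluated from the defining (terminating) 4-phi-3 sum."""
    pref = (qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
            / (a**n * qpoch(a*b*c*d * q**(n - 1), q, n)))
    total = 0.0
    for k in range(n + 1):
        num = (qpoch(q**(-n), q, k) * qpoch(a*b*c*d * q**(n - 1), q, k)
               * qpoch(a*z, q, k) * qpoch(a/z, q, k))
        den = (qpoch(a*b, q, k) * qpoch(a*c, q, k)
               * qpoch(a*d, q, k) * qpoch(q, q, k))
        total += num / den * q**k
    return pref * total

a, b, c, d, q = 0.3, 0.2, -0.4, 0.1, 0.5
# Symmetry in z <-> 1/z (it holds term by term in the sum):
assert abs(aw_P(3, 1.7, a, b, c, d, q) - aw_P(3, 1 / 1.7, a, b, c, d, q)) < 1e-12
# Monic normalization: P_n(z) / z^n -> 1 as |z| grows:
assert abs(aw_P(3, 1e6, a, b, c, d, q) / 1e6**3 - 1.0) < 1e-4
```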
Finally, the polynomials $P_{n}$ are orthogonal with respect to the inner product
$\left\langle \cdot ,\cdot \right\rangle $ def\/ined in (3.1.2) of \cite{ks}. If we def\/ine
\begin{gather}
Q_{n} =\gamma_{n}P_{n},\label{gamma-n}
\end{gather}
where
\begin{gather*}
\gamma_{n} =\frac{(abq^{n},acq^{n},adq^{n},bcq^{n}
,bdq^{n},cdq^{n},q^{n+1})_{\infty}}{\left( abcdq^{2n}\right) _{\infty}
}\left( abcdq^{n-1}\right) _{n}.
\end{gather*}
then $Q_{n}$ is dual to $P_{n}$ in the sense that
\[
\left\langle P_{m},Q_{n}\right\rangle =\delta_{m,n}.
\]
\subsection{Raising and lowering operators}
The main results of this section are the following raising and lowering
operators for the Askey--Wilson polynomials:
\begin{theorem}
\label{one} For all $n>1,$ the Askey--Wilson polynomials satisfy the relations%
\begin{gather*}
\left[ D\left( z+z^{-1}\right) -\lambda_{n-1}\left( z+z^{-1}\right)
-\alpha_{n}\left( \lambda_{n}-\lambda_{n-1}\right) \right] P_{n}
=\left( \lambda_{n+1}-\lambda_{n-1}\right) P_{n+1},\\
\left[ D\left( z+z^{-1}\right) -\lambda_{n+1}\left( z+z^{-1}\right)
-\alpha_{n}\left( \lambda_{n}-\lambda_{n+1}\right) \right] Q_{n}
=\left( \lambda_{n-1}-\lambda_{n+1}\right) Q_{n-1},
\end{gather*}
where $D$, $\lambda_{n}$ and $\alpha_{n}$ are as in \eqref{Ddef} and
\eqref{alpha-n} below.
\end{theorem}
\begin{proof}
The proof involves two key properties of the Askey--Wilson polynomials.
The f\/irst property is the `$q$-dif\/ference equation' from (3.1.7) of
\cite{ks} which asserts that $P_{n}$ is an eigenfunction for the Askey--Wilson
operator, i.e.
\begin{equation}
DP_{n}\left( z\right) =\lambda_{n}P_{n}\left( z\right) . \label{q-diff}
\end{equation}
The operator and its eigenvalue are def\/ined by
\begin{gather}
D =A\left( z\right) \left( T_{q}-1\right) +A\left( z^{-1}\right)
\left( T_{q^{-1}}-1\right), \label{Ddef}\\
\lambda_{n} =\left( q^{-n}-1\right) \left( 1-abcdq^{n-1}\right)
=\left( q^{-n}+abcdq^{n-1}\right) -\left( 1+abcdq^{-1}\right), \nonumber
\end{gather}
where $A\left( z\right) $ is the rational function%
\begin{equation}
A\left( z\right) =\frac{\left( az,bz,cz,dz\right) _{1}}{\left(
z^{2}\right) _{2}}=\frac{\left( 1-az\right) \left( 1-bz\right) \left(
1-cz\right) \left( 1-dz\right) }{\left( 1-z^{2}\right) \left(
1-qz^{2}\right) } \label{Az}%
\end{equation}
and $T_{q}$ is the shift operator%
\[
T_{q}f\left( z\right) =f\left( qz\right) .
\]
(To forestall possible confusion we emphasize that, in accordance with custom,
we think of $f\left( z\right) $ as a Laurent polynomial rather than as a
function of $z$. This means that we have $T_{q}\left( z^{k}\right)
=q^{k}z^{k}$ rather than $T_{q}\left( z^{k}\right) =q^{-k}z^{k}.$)
The second key property of these polynomials is the `normalized recurrence
relation' from (3.1.5) of \cite{ks} which can be rewritten in the form%
\begin{equation}
\left( z+z^{-1}\right) P_{n}=P_{n+1}+\alpha_{n}P_{n}+\frac{\gamma_{n-1}%
}{\gamma_{n}}P_{n-1}\qquad \text{for} \quad n>1, \label{recurr}%
\end{equation}
where
\begin{equation}
\alpha_{n}=a+1/a-\frac{a\left( bcq^{n-1},bdq^{n-1},cdq^{n-1},q^{n}\right)
_{1}}{\left( abcdq^{2n-2}\right) _{2}}-\frac{\left( abq^{n},acq^{n}
,adq^{n},abcdq^{n-1}\right) _{1}}{a\left( abcdq^{2n-1}\right) _{2}}.
\label{alpha-n}%
\end{equation}
We combine these two properties as follows:
First apply the operators $D-\lambda_{n-1}$ and $D-\lambda_{n+1}$,
respectively, to the recurrence relation to get%
\begin{gather*}
\left( D-\lambda_{n-1}\right) \left( z+z^{-1}-\alpha_{n}\right) P_{n}
=\left( \lambda_{n+1}-\lambda_{n-1}\right) P_{n+1},\\
\left( D-\lambda_{n+1}\right) \left( z+z^{-1}-\alpha_{n}\right) P_{n}
=\frac{\gamma_{n-1}}{\gamma_{n}}\left( \lambda_{n-1}-\lambda_{n+1}\right)
P_{n-1}.
\end{gather*}
Finally simplify, using the $q$-dif\/ference equation (\ref{q-diff}), to get%
\begin{gather*}
\left[ D\left( z+z^{-1}\right) -\lambda_{n-1}\left( z+z^{-1}\right)
-\alpha_{n}\left( \lambda_{n}-\lambda_{n-1}\right) \right] P_{n}
=\left( \lambda_{n+1}-\lambda_{n-1}\right) P_{n+1},\\
\left[ D\left( z+z^{-1}\right) -\lambda_{n+1}\left( z+z^{-1}\right)
-\alpha_{n}\left( \lambda_{n}-\lambda_{n+1}\right) \right] Q_{n}
=\left( \lambda_{n-1}-\lambda_{n+1}\right) Q_{n-1}
\end{gather*}
as desired.
\end{proof}
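Both ingredients of this proof can be checked numerically from the formulas above. The Python sketch below (our helper names; arbitrary illustrative parameter values with $0<q<1$) verif\/ies the eigenvalue equation (\ref{q-diff}) and the recurrence (\ref{recurr}) at a sample point; the ratio $\gamma_{n-1}/\gamma_{n}$ is evaluated from truncated inf\/inite products, which converge rapidly for $|q|<1$:

```python
def qpoch(x, q, k):
    out = 1.0
    for j in range(k):
        out *= 1.0 - x * q**j
    return out

a, b, c, d, q = 0.3, 0.2, -0.4, 0.1, 0.5
e4 = a * b * c * d

def P(n, z):
    """Monic Askey-Wilson polynomial from the defining 4-phi-3 sum."""
    pref = (qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
            / (a**n * qpoch(e4 * q**(n - 1), q, n)))
    return pref * sum(
        qpoch(q**(-n), q, k) * qpoch(e4 * q**(n - 1), q, k)
        * qpoch(a*z, q, k) * qpoch(a/z, q, k) * q**k
        / (qpoch(a*b, q, k) * qpoch(a*c, q, k)
           * qpoch(a*d, q, k) * qpoch(q, q, k))
        for k in range(n + 1))

def A(z):
    return ((1 - a*z) * (1 - b*z) * (1 - c*z) * (1 - d*z)
            / ((1 - z*z) * (1 - q*z*z)))

def D(f, z):
    """Pointwise Askey-Wilson operator A(z)(T_q - 1) + A(1/z)(T_{1/q} - 1)."""
    return A(z) * (f(q*z) - f(z)) + A(1/z) * (f(z/q) - f(z))

def lam(n):
    return (q**(-n) - 1.0) * (1.0 - e4 * q**(n - 1))

def alpha(n):          # the middle recurrence coefficient
    return (a + 1/a
            - a * (1 - b*c*q**(n-1)) * (1 - b*d*q**(n-1)) * (1 - c*d*q**(n-1))
            * (1 - q**n) / ((1 - e4*q**(2*n-2)) * (1 - e4*q**(2*n-1)))
            - (1 - a*b*q**n) * (1 - a*c*q**n) * (1 - a*d*q**n)
            * (1 - e4*q**(n-1)) / (a * (1 - e4*q**(2*n-1)) * (1 - e4*q**(2*n))))

def gamma(n, terms=200):   # truncated infinite products
    num = 1.0
    for x in (a*b*q**n, a*c*q**n, a*d*q**n,
              b*c*q**n, b*d*q**n, c*d*q**n, q**(n + 1)):
        num *= qpoch(x, q, terms)
    return num / qpoch(e4 * q**(2*n), q, terms) * qpoch(e4 * q**(n - 1), q, n)

n, z0 = 2, 1.3
# q-difference equation D P_n = lambda_n P_n:
assert abs(D(lambda z: P(n, z), z0) - lam(n) * P(n, z0)) < 1e-10
# three-term recurrence:
lhs = (z0 + 1/z0) * P(n, z0)
rhs = P(n+1, z0) + alpha(n) * P(n, z0) + gamma(n-1) / gamma(n) * P(n-1, z0)
assert abs(lhs - rhs) < 1e-9
```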
\section{The Hecke algebra approach}
In this section we provide raising/lowering operators for Askey--Wilson
polynomials based on Hecke algebra considerations \cite{sa2,sa3}. Once
again the main idea is quite straightforward, although the calculations are a
little more intricate. The resulting formulas are dif\/ferent and perhaps
slightly simpler.
\subsection{The Hecke algebra}
The key to this approach is the pair of involutions $s_{1}$, $s_{0}$ which act on
Laurent polynomials as follows:%
\[
s_{1}f\left( z\right) =f\left( z^{-1}\right) \qquad \text{and}\qquad s_{0}f\left(
z\right) =f\left( qz^{-1}\right).
\]
Once again we regard these operators as acting on polynomials, rather than
functions, so that we have%
\[
s_{1}\left( z^{k}\right) =z^{-k}\qquad \text{and} \qquad s_{0}\left( z^{k}\right)
=q^{k}z^{-k}.
\]
These operators provide a factorization of the $q$-shift operators, and one
has%
\[
s_{1}s_{0}=T_{q}\qquad \text{and} \qquad s_{0}s_{1}=T_{q^{-1}}.
\]
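Pointwise, this factorization is immediate to verify; a minimal Python sketch (names ours):

```python
q = 0.5

def s1(f):
    """s_1 f(z) = f(1/z)."""
    return lambda z: f(1.0 / z)

def s0(f):
    """s_0 f(z) = f(q/z)."""
    return lambda z: f(q / z)

f = lambda z: z**3 - 2.0 / z + 1.0   # an arbitrary Laurent polynomial
z0 = 1.7
# s_1 s_0 = T_q and s_0 s_1 = T_{q^{-1}}:
assert abs(s1(s0(f))(z0) - f(q * z0)) < 1e-10
assert abs(s0(s1(f))(z0) - f(z0 / q)) < 1e-10
```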
The af\/f\/ine Hecke algebra \cite{sa2,sa3} is the algebra of operators
generated by the two opera\-tors~$T_{0}$ and $T_{1}$ def\/ined as
\[
T_{i}:=t_{i}+r_{i}\left( s_{i}-1\right),
\]
where
\begin{gather}
t_{0} =-cd/q,\qquad r_{0}=\frac{\left( z-c\right) \left( z-d\right)
}{\left( z^{2}-q\right) },\label{rrtt}\\
t_{1} =-ab,\qquad r_{1}=\frac{\left( 1-az\right) \left( 1-bz\right)
}{\left( 1-z^{2}\right) }.\nonumber
\end{gather}
\begin{remark}
\label{Remark} The operator $T_{i}$ as def\/ined here is $t_{i}^{1/2}$ times the
corresponding operator from~\cite{sa2,sa3}. This accounts for the
slight dif\/ference between the formulas here and in~\cite{sa3}.
\end{remark}
From the def\/inition of $T_{1}$ it follows that a polynomial $f$ is symmetric
in $z$, $z^{-1}$, if and only if%
\begin{equation}
T_{1}f=t_{1}f. \label{symm}%
\end{equation}
Consequently, if $g$ is any polynomial, then the quadratic relation
(\ref{quad1}) implies that $\left( T_{1}+1\right) g$ is a symmetric
polynomial. The operators $T_{i}$ are deformations of $s_{i}$ and satisfy a
quadratic relation. For the convenience of the reader unfamiliar with the
Hecke algebra, we give a proof of this relation.
\begin{lemma}
The operators $T_{i}$ satisfy the relation
\begin{equation}
\left( T_{i}-t_{i}\right) \left( T_{i}+1\right) =0. \label{quad1}%
\end{equation}
\end{lemma}
\begin{proof}
Def\/ine $s_{i}\left( r_{i}\right) =r_{i}^{\prime}$; then we claim that
\begin{equation}
r_{i}+r_{i}^{\prime}=t_{i}+1. \label{rrt1}%
\end{equation}
To see this, we calculate for $i=0,$%
\begin{gather*}
r_{0}+r_{0}^{\prime} =\frac{\left( z-c\right) \left( z-d\right)
}{\left( z^{2}-q\right) }+\frac{\left( qz^{-1}-c\right) \left(
qz^{-1}-d\right) }{\left( q^{2}z^{-2}-q\right) }\\
\phantom{r_{0}+r_{0}^{\prime}}{} =\frac{\left( z-c\right) \left( z-d\right) }{\left( z^{2}-q\right)
}-\frac{\left( q-cz\right) \left( q-dz\right) }{q\left( z^{2}-q\right)
}\\
\phantom{r_{0}+r_{0}^{\prime}}{} =\frac{q\left( z^{2}-cz-dz+cd\right) -\left( q^{2}-qcz-qdz+cdz^{2}%
\right) }{q\left( z^{2}-q\right) }\\
\phantom{r_{0}+r_{0}^{\prime}}{} =\frac{qz^{2}+qcd-q^{2}-cdz^{2}}{q\left( z^{2}-q\right) }=\frac{\left(
q-cd\right) \left( z^{2}-q\right) }{q\left( z^{2}-q\right) }\\
\phantom{r_{0}+r_{0}^{\prime}}{} =1-\frac{cd}{q}=1+t_{0}.
\end{gather*}
The calculation for $i=1$ is similar and simpler.
Now the quadratic relation can be proved as follows:%
\begin{gather*}
\left( T_{i}-t_{i}\right) \left( T_{i}+1\right) =r_{i}\left(
s_{i}-1\right) \left[ t_{i}+1+r_{i}\left( s_{i}-1\right) \right]
\nonumber\\
\phantom{\left( T_{i}-t_{i}\right) \left( T_{i}+1\right)}{} =r_{i}\left[ \left( t_{i}+1\right) \left( s_{i}-1\right) +\left(
s_{i}r_{i}\right) \left( s_{i}-1\right) -r_{i}\left( s_{i}-1\right)
\right] \nonumber\\
\phantom{\left( T_{i}-t_{i}\right) \left( T_{i}+1\right)}{}
=r_{i}\left[ \left( t_{i}+1\right) \left( s_{i}-1\right) +\left(
r_{i}^{\prime}s_{i}\right) \left( s_{i}-1\right) -r_{i}\left(
s_{i}-1\right) \right] \nonumber\\
\phantom{\left( T_{i}-t_{i}\right) \left( T_{i}+1\right)}{}
=r_{i}\left[ \left( t_{i}+1\right) \left( s_{i}-1\right)
+r_{i}^{\prime}\left( 1-s_{i}\right) -r_{i}\left( s_{i}-1\right) \right]
\nonumber\\
\phantom{\left( T_{i}-t_{i}\right) \left( T_{i}+1\right)}{}
=r_{i}\left( t_{i}+1-r_{i}-r_{i}^{\prime}\right) \left( s_{i}-1\right)
\, \overset{{\rm by}\ \eqref{rrt1}}{=}\, 0. \tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
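Since each $T_{i}$ acts pointwise once $r_{i}$ is known, the quadratic relation is also easy to test numerically. A Python sketch (names and sample values ours), applied to an arbitrary non-symmetric Laurent polynomial:

```python
a, b, c, d, q = 0.3, 0.2, -0.4, 0.1, 0.5
t0, t1 = -c * d / q, -a * b

def T1(f):
    r1 = lambda z: (1 - a*z) * (1 - b*z) / (1 - z*z)
    return lambda z: t1 * f(z) + r1(z) * (f(1/z) - f(z))

def T0(f):
    r0 = lambda z: (z - c) * (z - d) / (z*z - q)
    return lambda z: t0 * f(z) + r0(z) * (f(q/z) - f(z))

f = lambda z: z**2 + 3.0 / z - 0.5    # not symmetric in z <-> 1/z
z0 = 1.7

# (T_i - t_i)(T_i + 1) f = 0, checked for i = 1 and i = 0:
g1 = T1(f); h1 = lambda z: g1(z) + f(z)        # h1 = (T_1 + 1) f
assert abs(T1(h1)(z0) - t1 * h1(z0)) < 1e-12
g0 = T0(f); h0 = lambda z: g0(z) + f(z)        # h0 = (T_0 + 1) f
assert abs(T0(h0)(z0) - t0 * h0(z0)) < 1e-12

# and T_1 acts by t_1 on symmetric polynomials, as noted in the text:
fsym = lambda z: z**2 + 1.0 / z**2
assert abs(T1(fsym)(z0) - t1 * fsym(z0)) < 1e-12
```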
The following result is an immediate consequence:
\begin{corollary}
The operators $T_{i}$ are invertible, with%
\begin{equation}
t_{i}T_{i}^{-1}=T_{i}-t_{i}+1. \label{quad2}%
\end{equation}
\end{corollary}
We will also need a number of commutation results between the $T_{i}$ and the
operator of multiplication by~$z$. They follow directly from the def\/initions, and
we leave the (easy) proof to the reader.
\begin{lemma}
\label{commute} The operators\ $T_{i}$ satisfy the following commutation
relations%
\begin{gather*}
zt_{0}T_{0}^{-1} =qT_{0}z^{-1}+c+d,\\
\left( T_{1}+1\right) z^{-1} =t_{1}z^{-1}+zT_{1}+a+b,\\
\left( T_{1}+1\right) z =z+t_{1}z^{-1}T_{1}^{-1}-\left( a+b\right).
\end{gather*}
\end{lemma}
\subsection[Nonsymmetric Askey-Wilson polynomials]{Nonsymmetric Askey--Wilson polynomials}
The next ingredient in the Hecke algebra method is the family of nonsymmetric
Askey--Wilson polynomials. These are certain Laurent polynomials, $E_{n}$,
$n\in\mathbb{Z}$, which can be characterized up to multiples as eigenfunctions
of the operator%
\[
Y=T_{1}T_{0}.
\]
More precisely, one has%
\begin{gather}
YE_{n} =\mu_{n}E_{n},\label{mun}
\end{gather}
where
\begin{gather*}
\mu_{n} =\left\{
\begin{array}{lll}
q^{n} & \text{for} & n<0,\\
q^{n}t_{1}t_{0}=q^{n-1}abcd & \text{for} & n\geq0.
\end{array}
\right.
\end{gather*}
The symmetric Askey--Wilson polynomials $P_{\left| n\right| }$ are closely
related to $E_{\pm n}$. Up to normalization, one has $P_{0}=E_{0}=1$, while
for $n>0$ we have up to a scalar
\begin{equation}
P_{\left| n\right| }\sim\left( T_{1}+1\right) E_{\pm n}=c_{n}^{\pm}
E_{n}+c_{-n}^{\pm}E_{-n}. \label{PE}
\end{equation}
The explicit formula for the coef\/f\/icients $c_{n}^{\pm}$ and $c_{-n}^{\pm}$ is
known, but will not be needed in what follows.
We now def\/ine a slight variant of the Askey--Wilson operator, as follows:
\begin{equation}
D^{\prime}=A\left( z\right) \left( T_{q}-s_{1}\right) +A\left(
z^{-1}\right) \left( T_{q^{-1}}s_{1}-1\right). \label{Dprime}
\end{equation}
Observe that $D$ and $D^{\prime}$ have the same action on functions which are
symmetric in $z$ and $z^{-1}$; thus the Askey--Wilson polynomials satisfy%
\[
D^{\prime}P_{n}=DP_{n}=\lambda_{n}P_{n}.
\]
Just as the operators $s_{0}$ and $s_{1}$ factorize the $q$-shift operator, it
turns out that the opera\-tors~$T_{0}$ and $T_{1}$ provide a factorization of
$D^{\prime}$.
\begin{lemma}
\label{factor}The operator $D^{\prime}$ of formula \eqref{Dprime} admits the
following factorization:
\[
D^{\prime}=\left( T_{1}+1\right) \left( T_{0}-t_{0}\right).
\]
\end{lemma}
\begin{proof}
To prove this, we calculate as follows
\begin{gather*}
\left( T_{1}+1\right) \left( T_{0}-t_{0}\right) =\left( t_{1}
+r_{1}\left( s_{1}-1\right) +1\right) \left( r_{0}\left( s_{0}-1\right)
\right) \,
\overset{{\rm by} \eqref{rrt1}}{=} \, \left( r_{1}^{\prime}+r_{1}s_{1}\right) \left( r_{0}s_{0}-r_{0}\right)
\\
\phantom{\left( T_{1}+1\right) \left( T_{0}-t_{0}\right)}{}
=r_{1}^{\prime}r_{0}\left( s_{0}-1\right) +r_{1}\widetilde{r}_{0}\left(
s_{1}s_{0}-s_{1}\right)
=r_{1}^{\prime}r_{0}\left( T_{q}^{-1}s_{1}-1\right) +r_{1}\widetilde
{r}_{0}\left( T_{q}-s_{1}\right), \nonumber
\end{gather*}
where $r_{1}^{\prime}=r_{1}\left( z^{-1}\right) $ and $\widetilde{r}%
_{0}=r_{0}\left( z^{-1}\right) .$
Comparing the formulas for $r_{i}$ (\ref{rrtt}) and $A\left( z\right) $
(\ref{Az})$,$ we conclude that%
\[
A\left( z\right) =r_{1}\widetilde{r}_{0}\qquad {\rm and} \qquad A\left( z^{-1}\right)
=r_{1}^{\prime}r_{0}%
\]
which completes the proof.
\end{proof}
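This factorization, too, can be conf\/irmed pointwise; the Python sketch below (names ours) builds $\left( T_{1}+1\right) \left( T_{0}-t_{0}\right)$ by function composition and compares it with the direct action of $D^{\prime}$ on an arbitrary Laurent polynomial:

```python
a, b, c, d, q = 0.3, 0.2, -0.4, 0.1, 0.5
t0, t1 = -c * d / q, -a * b

def A(z):
    return ((1 - a*z) * (1 - b*z) * (1 - c*z) * (1 - d*z)
            / ((1 - z*z) * (1 - q*z*z)))

def Dprime(f):
    """A(z)(T_q - s_1) + A(1/z)(T_{q^{-1}} s_1 - 1), acting pointwise."""
    return lambda z: A(z) * (f(q*z) - f(1/z)) + A(1/z) * (f(q/z) - f(z))

def T1(f):
    r1 = lambda z: (1 - a*z) * (1 - b*z) / (1 - z*z)
    return lambda z: t1 * f(z) + r1(z) * (f(1/z) - f(z))

def T0(f):
    r0 = lambda z: (z - c) * (z - d) / (z*z - q)
    return lambda z: t0 * f(z) + r0(z) * (f(q/z) - f(z))

f = lambda z: z**2 - 3.0 / z + 0.7    # arbitrary Laurent polynomial
z0 = 1.7
g = T0(f)
h = lambda z: g(z) - t0 * f(z)        # h = (T_0 - t_0) f
lhs = T1(h)(z0) + h(z0)               # (T_1 + 1) h
assert abs(lhs - Dprime(f)(z0)) < 1e-11
```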
\subsection{Raising and lowering operators}
To state our main result we need some notation. We write
\begin{equation}
e_{1}=a+b+c+d,\qquad e_{3}=abc+abd+acd+bcd. \label{e13}
\end{equation}
Also recall that for $n\geq0$, $\lambda_{n}$ is the symmetric Askey--Wilson
eigenvalue as in (\ref{Ddef}). For $n<0$ we def\/ine%
\[
\lambda_{n}=\lambda_{\left| n\right| }%
\]
and for all integral $n$ we set%
\begin{equation}
\beta_{n}=\frac{\lambda_{n}+1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}}e_{1}-\frac
{1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}}e_{3}. \label{beta-n}
\end{equation}
\begin{theorem}
\label{sraise} The Askey--Wilson polynomials satisfy the following relations:
\begin{gather}
\left[ D^{\prime}z+\left( 1-q^{1-n}\right) \left( z+z^{-1}\right)
+\beta_{-n}\right] P_{n} =\left( q^{n}abcd-q^{1-n}\right)
P_{n+1},\qquad n\geq0,\label{sraise1}\\
\left[ D^{\prime}z+\left( 1-q^{n}abcd\right) \left( z+z^{-1}\right)
+\beta_{n}\right] Q_{n} =\left( q^{1-n}-q^{n}abcd\right) Q_{n-1},\qquad
n>0. \label{sraise2}%
\end{gather}
\end{theorem}
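Before turning to the proof, we note that the theorem can be tested numerically directly from the def\/initions. The Python sketch below (our helper names; arbitrary parameter values with $0<q<1$) checks (\ref{sraise1}) and (\ref{sraise2}) at a sample point for small $n$; here (\ref{sraise2}) is divided through by $\gamma_{n}$ and checked in the equivalent form $[\cdots]P_{n}=\big( q^{1-n}-q^{n}abcd\big) (\gamma_{n-1}/\gamma_{n})P_{n-1}$, with the $\gamma$'s evaluated as truncated inf\/inite products:

```python
def qpoch(x, q, k):
    out = 1.0
    for j in range(k):
        out *= 1.0 - x * q**j
    return out

a, b, c, d, q = 0.3, 0.2, -0.4, 0.1, 0.5
e4 = a * b * c * d
e1 = a + b + c + d
e3 = a*b*c + a*b*d + a*c*d + b*c*d

def P(n, z):
    """Monic Askey-Wilson polynomial from the defining 4-phi-3 sum."""
    pref = (qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
            / (a**n * qpoch(e4 * q**(n - 1), q, n)))
    return pref * sum(
        qpoch(q**(-n), q, k) * qpoch(e4 * q**(n - 1), q, k)
        * qpoch(a*z, q, k) * qpoch(a/z, q, k) * q**k
        / (qpoch(a*b, q, k) * qpoch(a*c, q, k)
           * qpoch(a*d, q, k) * qpoch(q, q, k))
        for k in range(n + 1))

def A(z):
    return ((1 - a*z) * (1 - b*z) * (1 - c*z) * (1 - d*z)
            / ((1 - z*z) * (1 - q*z*z)))

def Dprime(f, z):
    """Pointwise action of D' = A(z)(T_q - s_1) + A(1/z)(T_{q^{-1}} s_1 - 1)."""
    return A(z) * (f(q*z) - f(1/z)) + A(1/z) * (f(q/z) - f(z))

def mu(n):
    return q**n if n < 0 else q**(n - 1) * e4

def lam(n):
    m = abs(n)
    return (q**(-m) - 1.0) * (1.0 - e4 * q**(m - 1))

def beta(n):
    den = mu(n - 1) - mu(-n)
    return ((lam(n) + 1.0 - mu(n - 1)) * e1 - (1.0 - mu(n - 1)) * e3) / den

def gamma(n, terms=200):          # truncated infinite products, |q| < 1
    num = 1.0
    for x in (a*b*q**n, a*c*q**n, a*d*q**n,
              b*c*q**n, b*d*q**n, c*d*q**n, q**(n + 1)):
        num *= qpoch(x, q, terms)
    return num / qpoch(e4 * q**(2*n), q, terms) * qpoch(e4 * q**(n - 1), q, n)

z0 = 1.3

def raise_defect(n):     # formula (sraise1), n >= 0
    lhs = (Dprime(lambda z: z * P(n, z), z0)
           + (1 - q**(1 - n)) * (z0 + 1/z0) * P(n, z0) + beta(-n) * P(n, z0))
    return abs(lhs - (q**n * e4 - q**(1 - n)) * P(n + 1, z0))

def lower_defect(n):     # formula (sraise2), n > 0, divided through by gamma_n
    lhs = (Dprime(lambda z: z * P(n, z), z0)
           + (1 - q**n * e4) * (z0 + 1/z0) * P(n, z0) + beta(n) * P(n, z0))
    return abs(lhs - (q**(1 - n) - q**n * e4)
               * gamma(n - 1) / gamma(n) * P(n - 1, z0))

assert max(raise_defect(1), raise_defect(2)) < 1e-9
assert max(lower_defect(1), lower_defect(2)) < 1e-9
```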
The key for the proof is the ``af\/f\/ine intertwiner'' for the nonsymmetric
Askey--Wilson polynomials from \cite{sa3}. This involves the additional
parameters $u_{0}$ and $u_{1},$ which satisfy the relations%
\[
a=t_{1}^{1/2}u_{1}^{1/2},\qquad b=-t_{1}^{1/2}u_{1}^{-1/2},\qquad c=q^{1/2}t_{0}^{1/2}%
u_{0}^{1/2},\qquad d=-q^{1/2}t_{0}^{-1/2}u_{0}^{1/2}.
\]
Now from Theorem 1.2 of \cite{sa3} we have, up to a multiple,%
\begin{gather}
E_{n} \sim\left( a_{n}U_{0}+b_{n}\right) E_{-n-1},\label{eraise}
\end{gather}
where
\[
a_{n} =\big( q^{\overline{n}}-q^{\overline{-n-1}}\big)
\qquad {\rm and} \qquad b_{n} =q^{\overline{n}}\big( u_{0}^{-1/2}-u_{0}^{1/2}\big)
+q^{-1/2}\big( u_{1}^{-1/2}-u_{1}^{1/2}\big)
\]
with
\[
q^{\overline{n}} =\mu_{n}t_{1}^{1/2}t_{0}^{1/2}
\]
and $U_{0}$ is the operator
\[
U_{0}=q^{-1/2}t_{0}^{1/2}T_{0}^{-1}z.
\]
We will derive Theorem \ref{sraise} from formula (\ref{eraise}); however some
remarks are in order before we proceed:
\begin{enumerate}\itemsep=0pt
\item There is a typo in the statement of formula (\ref{eraise}) in Theorem
1.2 of \cite{sa3}, namely $n$ and $-n-1$ have been inadvertently switched.
This is easily seen by comparison with Theorem~4.1 from which Theorem~1.2 is derived.
\item The formula for $U_{0}$ here is slightly dif\/ferent from that in
\cite{sa3} because of the dif\/ference in~$T_{0}$ (see Remark~\ref{Remark}).
\item Although Theorem 1.2 in \cite{sa3} is only stated (and needed) for
$n\geq0,$ it is easy to see that after the correction above it holds for all
integer $n$.
\item Finally, we note that the ideas of \cite{sa2} and \cite{sa3} work in
the more general setting of Koornwinder polynomials, and they involve the
af\/f\/ine intertwiner $S_{0}$, which can also be written as
\[
S_{0}=\left[ Y,z^{-1}T_{1}^{-1}\right].
\]
It is expected that this operator will play a key role in the raising
operators for Koornwinder polynomials.
\end{enumerate}

We now give the proof of Theorem~\ref{sraise}.
\begin{proof}
We f\/irst simplify (\ref{eraise}) by multiplying through by $q^{1/2}t_{1}%
^{1/2}t_{0}$. This gives
\begin{gather*}
E_{n} \sim\left[ \left( \mu_{n}-\mu_{-n-1}\right) t_{0}T_{0}^{-1}%
z-\mu_{n}\left( c+d\right) -t_{0}\left( a+b\right) \right] E_{-n-1}\\
\phantom{E_{n}}{} \sim\left[ t_{0}T_{0}^{-1}z-\frac{\mu_{n}\left( c+d\right) +t_{0}\left(
a+b\right) }{\mu_{n}-\mu_{-n-1}}\right] E_{-n-1}.
\end{gather*}
Replacing $n$ by $n-1,$ we get%
\[
E_{n-1}\sim\left[ t_{0}T_{0}^{-1}z-\frac{\mu_{n-1}\left( c+d\right)
+t_{0}\left( a+b\right) }{\mu_{n-1}-\mu_{-n}}\right] E_{-n}.
\]
For ease in subsequent calculations, we write this as
\begin{equation}
E_{n-1}\sim\left( t_{0}T_{0}^{-1}z-\frak{\kappa}_{n}\right) E_{-n},
\label{kapparaise}
\end{equation}
where
\begin{equation}
\frak{\kappa}_{n}=\frac{\mu_{n-1}y+t_{0}x}{\mu_{n-1}-\mu_{-n}} ,\qquad
x=a+b,\qquad y=c+d. \label{kappaxy}
\end{equation}
The key idea to obtain a raising operator for $P_{n}$ is as follows: By
formula (\ref{PE}), for $n>1,$ $P_{\left| n\right| }$~is a combination of
$E_{n}$ and $E_{-n}$. We f\/irst kill of\/f the $E_{n}$ component. This can be
accomplished by applying the operator~$Y-\mu_{n}$ to $P_{n}$. However
it is more convenient (and equivalent) to apply the operator%
\[
t_{1}t_{0}\left( Y^{-1}-\mu_{n}^{-1}\right) =t_{1}t_{0}T_{0}^{-1}T_{1}%
^{-1}-\frac{t_{1}t_{0}}{\mu_{n}}.%
\]
For $n\neq0$ we have $\frac{t_{1}t_{0}}{\mu_{n}}=\mu_{-n}$. Thus since $P_{n}$
is symmetric, formula (\ref{symm}) implies that up to a~non-zero multiple, one
has%
\begin{equation}
\left( t_{0}T_{0}^{-1}-\mu_{-n}\right) P_{n}\sim E_{-n}. \label{project}%
\end{equation}
Although the argument given above only applies for $n\neq0,$ it is easy to see
that formula (\ref{project}) is true (up to a non-zero multiple) for $n=0$ as
well! Now combining formulas (\ref{PE}), (\ref{kapparaise}), and
(\ref{project}) we conclude that up to a multiple, we have%
\[
P_{\left| n-1\right| } \sim R_{n}P_{\left| n\right| },
\]
where
\[
R_{n} =\left( T_{1}+1\right) \left( t_{0}T_{0}%
^{-1}z-\frak{\kappa}_{n}\right) \left( t_{0}T_{0}^{-1}-\mu_{-n}\right).
\]
The main problem now is to simplify the expression of the operator $R_{n}$
using properties of~$P_{\left| n\right| }$.
We f\/irst calculate using Lemma \ref{commute} and (\ref{quad2}), as follows%
\begin{gather*}
\left( t_{0}T_{0}^{-1}z-\frak{\kappa}_{n}\right) \left( t_{0}T_{0}%
^{-1}-\mu_{-n}\right) \\
\qquad{} =t_{0}T_{0}^{-1}zt_{0}T_{0}^{-1}-t_{0}T_{0}^{-1}\left( \mu_{-n}%
z+\frak{\kappa}_{n}\right) +\kappa_{n}\mu_{-n}\\
\qquad{} =t_{0}T_{0}^{-1}\left( qT_{0}z^{-1}+y\right) -t_{0}T_{0}^{-1}\left(
\mu_{-n}z+\frak{\kappa}_{n}\right) +\kappa_{n}\mu_{-n}\\
\qquad{} =t_{0}T_{0}^{-1}\left( y-\mu_{-n}z-\frak{\kappa}_{n}\right) +qt_{0}%
z^{-1}+\kappa_{n}\mu_{-n}\\
\qquad{} =\left( T_{0}-t_{0}+1\right) \left( y-\mu_{-n}z-\frak{\kappa}%
_{n}\right) +\left( qt_{0}z^{-1}+\kappa_{n}\mu_{-n}\right).
\end{gather*}
Applying $T_{1}+1$ to this, we get by Lemma \ref{factor}
\[
R_{n}=\left( D^{\prime}+T_{1}+1\right) \left( -\mu_{-n}z+y-\frak{\kappa
}_{n}\right) +\left( T_{1}+1\right) \left( qt_{0}z^{-1}+\kappa_{n}\mu
_{-n}\right).
\]
To simplify this further we note that, in the notation of \eqref{kappaxy}, the
commutation relations of Lemma \ref{commute} can be rewritten as%
\begin{gather*}
\left( T_{1}+1\right) z^{-1} =t_{1}z^{-1}+zT_{1}+x,\\
\left( T_{1}+1\right) z =z+t_{1}z^{-1}T_{1}^{-1}-x.
\end{gather*}
Also on $P_{\left| n\right| }$, $D^{\prime}$ acts by $\lambda_{n}$ while
$T_{1}$ acts by $t_{1}$. Therefore $R_{n}$ acts by the operator%
\begin{gather*}
-\mu_{-n}\left[ D^{\prime}z+z+z^{-1}-x\right] +\left( \lambda_{n}%
+t_{1}+1\right) \left( y-\frak{\kappa}_{n}\right) \\
\qquad{} +qt_{0}\left[ t_{1}\left( z+z^{-1}\right) +x\right] +\left(
t_{1}+1\right) \kappa_{n}\mu_{-n}.
\end{gather*}
Dividing by $-\mu_{-n}$, we see that up to a multiple
\begin{equation}
\left[ D^{\prime}z+\left( 1-\frac{qt_{1}t_{0}}{\mu_{-n}}\right) \left(
z+z^{-1}\right) +\beta_{n}^{\prime}\right] P_{\left| n\right| }\sim
P_{\left| n-1\right| }, \label{raise-lower}
\end{equation}
where
\[
\beta_{n}^{\prime}=-x-\frac{qt_{0}}{\mu_{-n}}x-\frac{\lambda_{n}+t_{1}+1}
{\mu_{-n}}\left( y-\frak{\kappa}_{n}\right) -\left( t_{1}+1\right)
\frak{\kappa}_{n}.
\]
We now show that $\beta_{n}^{\prime}$ is equal to $\beta_{n}$ from formula
(\ref{beta-n}). For this we simplify the expression, substituting for
$\frak{\kappa}_{n}$ using (\ref{kappaxy}) above, to get%
\begin{gather*}
\beta_{n}^{\prime} =-x-\frac{qt_{0}}{\mu_{-n}}x+\frac{\left( \lambda
_{n}+t_{1}+1\right) }{\mu_{-n}}\frac{\mu_{-n}y+t_{0}x}{\mu_{n-1}-\mu_{-n}%
}-\left( t_{1}+1\right) \frac{\mu_{n-1}y+t_{0}x}{\mu_{n-1}-\mu_{-n}}\\
\phantom{\beta_{n}^{\prime}}{} =\frac{\lambda_{n}+1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}}y+\frac{1-\mu_{n-1}}%
{\mu_{n-1}-\mu_{-n}}t_{1}y+c_{1}x+c_{2}\left( qt_{0}x\right),
\end{gather*}
where $c_{1}$ and $c_{2}$ are moderately complicated expressions which can be
computed explicitly. However, we can save some computation by observing that
since the result is \emph{a priori} symmetric in $\left\{ a,b,c,d\right\} $,
$c_{1}$ and $c_{2}$ must reduce to the coef\/f\/icients of $y$ and $t_{1}y$ respectively.
It follows then that we have%
\begin{gather*}
\beta_{n}^{\prime} =\frac{\lambda_{n}+1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}
}\left( x+y\right) +\frac{1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}}\left(
qt_{0}x+t_{1}y\right) \\
\phantom{\beta_{n}^{\prime}}{} =\frac{\lambda_{n}+1-\mu_{n-1}}{\mu_{n-1}-\mu_{-n}}e_{1}-\frac{1-\mu_{n-1}%
}{\mu_{n-1}-\mu_{-n}}e_{3}=\beta_{n}.
\end{gather*}
Replacing $n$ by $-n$, formula (\ref{raise-lower}) becomes
\[
\left[ D^{\prime}z+\left( 1-\frac{qt_{1}t_{0}}{\mu_{n}}\right) \left(
z+z^{-1}\right) +\beta_{-n}\right] P_{\left| n\right| }\sim P_{\left|
-n-1\right| }.
\]
For $n\geq0,$ we have $\mu_{n}=q^{n}t_{1}t_{0}$ and this becomes
\begin{equation}
\left[ D^{\prime}z+\left( 1-q^{1-n}\right) \left( z+z^{-1}\right)
+\beta_{-n}\right] P_{n}\sim P_{n+1} \label{mraise1}%
\end{equation}
which is (\ref{sraise1}) up to a multiple.
For $n\geq1,$ we have $\mu_{-n}=q^{-n}$ and formula (\ref{raise-lower})
becomes%
\begin{equation}
\left[ D^{\prime}z+\left( 1-q^{n}abcd\right) \left( z+z^{-1}\right)
+\beta_{n}\right] P_{n}\sim P_{n-1} \label{mraise2}%
\end{equation}
which is (\ref{sraise2}) up to a multiple.
It remains then only to calculate the multiples in (\ref{mraise1}),
(\ref{mraise2}).
To determine the multiple (\ref{mraise1}), it suf\/f\/ices to calculate the
coef\/f\/icient of $z^{n+1}$ on the left. For this we divide the left side of
(\ref{mraise1}) by $z^{n+1}$ and take the limit as $z\rightarrow\infty$. This
gives%
\begin{gather*}
\lim_{z\rightarrow\infty}\frac{1}{z^{n+1}}\left( A\left( z\right)
q^{n+1}z^{n+1}-A\left( z^{-1}\right) z^{n+1}+\left( 1-q^{1-n}\right)
z^{n+1}\right) \\
\qquad{} =\frac{abcd}{q}q^{n+1}-1+1-q^{1-n}=q^{n}abcd-q^{1-n}
\end{gather*}
which proves formula (\ref{sraise1}).
To determine the multiple in\ (\ref{mraise2}), we rewrite (\ref{mraise2}) in
the form%
\[
\left[ D^{\prime}z+\left( 1-q^{n}abcd\right) \left( z+z^{-1}\right)
+\beta_{n}\right] P_{n}=c_{n}\left( q^{1-n}-q^{n}abcd\right) \frac
{\gamma_{n-1}}{\gamma_{n}}P_{n-1},\qquad n>0
\]
for some unknown constant $c_{n}$. It then suf\/f\/ices to show that $c_{n}=1.$
Subtracting this from (\ref{sraise1}) we have%
\begin{gather*}
\left[ \left( q^{n}abcd-q^{1-n}\right) \left( z+z^{-1}\right)
+\beta_{-n}-\beta_{n}\right] P_{n}\\
\qquad =\left( q^{n}abcd-q^{1-n}\right) \left( P_{n+1}+c_{n}\frac{\gamma_{n-1}
}{\gamma_{n}}P_{n-1}\right)
\end{gather*}
or
\[
\left( z+z^{-1}\right) P_{n}=P_{n+1}+\frac{\beta_{n}-\beta_{-n}
}{q^{n}abcd-q^{1-n}}P_{n}+c_{n}\frac{\gamma_{n-1}}{\gamma_{n}}P_{n-1}.
\]
Comparing this with the recurrence relation (\ref{recurr}) we deduce $c_{n}
=1$, which proves formula (\ref{sraise2}) and completes the proof of the theorem.
We note in passing that comparison with (\ref{recurr}) also proves the
following identity (which can be verif\/ied independently):%
\begin{gather*}
\alpha_{n}=\frac{\beta_{n}-\beta_{-n}}{q^{n}abcd-q^{1-n}}.\tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
\subsection*{Acknowledgements}
We would like to thank the (anonymous) referee for several insightful
suggestions which have improved the paper considerably. The referee has also
pointed out that one can give an alternative proof of formulas (\ref{sraise1})
and (\ref{sraise2}) by combining Theorem \ref{one} with the following identity
relating the operators $D$ and $D^{\prime}:$
\begin{gather*}
\left[ (1-q^{2})D^{\prime}z+q^{2}D(z+z^{-1})-q(z+z^{-1})D\right] f\\
\qquad =(1-q)\left[ (e_{1}-e_{3})-(1-abcd)(z+z^{-1})\right] f,
\end{gather*}
which holds for all symmetric Laurent polynomials $f$.
\pdfbookmark[1]{References}{ref}
\label{sec:intro}
\noindent Stellar systems (i.e. single or multiple stars) form in
groups. The dynamical processes within these alter the properties of
the young systems when they leave the site where they formed. The
dynamical properties of a stellar system are its mass (i.e. luminosity
if age is known), the multiplicity and the orbital parameters if it is
a multiple system. The distribution of velocities of young stars
emanating from star-forming centres (i.e. the kinematical signature of
star formation) will also be affected by the dynamical interactions
within the young groups. Both the distribution of {\it dynamical
properties} and the {\it kinematical signature of star formation}
bear an imprint of the dynamical configuration at the time when the
stellar group was born.
Star formation in Taurus-Auriga gave birth to aggregates with sizes of
roughly 0.5--1~pc consisting of about 20--50 stars. It is now well
established that most stars form in binary systems in Taurus-Auriga
(e.g. K\"ohler \& Leinert 1998). The same appears to hold true in
other star-forming regions (Ghez et al. 1997). Embedded clusters may
also have a binary proportion that is higher than in the Galactic
field (Padgett, Strom \& Ghez 1997). In the Trapezium cluster, which
is a very dense embedded cluster and probably less than 1~Myr old,
Prosser et al. (1994) find a binary proportion that is at least as
large as in the Galactic field. In the cluster core, Petr et
al. (1998) observe, for low-mass stars, a binary proportion similar to
the Galactic field, and smaller by about a factor of three than the
binary proportion in Taurus-Auriga. These findings are particularly
interesting, because binary destruction is expected to be efficient in
such an environment. A review of pre-main sequence binary stars, and
their relation to Galactic field systems, is provided by Mathieu
(1994, see also Kroupa 1995a, Simon et al. 1995). Young Galactic
clusters also contain binary systems. The particularly well studied
Pleiades and Praesepe clusters have binary proportions of 40--50~per
cent, for systems of spectral type earlier than K0 (Raboud \&
Mermilliod 1998a, 1998b).
There exists thus evidence that the formation of binary systems may be
by far the dominant star-formation mode in both loose groups and
highly concentrated embedded clusters, some of which may evolve to
bound Galactic clusters. The term {\it aggregates} is used henceforth
to mean loose groups or embedded clusters of more than 10~stars.
If stars form predominantly in aggregates of binary systems, then the
kinetic energy distribution after aggregate dissolution should be
enhanced at high energies, when compared to dissolved aggregates of
single stars, because binary star binding energy can transform into
kinetic energy (Heggie 1975, Hills 1975, Hut 1983). Large
accelerations are destructive to binary systems, so that the
proportion of binaries should be a decreasing function of increasing
final kinetic energy. Additionally, different initial aggregate
concentrations lead to different final binary proportions and
kinematical signatures, as will be shown here. This is also true
under the extreme assumption that {\it all} stars always form in
binary systems with the same initial dynamical properties. This
dynamical mechanism of producing variations in binary proportion and
associated dynamical properties stands in contrast to a possible
variation of these parameters determined by the star-formation
process. Durisen \& Sterzik (1994) make the interesting point that the
binary proportion may be smaller in molecular clouds with a higher
temperature than in lower-temperature clouds.
It is important to study the signatures that arise from purely
dynamical interactions in stellar groups, for a comparison with
outcomes from usually less well-understood alternative scenarios. In
a study of the large-scale distribution of young stars around active
star-forming regions, Sterzik \& Durisen (1995) find that the
dynamical decay of small stellar groups can lead to sufficiently large
velocities to populate large areas on the sky with young stars, so
that these need not have formed near their observed location. They
find that special initial dynamical configurations of the stars
(e.g. cold thin strings) lead to enhanced production of ejected
stars. Initial decay of cold sub-groups within larger complexes also
has this effect (Aarseth \& Hills 1972), and scattering of proto-stars
on cloud clumps during an even earlier dynamical phase may likewise
eject very young low-mass stars (Gorti \& Bhatt 1996). However, the
number of ejected stars cannot account for the observed number of
widely distributed young stars (Feigelson 1996). The evolution of
circum-stellar discs around stars ejected from small stellar groups is
studied by Armitage \& Clarke (1997), and McDonald \& Clarke (1995)
show that the presence of circum-stellar material in small
proto-stellar groups increases the number of binaries formed and
randomises the mass-ratio distribution. That the number of
dynamically ejected stars is increased significantly in binary-rich
stellar aggregates, when compared to clusters consisting initially
only of single stars, is shown by Kroupa (1995c). These simulations
show that a mass-ratio distribution produced by randomly associating
masses from the IMF decays to the observed distribution for G-dwarf
binaries, if most stars form in aggregates similar to observed
embedded clusters. Also, initially more concentrated aggregates
produce more stars with a high ejection velocity, the maximum of which
increases with decreasing cluster radius. De la Fuente Marcos (1997)
investigates the dependence on cluster richness, and finds that the
mean ejection velocity increases for more initially populous clusters.
Ejection velocities larger than a few hundred~km/s can be achieved in
young star clusters containing massive primordial binaries (Leonard \&
Duncan 1990). This may explain the location of OB stars far from
active star-forming sites. Leonard (1991) finds, on the basis of many
scattering experiments, that the maximum ejection velocity is of the
order of the escape velocity from the stellar surface of the most
massive star. If its mass is $60\,M_\odot$, then a similar star can
attain an ejection velocity of up to 700~km/s. A low-mass star may
find itself fleeing with a velocity of up to 1400~km/s, after a
surface-grazing encounter with such a star. A critical discussion of
the possible origin of runaway OB stars is provided by Leonard
(1995). He stresses that collisions of two stars during binary-binary
interactions can produce runaway OB stars with very similar properties
as in the alternative scenario, in which such stars result from a
supernova explosion in close binary systems. An interesting and
insightful discussion of the implications of the binary properties of
runaway OB stars for the dynamical configuration of massive stars at
birth is to be found in Clarke \& Pringle (1992).
In this paper, the correlations between stellar velocity, system mass
and binary proportion that arise from aggregates with different
initial concentration, consisting initially either of 400~single
stars or of 200~binary systems, are studied. The resulting correlations
are useful for interpreting the properties and distribution of young
stars near and in star forming regions (see for example Brandner et
al. 1996, Feigelson 1996, Frink et al. 1997).
In Section~\ref{sec:method} the assumptions, simulations and
definitions are described. The results are presented in
Section~\ref{sec:results}, and Section~\ref{sec:conclusions} contains
the conclusions.
\section{Method}
\label{sec:method}
\noindent
The initial conditions and numerical method are described in
Section~\ref{subsec:assumptions}, and the data analysis is outlined in
Section~\ref{subsec:observables}.
\subsection{Assumptions}
\label{subsec:assumptions}
\noindent
$N_{\rm bin}=200$ binary systems are distributed in virial equilibrium
according to the Plummer density law, with initial half mass radii
$R_{0.5}=2.53, 0.77, 0.25, 0.077$~pc. These approximately span the
region of parameter space between distributed (e.g. Taurus-Auriga)
and very tightly clustered (e.g. Trapezium cluster) star formation.
The clusters have zero initial centre-of-mass velocities in the local
standard of rest. The $R_{0.5}=0.8$~pc aggregate is especially
interesting, because inverse dynamical population synthesis (Kroupa
1995a,b) suggests that it may be representative of the dynamical
structures in which most stars form (compare with Lada \& Lada 1991).
While the mechanism of binary system formation cannot be specified in
detail, the assumption that the majority of all stars form in binaries
is supported by observational evidence (see review by Mathieu 1994),
and by recent advances in the theory of star formation (for reviews
see Boss 1995, Clarke 1996). However, theory cannot, at present,
constrain the early dynamical properties of stellar systems. The
interesting suggestion has been made (Durisen \& Sterzik 1994) that
cloud temperature may influence the binary proportion, such that it
may be lower in dense embedded clusters. For comparison with the
binary rich aggregates, $N_{\rm sing}=400$ single stars are
distributed in aggregates, initially with $R_{0.5}=0.25, 0.077$~pc.
The initial velocity dispersion, $\sigma$, and escape velocity from
the centre of the aggregates, $v_{\rm esc}=\sqrt{2\left|\phi\right|}$
($\phi$ is the Plummer potential at the origin), are:
$\sigma=0.3$~km/s, $v_{\rm esc}=0.77$~km/s ($R_{0.5}=2.5$~pc),
$\sigma=0.5$~km/s, $v_{\rm esc}=1.4$~km/s ($R_{0.5}=0.8$~pc),
$\sigma=0.9$~km/s, $v_{\rm esc}=2.4$~km/s ($R_{0.5}=0.25$~pc), and
$\sigma=1.7$~km/s, $v_{\rm esc}=4.4$~km/s ($R_{0.5}=0.08$~pc). Other
physical parameters are listed in table~1 of Kroupa (1995a).
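These values follow from the standard Plummer-model relations, $\phi(0)=-GM/b$ and the virial theorem, with the half-mass radius related to the Plummer scale radius by $R_{0.5}\simeq1.305\,b$. A short consistency check (a sketch, not the paper's code; the cluster-unit value of $G$ is a standard constant, not taken from the paper) reproduces the quoted numbers to within a few per cent:

```python
import math

G = 4.302e-3  # gravitational constant in pc (km/s)^2 / Msun

def plummer_kinematics(R_half_pc, M_tot=128.0):
    """Central escape velocity and 3D velocity dispersion of a
    virialised Plummer sphere of total mass M_tot [Msun].
    The scale radius b relates to the half-mass radius via
    R_half = b / sqrt(2**(2/3) - 1) ~= 1.305 b."""
    b = R_half_pc * math.sqrt(2.0 ** (2.0 / 3.0) - 1.0)
    v_esc = math.sqrt(2.0 * G * M_tot / b)                     # from phi(0) = -G M / b
    sigma = math.sqrt(3.0 * math.pi * G * M_tot / (32.0 * b))  # virial 3D dispersion
    return sigma, v_esc

for R in (2.53, 0.77, 0.25, 0.077):
    s, v = plummer_kinematics(R)
    print(f"R_0.5 = {R:5.3f} pc: sigma = {s:.2f} km/s, v_esc = {v:.2f} km/s")
```

Note that for a Plummer sphere the ratio $v_{\rm esc}/\sigma=\sqrt{64/3\pi}\approx2.6$ is independent of $R_{0.5}$.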
Aarseth's NBODY5 programme (Aarseth 1994) is employed for the N-body
simulation of the dynamical evolution of each aggregate in a standard
Galactic tidal field.
In order to simplify the computational burden, the stars are treated
as point particles and stellar evolution is neglected. The assumption
of virial equilibrium is the simplest case, and implies that the
results presented here are strictly only applicable to escaping stars
from Galactic clusters. The present results can, however, also be used
as guidelines of the type of correlations one might find after
embedded clusters dissolve. An explicit formulation of this problem
requires treatment of gas expulsion, and thus the introduction of
additional ill-defined parameters. As gas expulsion is not treated
here, the results are representative of star formation with high
efficiency, i.e. aggregates with low residual gas content. In the
alternative case of a low star formation efficiency, the major effect
of gas expulsion is a shortening of the time-scale over which the
dynamical evolution occurs. This can be compensated for by a reduction
of $R_{0.5}$, in order to obtain the same effective dynamics
(section~6.4 in Kroupa 1995a). The initial $v_{\rm esc}$ is then
larger.
Stellar masses, $m$, with $0.1\,M_\odot\le m\le 1.1\,M_\odot$, are
obtained from the IMF: $\xi(m)\propto m^{-\alpha_i}$, $\alpha_1=1.3$
for $0.08\,M_\odot \le m<0.5\,M_\odot$, $\alpha_2=2.2$ for
$0.5\,M_\odot \le m<1.0\,M_\odot$ (Kroupa, Tout \& Gilmore 1993), and
$\alpha_3=2.7$ for $1.0\,M_\odot\le m$ (Scalo 1986), where
$\xi(m)\,dm$ is the number of stars with masses in the range $m$ to
$m+dm$. The mean stellar mass is $0.32\,M_\odot$, and the mass of
each aggregate amounts to $M_{\rm tot}=128\,M_\odot$. Adopting for the
mass of B~stars $6-18\,M_\odot$, each should have associated with it
280~stars with mass in the range $0.08-1\,M_\odot$. The maximum
ejection velocity that can be achieved is thus limited to about
600~km/s for $0.1~M_\odot$ stars, and about 300~km/s for G-dwarfs
(Leonard 1991).
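Masses distributed according to this segmented power law can be drawn by inverse-transform sampling, with the segments joined continuously at the break masses. The following sketch is one possible implementation (not the code used in the paper); it recovers a mean stellar mass close to the quoted $0.32\,M_\odot$:

```python
import numpy as np

def sample_imf(n, rng, m_min=0.1, m_max=1.1,
               breaks=(0.5, 1.0), alphas=(1.3, 2.2, 2.7)):
    """Draw n stellar masses [Msun] from the segmented power-law IMF
    xi(m) ~ m**(-alpha_i), with the segments joined continuously."""
    edges = [m_min, *breaks, m_max]
    # continuity constants k_i so that xi(m) is continuous at the breaks
    k = [1.0]
    for i in range(1, len(alphas)):
        k.append(k[-1] * edges[i] ** (alphas[i] - alphas[i - 1]))
    # relative number of stars per segment: integral of k_i m**(-alpha_i)
    def seg_int(i):
        a = 1.0 - alphas[i]
        return k[i] * (edges[i + 1] ** a - edges[i] ** a) / a
    w = np.array([seg_int(i) for i in range(len(alphas))])
    seg = rng.choice(len(alphas), size=n, p=w / w.sum())
    # inverse-transform sampling within the chosen segment
    u = rng.random(n)
    a = 1.0 - np.array(alphas)[seg]
    lo = np.array(edges)[seg] ** a
    hi = np.array(edges)[seg + 1] ** a
    return (lo + u * (hi - lo)) ** (1.0 / a)

rng = np.random.default_rng(0)
m = sample_imf(200_000, rng)
print(m.mean())   # close to the quoted mean stellar mass of ~0.32 Msun
```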
The main-sequence mass-ratio distribution for G-dwarf binaries
(Duquennoy \& Mayor 1991) is not consistent with random pairing from
the IMF, but may be derived from this assumption if most stars form in
embedded clusters (Kroupa 1995a,b). In accordance with this result,
and the evidence presented by Leinert et al. (1993), stellar masses
are combined at random to generate the initial binary-star population.
Special care must be taken when interpreting an observed mass-ratio
distribution, as it can be affected significantly by even simple
observational bias (Trimble 1990, Tout 1991).
Binary systems must arrive on the birth-line with eccentricities
approximately dynamically relaxed, because subsequent thermalisation
in the stellar aggregate is not efficient enough to produce such a
distribution (Kroupa 1995b). This is because the cross-section for a
significant change in eccentricity decreases very steeply with increasing
distance of closest approach of a perturber (Heggie \& Rasio 1996).
Consequently, the initial eccentricity distribution is taken to be
dynamically relaxed. The results are not sensitive to this assumption,
however.
An initial period distribution that is consistent with the
observational data for young binaries is used. The orbital periods,
$P$ (in days), form a flat distribution, $f_{\rm P}({\rm
log}_{10}P)=\left[{\rm log}_{10}(P_{\rm max}) - {\rm log}_{10}(P_{\rm
min})\right]^{-1}$ (equation~3 in Kroupa 1995a), with log$_{10}P_{\rm
min}=3$, log$_{10}P_{\rm max}=7.5$ and $P_{\rm min}\le P\le P_{\rm
max}$.
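The flat distribution in ${\rm log}_{10}P$ is straightforward to sample by inverse transform; a minimal sketch (function name illustrative, not from the paper):

```python
import numpy as np

def sample_periods(n, rng, lg_min=3.0, lg_max=7.5):
    """Orbital periods P [days] drawn from a distribution that is flat
    in log10 P on [lg_min, lg_max] (equation 3 of Kroupa 1995a)."""
    return 10.0 ** (lg_min + (lg_max - lg_min) * rng.random(n))

rng = np.random.default_rng(1)
P = sample_periods(100_000, rng)
```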
For each binary and single-star aggregate, $N_{\rm run}=5$ and~3
simulations, respectively, are carried through.
\subsection{The observables}
\label{subsec:observables}
\noindent
All results quoted here are averages of $N_{\rm run}$ simulations that
are evaluated after 1~Gyr, i.e. after the aggregates have dissolved.
Aggregate dissolution occurs after $700\pm130$~Myr in all cases, when
the number of stars in a volume with a radius of 2~pc, that is centred
on the density maximum of the cluster, has decayed to~3 or less.
The velocity-dependent binary proportion is
\begin{equation}
f_v = {N_{{\rm bin},v}\over(N_{{\rm sing},v}+N_{{\rm bin},v})},
\end{equation}
\noindent
where $N_{{\rm sing},v}$ and $N_{{\rm bin},v}$ are the number of
single-star and binary systems, respectively, in a velocity interval
$v$ to $v+\Delta v$ relative to the local standard of rest. The
binary proportion in some sub-domain, which may, for example, be the
period range or spatial region accessible to the observer, is
$f=N_{\rm bin}/(N_{\rm sing}+N_{\rm bin})$, where $N_{\rm sing}$ and
$N_{\rm bin}$ are the number of single and binary systems,
respectively, in the sub-domain. Similarly, $f_{\rm tot}$ is the
binary proportion of the entire population.
The mean system mass in the velocity interval is
\begin{equation}
<m>_v={M_v\over(N_{{\rm sing},v}+N_{{\rm bin},v})},
\end{equation}
\noindent
where $M_v$ is the total stellar mass in the velocity
interval. Initially, i.e. at $t=0$, $f_v=1$ and $<m>_v=0.64\,M_\odot$
independent of $v$ for the binary-star aggregates, and $f_v=0$ with
$<m>_v=0.32\,M_\odot$ for the single-star aggregates.
The relative proportion of systems in a velocity interval is
\begin{equation}
h_v = {(N_{{\rm sing},v} + N_{{\rm bin},v}) \over (N_{\rm sing,tot} + N_{\rm
bin,tot})}.
\end{equation}
\noindent
Note that this is $f_v$ in Kroupa (1995c).
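These definitions translate directly into a binning procedure over the simulation output. The sketch below is a hypothetical illustration (array and function names are not from the paper) of how $h_v$, $f_v$ and $<m>_v$ could be evaluated:

```python
import numpy as np

def velocity_binned_observables(v, m_sys, is_binary, bin_edges):
    """h_v, f_v and <m>_v per velocity bin, following the definitions
    above. v [km/s], m_sys [Msun] and is_binary (0/1) are per-system
    arrays; bin_edges are the velocity interval boundaries."""
    idx = np.digitize(v, bin_edges) - 1
    n_bins = len(bin_edges) - 1
    h, f, mbar = np.zeros(n_bins), np.zeros(n_bins), np.zeros(n_bins)
    for i in range(n_bins):
        sel = idx == i
        n = sel.sum()
        h[i] = n / len(v)                    # relative proportion of systems
        if n:
            f[i] = is_binary[sel].mean()     # binary proportion in the bin
            mbar[i] = m_sys[sel].mean()      # mean system mass in the bin
    return h, f, mbar
```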
The circular orbital velocity, $v_{\rm orb}$ [km/s], of a binary star
with system mass, $m_{\rm sys}$ [$M_\odot$], and orbital period, $P$
[days], is
\begin{equation}
{\rm log}_{10}P = 6.986 + {\rm log}_{10}m_{\rm sys} - 3\,{\rm
log}_{10}v_{\rm orb}.
\end{equation}
\noindent
The primordial binary population used here has a maximum $v_{\rm
orb}=27.7$~km/s (for $P=10^3$~d, $m_{\rm sys}=2.2\,M_\odot$) and a
minimum $v_{\rm orb}=0.39$~km/s ($P=10^{7.5}$~d, $m_{\rm
sys}=0.2\,M_\odot$).
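This relation is Kepler's third law for the relative circular orbit, $v_{\rm orb}^3=2\pi G\,m_{\rm sys}/P$, expressed in km/s, $M_\odot$ and days; the constant $2\pi GM_\odot/(86400\,{\rm s\,km^{-3}\,s^{3}})$ yields the quoted coefficient of $6.986$. A quick check (a sketch, not the paper's code) reproduces the stated extreme values:

```python
import math

# 2*pi*G*Msun in km^3 s^-3 day; log10 of this constant is ~6.986
K = 2.0 * math.pi * 1.32712440e11 / 86400.0

def v_orb(P_days, m_sys):
    """Circular relative orbital velocity [km/s] of a binary with total
    mass m_sys [Msun] and period P [days]; equivalent to
    log10 P = 6.986 + log10 m_sys - 3 log10 v_orb."""
    return (K * m_sys / P_days) ** (1.0 / 3.0)

print(v_orb(1.0e3, 2.2))      # hardest primordial binary, ~27.7 km/s
print(v_orb(10 ** 7.5, 0.2))  # widest, lowest-mass binary, ~0.39 km/s
```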
Finally, a stellar system that is {\it ejected} has a final
(i.e. evaluated after 1~Gyr) velocity $v\ge2$~km/s.
\section{Results}
\label{sec:results}
\noindent
Long-distance encounters between systems (two-body relaxation) and, in
addition, scattering of systems on the non-uniform background potential
(collective effects) dominate the dynamical evolution of the
aggregates. Relatively energetic, stochastically occurring encounters
between stellar systems can lead to the ionization of binary stars and
to the acceleration of a system to escape velocity from the aggregate.
If the mean kinetic energy of the population is ${\overline E_{\rm
kin}}$ and the binding energy of a binary is $-E_{\rm
bin}=-G\,m_1\,m_2/(2\,a)$, where $m_i$ and $a$ are the masses of the
components and the semi-major axis, respectively, then a binary is
termed {\it hard} if $E_{\rm bin}/{\overline E_{\rm kin}}>1$. Hard
binaries are likely to gain binding energy, i.e. to {\it harden}
(Heggie 1975, Hills 1975), in which case the perturber can be
accelerated to escape velocity. The resulting hardened binary suffers
a recoil which may be sufficient to also expel it from the aggregate.
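The hard/soft criterion can be made concrete with a small numerical illustration. In the sketch below, the value of $G$ in AU-based units is a standard constant, and the example masses, separations and cluster parameters are illustrative choices, not values taken from the simulations:

```python
G_AU = 887.13  # gravitational constant in AU (km/s)^2 / Msun

def hardness(m1, m2, a_AU, mean_m, sigma_3d):
    """Ratio E_bin / mean E_kin for a binary with component masses
    m1, m2 [Msun] and semi-major axis a [AU], embedded in a population
    with mean system mass mean_m [Msun] and 3D velocity dispersion
    sigma_3d [km/s]. Values > 1 indicate a hard binary."""
    e_bin = G_AU * m1 * m2 / (2.0 * a_AU)      # [Msun km^2/s^2]
    e_kin = 0.5 * mean_m * sigma_3d ** 2
    return e_bin / e_kin

# A ~2.5 AU solar-mass pair is very hard even in the densest model ...
print(hardness(1.0, 1.0, 2.5, 0.32, 1.7) > 1.0)
# ... whereas a ~1000 AU low-mass pair there is soft and easily ionised.
print(hardness(0.1, 0.1, 1000.0, 0.32, 1.7) < 1.0)
```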
These processes change the velocity distributions of the number
of systems, $h_v$, of the binary star proportion, $f_v$, and of the
mean system mass, $<m>_v$. Also, the correlations between binary-star
binding energy, $E_{\rm bin}$, and kinetic energy, $E_{\rm kin}$, and
between the velocity, $v$, and orbital period, system mass and mass
ratio, evolve. The distributions that emerge after aggregate
dissolution thus contain information about the initial dynamical
configuration, but care must be taken in interpreting distribution
data. This is the subject of the present section.
\subsection{Distribution of velocities}
\label{subsec:veldistr}
\noindent
In Fig.~\ref{fig:binprop1}, $h_v$, $f_v$ and $<m>_v$ are plotted as a
function of velocity for the binary star aggregates with $R_{0.5}=2.5,
0.8, 0.25, 0.08$ pc. Fig.~\ref{fig:binprop2} contains the same
information for the two single star aggregates. In
Table~\ref{table:vdist}, column~1 contains the centre of each
logarithmic velocity bin. Columns~2--7 list, for each aggregate, the
fraction, $h_v$, of systems per logarithmic velocity bin.
After aggregate dissolution, most systems have a velocity near
0.35~km/s (Figs.~\ref{fig:binprop1} and~\ref{fig:binprop2}). A slight
shift in the maximum of $h_v$ towards smaller $v$ with decreasing
$R_{0.5}$ comes about, because systems have to overcome the initial
aggregate potential before escaping. The cooling is much more
pronounced in the absence of binary star heating
(Fig.~\ref{fig:binprop2}), and during the first few cluster crossing
times. Later, the aggregate expands to fill its tidal radius and
loses memory of its initial concentration. Initially very
concentrated aggregates, and those with large $R_{0.5}$, have an
indistinguishable life-time (Kroupa 1995c). The binary star
population, however, retains this memory (Kroupa 1995a).
For aggregates with initially smaller $R_{0.5}$, an increase in the
proportion of systems with $v>2$~km/s results. Comparison of
Figs.~\ref{fig:binprop1} and~\ref{fig:binprop2} shows that, for the
same $R_{0.5}$, aggregates initially with a high proportion of binary
systems have significantly larger $h_v$ at $v>2$~km/s, than
aggregates that consist initially only of single stars. A high
proportion of primordial binary systems thus increases the percentage
of ejected systems.
\subsection{Binary proportion}
\label{subsec:binprop}
\noindent
High velocity systems are expected to be primarily single stars,
because only relatively hard binaries can survive the large
accelerations during the encounter, as is also stressed by Sterzik \&
Durisen (1995). This is borne out for the $R_{0.5}=2.5,0.8,0.25$~pc
aggregates (Fig.~\ref{fig:binprop1}), and also in the simulations
reported by Leonard \& Duncan (1990, their figs.~4 and~5) and in
fig.~9 of Kroupa (1995c).
Dynamical evolution is quiescent in star forming regions where the
stellar systems freeze out of the gas in low-density aggregates. The
$R_{0.5}=2.5$~pc aggregate approximates this situation. In this
aggregate, a binary with $m_{\rm sys}=0.2\,M_\odot$ and $P=10^{7.5}$~d
has $v_{\rm orb}=0.39$~km/s, which is comparable to the velocity
dispersion. The entire binary population is therefore hard, and indeed
most binaries survive cluster evolution. Such aggregates yield
a high ($f_v>0.8$) proportion of binaries among systems with
approximately $v<1$~km/s. Only 2.6~per cent of the systems end up
with $v>5$~km/s (Table~\ref{table:vdist}), and these have a binary
proportion of approximately $f_v<0.1$ (Fig.~\ref{fig:binprop1}).
For the $R_{0.5}=0.8$~pc aggregate, a relatively high proportion of
binaries ($f_v>0.8$) among systems that have $v<0.3$~km/s is obtained.
Significant differences between the binary proportions in the two
models can be found in the velocity intervals a) $-0.5<$~log$_{10}v<0$
and b) $0.1<$~log$_{10}v<0.6$. In interval~(a), $f_v\approx0.9,0.6$
and in interval~(b), $f_v\approx0.05,0.15$, for $R_{0.5}=2.5,0.8$~pc,
respectively. Of all systems that finally emerge from such an
aggregate, 4~per cent have a velocity $v>5$~km/s. These have a
slightly larger binary proportion than for the $R_{0.5}=2.5$~pc case
discussed above. Overall, for $R_{0.5}\ge 0.8$~pc, $f_v$ {\it
decreases} monotonically with increasing $v$, and escaping stars have a
low ($f_v<0.2$) binary proportion.
Star formation in denser aggregates ($R_{0.5}<0.8$~pc) leads to
significantly different behaviour of $f_v$ with $v$
(Fig.~\ref{fig:binprop1}). For $R_{0.5}=0.25$~pc, $f_v\approx0.45$
for $v<1$~km/s, with a discontinuous decrease to $f_v\le0.2$ for
$v>1$~km/s.
Of special interest is the $R_{0.5}=0.08$~pc model. It represents most
closely the Trapezium cluster, because it has a comparable central
number density, half-mass radius and velocity dispersion. In the
present model, the initial crossing and relaxation times are $t_{\rm
cr}=1\times10^5$~yr and $t_{\rm rel}=3\times10^5$~yr, respectively
(Table~1 in Kroupa 1995a), whereas in the Trapezium cluster, $t_{\rm
cr}\approx4-12\times10^5$~yr and $t_{\rm rel}\approx0.7-3.7$~Myr
(Bonnell \& Kroupa 1998). The Trapezium cluster, however, is different
in that it contains 500--1000~stars with a mean mass of about
$0.6\,M_\odot$, and in that it is the core of the much more massive
and extended Orion Nebula Cluster (Hillenbrand \& Hartmann
1998). Also, it is not clear if the entire cluster is gravitationally
bound.
After dissolution of this $R_{0.5}=0.08$~pc aggregate most systems
have $v<1$~km/s (Fig.~\ref{fig:binprop1}). The proportion of binary
stars shows a rather complex dependence on $v$. The binary proportion
ranges from $f_v\approx0.1$ for systems with $v\approx0.1-0.2$~km/s to
$f_v\approx0.6$ for $v\approx1$~km/s; $f_v$ thus {\it increases} with
$v$ for $v<1$~km/s. The binary proportion shows a significant maximum
($f_v\approx 0.6$) near $v\approx1$~km/s, and remains approximately
constant at $f_v\approx0.25$ for $3$~km/s$\,<v<20$~km/s. In this
rather extreme model of star formation, 4.6~per cent of all systems
have $v>5$~km/s after aggregate dissolution. The low value of $f_v$
for small $v$, and its rise with $v$, is due to efficient disruption
of binaries at an early dynamical age, when the ejected stars are
decelerated most effectively by the young deep potential well. A
fraction of the predominantly single stars with low velocity is also
a decelerated part of the high-velocity tail in the aggregates with
$R_{0.5}\ge0.8$~pc. This can be inferred from Figs.~\ref{fig:orbit1}
and~\ref{fig:orbit2} (Section~3.4). Hardened binaries (log$_{10}P<4$)
are found with small velocities. Usually they are the result of
energetic three-body or binary-binary interactions causing ejection.
Thus, in a realistic embedded cluster with the same stellar mass, the
initial escape velocity is larger because the potential is dominated
by the gas. Within a few~Myr most of the gas is expelled, leaving an
expanding cluster population, and a binary deficient remnant
population, in which each system has a small centre-of-mass velocity.
This decelerated and binary deficient population remains bound to the
molecular cloud and, after a few Myr, contributes to a distributed
population of young stars with significantly different dynamical
properties to the distributed population in Taurus-Auriga. Dispersal
of this binary deficient population takes a long time, and an observer finds
a loosely distributed group of stars of similar age, and with a
reduced binary proportion that depends on the initial cluster
concentration.
Concerning the aggregates with initially no primordial binaries
(Fig.~\ref{fig:binprop2}), more binaries form by capture in the
initially more concentrated aggregate ($R_{0.5}=0.08$~pc), owing to
the more frequent three-body encounters in the young concentrated
aggregate. The resulting total binary proportion, however, remains
insignificant (Kroupa 1995a). The data plotted in
Fig.~\ref{fig:binprop2} indicate that $f_v$ increases with $v$ (for
$v>1$~km/s), which is contrary behaviour to the aggregates that
contain a large population of primordial binaries.
\subsection{Mean system mass}
\label{subsec:meanmass}
\noindent
In a binary-binary or binary-single star encounter, binding energy can
be transformed into kinetic energy of the escaping stellar system. A
given acquired kinetic energy corresponds to a smaller ejection
velocity if the system mass is larger. Low-mass stars can be ejected
with higher velocities than high-mass stars. The extensive simulations
performed by Leonard \& Duncan (1990) and Leonard (1991, 1995)
demonstrate that this is the case, and are consistent with the
observational mass-velocity diagram for OB stars produced by Gies \&
Bolton (1986). This is also discussed at length by Conlon et
al. (1990).
The present study concentrates on the mean-system-mass--velocity
relationship obtained from self-consistent $N$-body simulations of
clusters of low-mass stars ($m\le1.1\,M_\odot$), and is thus relevant
for the large-scale distribution of young low-mass stars seen in the
ROSAT survey (compare with Sterzik \& Durisen 1995).
As is evident from Figs.~\ref{fig:binprop1} and~\ref{fig:binprop2},
the behaviour of $<m>_v$ with $v$ is complex and depends on the
initial concentration of the aggregate. The complexities of the
underlying binary--binary and triple-star encounters are discussed at
length by Harrington (1975), Heggie (1975), Hills (1975), Leonard \&
Duncan (1990) and Leonard (1991, 1995), and a review can be found in
Valtonen \& Mikkola (1991). The most apparent result here is that the
expected simple correlation (smaller $<m>_v$ for larger $v$) does not
hold, except for the $R_{0.5}=2.5$~pc aggregate. The few stars that
are ejected from this aggregate are low-mass stars expelled from
unstable triple or higher-order systems (Heggie 1975, Harrington 1975,
Hills 1977, compare also with the second-mass-family interactions of
Leonard 1991, and with Kiseleva, Eggleton \& Orlov 1994).
The expected correlation is also observed for $v>30$~km/s for all
binary-star aggregates. An acceleration beyond this velocity, which is
the orbital velocity of the hardest primordial binary in the present
simulations (Section~\ref{subsec:observables}), is destructive to all
primordial binaries. Only single stars appear with such large
velocities.
For $R_{0.5}=0.8, 0.25$ and 0.08~pc, the velocity range
$v\approx2$~km/s to~30~km/s yields no clear correlation between
$<m>_v$ and $v$: $<m>_v\approx0.5\,M_\odot$ is approximately
constant. This is an interesting finding which was also noted by
Harrington (1975). It shows that systems more massive than
$0.5\,M_\odot$ are ejected from aggregates similar to embedded
clusters, with velocities that can place them at distances between~20
and 300~pc within 10~Myr of ejection time from their formation
site. These systems result from stochastic and quite energetic
binary-binary and three-body encounters.
The majority of systems that have $v<1$~km/s, and which do not spread
much further than 1--10~pc from their formation site within 10~Myr,
have a constant $<m>_v$ that lies between the average stellar mass and
mean primordial system mass. Smaller values are seen for binary-poor
remnant populations (e.g. for $R_{0.5}=0.08$~pc). It follows that an
observer must be careful not to interpret the stellar mass function
and binary proportion of such populations in terms of a possible
dependence of these quantities on star-formation environment, without
due consideration of the dynamical history.
\subsection{Binding energy, period and mass ratio}
\label{subsec:ebin}
\noindent
Only binaries that are sufficiently bound will not be ionised when
they suffer a close interaction with another system, after which they
may leave the aggregate with relatively high velocity. Thus, the
correlation between binary star binding energy, $E_{\rm bin}$, and its
centre of mass kinetic energy, $E_{\rm kin}$, indicates the history of
a system.
Above it was seen that the final binary proportion, $f_v$, for systems
with 2~km/s$\,<v<30$~km/s, is larger for initially more concentrated
binary star aggregates. It can achieve values of 20-40~per cent for
$R_{0.5}\le0.25$~pc, although the overall final binary proportion is
small ($f_{\rm tot}\approx0.27$, fig.~3 in Kroupa 1995a). This result
is relevant for the distribution of young stars around star-forming
regions. Initially highly concentrated embedded clusters may add young
binaries to regions as far as 300~pc over a period of 10~Myr. Such
binaries have a well-defined correlation between $E_{\rm bin}$ and
$E_{\rm kin}$. This correlation transforms to correlations between $P$
and $v$, between the mass ratio ($q=m_2/m_1\le1$) and $v$, and
between the system mass ($m_1+m_2$) and $v$, where $m_1$ and $m_2$ are
the primary- and secondary-star masses, respectively.
\subsubsection{Binding energy and period}
\label{sssec:eb_P}
\noindent
In Figs.~\ref{fig:orbit1} and~\ref{fig:orbit2} are plotted $E_{\rm
bin}$ against $E_{\rm kin}$, as well as the orbital period, $P$,
against ejection velocity, $v$, for each binary system in the $N_{\rm
run}$ simulations. The distribution of data points for the
$R_{0.5}=2.5$~pc aggregate (top two panels in Fig.~\ref{fig:orbit1})
reflects approximately the initial distribution in binding energies
(log$_{10}E_{\rm bin}>2$) and orbital periods (log$_{10}P\ge3$). Only
very few binaries have gained binding energy and/or have been
accelerated to higher velocities. As the initial $R_{0.5}$ is reduced,
the number of binary systems with hardened orbits and larger $v$
increases. The depletion of orbits at large $P$ is clearly evident in
the $R_{0.5}=0.08$~pc cluster.
Figs.~\ref{fig:orbit1} and~\ref{fig:orbit2} nicely show that a binary
with an orbital period corresponding to $v_{\rm orb}$ only remains
bound after suffering a collision if the ejection velocity $v<v_{\rm
orb}$. A single notable exception, which occurred in the five
simulations of the $R_{0.5}=0.08$~pc aggregate, is the binary system
with $P=10^{5.7}$~d and $v=16$~km/s. It is difficult to trace the
detailed dynamical history of any individual stellar system owing to
the discrete output times when stellar masses, positions and
velocities are written to computer disk. But this binary probably
formed in a complex high-order interaction, that resulted in two stars
being ejected on essentially identical trajectories.
The results discussed so far are valid for a primordial binary star
population that has periods $P>10^3$~d. In reality, binaries with
shorter periods do exist with a binary proportion $f\approx0.15$
(fig.~1 in Mathieu 1994). The inclusion of primordial binaries with
$P<10^3$~d does not change the results presented here, apart from
slightly increasing $f_v$ for the ejected systems. In
Figs.~\ref{fig:orbit1} and~\ref{fig:orbit2}, both the ($E_{\rm
bin},E_{\rm kin}$) and (log$_{10}P,v$) plots would contain orbits with
$E_{\rm bin}>10^2\,M_\odot$~km$^2$/s$^2$ and $P<10^3$~d, and an upper
envelope for $v>1$~km/s given by the dashed diagonal lines.
The correlation between binding and kinetic energy is established
particularly well for the binary systems that are formed by capture in
the single star aggregates (Fig.~\ref{fig:orbit3}). Their periods
range from $P>100$~d to $10^{11}$~d. The binary systems with
$P\approx10^{10}$~d form through chance low-velocity three-body
encounters and constitute a distinct group in the figure.
\subsubsection{Mass ratio and system mass}
\label{sssec:q_msys}
\noindent
More massive systems have a higher binding energy, and are thus more
resistant to higher accelerations. Binaries that are ejected from an
aggregate should thus have a larger system mass and a mass-ratio
nearer to unity for higher $v$.
In Figs.~\ref{fig:mass1} and~\ref{fig:mass2}, $m_1+m_2$ and $q$ are
plotted as functions of $v$ for the binary-star aggregates. The
$R_{0.5}=2.5$~pc aggregate shows approximately the initial
distributions, and is useful as a reference. As the aggregate
concentration is reduced, the number of ejected systems increases.
These tend to have larger system masses, as expected. At the same
time, binary stars with a mass-ratio $q<0.2$ are preferentially
removed as $R_{0.5}$ is reduced. This is expected because, for a
given semi-major axis, they have the lowest binding energy, $E_{\rm
bin}\propto m_1^2 \times q$. The correlation between $q$ and $v$ is
in the expected sense, which is evident in the figures by the
appearance of orbits in the region $q>0.6, v>2$~km/s. However, the
correlation is weak, because $E_{\rm bin}$ also depends on $m_1$. As
is evident from the figures, the number of binaries with
$m_1+m_2>1.5\,M_\odot$ increases for smaller $R_{0.5}$. This is
because the more frequent three-body and higher-order interactions
lead to more frequent exchanges of companions, which most often leads
to the production of binary systems consisting of the most massive
stars involved in the interaction (Harrington 1975, Heggie 1975, Hills
1977, see also Valtonen \& Mikkola 1991, McDonald \& Clarke 1993).
Concerning the single-star aggregates, Fig.~\ref{fig:mass3} shows that
the distribution of mass ratios of the dynamically formed binaries in
the $R_{0.5}=0.25$~pc aggregate is roughly flat over the whole
accessible range. In the more concentrated aggregate, binaries form
with larger system mass and a bias towards larger $q$, which is a
result of dynamical biasing discussed in greater detail by McDonald \&
Clarke (1993).
\section{Conclusions}
\label{sec:conclusions}
\noindent
The correlations between ejection velocity and the proportion of
binaries, as well as their orbital parameters, have been quantified
for a range of initial dynamical configurations. The correlations are
useful in the study of stellar systems that are apparently ejected
from Galactic clusters (see e.g. Frink et al. 1997), some of which are
known to be rich in binaries (e.g. Raboud \& Mermilliod 1998a,
1998b). Observed ejected binaries should show correlations as
presented in Figs.~\ref{fig:orbit1}, \ref{fig:orbit2},
\ref{fig:mass1} and~\ref{fig:mass2}. The results for the binary-rich
aggregates modelled here are also relevant for an understanding of the
large-scale distribution of young stars, because most stars appear to
form in aggregates with a high binary proportion. Additionally, the
correlations contain information about the dynamical configuration at
birth.
For binary-rich aggregates containing a few hundred stars the
following correlations result: (i) more tightly clustered aggregates
lead to more stellar systems having larger ejection velocities and a
smaller overall binary proportion, (ii) the large population of
primordial binaries leads to a significantly enhanced number of
systems with high-ejection velocities compared to single-star
aggregates, (iii) systems with high ejection velocities have a
significantly reduced binary proportion, (iv) binary stars with high
ejection velocities have short-period orbits, and tend to be more
massive with a mass-ratio biased towards unity, (v) the average system
mass as a function of ejection velocity is defined above about 2~km/s
by stochastic close encounters, so that systems more massive than
$0.5\,M_\odot$ with high ejection velocities occur, and (vi)
aggregates with $R_{0.5}\le0.25$~pc lead to a complex dependence of
the resulting binary proportion on velocity, whereas a stellar
population emerging from less concentrated aggregates shows a
monotonic decrease of the binary proportion with increasing velocity.
For aggregates of a few hundred single stars one obtains: (i) more
tightly clustered aggregates lead to an increased number of stellar
systems with larger ejection velocities (but significantly less so
than in the binary-rich aggregates), and an enhanced overall binary
proportion that remains significantly below the observed binary
proportion in the Galactic field, (ii) the binary proportion increases
with ejection velocity, (iii) is as (iv) above, and (iv) is as (v)
above.
Remnant unbound young populations take a long time to disperse because they
have a small velocity dispersion. The binary proportion and mean
system mass (and thus the inferred IMF) of such a remnant population,
sensitively depends on the initial dynamical configuration of the
binary-rich birth aggregate. After emerging from the birth aggregate,
the distribution of velocities of a young stellar population changes
with time in the gravitational potential of the nearby molecular
clouds. A substantial proportion of emerging stars is likely to
remain bound to the parent molecular cloud until it ceases to exist.
These findings are important for interpreting the spatial
distribution, kinematics and binarity of young stars within and
surrounding star-forming regions. Molecular clouds, in which stars
form preferentially in dense embedded binary-rich clusters, should
have an enhanced halo population of ejected and relatively binary poor
($f\approx0.25$) young stellar systems. Also, young but
binary-depleted groups of stars can be misinterpreted to be evidence
for an environmental dependency of the binary-formation mechanism.
For example, in fig.~6 of Brandner et al. (1996), the region US-B has
more binaries than the region US-A, which also contains many more
B~stars than US-B. The presence of B~stars suggests that the stars in
US-A may have formed in dense embedded clusters. The stars seen in
US-A would then constitute the $v\simless1$~km/s remnant population
for $R_{0.5}\simless0.25$~pc (Fig.~\ref{fig:binprop1}). Given the
results of the present study, it is suggested that such a difference
in binary proportion between two regions may be due to different
initial dynamical configurations, and need not imply a dependence of
the binary proportion on the star-forming environment.
Important for the interpretation of the large-scale distribution of
young stars surrounding star forming sites is the realisation that
relatively massive systems are ejected with relatively large velocity
(2--30~km/s, Fig.~\ref{fig:binprop1}), which is a point also stressed
by Sterzik \& Durisen (1995). The X-ray surveys are flux limited and
detect the massive stars (Wichmann et al. 1996), the presence of which
around star-forming regions may be a natural consequence of the
processes studied here. However, if some young binary systems are
found to have orbital periods that place them above the dashed lines
in the right panels of Figs.~\ref{fig:orbit1}--\ref{fig:orbit3}, then
this would support the suggestion by Feigelson (1996) that
some star formation occurs in small high-velocity clouds.
\acknowledgements
I am very grateful to Sverre Aarseth for allowing me to use his NBODY5
programme, and I thank Rainer Spurzem and Mirek Giersz for helpful
discussions.
\section*{Appendix}
\setlength\tabcolsep{2pt}
\setlength\extrarowheight{4pt}
\begin{table*}[!ht]
\centering
\caption{Covariance matrix for the parametrization of the energy-loss function for molecular tritium, as provided in Tab.~\ref{tab:fitResults}.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{c|ccccccccc}
\toprule
& $m_1$ &$m_2$ &$m_3$ &$\sigma_1$ &$\sigma_2$ &$\sigma_3$ &$a_1$ &$a_2$&$a_3$ \\
\midrule
$m_1$&6.941$\cdot$10$^{\text{-5}}$&1.034$\cdot$10$^{\text{-5}}$&-3.388$\cdot$10$^{\text{-6}}$&4.537$\cdot$10$^{\text{-5}}$&-7.980$\cdot$10$^{\text{-6}}$&8.094$\cdot$10$^{\text{-6}}$&4.529$\cdot$10$^{\text{-6}}$&-6.505$\cdot$10$^{\text{-7}}$&-6.581$\cdot$10$^{\text{-8}}$\\
$m_2$&1.034$\cdot$10$^{\text{-5}}$&4.503$\cdot$10$^{\text{-6}}$&7.403$\cdot$10$^{\text{-7}}$&8.265$\cdot$10$^{\text{-6}}$&-1.206$\cdot$10$^{\text{-6}}$&-8.627$\cdot$10$^{\text{-6}}$&1.342$\cdot$10$^{\text{-6}}$&2.262$\cdot$10$^{\text{-7}}$&1.893$\cdot$10$^{\text{-7}}$\\
$m_3$&-3.388$\cdot$10$^{\text{-6}}$&7.403$\cdot$10$^{\text{-7}}$&1.641$\cdot$10$^{\text{-5}}$&-4.727$\cdot$10$^{\text{-6}}$&3.464$\cdot$10$^{\text{-6}}$&-2.255$\cdot$10$^{\text{-6}}$&-1.004$\cdot$10$^{\text{-6}}$&-1.272$\cdot$10$^{\text{-7}}$&-6.165$\cdot$10$^{\text{-7}}$\\
$\sigma_1$&4.537$\cdot$10$^{\text{-5}}$&8.265$\cdot$10$^{\text{-6}}$&-4.727$\cdot$10$^{\text{-6}}$&4.858$\cdot$10$^{\text{-5}}$&-8.929$\cdot$10$^{\text{-6}}$&1.503$\cdot$10$^{\text{-5}}$&2.481$\cdot$10$^{\text{-6}}$&-9.888$\cdot$10$^{\text{-9}}$&-1.840$\cdot$10$^{\text{-7}}$\\
$\sigma_2$&-7.980$\cdot$10$^{\text{-6}}$&-1.206$\cdot$10$^{\text{-6}}$&3.464$\cdot$10$^{\text{-6}}$&-8.929$\cdot$10$^{\text{-6}}$&4.746$\cdot$10$^{\text{-6}}$&-1.521$\cdot$10$^{\text{-5}}$&-1.755$\cdot$10$^{\text{-6}}$&-5.149$\cdot$10$^{\text{-7}}$&2.435$\cdot$10$^{\text{-7}}$\\
$\sigma_3$&8.094$\cdot$10$^{\text{-6}}$&-8.627$\cdot$10$^{\text{-6}}$&-2.255$\cdot$10$^{\text{-6}}$&1.503$\cdot$10$^{\text{-5}}$&-1.521$\cdot$10$^{\text{-5}}$&1.632$\cdot$10$^{\text{-4}}$&3.346$\cdot$10$^{\text{-6}}$&-2.017$\cdot$10$^{\text{-6}}$&-4.154$\cdot$10$^{\text{-6}}$\\
$a_1$&4.529$\cdot$10$^{\text{-6}}$&1.342$\cdot$10$^{\text{-6}}$&-1.004$\cdot$10$^{\text{-6}}$&2.481$\cdot$10$^{\text{-6}}$&-1.755$\cdot$10$^{\text{-6}}$&3.346$\cdot$10$^{\text{-6}}$&1.462$\cdot$10$^{\text{-6}}$&9.769$\cdot$10$^{\text{-8}}$&-4.513$\cdot$10$^{\text{-8}}$\\
$a_2$&-6.505$\cdot$10$^{\text{-7}}$&2.262$\cdot$10$^{\text{-7}}$&-1.272$\cdot$10$^{\text{-7}}$&-9.888$\cdot$10$^{\text{-9}}$&-5.149$\cdot$10$^{\text{-7}}$&-2.017$\cdot$10$^{\text{-6}}$&9.769$\cdot$10$^{\text{-8}}$&4.581$\cdot$10$^{\text{-7}}$&4.877$\cdot$10$^{\text{-8}}$\\
$a_3$&-6.581$\cdot$10$^{\text{-8}}$&1.893$\cdot$10$^{\text{-7}}$&-6.165$\cdot$10$^{\text{-7}}$&-1.840$\cdot$10$^{\text{-7}}$&2.435$\cdot$10$^{\text{-7}}$&-4.154$\cdot$10$^{\text{-6}}$&-4.513$\cdot$10$^{\text{-8}}$&4.877$\cdot$10$^{\text{-8}}$&1.354$\cdot$10$^{\text{-7}}$\\
\bottomrule
\end{tabular}
}
\label{tab:covarianceMatrix}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Covariance matrix for the parametrization of the energy-loss function for molecular deuterium, as provided in Tab.~\ref{tab:fitResultsD2}.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{c|*{9}{c}}
\toprule
& $m_1$ &$m_2$ &$m_3$ &$\sigma_1$ &$\sigma_2$ &$\sigma_3$ &$a_1$ &$a_2$&$a_3$ \\
\midrule
$m_1$&3.883$\cdot$10$^{\text{-4}}$&5.087$\cdot$10$^{\text{-5}}$&-2.607$\cdot$10$^{\text{-5}}$&2.487$\cdot$10$^{\text{-4}}$&-4.157$\cdot$10$^{\text{-5}}$&6.592$\cdot$10$^{\text{-5}}$&1.214$\cdot$10$^{\text{-5}}$&-4.525$\cdot$10$^{\text{-6}}$&-3.856$\cdot$10$^{\text{-7}}$\\
$m_2$&5.087$\cdot$10$^{\text{-5}}$&2.093$\cdot$10$^{\text{-5}}$&1.873$\cdot$10$^{\text{-5}}$&4.040$\cdot$10$^{\text{-5}}$&-2.989$\cdot$10$^{\text{-6}}$&-5.680$\cdot$10$^{\text{-5}}$&4.437$\cdot$10$^{\text{-6}}$&2.116$\cdot$10$^{\text{-6}}$&4.871$\cdot$10$^{\text{-7}}$\\
$m_3$&-2.607$\cdot$10$^{\text{-5}}$&1.873$\cdot$10$^{\text{-5}}$&1.144$\cdot$10$^{\text{-4}}$&-3.436$\cdot$10$^{\text{-5}}$&4.237$\cdot$10$^{\text{-5}}$&-2.466$\cdot$10$^{\text{-4}}$&-8.612$\cdot$10$^{\text{-6}}$&5.337$\cdot$10$^{\text{-6}}$&9.459$\cdot$10$^{\text{-7}}$\\
$\sigma_1$&2.487$\cdot$10$^{\text{-4}}$&4.040$\cdot$10$^{\text{-5}}$&-3.436$\cdot$10$^{\text{-5}}$&2.793$\cdot$10$^{\text{-4}}$&-4.330$\cdot$10$^{\text{-5}}$&6.404$\cdot$10$^{\text{-5}}$&-4.041$\cdot$10$^{\text{-6}}$&-7.999$\cdot$10$^{\text{-7}}$&-5.273$\cdot$10$^{\text{-8}}$\\
$\sigma_2$&-4.157$\cdot$10$^{\text{-5}}$&-2.989$\cdot$10$^{\text{-6}}$&4.237$\cdot$10$^{\text{-5}}$&-4.330$\cdot$10$^{\text{-5}}$&2.798$\cdot$10$^{\text{-5}}$&-1.050$\cdot$10$^{\text{-4}}$&-7.907$\cdot$10$^{\text{-6}}$&-2.660$\cdot$10$^{\text{-7}}$&6.033$\cdot$10$^{\text{-7}}$\\
$\sigma_3$&6.592$\cdot$10$^{\text{-5}}$&-5.680$\cdot$10$^{\text{-5}}$&-2.466$\cdot$10$^{\text{-4}}$&6.404$\cdot$10$^{\text{-5}}$&-1.050$\cdot$10$^{\text{-4}}$&1.033$\cdot$10$^{\text{-3}}$&1.829$\cdot$10$^{\text{-5}}$&-2.974$\cdot$10$^{\text{-5}}$&-1.231$\cdot$10$^{\text{-5}}$\\
$a_1$&1.214$\cdot$10$^{\text{-5}}$&4.437$\cdot$10$^{\text{-6}}$&-8.612$\cdot$10$^{\text{-6}}$&-4.041$\cdot$10$^{\text{-6}}$&-7.907$\cdot$10$^{\text{-6}}$&1.829$\cdot$10$^{\text{-5}}$&7.761$\cdot$10$^{\text{-6}}$&2.777$\cdot$10$^{\text{-7}}$&-6.118$\cdot$10$^{\text{-8}}$\\
$a_2$&-4.525$\cdot$10$^{\text{-6}}$&2.116$\cdot$10$^{\text{-6}}$&5.337$\cdot$10$^{\text{-6}}$&-7.999$\cdot$10$^{\text{-7}}$&-2.660$\cdot$10$^{\text{-7}}$&-2.974$\cdot$10$^{\text{-5}}$&2.777$\cdot$10$^{\text{-7}}$&2.173$\cdot$10$^{\text{-6}}$&4.225$\cdot$10$^{\text{-7}}$\\
$a_3$&-3.856$\cdot$10$^{\text{-7}}$&4.871$\cdot$10$^{\text{-7}}$&9.459$\cdot$10$^{\text{-7}}$&-5.273$\cdot$10$^{\text{-8}}$&6.033$\cdot$10$^{\text{-7}}$&-1.231$\cdot$10$^{\text{-5}}$&-6.118$\cdot$10$^{\text{-8}}$&4.225$\cdot$10$^{\text{-7}}$&2.193$\cdot$10$^{\text{-7}}$\\
\bottomrule
\end{tabular}
}
\label{tab:covarianceMatrixD2}
\end{table*}
\FloatBarrier
\subsection{Backgrounds}
\label{sec:background}
As the rear section is directly connected to the WGTS, tritium migration upstream
towards the electron gun cannot be completely prevented. Tritium can decay
within the acceleration fields of the electron gun. Ions created from the
$\upbeta$-decays are accelerated towards the photocathode, where their impact
can generate multiple secondary electrons simultaneously. These electrons are
accelerated to the same energy as the signal photoelectrons. The kinetic energy
of the background electrons changes along with the photocathode voltage $U_\mathrm{ph}$ during a scan. This results in a background spectrum following
the shape of an integral response function, as shown in
Fig.~\ref{fig:BackgroundMeasurements}. The background electrons only differ in
their initial energy distribution and the emission multiplicity (i.e. the number
of electrons generated from an ion impact). The mean energy $m_\mathrm{Bg}$ and
the Gaussian width $w_\mathrm{Bg}$ of the initial energy distribution of the secondary
electrons can be obtained by performing a combined fit to the three background measurements
using the same integral response-function model as described in \linebreak Sec.~\ref{sec:Analysis}.
The initial energy distribution dominates the spectral shape of the transmission function $T(E_\mathrm{s})$, which describes the transmission probability of the electrons inside the main spectrometer as a function of the surplus energy $E_\mathrm{s}$. The transmission function can be approximated with an error function using $m_\mathrm{Bg}$ and $w_\mathrm{Bg}$ as free parameters. The nine energy-loss function parameters were fixed to preliminarily evaluated values during the fit. The best-fit result yields
\begin{equation}
\label{eq:backgroundParameters}
m_\mathrm{Bg}=\SI{2.42\pm0.03}{\eV} \quad\text{and}\quad w_\mathrm{Bg}=\SI{ 2.05\pm0.04}{\eV}\,.
\end{equation}
The electron multiplicity distribution of the ion-induced events follows a
Poisson distribution (including ion-induced events with no electrons being
emitted) with the mean value
\begin{equation}
\label{eq:initialBackgroundMultiplicity}
\bm\hat S=\SI{1.3\pm0.4}{}\,.
\end{equation}
Background events cause a larger detector pile-up effect compared to the signal
electrons generated by the pulsed laser, especially in the differential
data. The remaining events after the TOF selection are nearly unaffected by detector
pile-up, since only the scattered electrons survive. As the arrival time of
scattered electrons is delayed compared to that of unscattered electrons from the
same light pulse, they do not arrive at the detector in time coincidence with
other electrons. This allows the multiplicity estimator
$\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})$ to discriminate background events from
signal electrons. By excluding events with $\bm\hat{\mathcal{M}}>1$ in the analysis, the
background component can be reduced by about a factor of two without any
significant distortion of the signal component. A comparison of the differential response
function (at 15\% $\rho_0 d$) before and after applying the multiplicity cut is
provided in Fig.~\ref{fig:pileupcorrectedvsuncorrectedat50cd}, showing the
reduction of the background component. However, the multiplicity
$\bm\hat{\mathcal{M}}>1$ cut causes a distortion of the shape of the
background component, which is determined from simulations. The resulting
response functions of the background component after an event multiplicity
cut are displayed in Fig.~\ref{fig:backgroundAfterCut}. The four
simulated spectra of the background components for the individual column
densities are included in the fit model.
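The quoted factor-of-two background reduction is consistent with a simple Poisson estimate based on the mean emission multiplicity $\bm\hat S = 1.3$. The following minimal sketch (Python, illustration only; it ignores detector timing and dead-time effects) computes the fraction of detected ion-induced events that survive a single-electron multiplicity cut:

```python
import math

def surviving_background_fraction(mu=1.3):
    """Fraction of detected ion-induced background events (at least one
    secondary electron emitted) that survive a cut keeping only
    single-electron events, for a Poisson emission multiplicity with
    mean mu (which includes k = 0 events that are never detected)."""
    p0 = math.exp(-mu)       # no electron emitted (never detected)
    p1 = mu * math.exp(-mu)  # exactly one electron emitted
    return p1 / (1.0 - p0)   # detected events have k >= 1
```

For $\bm\hat S = 1.3$ this yields about 0.49, i.e. the background component is roughly halved, in line with the reduction quoted above.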
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig7.pdf}
\caption{Background measurements of the electron gun with the light source
turned off at different fractions of the nominal
column density $\rho_0 d$. Background electrons
generated on the emission electrode of the electron gun show similar
energy and column density dependencies as signal electrons.
Compared to signal electrons, the background energy distribution
is broader and shifted towards higher initial values. A combined fit to
the data (red line) is used to determine the mean position
$m_\mathrm{Bg}$ and the width $w_\mathrm{Bg}$ of the initial energy distribution. For better illustration, the
shown data is normalized such that the region of unscattered electrons in
the plateau at $E_\mathrm{s}\in\left[\SI{0}{\eV},\SI{8}{\eV}\right]$ equals
$P_0(\mu)$.
\label{fig:BackgroundMeasurements}}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig8.pdf}
\caption{Simulated background spectra with and without the multiplicity $\bm
\hat{\mathcal{M}}>1$ cut for the differential mode after applying the TOF selection. The background spectra
without multiplicity cut (orange) show the shape of an integral response
function (cf. Fig.~\ref{fig:dataOnly_integral}). The TOF selection (not
shown) does not affect this shape of background electrons. The background
spectra after multiplicity cut (blue) are strongly reduced but are
deformed in shape. The shaded areas show the $1\sigma$ intervals
resulting from the uncertainty on the mean emission multiplicity $\bm\hat S$ of the
ion events provided in Eq.~\eqref{eq:initialBackgroundMultiplicity}.}
\label{fig:backgroundAfterCut}
\end{figure}
\subsection{Deuterium results}
\label{sec:D2}
Measurements, similar to the ones described in Sec.~\ref{sec:measurements}, were performed with molecular deuterium as source gas in an early commissioning run of the KATRIN experiment.
Four integral measurements at \SIlist{0;5;35;87}{\percent} of the nominal source density and a single differential measurement at \SI{5}{\percent} were made.
The data were processed and fit in the same manner as described in Secs.~\ref{sec:measurements} and \ref{sec:Analysis}.
For the combined $\chi^2$-fit of the deuterium measurements, the best-fit result is obtained at a reduced $\chi^2=\SI{1.57\pm0.02}{}$. Similar to the tritium data, the uncertainties of the data points are rescaled by $\sqrt{\chi^2/N_\mathrm{dof}}$ to obtain a reduced $\chi^2=1$. The parameter values as well as the covariance matrix are provided in Tabs.~\ref{tab:fitResultsD2} and \ref{tab:covarianceMatrixD2}.
The slightly increased $\chi^2$ value and the larger model uncertainties (cf. Fig.~\ref{fig:elossFunctionComparison}) can be explained by stronger detector pile-up in the integral data, caused by an electron rate twice as high as in the tritium measurements, combined with the availability of only one differential dataset.
A full propagation of the systematic uncertainties was not performed for the deuterium measurements as the simulations for tritium showed that the measurements are strongly dominated by the statistical uncertainty. Furthermore, neither the systematic uncertainty due to methane freezing causing column-density drift nor the background generated from tritium ions is present in the absence of tritium.
Figure \ref{fig:elossFunctionComparison} shows the minor differences of the energy-loss models for deuterium and tritium, as the electronic excitation states are shifted to lower energies on the order of \SI{100}{\meV} \footnote{Such a difference between the different hydrogen isotopologs is theoretically expected. The observed difference of $\mathcal{O}(\SI{100}{\meV})$ is in agreement with preliminary calculations in dipole approximation in which the peak positions of the rovibrationally resolved spectra for the 2p$\sigma$ $^1\Sigma_\mathrm{u}$ and the 2p$\pi$ $^1\Pi_\mathrm{u}$ states were compared for the isotopes D$_2$ and T$_2$ \cite{Miniawy2021}.}.
Extrapolating the energy-loss function again to $E_\mathrm{max}$ results in a mean energy loss of\linebreak
$\overline{\Delta E}(\mathrm{D}_2)=30.64(1)_\mathrm{fit}$\,eV
for the dominant deuterium isotopologs. This mean energy-loss value is \SI{0.15}{\eV} smaller than that for the tritium isotopologs. It should be kept in mind, however, that the energy-loss function is extrapolated in energy by a factor of 200 and that systematic uncertainties are not accounted for in this consistency check\footnote{To obtain an order-of-magnitude estimate of the systematic uncertainty of the mean energy loss, we left the junction point $E_\mathrm{i}$ between the three Gaussians and the BED tail in Eq.~\eqref{eq:katrinFitModel} free in our fits, which already yields an additional systematic uncertainty on the mean energy loss as large as the discrepancy. We note that the systematics of this extrapolation is not critical for the determination of the energy-loss function in our interval of interest $[\SI{0}{\eV},\SI{54}{\eV}]$.}.
\begin{table}[tbp]
\centering
\caption{
Best-fit parameter values for the energy-loss function in molecular deuterium as described in Eq.~\eqref{eq:katrinFitModel}. Parameter correlations are provided as a covariance matrix in Tab.~\ref{tab:covarianceMatrixD2}.}
\begin{tabularx}{\tabWidth\linewidth}{llc}
\toprule
Parameter & Unit & {Value}\\
\midrule
$m_{1}$& eV &\tablenum{11.793\pm0.020}\\
$m_{2}$& eV &\tablenum{12.7300\pm0.0046}\\
$m_{3}$& eV &\tablenum{14.875\pm0.011}\\
$\sigma_{1}$& eV &\tablenum{0.166\pm0.017}\\
$\sigma_{2}$& eV &\tablenum{0.4828\pm0.0053}\\
$\sigma_{3}$& eV &\tablenum{1.073\pm0.032}\\
$a_{1}$& \si{\per\eV} &\tablenum{0.0344\pm0.0028}\\
$a_{2}$& \si{\per\eV} &\tablenum{0.2737\pm0.0015}\\
$a_{3}$& \si{\per\eV} &\tablenum{0.07466\pm0.00047}\\
\bottomrule
\end{tabularx}
\label{tab:fitResultsD2}
\end{table}
\subsection{Differential (time-of-flight) measurements}
\label{sec:differential}
The time of each trigger pulse for the laser is saved in the detector data
stream and used to define the electron-emission time at the electron gun. For
each event at the detector, its time difference to the laser pulse is
calculated. The time difference corresponds to the time-of-flight (TOF) of the
electron through the KATRIN beamline from the electron gun to the detector,
including delays for the signal propagation and processing on the order of
\SI{1}{\micro\second}. The knowledge of the electron's time-of-flight can be used
as additional information on its kinetic energy.
The negative retarding potential $U_0$ in the main spectrometer acts as a
barrier for the electrons, slowing them down and only allowing electrons with
surplus energies $E_\mathrm{s}> 0$ to pass through (high-pass filter). The
higher the electrons' surplus energy, the less they are slowed down inside the
main spectrometer, connecting their flight time $\tau$ through the main
spectrometer to their surplus energy via $\tau \sim
\frac{1}{\sqrt{E_\mathrm{s}}}$ \cite{Steinbrink2013}.
Selecting only electrons with $\tau > \tau_{\mathrm{cut}}$ is equivalent to a
low-pass filter on $E_{\mathrm{s}}$ \cite{Bonn1999}. By applying this TOF
selection, the main spectrometer is transformed from a high-pass into a narrow
band-pass filter for measuring the differential energy spectrum.
For the differential measurements, the laser was pulsed at \SI{20}{\kilo\hertz}
to be able to distinguish flight times up to \SI{50}{\us} between the pulses
(see Fig.~\ref{fig:tof-eloss}). In this mode, an estimated 0.35 electrons per
pulse are emitted. Measurements at four different column densities were
performed, which are listed in Tab.~\ref{tab:integralMeasurements}.
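At this emission rate, the fraction of electron-emitting pulses that produce two or more electrons, and are therefore prone to detector pile-up, can be estimated under a Poisson emission assumption (an illustrative sketch, not the detector simulation used in the analysis):

```python
import math

def pileup_prone_fraction(mu=0.35):
    """Among laser pulses that emit at least one electron, the fraction
    emitting two or more (and thus potentially arriving in time
    coincidence at the detector), assuming Poisson-distributed emission
    with mean mu electrons per pulse."""
    p0 = math.exp(-mu)       # no electron emitted
    p1 = mu * math.exp(-mu)  # exactly one electron emitted
    return (1.0 - p0 - p1) / (1.0 - p0)
```

With $\mu = 0.35$ this amounts to roughly 16\,\% of the electron-emitting pulses, which motivates the pile-up handling discussed for the integral and differential analyses.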
Figure~\ref{fig:tof-eloss} shows the measurements at \SI{86}{\percent} nominal
column density $\rho_0 d$ as an example. The top panel shows the time-of-flight
versus surplus energy. Here the unscattered electrons as well as one-fold and
two-fold scattered electrons are prominently visible as hyperbolic structures.
A TOF selection of events with flight times longer than $\tau_{\mathrm{cut}} =
\SI{35}{\micro\second}$ is applied to obtain a differential spectrum, which is
projected on $E_\mathrm{s}$ and shown in the bottom panel. $\tau_{\mathrm{cut}}$
is chosen such that an energy resolution of $\approx\,$\SI{0.02}{\electronvolt}
is achieved. Higher $\tau_{\mathrm{cut}}$ allows for a higher energy resolution
but results in significantly lower statistics. The vertical features --- at
\SIlist{0;12.5;25}{\electronvolt} --- for ${\tau < \SI{25}{\micro\second}}$ are
electrons with flight times $>\,$\SI{50}{\micro\second} from a previous laser
pulse. These events are neglected in the analysis.
All events with $\tau$ in the range of \SI{35}{\micro\second} to
\SI{50}{\micro\second} are selected and corrected for laser intensity
fluctuations analogous to the integral analysis. The energy scale for each
measurement is constructed using the measured ramping speed of the high voltage and
the position of the peak of unscattered electrons set to $E_\mathrm{s} = 0$.
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{figures/Fig4.png}
\caption{The differential measurements of the time of flight $\tau$ (top)
and its one-dimensional projection on the electron surplus energy
$E_\mathrm{s}$ axis (bottom) at \SI{86}{\percent} of nominal column
density. The dashed line marks the lower boundary of the TOF selection at $\tau_{\mathrm{cut}} =
\SI{35}{\micro\second}$. The bottom panel shows all events in the
TOF selection.}
\label{fig:tof-eloss}
\end{figure}
\section{Energy-loss function}
\label{sec:eloss}
Multiple processes contribute to the energy loss of electrons traversing
molecular tritium gas. The median energy loss from elastic scattering amounts to $\overline{\Delta
E}_{\mathrm{el}}=\SI{2.3}{\milli\electronvolt}$ \cite{Kleesiek2019}, which is
negligible in the KATRIN measurement. The predominant processes for the KATRIN
experiment are inelastic scatterings, resulting in electronic excitations in combination with rotational and
vibrational excitations of the molecule, ionization, and molecular dissociation.
Data from detailed measurements are only available for the scattering of
\num{25}-\si{\kilo\electronvolt} electrons on molecular hydrogen
gas~\cite{Gei64,Uls72}; these direct measurements of the energy-loss function
were made with energy resolutions down to \SI{40}{\meV}.
In these measurements, the contributions of three different groups of lines can be discerned, which are created from the excitations of the
$(\mathrm{2p}\sigma\ ^1\Sigma^+_\mathrm{u})$, $(\mathrm{2p}\pi\ ^1\Pi_\mathrm{u})$, and $(\mathrm{3p}\pi\ ^1\Pi_\mathrm{u})$ molecular states around \SI{12.6}{\electronvolt} and \SI{15}{\electronvolt}.
Aseev et al.~\cite{Ase00} and Abdurashitov et al. \cite{Abd17} report on the measurements of energy losses of
electrons in gaseous molecular hydrogen, deuterium, and tritium. The shape of the energy-loss function was evaluated by fitting an
empirical model to the integral energy spectra obtained with a mono-energetic electron source which generated a beam of electrons with kinetic energies near the endpoint energy of the tritium $\upbeta$-decay. Because of the
low energy resolution of several eV, the shape of the energy-loss function was
coarsely approximated by a Gaussian to represent electronic excitations and
dissociation, and a one-sided Lorentzian to represent the continuum caused
by ionization of the molecules \cite{Ase00}.
\subsection{New parametrization}
\label{sec:parametrization}
The high-quality data from the first KATRIN energy-loss measurements described in
Sec.~\ref{sec:measurements} allows us to improve the parametrization used in
Aseev et al.~\cite{Ase00} and Abdurashitov et al.~\cite{Abd17}. While the
experimental energy resolution is not sufficient to resolve individual molecular
states, the combined contribution of each of the three groups of states can
clearly be discerned in the KATRIN data.
A new parametrization of the energy-loss function was developed to describe the
inelastic scattering region between about \SI{11}{\eV} and \SI{15}{\eV} using three Gaussians,
each of which approximates one group of molecular states. The ionization
continuum beyond this energy region is described by the relativistic
binary-encounter-dipole (BED) model developed by Kim et
al.~\cite{Kim2000}. Although the parameters required by this model are only
available for the ionization of H$_2$ molecules \cite{Kim94}, taking into
account the ionization thresholds for the different isotopologs \cite{Wec99}
\begin{equation}
\begin{split}
E_\mathrm{i}(\mathrm{H}_2)&=\SI{15.433}{eV}\\
E_\mathrm{i}(\mathrm{D}_2)&=\SI{15.470}{eV}\\ E_\mathrm{i}(\mathrm{T}_2)&=\SI{15.486}{eV}\,,
\end{split}
\end{equation}
the shape of the BED model is a good
representation for the tritium data, as can be seen from the fit result in Sec.~\ref{sec:combinedFitmodel}. The
new parametrization of the full energy-loss function is written as:
\begin{equation}
\label{eq:katrinFitModel}
f(\Delta E) = \begin{cases}
\sum_{j=1}^3 a_j \exp\left( -\frac{(\Delta E-m_j)^2}{2\sigma_j^2} \right)
&: \Delta E \le E_\mathrm{i}\\
\frac{f(E_\mathrm{i})}{f_{\rm BED}(E_\mathrm{i}) } \cdot f_{\rm BED}(\Delta E) &: \Delta E > E_\mathrm{i},
\end{cases}
\end{equation}
where $\Delta E$ is the energy loss and $a_j$, $m_j$, and $\sigma_j$ are the amplitude, the mean, and the width of the three Gaussians, respectively. $f_{\rm BED}(\Delta E)$ is the functional form of the BED model as given in~\cite{Kim2000} and $E_\mathrm{i}$ is the junction point between the two regions given by the ionization threshold. For a \linebreak smooth continuation of the model at the junction, the BED function $f_{\rm BED}(\Delta E)$ is normalized to the local value $f(E_\mathrm{i})$ of the Gaussian components at that position.
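As a concrete illustration, Eq.~\eqref{eq:katrinFitModel} can be evaluated with the best-fit deuterium parameters from Tab.~\ref{tab:fitResultsD2} and the D$_2$ ionization threshold quoted above (a Python sketch; the relativistic BED shape of Kim et al. is replaced here by a generic $1/\Delta E^2$ stand-in, so only the Gaussian region and the continuity construction at the junction are faithful):

```python
import math

# Best-fit parameters for molecular D2 (Tab. fitResultsD2)
A = (0.0344, 0.2737, 0.07466)   # amplitudes a_j in 1/eV
M = (11.793, 12.7300, 14.875)   # means m_j in eV
S = (0.166, 0.4828, 1.073)      # widths sigma_j in eV
E_I = 15.470                    # ionization threshold of D2 in eV

def gaussians(dE):
    """Sum of the three Gaussian components below the threshold."""
    return sum(a * math.exp(-(dE - m) ** 2 / (2.0 * s ** 2))
               for a, m, s in zip(A, M, S))

def f_bed(dE):
    # Placeholder for the relativistic BED model of Kim et al.;
    # a generic 1/dE^2 falloff is used here for illustration only.
    return 1.0 / dE ** 2

def energy_loss(dE):
    """Piecewise energy-loss function: Gaussians up to E_I, then the
    ionization tail normalized to the Gaussian value at the junction."""
    if dE <= E_I:
        return gaussians(dE)
    return gaussians(E_I) / f_bed(E_I) * f_bed(dE)
```

The normalization factor $f(E_\mathrm{i})/f_{\rm BED}(E_\mathrm{i})$ guarantees continuity at the junction point regardless of the exact tail shape chosen.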
\subsection{Systematic uncertainties}
\label{sec:MCpropagation}
Systematic uncertainties are not included in the combined fit; they are determined
separately by a Monte Carlo (MC) simulation framework. The framework generates
many MC samples, each composed of a detailed simulation of all integral and
differential datasets. The systematic effects under investigation can be folded
into these MC sets individually or combined, with or without statistical
fluctuation of the count rates included. The underlying response function, on
which the MC generation is based, is taken from the best-fit values given in
Tab.~\ref{tab:fitResults}.
The considered systematic uncertainties cover known effects that arise from the
measurement conditions and effects specific to the integral or differential
analysis. All systematic effects are shown in Tab.~\ref{tab:systematics}. Their
implementation in MC generation is described in the following.
\footnotetext[3]{The uncertainty band of the Aseev et al. \cite{Ase00} result is significantly smaller than the uncertainty band of the Abdurashitov et al. \cite{Abd17} result. However, the position of the Gaussian kernel was fixed to \SI{12.6}{\electronvolt} in the analysis of Aseev et al.}\addtocounter{footnote}{1}
\begin{table*}[tbp]
\centering
\caption{A list of systematic uncertainties. The listed systematics are
investigated by MC simulations yielding the contribution to the total
parameter uncertainties and the parameter shifts, which are displayed in
Fig.~\ref{fig:toyMCParamDistr}. The area of the uncertainty band of the
energy-loss function caused by the individual systematic effects relative to that of all systematic effects is provided
in the second to last column. The resulting parameter shifts due to each
systematic effect are quantified in the last column as the area of the
absolute deviations of the nominal function and the function given by the
parameter means of the systematic variation. See text for more
details.
}
\setlength\extrarowheight{5pt}
\begin{tabularx}{\linewidth}{>{\raggedright\arraybackslash}X >{\raggedright\arraybackslash}X >{\hsize=3cm\raggedright\arraybackslash}Xrr}
\toprule
Systematic effect & Source of input & Input values & {$\frac{\int|\sigma_\mathrm{sys}|}{\int |\sigma_\mathrm{all}|}$} & {$\frac{\int |f_0 - f_\mathrm{sys}|}{\int |\sigma_\mathrm{all}|}$}\\
\midrule
Transmission-function model & error function fit to reference measurement & ${m_\mathrm{E}=\SI{-0.2\pm2.2}{\meV}}$\newline ${w_\mathrm{E}=\SI{90\pm1}{\meV}}$&\tablenum{0.94} & \tablenum{0.260}\\
Column-density drift & throughput sensor & \SI{<0.2}{\percent\per h}\newline modeling according to sensor data &\tablenum{0.015} & \tablenum{0.023}\\
Rate drift & measurement data & \SI{<0.15}{\percent\per h} & \tablenum{0.002} &\tablenum{0.004} \\
Background & bg measurement & ${m_\mathrm{Bg}=\SI{2.42\pm0.03}{\eV}}$
\newline
${w_\mathrm{Bg}=\SI{2.05\pm0.04}{\eV}}$ &\tablenum{0.032} & \tablenum{0.008}\\
Multiplicity cut & bg measurement and simulation & ${\bm\hat S=\SI{1.3\pm0.4}{}}$ &\tablenum{0.153} & \tablenum{0.327}\\
Pile-up correction & simulation & max. \SI{0.05}{\percent} & \tablenum{0.166} & \tablenum{0.271}\\
Binning & HV sweep & bin width of \SI{0.05}{\eV} & \tablenum{0.0} & \tablenum{0.110} \\
\midrule
All systematics& & & \tablenum{1.0} & \tablenum{0.395}\\
\bottomrule
\end{tabularx}
\label{tab:systematics}
\end{table*}
\begin{itemize}
\item \textit{Transmission-function model} In order to obtain an analytical
description of the integral transmission function $T(E_\mathrm{s})$ for the construction of
the integral response-func\-tion model, an error function is fit to a
reference measurement with an empty WGTS.
The error function models the electron's surplus energy threshold needed for transmission in the main spectrometer \linebreak $m_\mathrm{E}=\SI{-0.2\pm2.9}{\milli\electronvolt}$ and the energy spread \linebreak $w_\mathrm{E}=\SI{90\pm2}{{\milli\electronvolt}}$ due to the angular and energy distribution of the electron gun and the energy resolution of the main spectrometer.
To
investigate the uncertainty of this analytical model, MC samples of
the measurements at different column densities were generated with
$m_\mathrm{E}$ and $w_\mathrm{E}$ drawn from a multivariate normal
distribution according to the best-fit values above with the correlation between them taken into account. No uncertainty on the transmission-function
model was considered for the differential data, since the peak of the unscattered
electrons from the measurement data is directly used as the transmission
function.
\item \textit{Column-density drift} As the scattering
probability $P_n$ depends on the column density, drifts in the column density
during the measurements can cause a distortion of the response function.
During the measurements at \SI{41}{\percent} of the nominal column
density, drifts on the order of \SI{0.2}{\percent\per h} were visible. The
reduced stability was caused by CO and tritiated methane freezing inside the injection
capillaries. The CO and the methane were generated from radiochemical reactions with the stainless-steel surface during the \mbox{burn-in} period of the first tritium operation \cite{Aker2021}. The column
density is constantly monitored with a throughput sensor, which allows the drift to be modeled precisely in the simulations. To do so, a linear function
$\rho(t)$ is fit to the sensor data, yielding the slope of the drift and the corresponding parameter uncertainty. This linear function is used to model the rate drift due to the column density drift with the slope sampled according to its uncertainty.
\item \textit{Rate drift} The electron-production rate of the electron gun can drift due to changes in
the work function or a possible degradation of the photocathode (e.g. by
ion impacts). The number of unscattered electrons is analyzed for each
run after correcting for drifts in the light intensity and the column
density to monitor for intrinsic long-term rate drifts. Although the rate drift is very
small at $\mathcal{O}(\SI{0.1}{\percent\per\hour})$, the resulting drift
is used to modulate the response functions accordingly.
\item \textit{Background} A background component created from
secondary electrons by ion impact on the photocathode (cf.
Sec.~\ref{sec:background}) adds to the response functions. In the MC
simulations, a background component is added with the parameters of the initial energy distribution of the background electrons (see
Eq.~\eqref{eq:backgroundParameters}) sampled according to their
uncertainties.
\item \textit{Multiplicity cut} The event multiplicity \linebreak
${\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})>1}$ cut distorts
the background shape in the differential measurements (see
Fig.~\ref{fig:backgroundAfterCut}) depending on the initial electron multiplicity
$\bm\hat S$ (see Eq.~\eqref{eq:initialBackgroundMultiplicity}) of the ion
impact. In the MC
simulations, the value of $\bm\hat S$ is sampled according to its Gaussian
uncertainty determined from the measurement and the resulting background component is added to the
differential data. Distortions on the signal component from the photoelectrons
due to the multiplicity cut were investigated by dedicated detector
simulations and added to the differential response functions.
\item \textit{Pile-up correction} Detector pile-up is a dominant systematic effect for the integral measurements and is corrected with the pile-up reconstruction method described in Sec.~\ref{sec:pileup}.
The efficiency $\zeta(E_\mathrm{s})$ of this pile-up correction method is determined with detector simulations for each data point. The simulated response functions are multiplied by $\zeta(E_\mathrm{s})$ to include the remaining distortions after applying the pile-up correction. The efficiency $\zeta(E_\mathrm{s})$ is varied according to the Gaussian uncertainty determined in detector simulations.
\item \textit{Binning} The response functions are measured by continuously ramping the emission energy of the electron gun. For the data analysis, the continuous data stream is binned into \num{50}-\si{\meV} bins. This binning effect is included in the MC simulations.
\end{itemize}
A total of 10000 MC datasets are generated from the distributions of the systematic effects. Each MC dataset is fitted, and the best-fit values are used to construct the probability distribution for each of the nine parameters of interest.
From these distributions, the parameter uncertainties are determined from the standard deviations. In addition, systematic parameter shifts are determined from the difference between the median of the distribution and the initial input value from the underlying energy-loss function.
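The extraction of uncertainties and shifts from the MC parameter distributions can be sketched numerically as follows; the Gaussian stand-in distribution and all numbers are illustrative, not the actual fit output:

```python
import numpy as np

# Illustrative sketch: the array below stands in for the best-fit values
# of one energy-loss parameter (e.g. a Gaussian mean, in eV) obtained
# from 10000 MC pseudo-datasets; all numbers are hypothetical.
rng = np.random.default_rng(42)
true_value = 12.80                                   # underlying input value
fit_values = rng.normal(12.801, 0.002, size=10000)   # stand-in for the MC fits

uncertainty = float(np.std(fit_values))              # parameter uncertainty
shift = float(np.median(fit_values)) - true_value    # systematic parameter shift
```

The standard deviation quantifies the parameter uncertainty, while the offset of the median from the input value quantifies a possible systematic shift.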
The results of this evaluation are shown in Fig.~\ref{fig:toyMCParamDistr}.
The total uncertainty is dominated by the statistics in the data and the widths of the distributions agree well with the parameter uncertainties of the best-fit result provided in Tab.~\ref{tab:fitResults}.
To condense the information from the nine parameter uncertainties into a form that is easier to interpret, two metrics are defined. They are shown in the last two columns of Tab.~\ref{tab:systematics}.
The first metric, $\int|\sigma_\mathrm{sys}|/\int |\sigma_\mathrm{all}|$, is the area of the error band in the energy-loss function caused by the specific systematic ($\int|\sigma_\mathrm{sys}|$) with respect to the area of the error band caused by all systematic effects ($\int |\sigma_\mathrm{all}|$). The error bands originate from the combination of all nine parameter uncertainties. The areas of the error bands are estimates for the uncertainty of the scattering probability over the whole energy range.
The second metric,
$\int |f_0 - f_\mathrm{sys}|/\int |\sigma_\mathrm{all}|$, is the area of the difference between the nominal energy-loss function ($f_0$) and the energy-loss function ($f_\mathrm{sys}$) obtained from the simulations including the individual systematic uncertainties. This difference is normalized to $\int |\sigma_\mathrm{all}|$. A difference can be created by shifts of the nine parameter values caused by a given systematic effect. The impact of parameter shifts on the functional form of the energy loss is found to be smaller than the impact of the parameter uncertainties. The dominant contribution to the systematic uncertainty originates from the transmission-function model.
Since the total uncertainty of the energy-loss function is dominated by statistical uncertainties in the data and no significant parameter shifts are found, the considered systematic effects are negligible and not further considered in this study.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.73\figWidth\linewidth]{figures/Fig12.pdf}
\caption{A breakdown of systematic uncertainties for all nine individual energy-loss parameters as obtained from Monte Carlo simulations. Shown are the total uncertainty (stat. \& sys.), the statistical uncertainty (stat. only), as well as the total systematic uncertainty (all sys.).
The data points indicate the difference between the fit to data without any systematic effects and the median of the parameter distribution obtained from the fits to 10000 MC samples with systematic effects. The bars indicate the standard deviation of the distributions.
The measurement is strongly dominated by the statistical uncertainty. The investigated systematic effects contribute significantly neither to a broadening of the parameter uncertainties nor to a shift of their mean values.
\label{fig:toyMCParamDistr}}
\end{figure*}
\section{Analysis}
\label{sec:Analysis}
The energy-loss parameters in Eq.~\eqref{eq:katrinFitModel} are extracted with a
$\chi^2$-fit to multiple datasets in integral and differential mode at different column
densities. The systematic uncertainties in the energy-loss function (for example, those
due to the measurement conditions, pile-up and background effects) are determined
with Monte Carlo simulations (cf. Sec.~\ref{sec:MCpropagation}). Results are
given for molecular tritium and deuterium source gases below.
\subsection{Combined fit of the datasets}
\label{sec:combinedFitmodel}
The fit model is constructed with the energy-loss function, effects of multiple
scatterings in the source, energy smearing in the experimental setup, and the described background component.
The energy-loss function describes a single electron scattering. The probability
for $n$-fold scattering follows a Poisson distribution and is given by
\begin{equation}
P_n(\mu)=\frac{\mu^n}{n!}\exp\left(-\mu \right)\, ,
\end{equation}
with the expected mean number of scatterings $\mu$ given by
\begin{equation}
\mu=\rho d\cdot \sigma^{\mathrm{tot}}_{\mathrm{inel}}(qU_0).
\end{equation}
$\rho d$ is the column density during the individual measurements and $\sigma^\mathrm{tot}_\mathrm{inel}$ is the total inelastic
scattering cross section. To correct for the inelastic scattering cross section
at different kinetic energies, the parameter $\mu$ is scaled by the ratio
${\sigma^\mathrm{tot}_\mathrm{inel}(E_\mathrm{kin})/\sigma^\mathrm{tot}_\mathrm{inel}(qU_0)}$,
which gives $P_n(\mu, E_\mathrm{s})$. The effects of elastic scattering off
tritium can be neglected since the amount of energy transferred in these
scattering processes (${\overline{\Delta
E}_{\mathrm{el}}=\SI{2.3}{\milli\electronvolt}}$ \cite{Kleesiek2019}) is
negligible compared to the energy smearing caused, among others, by the width of
the kinetic energy distribution of the electrons produced with the electron gun or the finite energy resolution of the KATRIN main spectrometer. The
experimental response to electrons that have been scattered $n$ times in the
source gas is given by the $n$-fold convolution of the energy-loss function
$f(\Delta E)$ with itself, convolved once with the experimental transmission function
$T(E_\mathrm{s})$, leading to the following definition of the corresponding
scattering functions $\epsilon_n(E_\mathrm{s})$
\begin{align}
\label{eq:scattering_functions}
\epsilon_0(E_\mathrm{s}) =\; & T(E_\mathrm{s}) \; , \notag\\
\epsilon_1(E_\mathrm{s}) =\; & T(E_\mathrm{s}) \otimes f(\Delta E) \; , \notag\\
\epsilon_2(E_\mathrm{s}) =\; & T(E_\mathrm{s}) \otimes f(\Delta E) \otimes f(\Delta E) \; , \; ...\; ,
\end{align}
with $E_\mathrm{s}$ being the surplus energy of the electrons (see \linebreak Eq.~\eqref{eq:surplusEnergy}) and $\Delta E$ being the energy loss resulting from an inelastic
scattering. The shape of the ionization tail of the energy-loss function is
corrected for the shape distortion \linebreak ($<\,$\SI{e-2}{\percent}) caused by the change
of the kinetic energy.
The model $R(E_\mathrm{s},\mu)$, which is fit to data, is the sum of the
scattering functions $\epsilon_n(E_\mathrm{s})$ weighted by the corresponding
Poissonian probabilities
\begin{equation}
\label{eq:responseFunction}
R(E_\mathrm{s},\mu) = \sum_{n=0}^{4} P_n(\mu, E_\mathrm{s}) \cdot \epsilon_n(E_\mathrm{s}).
\end{equation}
Given that the surplus energies considered in the energy-loss analysis are
limited to ${E_\mathrm{s} \leq \SI{56}{\eV}}$, the highest scattering order that
needs to be considered is $n=4$.
In the integral measurement, the shape of the experimental transmission function
$T_{\rm int}(E_\mathrm{s})$ is obtained from the response function with an empty
source volume; Eq.~\eqref{eq:responseFunction} collapses to $R(E_\mathrm{s}, 0)=
T(E_\mathrm{s})$. $T(E_\mathrm{s})$ is modeled with an error
function. Similarly, the transmission function for the differential data $T_{\rm
dif}(E_\mathrm{s})$ could be obtained from a TOF measurement with an empty
source. However, it is simply given by the shape of the peak of unscattered
electrons observed at non-zero column densities; no additional measurement
is required in this case. Thus, we directly use the measurement data to
construct the fit model. Figure~\ref{fig:scattering_functions} shows the
scattering functions constructed for the differential ($\epsilon_{n}^{\rm
dif} (E_\mathrm{s})$) and the integral ($\epsilon_{n}^{\rm int}
(E_\mathrm{s})$) measurement modes for the first four scattering orders.
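The construction of the scattering functions and the response model from Eq.~\eqref{eq:scattering_functions} and Eq.~\eqref{eq:responseFunction} can be sketched numerically. Here a single Gaussian stands in for the full energy-loss model, and the transmission width, loss-peak position, and column density are illustrative only:

```python
import numpy as np
from math import erf, exp, factorial

# Minimal numerical sketch of eps_n and R(E_s, mu); all parameters are
# illustrative stand-ins, not the KATRIN values.
dE = 0.05                                    # 50 meV grid spacing (eV)
E = np.arange(-5.0, 56.0, dE)                # surplus-energy grid (eV)
dLoss = np.arange(0.0, 61.0, dE)             # energy-loss grid (eV)

# Error-function transmission (empty source) and stand-in energy-loss pdf:
T = np.array([0.5 * (1.0 + erf(e / (2.0**0.5 * 0.1))) for e in E])
f = np.exp(-0.5 * ((dLoss - 12.8) / 0.5) ** 2)
f /= f.sum() * dE                            # normalize to unit probability

mu = 0.75                                    # mean number of scatterings
eps = T.copy()                               # eps_0 = T
R = np.zeros_like(E)
for n in range(5):                           # truncate at n = 4 scatterings
    R += (mu**n / factorial(n)) * exp(-mu) * eps   # Poisson-weighted sum
    eps = np.convolve(eps, f)[: len(E)] * dE       # eps_{n+1} = eps_n conv f
```

Below the single-scattering threshold only $\epsilon_0$ contributes, so the model plateaus at $P_0(\mu)=e^{-\mu}$; at large surplus energies the truncated Poisson weights sum to nearly unity.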
In addition to the nine parameters in the energy-loss mo\-del in
Eq.~\eqref{eq:katrinFitModel} (amplitude, mean and width of the three Gaussians
contained in the model), several nuisance parameters are included in the
combined fit to differential and integral datasets taken at different column
densities. These nuisance parameters include normalization factors
$c^{\rm{dif(int)}}_{i}$, mean scattering probabilities $\mu^{{\rm dif(int)}}_{ i}$,
and background amplitudes \linebreak $b^{{\rm dif(int)}}_{i}$ for each differential (integral) dataset that is added
to the fit. In the fit, we minimize the following $\chi^2$ function for the
vector of free fit parameters $\vec{\mathcal{P}}$
\vskip 2cm
\begin{widetext}
\begin{align}
\label{eq:chisquare}
\chi^2\left(\vec{\mathcal{P}}\right) = & \sum_i^{N_{\rm dif}} \sum_j \left( \frac{c^{{\rm dif}}_{i} \, R^{\rm dif}(E_{\mathrm{s}, j},\mu^{{\rm dif}}_{ i}) + b^{{\rm dif}}_{i} \, B^{\rm dif}(E_{\mathrm{s}, j},\mu^{{\rm dif}}_{ i}) - y^{{\rm dif}}_{i,j}}{dy^{{\rm dif}}_{i,j}} \right)^2 \notag \\
+ & \sum_i^{N_{\rm int}} \sum_j \left( \frac{c^{{\rm int}}_{ i} \, R^{\rm int}(E_{\mathrm{s}, j},\mu^{{\rm int}}_{ i}) + b^{{\rm int}}_{ i}\, B^{\rm int}(E_{\mathrm{s}, j},\mu^{{\rm int}}_{ i}) - y^{{\rm int}}_{i,j}}{dy^{{\rm int}}_{i,j}} \right)^2 \notag \\
+ & \left(\frac{\int^{E_\mathrm{max}}_0 f(\Delta E) d(\Delta E) - 1}{\delta}\right)^2 \; ,
\end{align}
\end{widetext}
where $N_{\rm dif(int)}$ is the number of differential
(integral) datasets considered. $y^{{\rm dif(int)}}$ and $dy^{{\rm dif(int)}}$
represent the individual data points and their uncertainties, and the summation index $j$ runs over the data points of each dataset. The first summand
of Eq.~\eqref{eq:chisquare} describes the contribution of the differential datasets to
the $\chi^2$ value. The fit range for the differential datasets extends from
\SI{10}{\electronvolt} to \SI{56}{\electronvolt}, excluding the zero-scatter
peak and the adjacent background region, which do not contain information on the
energy-loss function. The second summand describes the contribution of integral
datasets with the fit range of \SI{-1}{\electronvolt} to \SI{56}{\electronvolt} \footnote{This extended fit range is required to determine the amplitude of the background component, which is only accessible below the transmission edge at $E_\mathrm{s}=\SI{0}{\eV}$.}.
The ion-induced background component (see Sec.~\ref{sec:background}) is
considered in both summands. For the integral measurements, the shape of the
background component {$B^{\rm int}(E_{\mathrm{s}, j},\mu^{{\rm int}}_{ i})$} is
described by an integral response function (see
Fig.~\ref{fig:BackgroundMeasurements}), but with a different initial energy
distribution than the signal electrons. For the differential measurement, $B^{\rm
dif}(E_{\mathrm{s}, j},\mu^{{\rm dif}}_{ i})$ is more complex and is obtained from
simulations described in Sec.~\ref{sec:background} and depicted in
Fig.~\ref{fig:backgroundAfterCut}.
The third summand is a pull term that ensures a proper normalization of the
fitted energy-loss function up to $E_\mathrm{max}=(E-E_\mathrm{i})/2$ with a
desired precision of $\delta = 10^{-4}$.
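The structure of the $\chi^2$ in Eq.~\eqref{eq:chisquare} for a single dataset, including the normalization pull term, can be illustrated with a minimal sketch (toy arrays standing in for the model prediction and data, not actual fit inputs):

```python
import numpy as np

def chi2(model, data, sigma, norm, delta=1e-4):
    """Toy chi-square with a pull term enforcing unit normalization
    of the fitted energy-loss function (single-dataset sketch)."""
    residuals = (model - data) / sigma
    pull = (norm - 1.0) / delta          # normalization pull term
    return float(np.sum(residuals**2) + pull**2)

# Illustrative numbers only:
data = np.array([10.0, 12.0, 9.5])
model = np.array([10.1, 11.8, 9.6])
sigma = np.array([0.5, 0.5, 0.5])
norm = 1.0 + 2e-5                        # integral of f over [0, E_max]

value = chi2(model, data, sigma, norm)
```

With $\delta=10^{-4}$, even a tiny normalization defect of $2\times10^{-5}$ contributes a pull of $0.2$ to the residual vector, which keeps the fitted energy-loss function normalized to the desired precision.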
With the definition of the $\chi^2$ given in Eq.~\eqref{eq:chisquare}, a
combined fit to four differential datasets and three integral datasets taken at different
column densities (see Tab.~\ref{tab:integralMeasurements}) was performed. The
results are displayed in Fig.~\ref{fig:combined_fit} for each of the differential and
integral datasets included in the fit.
\begin{figure}[tbp]
\centering
\includegraphics[width=\linewidth]{Fig9.pdf}
\caption{Differential ($\epsilon_{n}^{\mathrm{dif}} (E_\mathrm{s})$) and integral ($\epsilon_{n}^{\mathrm{int}} (E_\mathrm{s})$) scattering functions for up to four-fold scattering.
}
\label{fig:scattering_functions}
\end{figure}
\begin{figure*}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig10a.pdf}\newline \includegraphics[width=\figWidth\linewidth]{Fig10b.pdf}
\caption{Results of the combined fit to the differential and integral datasets at different
column densities. Each panel shows the data points (blue) and the best-fit
result (red) in the upper part and the corresponding residuals in the lower
part. A normalization is applied to each of the differential and integral response
functions. The differential data is normalized by the total number of counts within
the fit range and the integral data by the number of counts in the last bin.
}
\label{fig:combined_fit}
\end{figure*}
The corresponding best-fit parameters of the energy-loss function are given in
Tab.~\ref{tab:fitResults}. The fit has a reduced $\chi^2$ value of
\num{1.13 \pm 0.02}. A deviation from $\chi^2/N_\mathrm{dof}=1$ can arise from an imperfect
semi-empirical parametrization of the energy-loss function or an underestimation
of uncertainties. We do not observe significant structures in the fit residuals
in Fig.~\ref{fig:combined_fit} and thus inflate the uncertainties of the data
points by \linebreak $\sqrt{\chi^2/N_\mathrm{dof}}$ to achieve $\chi^2/N_\mathrm{dof}=1$ \cite{PDG2020}. The
statistical uncertainties from the fit are included in the third column of Tab.~\ref{tab:fitResults} with the
covariance matrix shown in Tab.~\ref{tab:covarianceMatrix} in the
Appendix. Compared to the empirical energy-loss models of Aseev et al. and
Abdurashitov et al. superimposed on our results in
Fig.~\ref{fig:elossFunctionComparison}, the KATRIN result provides a better
energy resolution and reduced uncertainties.
As a consistency check, we extrapolate the energy-loss function (fitted up to \SI{56}{\eV}) to ${E_\mathrm{max} = \SI{9.280}{\keV}}$ yielding a mean energy loss of
$\overline{\Delta E}(\mathrm{T}_2)=30.79(1)_\mathrm{fit}$\,eV,
which agrees well with the value of \SI{29.9\pm 1}{\eV} reported by Aseev et al. \cite{Ase00}.
\begin{table}[tbp]
\centering
\caption{Best-fit parameters for the energy-loss function in molecular
tritium as described in Eq.~\eqref{eq:katrinFitModel}. Parameter
correlations are provided as a covariance matrix in
Tab.~\ref{tab:covarianceMatrix} in the Appendix.}
\begin{tabularx}{\tabWidth\linewidth}{llc}
\toprule
Parameter & Unit & Value\\
\midrule
$m_{1}$& \si{\electronvolt} &\tablenum{11.9189\pm0.0083}\\
$m_{2}$& \si{\electronvolt} &\tablenum{12.8046\pm0.0021}\\
$m_{3}$& \si{\electronvolt} &\tablenum{14.9677\pm0.0041}\\
$\sigma_{1}$& \si{\electronvolt} &\tablenum{0.1836\pm0.0070}\\
$\sigma_{2}$& \si{\electronvolt} &\tablenum{0.4677\pm0.0022}\\
$\sigma_{3}$& \si{\electronvolt} &\tablenum{0.907\pm0.013}\\
$a_{1}$& \si{\per\electronvolt} &\tablenum{0.0328\pm0.0012}\\
$a_{2}$& \si{\per\electronvolt} &\tablenum{0.29570\pm0.00068}\\
$a_{3}$& \si{\per\electronvolt} &\tablenum{0.07575\pm0.00037}\\
\bottomrule
\end{tabularx}
\label{tab:fitResults}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig11.pdf}
\caption{A comparison of the energy-loss functions in D$_2$ and T$_2$ from
this work and previous measurements of Aseev et al. \cite{Ase00} and
Abdurashitov et al. \cite{Abd17}. The y-axis indicates the
probability density normalized in $\Delta E\in\left[0, E_\mathrm{max}\right]$ for energy losses $\Delta E$ due to inelastic
scattering. The Gaussian $1\sigma$ uncertainty bands are indicated by the shaded areas\protect\footnotemark[3]. Since the uncertainties of the KATRIN T$_2$ and D$_2$ results are too small to be visible in the top plot, they are additionally shown as absolute values in the bottom plot.}
\label{fig:elossFunctionComparison}
\end{figure}
\subsection{Integral measurements}
In the standard KATRIN measurement mode, only electrons with high enough surplus
energies to overcome the main spectrometer retarding potential reach the
detector. By \linebreak changing the kinetic energy of the electrons and keeping the
retarding potential at a fixed value, the integral response function was measured.
A set of integral measurements at three different non-zero column densities as
well as one reference measurement at zero column density (see
Tab.~\ref{tab:integralMeasurements}) were performed. The pulse frequency of the
laser was set to \SI{100}{\kilo\hertz}, which results in an estimated mean value
of 0.05 generated electrons per light pulse.
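For a mean of 0.05 generated electrons per light pulse, the expected pile-up fraction can be estimated from Poisson statistics; this is a back-of-the-envelope estimate, not the full detector simulation described in Sec.~\ref{sec:pileup}:

```python
from math import exp

# Poisson estimate of the pile-up fraction: the probability that a pulse
# which produced at least one electron in fact produced two or more.
mu = 0.05                            # mean electrons per laser pulse
p_ge1 = 1.0 - exp(-mu)               # at least one electron
p_ge2 = p_ge1 - mu * exp(-mu)        # two or more electrons
pileup_fraction = p_ge2 / p_ge1      # conditional pile-up probability
```

This simple estimate yields a pile-up fraction at the few-percent level, motivating the dedicated pile-up reconstruction.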
\begin{table}[tbp]
\centering
\caption{A summary of the number of scans $\Sigma$ performed at different
column densities relative to the nominal value $\rho_0d$. The
corresponding scattering probability $\mu$ is also shown. The average
number of counts $\langle N_0\rangle$ per \num{50}-\si{\milli\volt} bin for the unscattered
electrons at $E_\mathrm{s}\in\left[\SI{2}{\eV},\SI{10}{\eV}\right]$ is provided for
the integral dataset, as well as the sum of all unscattered electrons
$N_0$ at $E_\mathrm{s}\in\left[\SI{-1}{\eV},\SI{1}{\eV}\right]$ for the differential
dataset.}
\begin{tabular}{c c c c}
\multicolumn{4}{c}{Integral}\\
\toprule
Column density / $\rho_0d$ &$\mu$&$\Sigma$ & $\langle N_0\rangle$\\
\midrule
\SI{0}{\percent} & 0.00 & 28 & 204806\\
\SI{14}{\percent} & 0.25 &14 & 88002\\
\SI{41}{\percent} & 0.75 &26 & 112655\\
\SI{86}{\percent} & 1.56 & 31 & 62191\\
\bottomrule
& & & \\
\multicolumn{4}{c}{Differential}\\
\toprule
Column density / $\rho_0d$ &$\mu$&$\Sigma$ & $N_0$\\
\midrule
\SI{15}{\percent} & 0.27 &33 & 565316 \\
\SI{22}{\percent} & 0.41 &23 & 380633\\
\SI{39}{\percent} & 0.72 &23 & 267829\\
\SI{84}{\percent} &1.52 & 28 & 154460\\
\bottomrule
\end{tabular}
\label{tab:integralMeasurements}
\end{table}
The individual scans were corrected both for rate fluctuations and for
detector pile-up (see Sec.~\ref{sec:pileup}). The former are caused by
fluctuations of the laser intensity, which is stable to
\SI{1.2}{\percent\per\hour}. The light intensity is continuously monitored by a
photodiode connected to a fiber splitter, which is installed just before the light is coupled into the vacuum system of the electron gun (see Fig.~\ref{fig:egun_drawing}). The light intensity
correction is done by dividing the measured FPD rate by the relative deviation
of the light intensity from its mean. The precision of the measured light intensity with this monitoring system is \SI{0.4}{\percent} and is propagated into the uncertainties of the correction.
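The intensity correction amounts to dividing each measured rate by the relative deviation of the monitored intensity from its mean, as in this minimal sketch with illustrative numbers:

```python
import numpy as np

# Sketch of the light-intensity correction (all values illustrative):
# the measured FPD rate is divided by the relative deviation of the
# monitored laser intensity from its mean.
intensity = np.array([0.99, 1.00, 1.02, 1.01])   # photodiode readings (a.u.)
rate = np.array([990.0, 1001.0, 1018.0, 1011.0]) # measured FPD rates (cps)

relative = intensity / intensity.mean()          # relative intensity deviation
corrected = rate / relative                      # intensity-corrected rates
```

After the correction, rate variations that merely track the laser intensity are removed, reducing the scatter between scans.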
Data from scans at the same column density are accumulated (Fig. ~\ref{fig:dataOnly_integral}).
The resulting integral response functions are superpositions of $n$-fold scattering functions, as indicated in the figure by arrows above the measurement data.
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{figures/Fig3.pdf}
\caption{The measured response functions in integral mode at different fractions of the nominal
column density $\rho_0 d$. The response functions are normalized by the
electron rate of the reference measurement with an empty source (blue
curve). The arrows indicate the energy region where $n$-fold scattering
takes place.}
\label{fig:dataOnly_integral}
\end{figure}
\section{Introduction}
The KArlsruhe TRItium Neutrino (KATRIN) experiment\linebreak aims to determine the
effective electron anti-neutrino mass in a model-independent way by examining
the kinematics of tritium $\upbeta$-decays. The observable $m^2_\nu = \sum_{i}{ \left | U_{\mathrm{e}i}\right |^2 m^2_{i}}$ is the squared incoherent sum of neutrino-mass eigenstates $m_{i}$\linebreak weighted by their contribution $U_{\mathrm{e}i}$ to the electron \linebreak anti-neutrino.
The target sensitivity for the neutrino-mass measurement in KATRIN is
0.2~eV/c$^2$ (at 90\% CL) with three live-years of data~\cite{KAT04}. The
$5\sigma$ discovery potential is 0.35~eV/c$^2$. This requires a precise control
of all systematic effects.
The experiment is designed for a high-precision spectral shape measurement of
\ttwo{} $\upbeta$-decay electrons around the endpoint of 18.6~keV. An overview
of the KATRIN experiment is shown in Fig.~\ref{fig:katrin}. The setup \cite{Aker2021design} includes a
high-activity Windowless Gaseous Tritium Source (WGTS) and a high-resolution
electrostatic retarding spectrometer of the MAC-E (Magnetic Adiabatic Collimation with an Electrostatic filter) type \cite{Beamson1980,LOBASHEV1985,Pic92}.
Molecular tritium gas at 30~K is continuously injected through the capillaries
at the center of the WGTS and pumped out at both ends. This allows a nominal
steady-state column density (i.e. the nominal source density $\rho_0(z)$ integrated along the length $d$ of the source cryostat) of $\rho_0d=\SI{5e17}{\per\centi\meter\squared}$, resulting in an activity of \SI{1.7e11}{\becquerel}
with a stability better than \SI{0.1}{\percent\per\hour} \cite{Aker2021design}.
In order to prevent tritium from entering the spectrometer section, where it would
induce background in the measurement, the transport section reduces the tritium
flow by at least 14 orders of magnitude \cite{Friedel2019}. This is
achieved with a differential pumping section \cite{Aker2021design,Marsteller2021}, which comprises turbo-molecular
pumps followed by a cryogenic pumping section that makes use of an argon frost
layer to adsorb tritium cryogenically \cite{Aker2021design,Roettele2017}. The spectrometer section consists of the
pre- and the main spectrometer. The pre-spectrometer rejects\linebreak low-energy
electrons, which reduces the electron flux into the main spectrometer. The final
precision discrimination of the electron energy is performed in the analyzing
plane at the center of the main spectrometer with a resolution of
\SI{2.77}{\eV} \cite{Aker2021design} for \num{18.6}-\si{\kilo\electronvolt} electrons with isotropic angular distribution.
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.95\textwidth]{Fig1.pdf}
\caption{Overview of the KATRIN experiment. The main components are from left to right: The rear section containing calibration and monitoring systems as well as the electron gun (see Fig.~\ref{fig:egun_drawing}) used in this work; the \num{10}-\si{\m}-long windowless gaseous tritium source (WGTS) with differential pumps on both sides; the transport section consisting of a differential (DPS) and cryogenic pumping section (CPS); the spectrometer and detector section with the pre- and main spectrometer, and the silicon detector. The overall length of the experimental setup is more than \SI{70}{\meter}.
\label{fig:katrin}}
\end{figure*}
The pre- and main spectrometer are MAC-E type high-pass filters,
which can only be traversed by electrons with longitudinal kinetic energy
higher than the preset potential. The isotropically emitted $\upbeta$-electrons
are adiabatically collimated to a longitudinal motion inside the
spectrometer. This is achieved by a gradual decrease of the magnetic field
strength $B$ from the entrance of the spectrometer towards its center, conserving
the magnitude of the $\upbeta$-electron's magnetic moment in the cyclotron motion
$\mu =E_\perp/B$ \cite{Beamson1980}, with $E_\perp$ being the transverse component of the electron's kinetic energy with respect to the magnetic field lines. Varying the electric potential of the spectrometer allows the
energy region around the endpoint of the tritium $\upbeta$-decay to be scanned
as an integral spectrum, i.e. the rate of electrons with kinetic energy above the
set filter potential \cite{Aker2021}.
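The filter width of a MAC-E spectrometer follows directly from the conservation of $\mu=E_\perp/B$: the transverse energy remaining at the analyzing plane sets the sharpness of the high-pass filter. A short numerical sketch with field values of the order of the KATRIN design (illustrative, not the exact operating values) reproduces the quoted eV-scale resolution:

```python
# Sketch of the MAC-E filter width from conservation of mu = E_perp / B.
# Field values are illustrative, of the order of the KATRIN design.
E_kin = 18600.0     # electron kinetic energy (eV)
B_max = 4.2         # maximum magnetic field (T), illustrative
B_ana = 6.3e-4      # analyzing-plane field (T), illustrative

# An electron starting with all of its kinetic energy transverse to the
# field at B_max retains this transverse energy at the analyzing plane:
filter_width = E_kin * B_ana / B_max   # eV
```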
Electrons passing the main spectrometer are re-ac\-cel\-er\-a\-ted by the main spectrometer potential and by a post-ac\-cel\-er\-a\-tion voltage of \SI{10}{\kV} at the focal-plane detector (FPD) system, and are then
counted by a 148-pixel silicon PIN detector~\cite{Ams15} shown at the far right
in Fig.~\ref{fig:katrin}. An \num{18}-\si{\keV}-wide selection window (\SI{14}{\keV} to \SI{32}{\keV}) around
the \num{28}-\si{\keV} electron energy peak is chosen to minimize systematic effects in
counting efficiencies \cite{Aker2021}.
The observable $m_\nu^2$ is determined by fitting the recorded integral spectrum
with a model that comprises four parameters: the normalization, the
endpoint energy, the background rate, and $m_\nu^2$ \cite{Kleesiek2019}. The
model is constructed from the shape of the $\upbeta$-decay spectrum and
the response of the experimental setup. The main components of the response are
the transmission function of the main spectrometer and the energy loss of
electrons from elastic and inelastic scatterings in the \ttwo{} source. The
latter is the focus of this work.
At the nominal source density, approximately \SI{60}{\percent} of all
electrons scatter inelastically and lose energies between
$\approx\,$\SI{11}{\electronvolt} and \SI{9.3}{\kilo\electronvolt}. The
upper limit of this energy transfer arises because the primary and secondary electrons from the ionization process
are indistinguishable in the measurement, and it is always the more energetic electron that is detected.
Even minuscule energy losses can downgrade electrons with energies close to the
endpoint to lower energies within the spectrum fit window. Therefore,
the energy-loss function needs to be known with high precision in order to meet the systematic uncertainty budget of\linebreak $\sigma(m_\nu^2)<\SI{7.5e-3}{\eV\squared}$ \cite{KAT04} reserved for this individual systematic.
Theoretical differential cross sections for \num{18.6}-\si{\keV} electrons
scattering off molecular tritium are not available at the required precision for
the $m_\nu^2$ measurements. While data from energy-loss measurements for
gaseous tritium or deuterium from the former neutrino mass
experiments in Troitsk and Mainz~\cite{Ase00,Abd17} exist, the precision is not
sufficient to achieve the KATRIN design sensitivities. Other more precise
experimental data on the energy losses of electrons with energies near the
tritium $\upbeta$-decay endpoint energy are only available for molecular
hydrogen as the target gas~\cite{Abd17,Gei64,Uls72}. In this paper we report the
results of the in-situ measurements of the energy-loss function in the KATRIN
experiment.
We used a monoenergetic and angular-selective electron gun, of the type described
in \cite{Beh16}, mounted in the rear section (far left in Fig.~\ref{fig:katrin}),
which allowed us to probe the response of the entire KATRIN setup, including the energy loss in tritium gas.
We begin this paper in Sec.~\ref{sec:eloss} with a brief introduction to existing energy-loss function models and continue with the description of the novel semi-empirical parametrization developed in this work. In Sec.~\ref{sec:measurements}, the measurement approaches of the integral as well as the novel differential time-of-flight measurements are explained, including a description of the working principle of the electron gun used for these measurements. The analysis of the tritium data using a combined fit is presented in Sec.~\ref{sec:Analysis} including a detailed discussion of the systematic uncertainties of the measurements.
Additional measurement results for the energy-loss function in deuterium gas
are provided in Sec.~\ref{sec:D2}.
We conclude this paper in Sec.~\ref{sec:summary} by summarizing and discussing our results in the context of the neutrino-mass-sensitivity goal of KATRIN.
\section{Measurements}
\label{sec:measurements}
The energy-loss function $f(\Delta E)$ (Eq.~\eqref{eq:katrinFitModel}) describes the electron energy losses $\Delta E$ from scattering inside the source, which distort the shape of the response function.
By
measuring the response function, it is possible to determine $f(\Delta E)$. For
this, a quasi-monoenergetic and angular-selective photoelectron source (``electron gun"), located at the end of the rear section (see
Fig.~\ref{fig:katrin}), is used.
Guiding the quasi-mo\-no\-e\-ner\-ge\-tic beam --- at a
pitch angle of approx. $\theta=\SI{0}{\degree}$ between the magnetic field lines
and the electrons' momentum vector --- through the WGTS allows the investigation
of the energy loss from scatterings with the source gas molecules stabilized at
\SI{30}{\kelvin}. Measuring the electron rate at the focal-plane detector as a
function of the electron surplus energy $E_\mathrm{s}$ at the analyzing plane (see
Eq.~\eqref{eq:surplusEnergy}) yields the response function of the setup.
The working principle of the electron gun and a general description of the measurement strategy are provided in the following. This is followed by a discussion of the measurement data taken in the two different measurement modes (integral and differential) as well as two important systematic effects in the measurements (pile-up and background).
\paragraph{Electron gun}
A schematic drawing of the electron gun is provided in
Fig.~\ref{fig:egun_drawing}. The electrons are generated by photoelectric
emission when ultraviolet light is shone through an approximately
\num{30}-\si{\nm}-thick gold photocathode, which is mounted between two electrically charged parallel plates. The photoelectrons are accelerated by a
potential difference of \SI{4}{\kV} between the plates separated by
\SI{10}{\milli\meter}; the electrons exit the setup through a hole in the front
plate (see Fig.~\ref{fig:egun_drawing}). This first, non-adiabatic acceleration
collimates the beam of photoelectrons, which are initially emitted with a cosine angular distribution \cite{Pei_2002}. By tilting the plates by the angle $\alpha$, well-defined pitch
angles $\theta$ can be obtained. A pitch angle of $\theta=\SI{0}{\degree}$, which is reached by aligning the plates with the magnetic field lines, is used in the measurements. The generated electrons are further accelerated by a
cascade of cylinder electrodes to the desired kinetic energy. The working
principle is explained in more detail in \cite{Beh16}. The energy profile of the
generated beam depends on the work function $\Phi$ of the photocathode and the
wavelength $\lambda$ of the light source.
For the measurements in this work, a \SI{266}{\nano \meter} pulsed UV
laser\footnote{InnoLas Mosquitoo Nd:YVO$_4$ \SI{1064}{\nm} (frequency
quadrupled).} with pulse widths of less than \SI{18}{\nano \second} (FWHM) is
used. The Q-switch of the laser can be externally triggered, which allows the
synchronization of the creation time of the electron pulses with the detector
system. This allows the time-of-flight (TOF) of the
signal electrons to be measured. The TOF is used for a differential analysis of
the data (see Sec.~\ref{sec:differential}).
The photon energy of the monochromatic laser light\linebreak (${h\nu=\SI{4.66}{\eV}}$) is
only \SI{0.22}{\eV} above the work function $\Phi=\SI{4.44}{\electronvolt}$
\cite{PhDSack2020} of the gold photocathode, which results in a measured
energy spread of $\sigma_\mathrm{E}<\SI{90}{\meV}$.
To generate electrons with well defined kinetic energies close to the tritium
endpoint, voltages down to \SI{-21}{\kilo\volt} can be applied to the
photocathode and cylinder electrodes. The photocathode potential
$U_{\mathrm{ph}}$ is varied to produce electrons with different surplus energies
$E_\mathrm{s}$ with respect to the negative main spectrometer retarding
potential $U_0$:
\begin{align}
\label{eq:surplusEnergy}
E_\mathrm{s} & = q \cdot U_{\mathrm{s}}+h \nu-\Phi_i = q \cdot \left (U_{\mathrm{ph}} - U_0 \right )+h \nu-\Phi_i\,,
\end{align}
taking into account the additional initial energy of the electrons given by the difference of the photon energy $h\nu$ and the work function $\Phi_i$ of the electrons populating different energy levels in the solid (neglecting further solid-state effects).
The total initial kinetic energy of the electrons is given as $E_{\mathrm{kin}}=q\cdot U_{\mathrm{ph}}$.
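Equation~\eqref{eq:surplusEnergy} can be evaluated directly. The following sketch uses the photon energy and work function quoted in the text together with illustrative potentials; with the electron charge written as $q=-1$ in units of $e$, the negative potentials yield positive energies in eV:

```python
# Direct evaluation of the surplus energy E_s = q(U_ph - U_0) + h*nu - Phi.
# Photon energy and work function are the values quoted in the text;
# the potentials are illustrative.
q = -1.0            # electron charge in units of e
U_ph = -18575.1     # photocathode potential (V), illustrative
U_0 = -18575.0      # main-spectrometer retarding potential (V)
h_nu = 4.66         # photon energy (eV)
phi = 4.44          # photocathode work function (eV)

E_s = q * (U_ph - U_0) + h_nu - phi   # surplus energy (eV)
```

Making the photocathode potential \SI{0.1}{\volt} more negative than the retarding potential thus yields a surplus energy of \SI{0.32}{\eV} for these example values.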
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{figures/Fig2.pdf}
\caption{A simplified schematic drawing of the electron gun, including the acceleration electrodes as well as the optical setup used to generate the photoelectrons.}
\label{fig:egun_drawing}
\end{figure}
\paragraph{Measurement approach}
To resolve the fine structures of the response function, small voltage steps on the
order of \SI{0.1}{\eV} are required over the analysis interval of
$E_\mathrm{s}=$\SIrange{-5}{60}{\electronvolt}. Multiple fast voltage sweeps in
alternating directions are preferred to compensate for systematic uncertainties
associated with the scan direction and long-term instabilities of the setup.
A single high-voltage setpoint adjustment of the main spectrometer requires more
than \SI{10}{\s} to stabilize, which does not allow repeated measurements within
a reasonable time. For faster measurements, the surplus energy of the electron
beam is modified by performing voltage sweeps of $U_{\mathrm{ph}}$ while the
filter potential of the main spectrometer is kept fixed at
$U_0=-\SI{18575}{\volt}$. The electron energy is chosen to be slightly above the
tritium endpoint energy to avoid $\upbeta$-electron backgrounds but close to the
region of interest to minimize effects from the energy dependence of the
scattering cross section.
Changing the kinetic energy of the electrons results in a small change of the total
inelastic scattering cross section $\sigma^\mathrm{tot}_\mathrm{inel}$ of up to \SI{0.27}{\percent} over
the scanned energy range. This is considered later in the data analysis.
Each sweep (called ``scan'' in the following) took \SI{30}{\minute} and was
repeated in alternating scanning directions for approximately \SI{12}{\hour}.
The obtained rates as a function of the continuous voltage ramp were binned to
obtain discrete energy values for the analysis. The data taking was performed in
integral and differential modes, which are described in more detail in the
following.
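The binning of the continuous voltage ramp can be sketched as follows. The bin width mirrors the \SI{0.1}{\eV} step size quoted above; the sample values and the function name are illustrative:

```python
from collections import defaultdict

def bin_sweep(samples, bin_width=0.1):
    """Bin (surplus_energy_eV, counts) samples from a continuous voltage
    ramp into fixed-width energy bins; returns {bin_center: total_counts}.
    A 0.1 eV bin width mirrors the step size required in the text."""
    bins = defaultdict(int)
    for energy, counts in samples:
        center = (int(energy // bin_width) + 0.5) * bin_width
        bins[center] += counts
    return dict(bins)

ramp = [(0.02, 5), (0.07, 7), (0.13, 4)]  # illustrative sweep samples
print(bin_sweep(ramp))  # the first two samples share the 0.05 eV bin
```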
\section*{Acknowledgments}
We acknowledge the support of Helmholtz Association, Ministry for Education and Research BMBF (5A17PDA, 05A17PM3, 05A17PX3, 05A17VK2, and 05A17WO3), Helmholtz Alliance for Astroparticle Physics (HAP), Helmholtz Young Investigator Group (VH-NG-1055), and Deutsche Forschungsgemeinschaft DFG (Research Training Groups GRK 1694 and GRK 2149, and Graduate School GSC 1085 - KSETA) in Germany; Ministry of Education, Youth and Sport (CANAM-LM2015056, LTT19005) in the Czech Republic; Ministry of Science and Higher Education of the Russian Federation under contract 075-15-2020-778; and the United States Department of Energy through grants DE-FG02-97ER41020, DE-FG02-94ER40818, DE-SC0004036, DE-FG02-97ER41033, DE-FG02-97ER41041, DE-SC0011091 and DE-SC0019304, Federal Prime Agreement DE-AC02-05CH11231, and the National Energy Research Scientific Computing Center.
\subsection{Pile-up correction}
\label{sec:pileup}
The focal-plane detector is optimized to count single-e\-lec\-tron events with an
energy resolution of $\Delta
E_\mathrm{FPD}\approx\SI{2}{\kilo\electronvolt}$. Due to the high electron rate
of the electron gun ($\approx\,$\SI{e4}{cps}) and the use of a single detector
pixel, pile-up effects become relevant. Furthermore, the pulsed electron beam
with $<\,$\SI{18}{\ns} FWHM windows creates a non-Poisson arrival-time distribution compared
to a continuous-wave light source.
The electron flight time depends on the retarding potential and the energy loss
from scatterings inside the WGTS. The time difference of the electrons from the same pulse arriving at
the detector is thus modified as a function of the surplus energy. For
arrival-time differences shorter than the shaping time ($L=\SI{1.6}{\us}$) of the
trapezoidal filter used for pulse shaping of the detector signal, the electrons
are counted as one single event with correspondingly higher \linebreak event energy
$E_\mathrm{FPD}$ (Fig.~\ref{fig:energyHistogram}). The number of electrons within the same detector event is denoted as event multiplicity $\mathcal{M}$. As the peaks for
different multiplicity $\mathcal{M}$ events overlap in the
$E_\mathrm{FPD}$ histogram, a simple estimation of $\mathcal{M}$ based on
$E_\mathrm{FPD}$ is not possible. Processing the event signal with two
additional stages of trapezoidal filters allows more information on the signal
shape, such as the bipolar width $\mathcal{W}$ (i.e. the time difference of two consecutive zero crossings of the third trapezoidal-filter output), to be
obtained \cite{Aker2021design}. Electrons with arrival-time differences close to the shaping time distort the trapezoidal output of the first filter stage and thus change the determined bipolar width as a function of the arrival-time difference.
With the additional information on the pulse shape, these ambiguities
can be resolved and $\mathcal{M}$ can be estimated. The multiplicity estimate
$\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})$ is obtained from Monte Carlo
simulations of the detector response for random combinations of $\mathcal{M}$
electrons arriving within the shaping time $L$. The estimate
$\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})$ is not necessarily identical to
$\mathcal{M}$ as there are still remaining ambiguities, which are considered in
the uncertainty propagation (see Sec.~\ref{sec:MCpropagation}). In the case of
the integral measurement data, the correction is made by weighting each event
with the estimator value. For the differential measurements, no pile-up
correction is required, but a $\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})>1$ cut is
applied for background suppression (see Sec.~\ref{sec:background}).
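The pile-up mechanism described above can be illustrated with a simplified sketch that merges electrons arriving within one shaping time into a single event of summed energy and multiplicity $\mathcal{M}$; this deliberately omits the trapezoidal-filter pulse shapes and the bipolar width $\mathcal{W}$ used in the real reconstruction:

```python
def pile_up_events(arrival_times_energies, shaping_time=1.6e-6):
    """Group electrons whose arrival-time differences are below the shaping
    time L into single detector events with summed energy and increased
    multiplicity M. This is a sketch of the pile-up mechanism only."""
    events = []
    for t, e in sorted(arrival_times_energies):
        if events and t - events[-1]["t_last"] < shaping_time:
            events[-1]["E"] += e   # merged: higher reconstructed event energy
            events[-1]["M"] += 1   # multiplicity increases
            events[-1]["t_last"] = t
        else:
            events.append({"t_last": t, "E": e, "M": 1})
    return events

# Two 18.6 keV electrons from the same laser pulse, 0.5 us apart, pile up,
# while a third electron arriving 10 us later is counted separately:
evts = pile_up_events([(0.0, 18.6), (0.5e-6, 18.6), (10e-6, 18.6)])
print([(ev["M"], ev["E"]) for ev in evts])
```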
A comparison of the integral response function before and after pile-up
correction is provided in Fig.~\ref{fig:pileupcorrectedvsuncorrectedat50cd} to
demonstrate its dependence on the surplus energy at two different values of $\rho d$.
The dependence of $\bm\hat{\mathcal{M}}(E_\mathrm{FPD}, \mathcal{W})$ on the kinetic
energy of the electrons over the measurement range of \SI{60}{\eV} is neglected
in the correction and an average estimate is used instead. The uncertainty due
to the correction method was evaluated with a full simulation of the detector
response for each of the response functions measured in integral mode. This
yields a correction stability at \num{5e-4}, which is considered as a systematic
uncertainty for the \linebreak energy-loss function determination.
\begin{figure}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig5.pdf}
\caption{Reconstructed event energy in the focal-plane Si-detector for all events accumulated during the integral response function measurements at \SI{86}{\percent} nominal column density $\rho_0d$. The decomposition with the dedicated pile-up correction method shows that the different multiplicity regions overlap. This effect does not allow for a simple pile-up correction based on event energy alone. }
\label{fig:energyHistogram}
\end{figure}
\begin{figure*}[tbp]
\centering
\includegraphics[width=\figWidth\linewidth]{Fig6.pdf}
\caption{Left: Selection of measured response functions in integral mode
before (grey line) and after (colored line) pile-up correction. The
correction removes spectral shape distortions up to ten times larger than
the statistical uncertainties. Right: Differential response function
before and after applying the $\bm\hat{\mathcal{M}}>1$
cut. The cut reduces the background component by up to a factor of two
without significantly influencing the shape of the signal component.
Bottom: The difference between the uncorrected/uncut (uc) data and
the corrected/cut data normalized to the data point uncertainties $dy$.
}
\label{fig:pileupcorrectedvsuncorrectedat50cd}
\end{figure*}
\section{Summary and Outlook}
\label{sec:summary}
A series of precision measurements of the energy-loss function of \num{18.6}-\si{\keV} electrons scattering off molecular tritium and deuterium gas was performed.
The measurements were carried out in the KATRIN setup by using a pulsed beam of monoenergetic and angularly selected electrons from a photoelectron source. The measurements were made in integral and differential time-of-flight measurement modes.
A new semi-empirical parametrization of the energy-loss function was developed, which describes the set of electronic states in combination with molecular excitations, dissociation, and ionization better than previous models.
This new model is described by nine parameters, which were determined by performing a combined $\chi^2$-fit to both integral and differential measurement data. The measurements and analyses performed in this work achieved a significant improvement over existing empirical energy-loss models in terms of energy resolution and uncertainties.
A detailed investigation of the systematic effects shows that the parameter uncertainties are dominated by statistical uncertainties. This allows further improvement in precision in future measurements.
The obtained electron energy-loss function in tritium was used in the analysis of the first KATRIN dataset, which led to an improved upper limit of the effective neutrino mass ${m_\nu<\SI{1.1}{\eV}}$ (90\% C.L.) \cite{Aker2019}.
For this dataset, recorded at reduced source strength, the uncertainty of the energy-loss model contributes to the systematic uncertainty of the observable $m_\nu^2$ with $\sigma(m_\nu^2)<\SI{e-2}{\eV^2}$ and is inconsequential compared to other effects \cite{Aker2021}.
The achieved precision of the energy-loss function is close to the target value of ${\sigma(m_\nu^2)<\SI{7.5e-3}{\eV^2}}$ \cite{KAT04} that is necessary for reaching the final KATRIN sensitivity of ${m_\nu=\SI{0.2}{\eV}}$ (90\% C.L.).
\section{Introduction}
This paper aims to teach a machine to discover the laws of physics from video streams. In the apocryphal story, Isaac Newton's observation of a falling apple was a catalyst for deriving his physical laws. In like fashion, our machine aims to observe the dynamics of a moving object as a means to infer physical laws. We refer to this as \emph{discovering physics from video}, as shown in Figure~\ref{fig:general_idea}.
The discovery problem is very difficult because a machine must derive not only the governing equations of a physical model but also governing parameters like velocity. We emphasize that a discovery algorithm like ours does not know \emph{a priori} what ``velocity'' means---it must learn the existence of velocity. In order to handle the underdetermined nature of recovering both governing equations and governing parameters, we make a few assumptions. Section~\ref{sec:discovery_definition} expands on our assumptions, which we believe are the most relaxed to date.
Our work is powered by methods from representation learning and evolutionary algorithms. The discovery of underlying governing parameters is achieved using a modified $\beta$-variational autoencoder ($\beta$-VAE) to obtain latent representations. These are then used in an equation discovery step, driven by genetic programming approaches. Our approach is able to learn equations that symbolically match ground truth, and have governing parameters that correspond to human interpretable constructs (e.g. velocity, angular frequency).
\paragraph{Contributions:} Our key contribution is a first attempt at an algorithm that is able to re-discover both governing equations and governing parameters from video. Previous work can either discover governing equations or the parameters, but not both. We test the algorithm on both synthetic data (with and without noise), as well as real data. Our performance analysis shows that the proposed method results in symbolically accurate expressions, and interpretable governing parameter discovery for a variety of simple, yet fundamental physics tasks. The method is also found to be robust to large amounts of positional noise and effective under a range of input data sizes. To lay a foundation for future work, we release the Visual Physics dataset, consisting of both real and synthetic videos of dynamic physical phenomena.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/fig1_high_level_v3.pdf}
\caption{
\textbf{Discovering physical equations from visual cues without human intervention.} Here, we show how an input video of projectile motion can be processed by our method to recover both the governing equation of motion and the two governing parameters: the horizontal and vertical initial velocities.}
\label{fig:general_idea}
\end{figure}
\section{Related Work}
\label{sec:related}
Although our goals are different, we are inspired by work in physics-based computer vision, physical representation learning, and symbolic equation derivation.
\paragraph{Physics-based computer vision} encompasses the use of known physical models to either directly solve or inspire computer vision techniques. Techniques like shape from shading~\cite{ikeuchi1981numerical,horn1989shape} and photometric stereo~\cite{woodham1980photometric} use known models of optical physics to estimate shape. Along this theme, recent work in the area of computational light transport has advanced the field to see around corners~\cite{ramesh20085d,velten2012recovering,o2018confocal,xin2019theory} or infer material properties~\cite{tanaka2018material}.\footnote{For an overview of the physics of light transport, the reader is directed to an ACM SIGGRAPH course by O'Toole and Wetzstein~\cite{o2014computational}.} Known physical models can also be used to inspire the design of vision algorithms. Examples include deformable parts models~\cite{felzenszwalb2008discriminatively,felzenszwalb2009object} or snakes~\cite{kass1988snakes}, which use the physics of springs to design computer vision cost functions. The recent popularity of data-driven techniques has spawned a family of work that combines a known physical model with pattern recognition. For example,~\cite{gregor2010learning, diamond2017unrolled} unfold the existing physical models as the backbone in the network architecture; \cite{chen2018reblur2deblur, stewart2017label} use physical information to supervise the training process; \cite{fei2019geo} relies on gravity cues to improve depth estimation; and \cite{davis2015visual, jin2017deep, kang2017deep, ba2019physics, li2019restoration, Halder_2019_ICCV, zeng2019tossingbot} introduce physics-based learning to set the new state-of-the-art in a range of vision problem domains. These approaches are powered by knowledge of a physical model, whereas our work has the complementary aim of learning the underlying model.
\paragraph{Learning physical parameters from visual inputs} has been a topic of interest in recent years. For instance,~\cite{JiajunWu2015Gallileo, Brubaker2009, Bhat2002, Mottaghi15Newton, purushwalkam2019bounce, Wu2017Deanimation} estimate parameters or equivalent information for well-characterized physical equations with visual inputs. These can be incorporated into realistic physical engines to infer complex system behavior. Fragkiadaki et al.~\cite{Fragidaki16Billiards} integrate the model of external dynamics within the agent to play simulated billiards games. More recently,~\cite{Battalgia2016IN,Watters2017VisualInteractionNetworks} deploy interaction networks with graph inputs to encode the interactions among objects in complex environments, and estimate other invariant quantities of the phenomenon using deep learning. In the field of controls, Shi et al.~\cite{shi2019neural} learn the near-ground dynamics to achieve stable trajectory control. While these prior attempts are capable of predicting the system dynamics precisely, they also require a well-characterized physical model.
\paragraph{Symbolic regression} aims to generate symbolic equations from a space of mathematical expressions to fit the distributions of input samples. Genetic programming~\cite{GeneticProgramming} is one of the prevalent methods in this field, with previous applications in discovering Lagrangians~\cite{hills2015algorithm} and nonlinear model structure identification~\cite{winkler2005new}. Additional features from the input variables~\cite{Kaizen, GP-RVM} and partial derivatives pairs~\cite{schmidt2009distilling} can also be introduced into genetic programming for more reliable regression. Other evolutionary methods can also be used to derive partial differential equations (PDEs)~\cite{maslyaev2019data}. Sparse regression~\cite{Brunton2016} and dimensional function synthesis~\cite{wang2019deriving} are two other alternatives to conduct symbolic regression. Recently, deep neural networks (DNNs) have also been utilized to generate symbolic regression~\cite{EQL, EQL_extented, NeuralSymbolicRegression2019}. These existing methods usually require predetermined terms or prior knowledge from physics.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/F_inputcomp.pdf}
\caption{\textbf{Previous work~\cite{huang2018NIPSworkshop} (a) requires both a temporal stream of bounding boxes and the physical parameters.} (b) Our proposed technique also requires a stream of bounding boxes, but is able to discover latent parameters that correspond to true physical parameters, like velocity or angular frequency.}
\label{fig:comparsion_with_others}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/fig_pipeline_v2.pdf}
\caption{
\textbf{An overview of the proposed Visual Physics framework.} We use a number of video clips as inputs to our system. The extracted position information is fed through the physics parameter extractor, which identifies the governing physical parameters for the phenomenon. These are used as inputs to the genetic programming step, in order to identify a human interpretable, closed form expression for the phenomenon.
}
\label{fig:algorithm_pipeline}
\end{figure*}
\section{Defining Discovery and its Assumptions} \label{sec:discovery_definition}
\paragraph{Assumptions:} This paper represents only a first attempt to discover the laws of physics from video. As such, we make certain assumptions. First, we restrict our focus to the dynamics of single objects (rather than groups of objects). Second, it is assumed that we know the object for which we would like to derive the physical equations. Third, we assume that the video frames are temporally ordered. We believe these assumptions are sufficiently general to allow us to characterize our technique as ``discovering physics''. For example, the apocryphal story of Isaac Newton observing the falling apple aligns with the three assumptions outlined above. In the story, Newton was watching a temporal sequence of a single object in motion and was able to inductively reason about the laws of physics.
\paragraph{Defining ``discovery of physics'':} We define discovery of physics as discovering \emph{both} the governing parameters and governing equations. Given the assumptions from the previous paragraph, we must therefore discover all parameters except for the object location and time. As compared to Huang et al.~\cite{huang2018NIPSworkshop}, where the parameters of the governing equations are used as prior knowledge, our attempt at discovery is more general. Concretely, for a task like trajectory estimation, our framework has to tackle the challenging task of learning both the projectile equation, as well as the existence of a ``velocity'' term, from video input. Refer to Figure~\ref{fig:comparsion_with_others} for details.
\section{Algorithm Architecture for Discovery}
Having defined ``discovery'' in Section~\ref{sec:discovery_definition}, we now describe a framework that enables discovery of physics from video. There are three interconnected modules that handle position detection, latent physics discovery, and equation discovery, respectively. Figure~\ref{fig:algorithm_pipeline} summarizes this framework.
\paragraph{Position detection module:} We build the Visual Physics framework on the assumption that the underlying physical equations are reflected in the dynamics of an object across different time steps. Therefore, a robust object detection algorithm is required at the first stage to achieve accurate moving-object localization for diverse object categories. We deploy a pretrained Mask R-CNN~\cite{he2017mask} to extract the bounding box of the object in each frame, and the centroid of the detected bounding box is taken as the object location in that frame.
\paragraph{Latent physics module:} The objective of the Visual Physics framework is to derive the governing physical laws without prior knowledge. To achieve this goal, we need to infer the associated latent governing parameters from positional observations. VAEs~\cite{kingma2013auto} have been widely deployed to extract latent representations, with applications in physics such as SciNet~\cite{iten2018discovering}. We adopt a modified $\beta$-VAE architecture for our latent physics module as well. The encoder takes a vector corresponding to the object trajectory at uniformly sampled time instants as input, and condenses it into a limited number of latent parameters. The decoder tries to reconstruct the object location $(x_q,y_q)$ at an unseen time instant from these latent parameters $[l_1$ $l_2$ $l_3]^T$ and the time instant $t_q$. This module is supervised by the object locations without other prior physical knowledge. Once the network converges, the locations obtained from the position detection module and the corresponding learned latent representations from the latent physics module are paired to form the input of the equation discovery module.
\paragraph{Equation discovery module:} We concatenate the latent parameters and positional observations, and use this as input to a symbolic regression approach. Vanilla genetic programming approaches are usually subject to convergence issues, and may lead to trivial equations that are not descriptive for the physics associated with the data. Schmidt et al.~\cite{schmidt2009distilling} alleviate this problem by introducing partial derivative pairs between the input variables as a search criterion. We follow this strategy to design an equation discovery module, capable of generating multiple equations with a range of equation complexity and fit accuracy. The final output is a symbolic equation that is Pareto-optimal.
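The Pareto-optimality criterion used to select the final equation can be sketched as follows; the candidate equations, complexity scores, and fit errors below are invented for illustration:

```python
def pareto_front(candidates):
    """Filter candidate equations to the Pareto front of (complexity, error):
    keep an equation only if no other candidate is at least as simple and at
    least as accurate (and strictly better in one of the two criteria).
    Candidates are (expression, complexity, fit_error) tuples."""
    front = []
    for name, c, e in candidates:
        dominated = any(c2 <= c and e2 <= e and (c2 < c or e2 < e)
                        for _, c2, e2 in candidates)
        if not dominated:
            front.append((name, c, e))
    return front

cands = [("y=a*t", 3, 0.40),
         ("y=a*t+b*t^2", 7, 0.01),
         ("y=a*t+b*t^2+c*sin(t)", 12, 0.01)]
print(pareto_front(cands))  # the sine term adds complexity without accuracy
```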
\section{Implementation} \label{sec:impl}
\paragraph{Visual Physics dataset:} To evaluate the proposed framework, we generate both a real and a synthetic dataset of videos covering physical phenomena. Table~\ref{tab:dataset} shows three simulated phenomena: \textsc{Free Fall}, \textsc{Constant Acceleration Motion} and \textsc{Uniform Circular Motion}. Each synthetic task includes 600 videos with randomly sampled physical parameters. We additionally include real video clips for \textsc{Free Fall} (411 videos). For all scenes, the physical phenomenon is known in closed form, enabling us to compare our proposed approach to ground truth. While the physics may seem elementary, we test in real-world conditions and add noise to make the task harder. Please see the supplement for additional scenes with a wider range of complexity.
\begin{table*}
\begin{center}
\begin{tabular}{p{1.6cm} c p{12.5cm}}
\toprule
Dataset & Visualization & Description \\
\midrule
\textsc{free fall} & \raisebox{-1.3\totalheight}{\includegraphics[width=0.1\textwidth]{Figures/fig_toss_only.pdf}}
& This dataset consists of 600 videos of 150 frames each at a frame rate of 240 frames per second. The frame size is chosen to be 720$\times$720 pixels. The object of interest is released with random initial velocities, from random points across different videos. The positions are selected from a uniform distribution, such that the initial position is in the bottom-left quadrant of the image. Initial velocities are also selected from a uniform distribution such that the object stays in the frame for the duration of the video. The object is acted upon by earth's gravity ($9.8m/s^2$ at a scale of 300 pixels per meter), which is the only active external agent. \\
\midrule
\textsc{constant acceleration motion} & \raisebox{-0.93\totalheight}{\includegraphics[width=0.1\textwidth]{Figures/fig_acceleration_only.pdf}}
& This dataset consists of 600 videos of 200 frames each, at a frame rate of 40 frames per second and a frame size of 720$\times$720 pixels. Here, the object of interest is released horizontally with a fixed initial velocity of $5 m/s$ (at a scale of 8 pixels per meter), and is acted upon by a uniformly random sampled external force, leading to an acceleration $a\in [0,4]$ $m/s^2$. \\
\midrule
\textsc{uniform circular motion} & \raisebox{-1.05\totalheight}{\includegraphics[width=0.1\textwidth]{Figures/fig_rotation_only.pdf}}
& This dataset consists of 600 videos of 200 frames each, at a frame rate of 20 frames per second and a frame size of 720$\times$720 pixels. In this scenario, the object of interest is in uniform circular motion at a fixed radius of 5 m (at a scale of 50 pixels per meter), with angular velocity $\omega \in [1,2]$ rad/s. The center of rotation is kept fixed across all dataset videos. The initial position of the object is kept fixed, and no additional external force affects this motion (that is, the motion is assumed to be in the horizontal plane).\\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Description of the synthetic Visual Physics dataset.} These three physical phenomena are representative of fundamental trajectory motion. Although all scenes describe trajectories, the governing equations and parameters are different (e.g. polynomial for some, and sinusoidal for others).}
\label{tab:dataset}
\end{table*}
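As a concrete illustration of the \textsc{Free Fall} generation procedure, the following sketch produces a pixel-space trajectory using the frame rate, gravity, and pixel scale from Table~\ref{tab:dataset}; the starting point, the y-up coordinate convention, and the sampled velocity ranges are simplifications of the actual dataset:

```python
import random

def free_fall_trajectory(v0x, v0y, n_frames=150, fps=240.0,
                         g=9.8, scale=300.0, x0=100.0, y0=100.0):
    """Pixel-space projectile trajectory: 240 fps, gravity 9.8 m/s^2 at a
    scale of 300 px/m, y-axis pointing up. x0, y0 are illustrative."""
    pts = []
    for k in range(n_frames):
        t = k / fps
        x = x0 + scale * v0x * t
        y = y0 + scale * (v0y * t - 0.5 * g * t * t)
        pts.append((x, y))
    return pts

random.seed(0)
v0x = random.uniform(1.0, 3.0)   # velocities sampled uniformly, as in the dataset
v0y = random.uniform(2.0, 5.0)
traj = free_fall_trajectory(v0x, v0y)
print(traj[0], traj[-1])
```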
\paragraph{Software implementation and training details:} For the position detection module, we deploy a Mask R-CNN~\cite{he2017mask} pretrained on the COCO dataset~\cite{lin2014microsoft}. For the latent physics module, both the encoder and the decoder consist of six fully-connected layers, and the size of the latent vector is set to three. We use the mean squared error (MSE) of the reconstructed locations and the $\beta$-VAE loss~\cite{higgins2017betaVAE} to supervise the training process. The $\beta$-VAE penalty is introduced to encourage the disentanglement of latent representations, so that independent physical parameters are inferred in separate latent nodes. The entire loss function $L$ of the latent physics network can be written as follows:
\begin{equation}
L = L_{mse}(Y_{t_q}, \hat{Y}_{t_q}) + \beta L_{kl}(Z),
\end{equation}
where $Y_{t_q}$ is the ground-truth location at time step $t_q$, $\hat{Y}_{t_q}$ is the estimated location from the network, $L_{mse}(\cdot)$ is the MSE loss, $Z$ denotes the extracted latent representations, $L_{kl}(\cdot)$ denotes the Kullback--Leibler divergence between the latent distribution and a Gaussian prior, and $\beta$ is the balance factor for the $\beta$-VAE loss as described in~\cite{higgins2017betaVAE}. We use the Adam optimizer~\cite{kingma2014adam} with an initial learning rate of 0.001, and this learning rate is decayed exponentially with a factor of 0.99 every 200 epochs. All the networks are implemented in the PyTorch framework~\cite{paszke2017automatic}. We construct the equation discovery module by using the widely available Eureqa package~\cite{EureqaSoftware}. The candidate operation set includes all the basic operations, such as addition, multiplication, and the sine function. We search two equations for the horizontal and vertical directions separately, and the R-squared value is used to measure the goodness of fit during the search. Please refer to Appendix~\ref{app:software_implementation} for additional implementation details.
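A minimal sketch of this loss uses the standard closed-form KL divergence for a diagonal Gaussian posterior against a unit Gaussian prior; the value of $\beta$ and the toy inputs are illustrative, not the paper's settings:

```python
import math

def beta_vae_loss(y_true, y_pred, mu, log_var, beta=4.0):
    """L = MSE(Y, Y_hat) + beta * KL(q(z|x) || N(0, I)), with the standard
    closed form KL = -0.5 * sum(1 + log s^2 - mu^2 - s^2) for a diagonal
    Gaussian posterior parameterized by (mu, log_var)."""
    mse = sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)
    kl = -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                    for m, lv in zip(mu, log_var))
    return mse + beta * kl

# A posterior exactly matching the prior (mu=0, log_var=0) contributes no KL,
# so a perfect reconstruction gives zero total loss:
print(beta_vae_loss([1.0, 2.0], [1.0, 2.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # -> 0.0
```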
\section{Evaluation}
Section~\ref{ss:synth} evaluates our results on discovering equations from synthetic videos. Section~\ref{ss:real} shows that the method generalizes to real data. Finally, Section~\ref{ss:performance} tests the robustness of our technique by introducing noise and other confounding factors.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/fig_sim_results_v3.pdf}
\caption{
\textbf{Discovered physical equations from Visual Physics framework, on simulated videos.} We show the observed embedding trends and the obtained equations, which are both accurate in fitting to the observations as well as in human interpretable form. Results are shown on three simulated datasets: ball toss, acceleration and circular motion.
}
\label{fig:main_result}
\end{figure*}
\subsection{Synthetic Data Evaluation}
\label{ss:synth}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/F_realfinal.pdf}
\caption{
\textbf{Evaluating performance on real data, in two conditions.} (a) Training and testing on real data. Videos of several basketball tosses are used as input to the pipeline. The accurate representations and the derived human-interpretable equations governing the real-world phenomenon demonstrate the robustness of the pipeline. In (b), the same approach is used, but the training set is synthetic. Similar performance is observed, which underscores that the results are not due to overfitting.}
\label{fig:real_results}
\end{figure*}
Figure~\ref{fig:main_result} illustrates various results from our framework, tested on synthetically generated data described in Table~\ref{tab:dataset}. With \textsc{free fall}, we assess the ability of our system to perform with parameters that affect the discovery linearly (as coefficients to a term linear in time). With \textsc{constant acceleration}, we observe the performance on non-linear (quadratic) parameter effect. Finally, \textsc{circular motion} provides insight into performance for sinusoidal dependence. Results for two additional tasks, \textsc{helical motion} and \textsc{damped oscillation}, may be found in Appendix~\ref{app:hard_physics}.
\paragraph{\textsc{Free Fall} (synthetic):} In this scene, all possible trajectories are completely parameterized by the initial velocities $v_{0x}$ and $v_{0y}$ along the $x$ and $y$ directions. Figure~\ref{fig:main_result}(a) displays the output of our method for \textsc{free fall}, including both embeddings as well as the discovered equation. The embedding trends show that our latent physics model successfully learns to separate these horizontal and vertical velocity in two separate nodes. The correlation of the three latent nodes with the two governing (ground-truth) parameters demonstrate that the nodes learn an affine transform of the ground-truth velocities. It is important to note that the third node does not show dependence on the input, assuming a constant value. This reconciles with human intuition in the sense that \textsc{free fall} is determined only by two parameters. In evaluating the final output, we observe that the discovered governing equation matches the form of the familiar kinematic equations. The value of the acceleration due to gravity is learnt exactly and the parametric dependence of the equation on the initial velocities is accurate up to an affine transform.
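The affine correspondence between a latent node and a ground-truth parameter can be checked with an ordinary least-squares fit; the latent activations below are invented for illustration:

```python
def affine_fit(latent, truth):
    """Least-squares fit truth ~= a * latent + b; returns (a, b).
    Used here to verify that a latent node is an affine transform of a
    ground-truth governing parameter such as the initial velocity v0x."""
    n = len(latent)
    mx = sum(latent) / n
    my = sum(truth) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(latent, truth))
    var = sum((x - mx) ** 2 for x in latent)
    a = cov / var
    return a, my - a * mx

lat = [0.1, 0.5, 0.9]                # hypothetical latent-node activations
v0x = [2.0 * v + 1.0 for v in lat]   # truth that is exactly affine in the node
print(affine_fit(lat, v0x))          # approximately (2.0, 1.0)
```

A fit with small residuals and a non-degenerate slope indicates that the node has learned the parameter up to the affine ambiguity discussed above.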
\paragraph{\textsc{Constant Acceleration Motion} (synthetic):} In this task, the trajectory is governed by a single parameter: the acceleration $a$ acting on the object. The obtained results are displayed in Figure~\ref{fig:main_result}(b). As expected, since only one node is required to describe the phenomenon, the embedding trends show that two nodes are invariant to the input and learn an almost constant, low-magnitude value. The other node, which is correlated with the input, learns the acceleration. Turning to the output equations, we find that our method discovers the correct form and that the latent variable maps to an interpretation of $a$. Also note that the value of the $y$ coordinate, which is expected to be constant, is discovered accurately.
\paragraph{\textsc{Uniform Circular Motion} (synthetic):} This task has a sinusoidal, rather than polynomial form. For a fixed radius of revolution, the governing parameter we seek to discover is the angular frequency $\omega$ of the rotating object. Hence, this task also depends on a single governing parameter. Figure~\ref{fig:main_result}(c) highlights that one of the latent parameters is correlated with angular frequency, while the other two are uncorrelated to the input. Based on the learned parameters and observed positions, the proposed method correctly identifies a sinusoidal dependence for both the $x$ and the $y$ coordinates.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Figures/fig_noise.pdf}
\caption{
\textbf{The proposed method is found to be robust when considerable zero-mean additive Gaussian noise is added to the trajectory.} The pipeline is tested on synthetically added noise with standard deviation ranging from 4 to 128 pixels (at a scale of 300 pixels/meter). The representations are found to be robust up to a noise standard deviation of 32 pixels, with the discovered equations demonstrating analogous robustness. The method fails at a noise standard deviation of 128 pixels, which can be seen to completely bury the trajectory signal in noise.
}
\label{fig:noise_analysis}
\end{figure*}
\subsection{Real Data Evaluation} \label{ss:real}
\paragraph{\textsc{free fall} (real experiment):} We replicate \textsc{free fall} in the real world in a relatively uncontrolled manner. As shown in Figure~\ref{fig:real_results}, the test set is a video sequence of a human tossing a ball with varying spins and uncontrolled air resistance. The motion may also not be perpendicular to the camera, leading to scale inconsistencies. 411 videos are collected, where each video represents a toss. To obtain ground-truth initial velocities, we fit the kinematic equations to the observed videos, using the appropriately scaled value of the acceleration due to gravity $g$. The proposed latent discovery module does not have the luxury of this information. We report results in two conditions. In Figure~\ref{fig:real_results}(a), we train on real data and test on real data. Diversity in the dataset arises from different types of spins and tosses. To show that our method is not overfitting, Figure~\ref{fig:real_results}(b) displays results when we train on synthetic data and test on real data. Both cases achieve successful discovery of the ground-truth governing equation. In particular, two latent nodes show strong affine correlations with the ground-truth horizontal and vertical velocities. In contrast, the third node, as we would expect, is uncorrelated (since only two parameters, $v_{0x}$ and $v_{0y}$, govern the system). The symbolic form of the equation we learn agrees with the known physics model up to an affine transform in the governing parameters. It is important to note that a slight error is observed when testing on real data. In both Figure~\ref{fig:real_results}(a) and~\ref{fig:real_results}(b), the learnt value of the acceleration due to gravity is off by about $7\%$. We believe the following reasons account for part of this discrepancy: (i) noise due to the greater Mask R-CNN error on the real videos, as compared to the simulations; and (ii) physical non-idealities such as air resistance and drag.
We successfully test our method on an additional real task, \textsc{uniform circular motion}. Please refer to Appendix~\ref{app:real_scenes} for details and results.
\subsection{Performance Analysis} \label{ss:performance}
We now analyze, in reasonable detail, the characteristics and performance of the proposed approach. These factors are especially important for the function of the pipeline as a physics discovery unit in future application domains (e.g., biomedical imaging or astrophysics).
\paragraph{Latent nodes are an affine transform of the ground truth:} Figure~\ref{fig:main_result} and Figure~\ref{fig:real_results} explicitly show that the latent nodes are an affine transformation of the ground-truth governing parameters. This reinforces our claim that the latent parameters we learn are human interpretable. Due to the use of a $\beta$-VAE, the latent physics module is constrained to learn sparse representations, subject to a Pareto fit. Adding additional latent nodes therefore results in representations for these superfluous nodes either being entirely uncorrelated to the governing parameters, or of extremely low magnitude. The affine transform is important not only for interpretability, but also because linear least squares can be used to tune the parameters once the governing equation has been identified.
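This calibration step can be sketched directly: once two latent nodes are known to be an affine map of $(v_{0x}, v_{0y})$, ordinary least squares recovers the map. The snippet below is an illustration with synthetic stand-ins; the variable names and values are ours, not outputs of the pipeline.

```python
import numpy as np

# Synthetic stand-in: latent codes z are an (unknown) affine map of the
# ground-truth initial velocities v; recover the map by least squares.
rng = np.random.default_rng(0)
v = rng.uniform(-5.0, 5.0, size=(100, 2))        # ground truth (v0x, v0y)
A_true = np.array([[1.3, 0.2], [-0.4, 0.9]])     # affine map (illustrative)
b_true = np.array([0.5, -1.0])
z = v @ A_true.T + b_true                        # "latent nodes"

# Solve z ~= [v, 1] @ W for W = [A^T; b] with ordinary least squares.
X = np.hstack([v, np.ones((len(v), 1))])
W, *_ = np.linalg.lstsq(X, z, rcond=None)
assert np.allclose(W[:2].T, A_true) and np.allclose(W[2], b_true)
```

Because the synthetic data is noise-free, the affine map is recovered exactly; with noisy latents the same solve gives the best-fit calibration.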
\paragraph{Robustness against noise:} To assess performance in the presence of noise, we use the synthetic \textsc{free fall} task and add noise of varying strength to the output of the position detection module. This corrupted data is then used to train the latent physics module and serves as the input to the equation discovery module. The plots of governing parameters in Figure~\ref{fig:noise_analysis} show that the representations remain relatively robust as the input trajectories become increasingly noisy, although the variance of the representations grows with the corruption level. Using even noisy (yet correlated) representations in the equation discovery step still enables us to recover output equations that are symbolically accurate. The method eventually fails for corruption with a noise standard deviation of 128 pixels. At this very high noise level, even the direction of the trajectory changes (i.e., the ball appears to travel backward), as can be observed in the last column of Figure~\ref{fig:noise_analysis}.
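The corruption model is straightforward to reproduce. The sketch below (the clean trajectory and speed are illustrative choices, not our dataset) adds zero-mean Gaussian noise at the stated 300 pixels/meter scale and measures how often the per-frame displacement reverses sign, which is what buries the signal at a standard deviation of 128 pixels.

```python
import numpy as np

# Zero-mean additive Gaussian noise on a clean pixel trajectory
# (300 px/m scale, as in the study; trajectory itself is illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)                   # 30 frames over 1 second
x = 300.0 * 2.0 * t                             # clean x(t) at 2 m/s, in px
reversed_frac = {}
for sigma in (4, 32, 128):
    noisy = x + rng.normal(0.0, sigma, size=x.shape)
    # fraction of frames where the ball appears to move backward
    reversed_frac[sigma] = float(np.mean(np.diff(noisy) < 0))
print(reversed_frac)
```

At $\sigma = 4$ essentially no frame reverses, while at $\sigma = 128$ nearly half do, consistent with the failure mode in the last column of the figure.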
\paragraph{Equation complexity versus accuracy:} Here we discuss how the proposed framework recovers the correct equation by balancing \emph{equation sparsity} against \emph{goodness of fit}. The equation discovery module produces a set of candidate equations of varying complexity (a function of the number of terms and operations in the equation). To choose an appropriate trade-off between fitting accuracy and complexity, we use plots such as those shown in Figure~\ref{fig:trade-off}. The knee point of the trade-off curve is chosen as the expression of interest, since it marks the point of maximum gain in error performance with minimal increase in complexity. Such a selection ensures that the genetic programming algorithm refrains from over-fitting on the relevant data, which is essential for interpretability. This is also analogous to observations from representation learning, where there is an understood trade-off between the extent of disentanglement of latent embeddings and downstream prediction accuracy~\cite{higgins2017betaVAE}.
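A common way to automate this knee-point choice is the maximum-distance-to-chord heuristic on the normalized trade-off curve. The candidate values below are invented for illustration and are not our actual Pareto front.

```python
import numpy as np

# Illustrative (complexity, validation-error) pairs for candidate equations.
complexity = np.array([1.0, 3.0, 5.0, 9.0, 15.0])
error      = np.array([2.0, 0.9, 0.05, 0.04, 0.039])

# Normalize both axes, then pick the point farthest below the chord
# joining the two extremes of the trade-off curve.
c = (complexity - complexity.min()) / (complexity.max() - complexity.min())
e = (error - error.min()) / (error.max() - error.min())
chord = e[0] + (e[-1] - e[0]) * (c - c[0]) / (c[-1] - c[0])
knee = int(np.argmax(chord - e))
print(knee)  # -> 2 (complexity 5: beyond it, error barely improves)
```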
\paragraph{Effect of training data size:} Finally, we analyze the performance of our proposed method with respect to varying amounts of training data. This is relevant to applying the pipeline (or others inspired by it) to tasks with varying data availability. Figure~\ref{fig:training_samples_effect} shows the results of this analysis on the synthetic \textsc{free fall} task. We evaluate performance based on: (a) the normalized cross-correlation coefficient between the learnt active latent nodes and the ground-truth governing parameters, and (b) the trajectory prediction error obtained by feeding the latent values predicted by the latent physics module on test data into the discovered equations. Please refer to Appendix~\ref{app:second_order_implementation} for a detailed description of these metrics. The general trend of increasing correlation and decreasing prediction error with more training samples is clearly visible in the plots. Notably, even in the worst case, the scenario with the fewest training samples (200) still achieves a high correlation of 0.95. This highlights the versatility and robustness of the proposed approach across a range of possible tasks.
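The correlation metric in (a) is the standard normalized cross-correlation (Pearson coefficient). A compact sketch with synthetic data (the names and values are illustrative, not our latents):

```python
import numpy as np

# Normalized cross-correlation between an active latent node and a
# ground-truth governing parameter (population statistics).
def normalized_correlation(latent, truth):
    latent = (latent - latent.mean()) / latent.std()
    truth = (truth - truth.mean()) / truth.std()
    return float(np.mean(latent * truth))

rng = np.random.default_rng(1)
truth = rng.normal(size=500)          # synthetic governing parameter
latent = 2.0 * truth + 0.3            # exact affine relation -> corr = 1
assert abs(normalized_correlation(latent, truth) - 1.0) < 1e-9
```

Because the metric is invariant to affine maps, a node that learns any affine transform of the ground truth scores a perfect correlation of 1.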
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/ParetoPlotWithGrid.pdf}
\caption{\textbf{Trade-off between equation complexity and accuracy.} We show multiple candidate equations for the synthetic free fall task along the vertical direction. The equation with the correct parametric form occurs at the optimal trade-off point.}
\label{fig:trade-off}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Figures/PerformancePlotFat.pdf}
\caption{\textbf{The Visual Physics framework improves consistently with the number of training samples.} We test performance on the free-fall task with dataset sizes of 200, 300, 400, and 500 samples. (a) shows the correlation coefficients between the ground-truth physical parameters and the discovered physical parameters, and (b) shows the mean squared error of the estimated locations in centimeters.}
\label{fig:training_samples_effect}
\end{figure}
\section{Discussion}
In summary, we have demonstrated the ability to discover physics from video streams. Our method is unique in that it is able to discover both the governing equations and physical parameters. Our results are powered by an encoder-decoder framework that learns latent representations. We show that, even in cases of significant noise, the latent representations are physically interpretable.
\paragraph{Beyond 2D phenomena:} The Visual Physics dataset consists of 2-dimensional scenarios. For example, the tossed ball is viewed from the side, so that its axial depth does not change. For engineering reasons, we assume that the physical phenomenon is observed in the 2D image space of a video camera. If the dynamics occur in three dimensions (e.g., motion in $x, y, z$), then our algorithmic pipeline is still valid, but a 3D camera must be used to capture the 3D dynamics. In general, the Visual Physics framework can apply to higher-dimensional scenarios, potentially outside of video, provided that the measurement space is able to capture the phenomena.
\paragraph{Applications:} For reader accessibility and experimental reproducibility, we have chosen simple problems (like projectile motion and circular motion). However, we envision future applications of this framework in domains like high-energy astrophysics, optical scattering, and medical imaging, where the governing equations are unknown or partially known. In medical imaging, for example, it is important to find latent embeddings that are both discriminative and physically interpretable.
\paragraph{Open problems:} Analogous to the apocryphal story of Newton's apple, we have considered the dynamics of a single object. This work is therefore a stepping stone toward understanding the dynamics of multiple objects. Another open problem is to extend the pipeline beyond the three modules we have proposed. Concretely, we envision adding a fourth module in which the discovered equation and embeddings are used as input to another inference framework. For example, it might be possible to improve object detection given the velocities of objects, or to create computational imaging pipelines that learn to classify scenes based on scattering properties. In conclusion, this paper scratches the surface of the possibilities at the intersection of computer vision, physics, and artificial intelligence. We are excited to see these fields continue to merge.
{\small
\bibliographystyle{ieee_fullname}
\section{Conclusion}
This paper advocates input quantization for the verification of neural networks
with low-dimensional inputs. Our experiments show that this technique is
significantly faster and more scalable than verifiers that analyze the internal
computations of the neural networks on verifying ACAS Xu networks. Moreover, our
method does not suffer from the floating-point discrepancy between the verifier
and the network inference implementation. In general, our method applies to
deterministic floating-point programs that take low-dimensional inputs as long
as the target application tolerates input quantization such that enumerating all
the quantized values takes acceptable time.
\section{Experiments}
We evaluate our method on checking the safety properties for the ACAS Xu
networks \citep{katz2017reluplex}. Note that the accuracy of input-quantized
networks in deployed systems is acceptable, since the quantization is equivalent
to nearest-neighbor interpolation, which has been shown to provide effective
collision avoidance advisories in simulation \citep{julian2019deep}.
Experiments in this section focus on evaluating the runtime overhead of input
quantization and the actual speed of verification by enumerating quantized
states. We train two networks of different sizes to evaluate the scalability of
the proposed method.
\subsection{Experimental Setup}
The horizontal CAS problem takes seven inputs as described in
\cref{tab:input-def}, and generates one of the five possible
advisories: COC (clear of conflict), WL (weak left), WR (weak right), SL (strong
left), and SR (strong right).
\begin{table}[t]
\begin{threeparttable}
\centering
\scriptsize
\caption{
Description of horizontal CAS inputs. The last column describes the
values used to generate the lookup table, which are taken from the
open-source implementation of HorizontalCAS \citep{
julian2019guaranteeing} and the Appendix VI of \citet{
katz2017reluplex}.
\label{tab:input-def}
}
\vskip .5em
\begin{tabular}{lll}
\toprule
Symbol & Description & Values in the lookup table \\
\midrule
$\rho$ (m) & Distance from ownship to intruder &
32 values between 0 and 56000 \tnote{1}\\
$\theta$ (rad) & Angle to intruder \tnote{2}
& 41 evenly spaced values between $-\pi$ and $\pi$ \\
$\psi$ (rad) & Heading angle of intruder \tnote{2}
& 41 evenly spaced values between $-\pi$ and $\pi$ \\
$v_{own}$ (m/s) & Speed of ownship & $\{50, 100, 150, 200\}$ \\
$v_{int}$ (m/s) & Speed of intruder & $\{50, 100, 150, 200\}$ \\
$\tau$ (sec) & Time until loss of vertical separation &
$\{0, 1, 5, 10, 20, 40, 60\}$ \\
$\alpha_{prev}$ & Previous advisory & \{COC, WL, WR, SL, SR\} \\
\bottomrule
\end{tabular}
\vspace{1ex}
\begin{tablenotes}
\item[1] Distance values are nonuniformly distributed. They are given in
the source code of \citet{julian2019guaranteeing}:
\url{https://github.com/sisl/HorizontalCAS/blob/cd72ffc073240bcd4f0eb9164f441d3ad3fdc074/GenerateTable/mdp/constants.jl\#L19}
\item[2] Angle is measured relative to ownship heading direction.
\end{tablenotes}
\end{threeparttable}
\end{table}
\citet{julian2016policy} proposes to train a collection of neural networks where
each network works with a pair of specific $(\tau,\, \alpha_{prev})$ values,
takes the remaining five values as network inputs, and approximates the
corresponding scores in the lookup table. Although ACAS only needs to suggest
the action with the maximal score, the network is still trained to approximate
the original scores in the table instead of directly giving the best action
because the numerical scores are used in a Kalman filter to improve system
robustness in the face of state measurement uncertainty~\cite{
julian2016policy}. In order to maintain the action recommendation of the
original table while reducing score approximation error, \citet{julian2016policy}
adopts an asymmetric loss function that imposes a higher penalty if the network
and the lookup table give different action advisories.
\citet{katz2017reluplex} proposes a few ACAS Xu safety properties as a sanity
check for the networks trained by \citet{julian2016policy}. These properties
have also served as a useful benchmark for many neural network verifiers.
Although the pretrained networks of \citet{julian2016policy} are publicly
accessible, the authors told us that they could not provide the training data or
the source code due to regulatory reasons. They suggested that we use their
open-source HorizontalCAS system \citep{ julian2019guaranteeing} to generate the
lookup tables to train our own networks. However, HorizontalCAS networks differ
from the original ACAS Xu networks in that they only have three inputs by fixing
$v_{own}=200$ and $v_{int}=185$. We modified the source code of HorizontalCAS to
match the input description in \cref{tab:input-def} so that we can directly use
the ReluVal \citep{wang2018formal} verifier.
We evaluate our method by analyzing the property $\phi_9$ proposed in
\citet{katz2017reluplex}, which usually takes the longest time to verify among
all the properties for many verifiers~\cite{katz2017reluplex, wang2018formal,
singh2018robustness}. Other properties share a similar form but have different
input constraints and output requirements. Note that property $\phi_9$ is the
most compatible with the open-source HorizontalCAS because the input constraints
of other properties are beyond the ranges in \cref{tab:input-def}. For example,
property $\phi_1$ has $v_{own} \geq 1145$ but the quantization scheme of
$v_{own}$ for the original ACAS Xu networks is not publicly available.
The specification of $\phi_9$ is:
\begin{itemize}
\item {\bf Description:} Even if the previous advisory was ``weak right'',
the presence of a nearby intruder will cause the network to output a
``strong left'' advisory instead.
\item {\bf Tested on:} the network trained on $\tau=5$ and
$\alpha_{prev}=\text{WR}$
\item {\bf Input constraints:} $2000 \le \rho \le 7000$,
$-0.4 \le \theta \le -0.14$, $-3.141592 \le \psi \le -3.141592 + 0.01$,
$100 \le v_{own} \le 150$, $0 \le v_{int} \le 150$.
\end{itemize}
We conduct the experiments on a workstation equipped with two GPUs (NVIDIA Titan
RTX and NVIDIA GeForce RTX 2070 SUPER), 128 GiB of RAM, and an AMD Ryzen
Threadripper 2970WX processor. We train two neural networks for property
$\phi_9$ (i.e., with $\tau=5$ and $\alpha_{prev}=\text{WR}$) with PyTorch.
Our small network has five hidden layers with 50 neurons in each layer, and our
large network has seven hidden layers with 100 neurons in each layer. We use the
ReLU activation.
We implement the nearest-neighbor quantization for $\rho$ via directly indexing
a lookup table. The greatest common divisor of differences between adjacent
quantized $\rho$ values is 5. Therefore, we precompute a lookup table $\V{U}$
such that $U_{\lfloor \rho/5 \rceil}$ is the nearest neighbor of $\rho$ in the
set of quantized values. We use the \verb|torch.index_select| operator provided
by PyTorch to take elements in the lookup table in a batched manner. Other
network inputs use uniform quantization as described in \cref{tab:input-def}. We
implement uniform quantization according to the equation \eqnref{uniform-quant}.
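As an illustration of this lookup-table quantizer (a NumPy gather standing in for the batched \verb|torch.index_select|; the distance values below are a short made-up subset, not the 32-entry ACAS grid):

```python
import numpy as np

# Illustrative subset of nonuniform quantized distances (all multiples
# of 5, mirroring the gcd-of-differences argument).
q_values = np.array([0.0, 25.0, 100.0, 250.0, 500.0])

# Dense table U with U[round(rho / 5)] = nearest quantized value.
grid = 5.0 * np.arange(int(q_values[-1] // 5) + 1)
U = q_values[np.argmin(np.abs(grid[:, None] - q_values[None, :]), axis=1)]

def quantize_rho(rho):
    # batched nearest-neighbor lookup via a single integer index
    i = np.clip(np.rint(np.asarray(rho) / 5.0).astype(int), 0, len(U) - 1)
    return U[i]

print(quantize_rho([20.0, 60.0, 400.0]).tolist())  # -> [25.0, 25.0, 500.0]
```

The precomputed table turns nearest-neighbor search into a constant-time indexed load, which is what makes the runtime overhead of the quantization layer negligible.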
\subsection{Experimental Results}
\begin{table}[t]
\centering
\caption{Accuracies achieved by the networks evaluated on the lookup table.
For comparison, \citet{julian2019guaranteeing} reports an accuracy of
97.9\% for networks trained only with three out of the five inputs (they
fixed $v_{own}=200$ and $v_{int}=185$). This table shows that our
network achieves sufficient accuracy for practical use.
\label{tab:acc}
}
\vskip .5em
\begin{tabular}{lrr}
\toprule
Metric & Small network & \hspace{1em} Large network \\
\midrule
Policy accuracy & 96.87\% & 98.54\% \\
Score $\ell_1$ error & 0.052 & 0.026 \\
Score $\ell_2$ error & $1.3\times10^{-3}$ & $3.3\times10^{-4}$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Comparing verification time (in seconds) for the property $\phi_9$
on two methods: the ReluVal verifier~\cite{wang2018formal} that runs on
multiple cores, and exhaustive enumeration in the quantized input space
on a single CPU core. This table shows that verification by enumerating
quantized input states is significantly faster in our case and also more
scalable regarding different network sizes.
\label{tab:cas/verify-time}
}
\renewcommand{\TPTminimum}{\linewidth}
\begin{threeparttable}
\vskip .5em
\makebox[\linewidth]{
\begin{tabular}{lrr}
\toprule
Verification method & Small network & \hspace{1em} Large network \\
\midrule
ReluVal \citep{wang2018formal} & 0.622 & 171.239 \\
Input quantization - specific \tnote{1} &
0.002 & 0.002 \\
Input quantization - all \tnote{2} & 0.384 & 0.866 \\
\bottomrule
\end{tabular}
}
\begin{tablenotes}
\item[1] Network is evaluated on the 60 input states that fall within
the input constraint of $\phi_9$.
\item[2] Network is evaluated on all the 860,672 input states in a
batched manner. This time is the upper bound for verifying any
first-order specification in the form of $\forall_{\V{x}}P(\V{x})
\implies R(\V{f}(\V{x}))$ by ignoring the time on evaluating predicates
$P(\cdot)$ and $R(\cdot)$.
\end{tablenotes}
\end{threeparttable}
\end{table}
Let $\V{y_i}\in\mathbb{R}^5$ (resp. $\hat{\V{y_i}}\in\mathbb{R}^5$) denote the scores given by the
network (resp. the original lookup table) for the five candidate actions on the
$\nth{i}$ lookup table entry. We consider three accuracy measurements, assuming
a uniform distribution of the table index $i$:
\begin{itemize}
\item \emph{Policy accuracy} is the probability that the network recommends
the same action as the original lookup table. Formally, it is $P(\argmax
\V{y_i} = \argmax \hat{\V{y_i}})$.
\item \emph{Score $\ell_1$ error} measures the $\ell_1$ error of
approximated scores, defined as $E(\pnorm{1}{\V{y_i} - \hat{\V{y_i}}})$,
where $\pnorm{1}{\V{x}} \vcentcolon= \sum_i |x_i|$.
\item \emph{Score $\ell_2$ error} measures the $\ell_2$ error of
approximated scores, defined as $E(\pnorm{2}{\V{y_i} - \hat{\V{y_i}}})$,
where $\pnorm{2}{\V{x}} \vcentcolon= \sqrt{\sum_i x_i^2}$.
\end{itemize}
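For concreteness, the three metrics can be computed directly from the score arrays. The toy scores below use three actions instead of five; the definitions are exactly those above.

```python
import numpy as np

# Policy accuracy and score l1 / l2 errors between network scores y and
# lookup-table scores y_hat (rows are table entries).
def metrics(y, y_hat):
    policy_acc = float(np.mean(np.argmax(y, 1) == np.argmax(y_hat, 1)))
    l1 = float(np.mean(np.sum(np.abs(y - y_hat), axis=1)))
    l2 = float(np.mean(np.sqrt(np.sum((y - y_hat) ** 2, axis=1))))
    return policy_acc, l1, l2

y_hat = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])   # toy table scores
y     = np.array([[0.9, 0.1, 0.0], [0.1, 1.9, 0.0]])   # toy network scores
pa, l1, l2 = metrics(y, y_hat)
print(pa)  # -> 1.0 (both rows recommend the same action)
```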
\cref{tab:acc} presents the accuracies achieved by our networks, which shows
that our training achieves comparable results as the HorizontalCAS system
\citep{julian2019guaranteeing}.
To verify the networks, we prepend them with an input quantization layer that
implements the quantization scheme given in \cref{tab:input-def}. To verify any
specification or a set of specifications, we evaluate the network on all the
860,672 points in the quantized space and check if each input/output pair meets the
specification(s). Evaluating the network on the grid points takes 0.384 seconds
for the small network and 0.866 seconds for the large one. We evaluate the
network on multiple inputs in a batched manner to benefit from optimized
numerical computing routines included in PyTorch. Adding the quantization layer
incurs about 2\% runtime overhead. We do not perform any performance engineering and
use the off-the-shelf implementation provided by PyTorch. Our verification speed
can be further improved by using multiple CPU cores or using the GPU.
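The verification loop itself reduces to a few lines. In the sketch below, \verb|net|, the per-dimension grids, and the predicates are toy stand-ins for the trained scorer, the quantized value sets, and the input/output conditions; the real run batches the 860,672 states through PyTorch.

```python
import numpy as np

# Enumerate the Cartesian grid of quantized inputs, evaluate the network
# in one batch, and check the output predicate on every admissible state.
def verify(net, grids, pre, post):
    mesh = np.meshgrid(*grids, indexing="ij")
    X = np.stack([m.ravel() for m in mesh], axis=1)  # all quantized states
    mask = pre(X)                                    # states satisfying P
    Y = net(X[mask])                                 # batched evaluation
    return bool(np.all(post(Y)))                     # property holds iff True

net   = lambda X: X.sum(axis=1, keepdims=True)       # toy "network"
grids = [np.array([0.0, 1.0]), np.array([0.0, 0.5, 1.0])]
pre   = lambda X: X[:, 0] >= 1.0                     # input constraint P
post  = lambda Y: Y[:, 0] >= 1.0                     # output requirement R
print(verify(net, grids, pre, post))  # -> True
```

Multiple properties reuse the same batched forward pass: evaluate once over the full grid, then apply each property's predicate pair to the cached outputs.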
We also compare our method with ReluVal \citep{wang2018formal} on verifying the
property $\phi_9$. The input constraint of $\phi_9$ consists of only 60 states
in the quantized space. Therefore, we only need to check if the network
consistently gives the ``strong left'' advisory for all the 60 states to verify
$\phi_9$. As shown in \cref{tab:cas/verify-time}, input quantization
significantly reduces the verification time compared to the ReluVal solver.
\section{Introduction}
\label{sec:intro}
The Airborne Collision Avoidance System (ACAS) is crucial for aircraft
safety~\citep{ kochenderfer2011robust}. This system aims to avoid collision with
intruding aircraft via automatically controlling the aircraft or advising a
human operator to take action. The ACAS typically takes low-dimensional sensory
inputs, including distance, direction, and speed for the intruder and ownship
aircraft, and provides a control policy which is a valuation for a set of
candidate actions such as ``weak left'' or ``strong right''. Recent work has
formulated aircraft dynamics under uncertainties such as advisory response delay
as a partially observable Markov decision process for which dynamic programming
can be used to compute values for different actions~\citep{
kochenderfer2015optimized}. The value function computed via dynamic programming
is often stored in a lookup table with millions of entries \citep{
kochenderfer2010decision} that require gigabytes of storage. While this table
could, in principle, be used to implement the ACAS, the high storage demand
makes it too costly to be embedded in practical flight control systems. This
situation has motivated the development of table compression techniques,
including block compression with reduced floating-point precision \citep{
kochenderfer2013compression} and decision trees \citep{julian2019deep}.
Recently, neural networks have emerged as an efficient alternative for
compressing the lookup tables in ACAS Xu (ACAS X for unmanned aircraft) by
approximating the value function with small neural networks. Specifically,
\citet{ julian2019deep} compresses the two-gigabyte lookup table into 45 neural
networks with 2.4MB of storage, where each network handles a partition of the
input space.
\citet{katz2017reluplex} proposes a set of safety properties for the ACAS Xu
networks, such as that a ``strong right'' advisory should be given when a nearby
intruder is approaching from the left. These safety properties have served as a
valuable benchmark to motivate and evaluate multiple verification algorithms
\citep{katz2017reluplex, wang2018formal, singh2019boosting, tran2020nnv,
bak2020improved}. Such verifiers typically need to perform exact or conservative
analysis of the internal neural network computation \citep{liu2019algorithms,
urban2021review}. Unfortunately, neural network verification is an NP-complete
problem \citep{katz2017reluplex}, and therefore the verifiers need exponential
running time in the worst case and can be very slow in practice. In particular,
\citet{bak2020improved} recently presented the first verifier that is able to
analyze the properties $\phi_1$ to $\phi_4$ in the ACAS Xu benchmarks with a
time limit of 10 minutes for each case, but their verifier still needs 1.7 hours
to analyze the property $\phi_7$.
In summary, previous techniques perform the following steps to obtain and verify
their neural network controllers for ACAS:
\begin{enumerate}
\item Compute a lookup table containing the scores of different actions
given sensory states via dynamic programming.
\item Train neural networks to approximate the lookup table.
\item In deployed systems, use the neural networks to provide control
advisories.
\begin{itemize}
\item At run time, the networks give interpolated scores for
states not present in the original lookup table.
\item Neural network verifiers that analyze the internal computing
of neural networks are adopted to check if the networks meet
certain safety specifications.
\end{itemize}
\end{enumerate}
We propose instead to verify neural networks with low-dimensional inputs, such
as the ACAS Xu networks, via input quantization and state enumeration.
Specifically, we prepend a quantization layer to the network so that all the
internal computation is performed on the discretized input space. Our proposed
technique performs the following steps to obtain and verify a quantized neural
network:
\begin{enumerate}
\item We take a pretrained network and prepend an input quantization layer
to the network. The input quantization should be compatible with the
original lookup table, i.e., preserving the grid points in the lookup
table.
\item In deployed systems, sensory inputs are first quantized by the input
quantization layer. The original network then computes the scores for
the quantized input.
\begin{itemize}
\item At run time, the quantization process is equivalent to
nearest-neighbor interpolation.
\item To verify the network for any specification, we enumerate all
quantized states within the constraint of the specification and
check if the network outputs meet the specification.
\end{itemize}
\end{enumerate}
Our method provides the following desirable features:
\begin{enumerate}
\item Our method provides acceptable runtime accuracy for ACAS Xu. Our
input quantization is equivalent to nearest-neighbor interpolation and
gives identical results on the table grid points as the original
continuous network. \citet{ julian2019deep} has shown that
nearest-neighbor interpolation on the lookup table for runtime sensory
inputs provides effective collision avoidance advisories in simulation.
\item Our method enables efficient verification. Verifying the
input-quantized networks for any safety specification takes nearly
constant time bounded by evaluating the network on all the grid points
in the quantized space. Multiple specifications can be verified
simultaneously by evaluating the network on the grid once and checking
the input and output conditions for each property. Our method provides a
verification speedup of tens of thousands of times compared to the
ReluVal~\citep{wang2018formal} verifier.
\item Many existing verifiers do not accurately model floating-point
arithmetic due to efficiency considerations, thus giving potentially
incorrect verification results \citep{jia2020exploiting}. For example,
\citet{ wang2018formal} reports that Reluplex \citep{katz2017reluplex}
occasionally produces false adversarial examples due to floating-point
error.
By contrast, our verification result is exact (i.e., complete and sound)
and does not suffer from floating-point error because we combine input
quantization and complete enumeration of the effective input space.
Moreover, input quantization allows directly verifying on the target
implementation or an accurate simulation of the implementation, and
therefore provides trustworthy safety guarantees for given neural
network inference implementations.
\item Our technique allows easily verifying more complicated network
architectures, such as continuous-depth models \citep{chen2018neural}.
Our verification only needs an efficient inference implementation for
the networks. By contrast, extending other neural network verifiers to
new network architectures requires significant effort.
\end{enumerate}
We recommend input quantization for neural networks with low-dimensional inputs
as long as the quantization provides sufficient accuracy for the target
application and the quantization space is small enough to allow efficient
enumeration. This technique enables efficient, exact, and robust verification
and provides reliable performance on the deployed platform.
\section{Method}
We formally describe our input-quantization method. This paper uses bold symbols
to represent vectors and regular symbols to represent scalars. The superscript
represents derived mathematical objects or exponentiation depending on the
context.
Let $\V{f}:\mathbb{R}^n \mapsto \mathbb{R}^m$ denote the computation of a neural network on
$n$-dimensional input space with $n$ being a small number. We propose to use
a quantized version of the network for both training and inference, defined as
\begin{align}
\V{f}^q(\V{x}) \vcentcolon= \V{f}(\V{q}(\V{x}))
\end{align}
where $\V{q}(\V{x})$ is the quantization function such that $\V{q}(\V{x})\in S$ with
$S$ being a finite-sized set. For a specification $\phi: \forall_{\V{x}} P(\V{x})
\implies R(\V{f}(\V{x}))$ where $P(\cdot)$ and $R(\cdot)$ are predicates, we verify
$\phi$ regarding $\V{f}^q$ by checking:
\begin{align}
\phi^q \;:\quad & \forall_{\V{x}^q}\; \V{x}^q \in S_p \implies R(\V{f}(\V{x}^q)) \\
\text{where } & S_p \vcentcolon= \{\V{q}(\V{x}) \;:\; P(\V{x}) \} \nonumber
\end{align}
Since $S_p \subseteq S$, the complexity of verifying $\phi^q$ is bounded by
$|S|$.
We quantize each dimension of $\V{x}$ independently via $\V{q}(\V{x}) = [q_1(x_1)
\ldots q_n(x_n)]$. Note that if some of the dimensions are highly correlated in
some application, we can quantize them together to avoid a complete Cartesian
product and thus reduce the size of the quantized space.
In many cases, the input space is uniformly quantized. Previous work has
utilized uniform input quantization for neural network verification \citep{
wu2020robustness, jia2020efficient} and uniform computation quantization for
efficient neural network inference \citep{ gholami2021survey}. Given a
quantization step $s_i$ and a bias value $b_i$, we define a uniform quantization
function $q_i(\cdot)$ as:
\begin{align}
q_i(x_i) = \left\lfloor \frac{x_i - b_i}{s_i} \right\rceil s_i + b_i
\label{eqn:uniform-quant}
\end{align}
where $\lfloor\cdot\rceil$ denotes rounding to the nearest integer.
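A one-line realization of this quantizer (here \verb|np.rint| plays the role of $\lfloor\cdot\rceil$, rounding halves to even; the test values are illustrative):

```python
import numpy as np

# Uniform quantizer: q(x) = round((x - b) / s) * s + b.
def uniform_quantize(x, s, b):
    return np.rint((np.asarray(x) - b) / s) * s + b

assert np.allclose(uniform_quantize([0.12, 0.26, -0.31], 0.1, 0.0),
                   [0.1, 0.3, -0.3])
```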
In general, the values of $q_i(\cdot)$ are determined by prior knowledge about
the target application and may thus be nonuniform. Let $Q_i =
\{v_i^1, \cdots, v_i^k\}$ denote the range of $q_i(\cdot)$. We use nearest
neighbor for nonuniform quantization:
\begin{align}
q_i(x_i) = \argmin_{v_i^j} |v_i^j - x_i|
\end{align}
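A minimal sketch of the nearest-neighbor rule above (our own naming; ties are broken by whichever level appears first in the list):

```python
def nn_quantize(x, levels):
    """Map x to the closest value in the finite set Q_i of quantization levels."""
    return min(levels, key=lambda v: abs(v - x))
```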
The ACAS Xu networks are trained on a lookup table $\V{L}: G \mapsto \mathbb{R}^m$,
where the domain $G\subset\mathbb{R}^n$ is a finite set. We choose the quantization
scheme so that the quantization preserves grid points, formally $\forall \V{x} \in
G: \V{q}(\V{x}) = \V{x}$. In this way, the training processes of $\V{f}(\cdot)$ and
$\V{f}^q(\cdot)$ are identical. In fact, we directly prepend $\V{q}(\cdot)$ as an
input quantization layer to a pretrained network $\V{f}(\cdot)$ to obtain
$\V{f}^q(\cdot)$. Note that we can use a quantization denser than the grid points in
$G$, in which case prediction accuracy may improve because the neural
network acts as an interpolator.
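Constructing $\V{f}^q$ from a pretrained $\V{f}$ amounts to function composition; the following sketch (hypothetical helper names, plain Python functions in place of a real network) also checks the grid-preservation property $\forall \V{x}\in G: \V{q}(\V{x})=\V{x}$:

```python
def make_quantized_net(f, quantizers):
    """Prepend an input quantization layer to a pretrained network f,
    returning f^q(x) = f(q(x)) with per-dimension quantizers q_i."""
    def f_q(x):
        xq = [q(xi) for q, xi in zip(quantizers, x)]
        return f(xq)
    return f_q

def preserves_grid(quantizers, grid):
    """Check that quantization leaves every grid point of G unchanged,
    so f and f^q see identical training data."""
    return all(
        all(q(xi) == xi for q, xi in zip(quantizers, x)) for x in grid
    )
```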
\section*{References}}
\makeatletter
\renewcommand\@biblabel[1]{#1.}
\makeatother
\renewcommand\UrlFont{\color{blue}\rmfamily}
\begin{document}
\title{\papertitle}
\author{Kai Jia\inst{1}\and
Martin Rinard\inst{1}}
\institute{MIT CSAIL, Cambridge MA 02139, USA \\
\email{\{jiakai,rinard\}@mit.edu}
}
\maketitle
\input{abstract}
\input{allcontent}
\bibliographystyle{splncs04nat}
\section{INTRODUCTION}
\label{sec:intro}
Driven by the need for flexible and efficient manufacturing, an increasing number of affordable mobile robots are expected to be deployed in warehouse environments for transportation purposes. One key component in supporting applications with large fleets of robots is multi-agent path planning. Many research efforts have been devoted to this field in recent years from different perspectives.
Generally, multi-agent planning methods can be classified into two categories: centralized methods and decentralized methods. When all the moving agents' intentions (e.g. future trajectories, goals) are known in a static environment, a centralized planner can generate collision-free paths for all the agents \cite{sharon2015conflict}. However, the computational burden may become a significant concern as the number of agents grows, and performance may degrade when agents are exposed to unknown dynamic objects \cite{mellinger2012mixed}. Besides, in practice, centralized methods heavily rely on stable and fast communication networks and powerful servers, which would be costly to deploy in large-scale environments with a large number of robots. Therefore, in this paper, we focus on decentralized methods, where reliable communication cannot be established between agents.
For decentralized methods, each agent independently makes decisions based on its own local observations and policies. A natural question is: what should the agent know and assume about other agents or dynamic obstacles around it? Some approaches assume all obstacles are static and re-plan at a high frequency to avoid collisions \cite{koenig2005fast}, while others assume a homogeneous policy for agents and constant velocities for dynamic obstacles \cite{van2008reciprocal}. However, we argue that in practice it is difficult to perfectly estimate neighbouring decision-making agents' intentions without communication. Therefore, instead of using traditional path planning procedures, some recent approaches use reinforcement learning to solve the robot navigation problem by implicitly learning to deal with such interaction ambiguity with surrounding moving obstacles \cite{chen2017socially, everett2018motion, mehta2016autonomous, sartoretti2019primal}.
Though learning-based approaches have shown great potential to model complex interactions in dynamic environments, most of them make assumptions about the homogeneity or motion models of surrounding moving obstacles \cite{long2018towards, chen2017decentralized}. In this paper, we focus on planning in mixed dynamic environments where moving obstacles can be either cooperative or non-cooperative. Also, inspired by state-of-the-art trajectory prediction methods \cite{cui2019multimodal}, we propose an image-based spatial-temporal dynamic obstacle representation, which does not require explicit motion estimation and generalizes to an arbitrary number of agents.
Reinforcement learning agents usually struggle to achieve satisfactory performance in long-horizon tasks with sparse rewards, such as the long-range goal-conditioned navigation problem \cite{eysenbach2019search}.
Therefore, one insight of this paper is to use mature planning methods to guide the reinforcement learning-based local planning policy. In this way, agents can learn from complicated local interactions with dynamic obstacles while persistently moving towards a long-range goal. In addition, to ensure multi-agent training stability and performance, we propose an evolutionary reinforcement learning method that can be easily scaled to large and complex environments.
The major contributions of this paper are:
\begin{enumerate}
\item We investigate the multi-agent path planning problem in mixed dynamic environments without the homogeneity assumption.
To model the dynamic obstacles' behavior, we propose an image-based representation that improves our agents' robustness in handling different types of dynamic obstacles.
\item We decompose a difficult long-range planning problem into multiple easier waypoint-conditioned planning tasks with the help of mature global planners. Experiments show that this approach can greatly improve the performance of reinforcement learning-based methods.
\item We propose an evolutionary multi-agent reinforcement learning approach that gradually eliminates low-performing policies during training to increase training efficiency and performance.
\end{enumerate}
The structure of this paper is as follows. Section~\ref{sec:related} introduces related works on multi-agent path planning in dynamic environments. Section~\ref{sec:background} provides preliminaries for our problem formulation. The details of our multi-agent path planning with evolutionary reinforcement learning (MAPPER) approach are presented in section~\ref{sec:approach}, followed by section~\ref{sec:experiment}, which shows the experiment results of MAPPER in various grid world simulations with mixed dynamic obstacles. Finally, we give a brief conclusion in section~\ref{sec:conclusion}.
\section{Related Work}
\label{sec:related}
\subsection{Decentralized Path Planning in Dynamic Environment}
Decentralized planning methods can be broadly classified into two categories: reaction-based and trajectory-based.
For reaction-based approaches, we need to specify avoidance rules for dynamic obstacles and re-plan at each step based on the rules, such as D* lite \cite{koenig2005fast} and velocity obstacle (VO) based methods \cite{snape2011hybrid, bareiss2015generalized}.
Trajectory-based approaches usually estimate surrounding dynamic objects' intentions and then search for collision-free paths in the spatial-temporal domain \cite{fan2018baidu}. These methods either require perfect information about surrounding dynamic obstacles (e.g. velocities and positions) or assume that all the moving agents adopt the same planning and control policy so that they are homogeneous \cite{van2011reciprocal, van2008reciprocal}. However, such assumptions may not hold in many real-world applications involving sensing uncertainty and heterogeneous moving obstacles, such as pedestrians. Besides, increasing local interaction complexity may lead to oscillating behaviors or the \textit{freezing robot problem} \cite{trautman2010unfreezing}. Also, in practice, VO-based and trajectory-based approaches usually have several components to process sensor data, such as object-intention estimation and trajectory prediction. Each component may have many hyper-parameters that are sensitive to environment settings and need extra human effort to tune.
In order to reduce the amount of hand-tuned parameters and deal with sensing uncertainties, some researchers have proposed learning-based methods to solve the planning problem.
\subsection{Reinforcement Learning-based Planning}
Reinforcement learning-based collision-avoidance algorithms for the single-agent case have been extensively studied in recent years. Deep neural networks are usually used to approximate the agent's policy and value function. Some works propose to learn the navigation policy in a completely end-to-end fashion, directly mapping raw sensor data to the agent's action \cite{pfeiffer2017perception,tai2017virtual}. However, we believe that extracting object-level representations can improve the policy's generalization ability, because different sensor data sources may encode the same object-level information. Chen et al. first estimate dynamic obstacles' physical states (e.g. velocities and positions) under certain motion model assumptions, and then feed them into a neural network to obtain future actions~\cite{chen2017socially, chen2017decentralized}. However, the number of agents is restricted, and the approach can hardly be deployed in large-scale environments. \cite{everett2018motion} addresses the problem of a varying number of dynamic obstacles with an LSTM and removes the homogeneity assumption for surrounding agents. However, it still requires explicit estimation of surrounding agents' states and suffers performance degradation in tasks with a large number of agents. For the multi-agent case, PRIMAL \cite{sartoretti2019primal} is the work most similar to ours, which also uses an image-based representation and the target goal as input sources. However, non-cooperative dynamic obstacles and temporal information are not considered in their work. Besides, their centralized training approach takes a long time even with the help of imitation learning.
Our approach differs from theirs in that: 1) we encode both spatial and temporal information of surrounding obstacles in the observation representation; 2) we consider planning in mixed dynamic environments; 3) we propose a decentralized evolutionary training method which converges much faster and generalizes to an arbitrary number of training agents; 4) we use a mature global planner to guide the local policy to solve the long-range navigation problem. We will use the reinforcement learning method in PRIMAL as an experiment baseline in section~\ref{sec:experiment}.
\section{Background}
\label{sec:background}
\subsection{Problem Formulation}
We model the multi-agent planning problem under the Markov decision processes (MDPs) framework. An $N$-agent MDP can be represented by the state space $\mathcal{S}$, which describes all the possible state configurations of the agents, the action spaces $\mathcal{A}_1, ..., \mathcal{A}_N$, which define each agent's possible actions, and the observation spaces for each agent $\mathcal{O}_1, ..., \mathcal{O}_N$. In this paper, we consider the partially observable situation, which means each agent cannot observe the full state. Agent $i$ receives its own observation $\boldsymbol{o}_i:\mathcal{S} \mapsto \mathcal{O}_i$ and produces an action from its stochastic policy $\boldsymbol{\pi}_{\theta_i}:\mathcal{O}_i \times \mathcal{A}_i \mapsto [0,1]$, where the policy is parameterized by $\theta_i$. All the agents' actions produce a new state that follows the state transition function $\mathcal{T} : \mathcal{S} \times \mathcal{A}_1 \times ... \times \mathcal{A}_N \mapsto \mathcal{S}$. At each time step, agent $i$ receives a reward based on the state and its action, $r_i:\mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$. The initial state is determined by the distribution $\rho:\mathcal{S} \mapsto [0,1]$. The objective is to maximize the expected return $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$ of each agent $i$, where $\gamma$ represents the discount factor and $T$ is the time horizon. The detailed representations of the observation space, action space and rewards will be introduced in section~\ref{sec:approach}.
\subsection{Advantage Actor Critic Algorithm}
We use the advantage actor-critic (A2C) method \cite{mnih2016asynchronous} as the basis of our multi-agent evolutionary reinforcement learning framework to solve the planning problem. A2C uses a stochastic policy, which is essential in our multi-agent scenario because the equilibrium policies in multi-agent MDPs are usually stochastic \cite{bucsoniu2010multi}. Additionally, policy gradient-based methods usually have better convergence properties than value-based methods \cite{mnih2016asynchronous}. The objective of A2C is to find the policy $\boldsymbol{\pi}_{\theta}( a |\boldsymbol{o})$ that maximizes the expected return $\mathbb{E}_{\pi_\theta}R(\tau) = \mathbb{E}_{\pi_\theta}\sum_{t=0}^{T} \gamma^tr(\boldsymbol{o}_t, a_t)$ over the episode $\tau = (\boldsymbol{o}_0, a_0, \dots, \boldsymbol{o}_T, a_T)$, where $a$ is the action and $\boldsymbol{o}$ is the observation. Given this objective function, the policy gradient can be computed as:
\begin{equation}
\nabla_\theta J(\theta) = \mathbb{E}_{\boldsymbol{\pi}_\theta}[\nabla_\theta \log \boldsymbol{\pi}_{\theta}( a |\boldsymbol{o}) R(\tau)]
\end{equation}
To reduce the gradient variance, we employ a value function $V^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o})$ as the baseline and replace the expected return $R(\tau)$ with an advantage function $A^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o}, a)$. Then we can rewrite the gradients as:
\begin{equation}
\nabla_\theta J(\theta) = \mathbb{E}_{\boldsymbol{\pi}_\theta}[\nabla_\theta \log \boldsymbol{\pi}_{\theta}( a |\boldsymbol{o}) A^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o}, a)],
\end{equation}
where the advantage function $A^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o}, a)$ has an unbiased estimation:
\begin{equation}
A^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o}, a) = \mathbb{E}_{\boldsymbol{\pi}_\theta}(r + \gamma V^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o}^\prime) | \boldsymbol{o}, a) - V^{\boldsymbol{\pi}_{\theta}}(\boldsymbol{o})
\end{equation}
The policy $\boldsymbol{\pi}_\theta$ is usually termed the \textit{actor}, which produces actions based on current observations, and the value function $V^{\boldsymbol{\pi}_{\theta}}$ is the \textit{critic}, which is used to estimate the advantage function $A^{\boldsymbol{\pi}_{\theta}}$ that indicates the quality of the produced actions. In this paper, we approximate the policy and value function with neural networks, which will be introduced in section~\ref{archi}.
\section{Approach}
\label{sec:approach}
This section shows how the multi-agent path planning problem is modeled into an evolutionary reinforcement learning framework. We firstly introduce the observation representation, action space, and reward design of each agent. Then, we detail the model architecture and training procedures.
\subsection{Observation Representation}
\label{obs}
In many real-world mobile robot applications, a single-beam LiDAR is commonly used for localization and obstacle detection purposes, as it is cheap and reliable \cite{grisetti2007improved, pierson2019dynamic}. A common map representation based on LiDAR data is the cost map, which discretizes a 2D map into fixed-resolution grids and assigns a cost to each grid cell \cite{reid2013cooperative}. The cost and obstacle information can be continuously updated from local observations of sensor data. Therefore, to mimic such common map representations in practice, we consider a partially observable grid world environment, where each agent's visibility is limited by its sensing range and there is no communication between agents. We argue that such a fully decentralized, partially observable setting is feasible and important if we are to deploy our approach in the real world with large-scale robot fleets. We assume that each agent is able to detect and distinguish surrounding agents and dynamic objects within its sensing range and estimate their relative positions. Also, we assume that each agent can access the static environment map so that it can plan a trajectory in this map.
We split the observations into three channels to encode different types of information. As shown in Fig.~\ref{fig:framework}, the first channel stores the currently observed static obstacles, surrounding agents and dynamic objects' positions, which are represented by different values. This channel is the basic reflection of the sensing data and corresponds to the cost map representation, which could be used in many traditional search-based planning algorithms \cite{koenig2005fast}. The second channel contains the trajectories of surrounding agents and dynamic obstacles, which encode the time sequence information. Inspired by a state-of-the-art trajectory prediction method from the autonomous vehicle field, we encode the trajectory with grayscale values that vary in time \cite{cui2019multimodal}. For example, a point earlier on a trajectory has a smaller value than a later one. The third channel is the reference path planned by a global planner based on the static environment map. The reference path update frequency can be much lower than that of our reinforcement learning-based local planner. We will demonstrate the importance of these observation representations in the experiment section~\ref{sec:experiment}.
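A sketch of how such a three-channel observation could be assembled follows (the concrete cell values and the grayscale ramp for trajectories are our own illustrative choices, not the exact values used in MAPPER):

```python
def build_observation(size, static_obs, agents, dyn_obs, trajectories, ref_path):
    """Build a 3-channel grid observation of shape [3][size][size].

    Each argument (except size) is a list of (row, col) cells within the
    sensing window; `trajectories` is a list of time-ordered cell lists.
    The values 0.3/0.6/1.0 are illustrative markers, not the paper's.
    """
    obs = [[[0.0] * size for _ in range(size)] for _ in range(3)]
    # channel 0: currently observed static obstacles, agents, dynamic obstacles
    for r, c in static_obs:
        obs[0][r][c] = 0.3
    for r, c in agents:
        obs[0][r][c] = 0.6
    for r, c in dyn_obs:
        obs[0][r][c] = 1.0
    # channel 1: trajectories, earlier points darker (smaller) than later ones
    for traj in trajectories:
        for t, (r, c) in enumerate(traj):
            obs[1][r][c] = (t + 1) / len(traj)
    # channel 2: reference path from the global planner
    for r, c in ref_path:
        obs[2][r][c] = 1.0
    return obs
```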
\begin{figure}[htb]
\centering
\includegraphics[width=8.3cm]{framework.png}
\caption{MAPPER model architecture overview.}
\label{fig:framework}
\end{figure}
\subsection{Action Space}
In this paper, we consider an 8-connected grid environment, which means the agent can move in 8 directions: south, north, west, east, southwest, northwest, southeast and northeast. The agent can also choose to wait at the current grid cell. Thus, the action space contains 9 discrete options in total. At each time step, the agent moves one step in the selected direction. However, if the target grid cell is already occupied, the agent will not be able to move and will stay at the current position.
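The action set and the occupied-cell rule can be sketched as follows (a minimal sketch; the offset ordering is our own convention):

```python
# 9 discrete actions on an 8-connected grid: (d_row, d_col) offsets plus "wait"
ACTIONS = [(0, 0),             # wait
           (-1, 0), (1, 0),    # north, south
           (0, -1), (0, 1),    # west, east
           (-1, -1), (-1, 1),  # northwest, northeast
           (1, -1), (1, 1)]    # southwest, southeast

def step(pos, action_idx, occupied):
    """Apply an action; if the target cell is occupied, the agent stays put."""
    dr, dc = ACTIONS[action_idx]
    target = (pos[0] + dr, pos[1] + dc)
    return pos if target in occupied else target
```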
\subsection{Reward Design}
The objective of robot navigation is to reach the goal position in a minimum number of steps while avoiding collisions with obstacles. Therefore, the first part of the reward consists of a step penalty $r_s$, a collision penalty $r_c$ and a goal-reaching reward $r_g$.
To encourage exploration, we penalize waiting slightly more than moving if the agent has not reached the goal. A similar training trick is also used in \cite{sartoretti2019primal}. To prevent agents from adopting oscillating policies, we set a penalty $r_o$ for agents that return to the position they came from in the previous step. The detailed values of these reward components in our experiment can be found in Table~\ref{reward}.
Since our local planning policy is guided by a reference path planned by global planner, we introduce an additional off-route penalty $r_{f}$ if the agent deviates from the reference path. The intuition is that if there are no dynamic obstacles around the agent, it should be able to follow the reference path. To obtain the off-route penalty,
we need to calculate the Euclidean distance between the agent's position and the closest point's position on the reference path. Denote the position of the agent as $\boldsymbol{p}_a \in \mathbb{R}^2$. Denote the reference path as a set of coordinates $\mathcal{S} = \{\boldsymbol{p}_{start},...,\boldsymbol{p}_{goal}\}$,
the penalty is calculated by
$
r_{f} = -\min_{\boldsymbol{p} \in \mathcal{S}} ||\boldsymbol{p}_a - \boldsymbol{p} ||_2
$.
Then the final reward is
$
R = r_s+r_c+r_o+r_g+\lambda r_{f}
$,
where $\lambda$ controls the weight of the off-route penalty.
\begin{table}[h]
\caption{Reward Design}
\label{reward}
\begin{center}
\begin{tabular}{|c |c|}
\hline
\textbf{Reward} & \textbf{Value}\\
\hline
step penalty $r_s$ & -0.1 (move) or -0.5 (wait) \\
\hline
collision penalty $r_c$ & -5 \\
\hline
oscillation penalty $r_o$
& -0.3\\
\hline
off-route penalty $r_{f}$
& -$\min_{\boldsymbol{p} \in \mathcal{S}} ||\boldsymbol{p}_a - \boldsymbol{p} ||_2$\\
\hline
goal-reaching reward $r_g$
& 30\\
\hline
\end{tabular}
\end{center}
\end{table}
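Putting the components together, the per-step reward could be computed as in the following sketch (the function name and boolean flags are our own; the constants are taken from Table~\ref{reward}):

```python
import math

def compute_reward(agent_pos, ref_path, moved, collided, oscillated,
                   reached_goal, lam=0.3):
    """Combine the reward terms of Table 1: R = r_s + r_c + r_o + r_g + lam*r_f."""
    r = -0.1 if moved else -0.5            # step penalty (move vs. wait)
    if collided:
        r += -5.0                          # collision penalty
    if oscillated:
        r += -0.3                          # oscillation penalty
    if reached_goal:
        r += 30.0                          # goal-reaching reward
    # off-route penalty: negative distance to the closest reference-path point
    r_f = -min(math.dist(agent_pos, p) for p in ref_path)
    return r + lam * r_f
```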
\begin{algorithm}
\caption{Multi-Agent Evolutionary Training Approach}
\label{alg:EA}
\begin{algorithmic}[1]
\REQUIRE ~ Agents number $N$; discount factor $\gamma$; evolution interval $K$; evolution rate $\eta$;
\STATE Initialize agents' model weights $\Theta = \{\Theta_1,...,\Theta_N\}$
\REPEAT
\STATE Set accumulated reward $R^{(k)}_1,...,R^{(k)}_N = 0$
\STATE // \textit{update model parameters via A2C algorithm}
\FOR{$k=1,...,K$}
\FOR{each agent $i$}
\STATE Execute the current policy $\boldsymbol{\pi}_{\Theta_i}$ for $T$ timesteps, collecting actions, observations and rewards $\{a_i^{t}, \boldsymbol{o}_i^t, r_i^t \}$, where $t\in [0, T]$
\STATE Compute return $R_i = \sum_{t=0}^{T} \gamma^tr_i^t$
\STATE Estimate advantage $\hat{A}_i = R_i - V^{\boldsymbol{\pi}_{\Theta_i}}(\boldsymbol{o}_i)$
\STATE Compute gradients $
\nabla_{\Theta_i} J = \mathbb{E}[\nabla_{\Theta_i} \log \boldsymbol{\pi}_{\Theta_i}\hat{A}_i]$
\STATE Update $\Theta_i$ based on gradients $\nabla_{\Theta_i} J$
\STATE $R^{(k)}_i = R^{(k)}_i + R_i$
\ENDFOR
\ENDFOR
\STATE Normalize the accumulated rewards to get $\Bar{R}^{(k)}_1,...,\Bar{R}^{(k)}_N$
\STATE Find maximum reward $\Bar{R}_{j}^{(k)}$ with agent index $j$
\STATE // \textit{Evolutionary selection}
\FOR{ each agent $i$}
\STATE Sample $m$ from uniform distribution between $[0,1]$
\STATE Compute evolution probability $p_i = 1-\frac{\exp (\eta\Bar{R}_i^{(k)})}{\exp (\eta\Bar{R}_j^{(k)})}$
\IF{$m < p_i$}
\STATE $\Theta_i \xleftarrow{} \Theta_j$
\ENDIF
\ENDFOR
\UNTIL converged
\end{algorithmic}
\end{algorithm}
\subsection{Model Architecture}
\label{archi}
We use deep neural networks to approximate the policy and the value function in our A2C method.
The model architecture is illustrated in Fig.~\ref{fig:framework}. We have two input sources that are processed independently before being concatenated into a combined feature. The first one is the three-channel image-based observation introduced in section~\ref{obs}. The image channels are passed through several blocks, which contain convolution layers and max-pooling layers. After the final block, the extracted feature is flattened into one feature embedding.
We note that reinforcement learning can hardly solve long-horizon tasks in which goal-reaching rewards are rarely obtained \cite{eysenbach2019search}. Therefore, instead of using the final goal as one input source, we use waypoint coordinates as sub-goals of our task, which are computed by the global planner. More specifically, the global planner, which is the A* planner in our case, generates a reference path from the start point to the goal. Then the agent chooses waypoints on the reference path based on a certain distance interval threshold and attempts to reach them one by one. Once the agent approaches its current waypoint goal within a pre-defined range, it begins to head to the next waypoint.
The currently selected waypoint can be viewed as a sub-goal. It is passed through a fully connected layer and then fed, together with the observation feature embedding, into two shared fully connected layers. The output feature of the two shared layers is then passed through two separate neural networks. The lower one is a two-layer policy network with softmax activation, which produces the probability of choosing each action. The upper one is the value function network, which outputs the expected value of the current state.
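The waypoint mechanism described above can be sketched as follows (our own function names; the distance interval and reach radius are illustrative parameters):

```python
import math

def select_waypoints(ref_path, interval):
    """Pick every `interval`-th point on the reference path as a sub-goal,
    always keeping the final goal as the last waypoint."""
    waypoints = ref_path[interval - 1::interval]
    if not waypoints or waypoints[-1] != ref_path[-1]:
        waypoints.append(ref_path[-1])
    return waypoints

def advance_waypoint(agent_pos, waypoints, idx, reach_radius=1.0):
    """Return the index of the agent's current waypoint, advancing past any
    waypoint the agent has approached within reach_radius."""
    while (idx < len(waypoints) - 1
           and math.dist(agent_pos, waypoints[idx]) <= reach_radius):
        idx += 1
    return idx
```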
\subsection{Multi-Agent Evolutionary Reinforcement Learning}
Although reinforcement learning has achieved great success in many single-agent tasks \cite{mnih2013playing}, it is still hard to directly apply those methods to the multi-agent case.
One challenge is the scalability issue: as the number of agents grows, the environment becomes more complicated and the variance of policy gradients may grow exponentially \cite{lowe2017multi}.
Inspired by evolutionary algorithms that have been successfully applied to many optimization problems \cite{simon2013evolutionary}, we adopt a decentralized evolutionary approach based on the A2C algorithm, which can be applied to training procedures with an arbitrary number of agents. Evolutionary algorithms usually contain three stages: crossover, mutation and selection. Denote the model parameters of agent $i$ as $\Theta_i$. We first initialize $N$ agents with random weights for their own models. Then the mutation process proceeds by training each agent's model separately using the A2C algorithm. After $k$ episodes of training, agent $i$ accumulates the rewards over the last $k$ episodes, which we denote as $R_i^{(k)}$. Denote $R_{max}^{(k)} = \max_{i \in \{1,...,N\}} R_i^{(k)}$ and $R_{min}^{(k)} = \min_{i \in \{1,...,N\}} R_i^{(k)}$. We normalize the accumulated reward for agent $i$ by:
$
\Bar{R}_i^{(k)} = \frac{R_i^{(k)}}{R_{max}^{(k)} - R_{min}^{(k)}}
$.
Assume agent $j$ has the maximum normalized reward $\Bar{R}_{j}^{(k)} = \max_{i \in \{1,...,N\}} \Bar{R}_i^{(k)}$; then we start the crossover and selection stages. Each agent $i$ replaces its model weights with those of agent $j$ with probability $p_i$, and retains its original weights with probability $1-p_i$. The probability is calculated by
$
p_i = 1-\frac{\exp (\eta\Bar{R}_i^{(k)})}{\exp (\eta\Bar{R}_j^{(k)})}
$,
where $\eta$ controls the evolution rate. A larger $\eta$ means agents with lower rewards are more likely to be updated. The core idea of our evolutionary method is simple: gradually eliminate bad policies while keeping good ones. The full MAPPER training process is shown in Algorithm~\ref{alg:EA}.
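The evolutionary selection step can be sketched as follows (a minimal sketch with our own names; the guard against a zero reward span is our addition):

```python
import math
import random

def evolutionary_selection(weights, rewards, eta=2.0, rng=random):
    """Selection step of the evolutionary training loop.

    Agent i copies the best agent j's weights with probability
    p_i = 1 - exp(eta * Rbar_i) / exp(eta * Rbar_j), where Rbar is the
    accumulated reward normalized by the reward span R_max - R_min.
    `weights` is a list of per-agent parameter objects, mutated in place.
    """
    r_max, r_min = max(rewards), min(rewards)
    span = (r_max - r_min) or 1.0          # guard: avoid division by zero
    norm = [r / span for r in rewards]     # normalized rewards Rbar_i
    j = max(range(len(rewards)), key=lambda i: norm[i])
    for i in range(len(weights)):
        p_replace = 1.0 - math.exp(eta * norm[i]) / math.exp(eta * norm[j])
        if rng.random() < p_replace:
            weights[i] = weights[j]        # copy the best agent's weights
    return weights, j
```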
\section{Experiment and Discussion}
\label{sec:experiment}
\subsection{Experiment Settings}
\begin{figure}[htb]
\centering
\includegraphics[width=8.5cm]{maps_new.jpg}
\caption{Grid world simulation environment demonstration.}
\label{fig:map}
\end{figure}
We evaluate our approach in a grid world simulation environment, as Fig.~\ref{fig:map} shows. Gray blocks are static obstacles and black blocks are agents' goals. Orange circles represent agents. Each agent has a 7-grid sensing range in our experiment setting, which means the size of the observation image is $15 \times 15 \times 3$. Blue triangles represent dynamic obstacles; each dynamic obstacle navigates to a randomly selected goal using the LRA* algorithm~\cite{coop}. To increase the diversity of dynamic obstacle movement patterns, we randomly select 50\% of the dynamic obstacles to ignore the presence of surrounding agents, which is more challenging for our agents because of these obstacles' non-cooperative nature.
Existing centralized multi-agent path planning methods, such as conflict based search \cite{sharon2015conflict}, break down in mixed dynamic environments because of the unpredictable nature of non-cooperative moving obstacles. Therefore, we resort to decoupled reaction-based planning approaches. One benchmark we adopt is a modified local repair A* (LRA*) algorithm that re-plans at every time step, where we replace A* with D* lite \cite{koenig2005fast} implementation because the latter is more computationally efficient in dynamic environments \cite{coop}.
Each LRA* agent takes into account local observation, updates the cost map accordingly, and searches for a route to the destination.
Then, a coordinator that has access to every agent's future plan resolves conflicts between agents and adjusts these paths. LRA* behaves similarly to our MAPPER method in that both react based on local observations, but note that we do not require access to all agents' future plan information.
Another baseline is PRIMAL \cite{sartoretti2019primal}, a reinforcement learning-based decentralized planner. We modify the original PRIMAL to fit our experiment setting because the original model and observation representation do not consider non-cooperative dynamic obstacles. More specifically, we use the same observation representation and network architecture as ours, but keep the original A3C training procedure and goal-conditioned approach of PRIMAL, so we name it PRIMAL* in the rest of the paper.
We also conduct ablation experiments that remove parts of our MAPPER method: removing the moving dynamic obstacles' trajectory channel (w/o traj) and removing the global planner guidance feature (w/o guid). We evaluate the performance of each method in terms of the \textbf{success rate} in different experiment settings.
\begin{table*}[t]
\centering
\caption{Comparison of success rate over different experiment settings}
\label{compare}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{Environment Setting} & \multicolumn{5}{c|}{Success Rate} \\ \hline
map size & agent & dynamic obstacle & MAPPER & MAPPER w/o traj & MAPPER w/o guid & PRIMAL* & LRA* \\ \hline
20x20 & 15 & 10 & \textbf{1.0} & 0.971 & 0.877 & 0.964 & 0.996 \\ \hline
20x20 & 35 & 30 & \textbf{1.0} & 0.961 & 0.836 & 0.980 & 0.999 \\ \hline
20x20 & 45 & 30 & \textbf{0.999} & 0.854 & 0.607 & 0.971 & 0.997 \\ \hline
60x65 & 70 & 100 & \textbf{1.0} & 0.256 & 0.516 & 0.352 & \textbf{1.0} \\ \hline
60x65 & 130 & 140 & \textbf{1.0} & 0.473 & 0.221 & 0.404 & 0.992 \\ \hline
120x130 & 150 & 40 & \textbf{0.997} & 0.324 & 0.211 & 0.389 & 0.994 \\ \hline
\end{tabular}
\end{table*}
\subsection{Training Details}
Inspired by the idea of curriculum learning \cite{forestier2017intrinsically}, we divide the whole training procedure into two stages and start from easier tasks. We begin by initializing a small population of agents and dynamic obstacles, and sample goals within a certain distance to let agents learn a short-range navigation policy. Then we increase the number of agents and dynamic obstacles, and sample goals over the whole map.
The training parameters are the same for MAPPER and its variants. We set the off-route penalty weight $\lambda = 0.3$, the evolution rate and interval $\eta=2, K=50$, the discount factor $\gamma=0.99$, and the learning rate $lr= 0.0003$. For PRIMAL*, we observe that it is sensitive to the learning rate and does not converge if we use the same learning rate as MAPPER. Therefore, we set the learning rate for PRIMAL* to 0.00005 after several experimental explorations.
For the first stage, we initialize 4 agents and 10 dynamic obstacles in a $20\times 20$ map with 7 grid goal-sample range, as shown in Fig.~\ref{fig:map} left. For the second stage, we train models with 20 agents and 30 dynamic obstacles in a more complex $32\times 32$ map without goal-sample limitation, as shown in Fig.~\ref{fig:map} right.
\subsection{Results}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{training.jpg}
\caption{Success rate and average reward comparison of variants of MAPPER and PRIMAL* algorithms}
\label{fig:train}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{goal.png}
\caption{Comparison of MAPPER and its variants with different goal sample range.}
\label{fig:goal}
\end{figure}
The training curves for the first stage are shown in Fig.~\ref{fig:train}. For the second-stage training, we find that MAPPER without the dynamic obstacle trajectory channel (MAPPER w/o traj) and MAPPER without global planner guidance (MAPPER w/o guid) can hardly converge if we sample goals from the whole map, so we limit their goal sample range to 15 grids. For PRIMAL*, the proper learning rate depends on the number of agents because of its centralized training nature, so we keep the agent count and learning rate as in the first stage. Since the training settings differ in the second stage, its training curves are not presented in Fig.~\ref{fig:train}. From the first-stage training plot, we can see that MAPPER has the most stable performance (smallest variance) and the fastest convergence. The final average reward and success rate of MAPPER are also superior to those of the other methods in comparison.
To demonstrate the effectiveness of our observation representation, we evaluate the trained models in a $65 \times 65$ simulation environment with 10 agents, 10 dynamic obstacles, and different goal sample ranges. The success rate as the goal range increases is shown in Fig.~\ref{fig:goal}.
We can see that the performance of the other variants of MAPPER is sub-optimal, while MAPPER agents are not influenced by the goal range. Specifically, if we remove the global planner guidance feature, the agent's performance declines considerably as the distance to the goal increases, which shows that decomposing the long-range navigation task into several easier waypoint-conditioned tasks is necessary.
Although the variant without the dynamic obstacle trajectory feature is not strongly affected by changes in the goal range, it shows a worse capability to handle interactions with dynamic obstacles in a large environment.
We also evaluate the trained models as well as LRA* in various environment settings without goal sample range limitation to see their generalization capability. The performance is shown in Table~\ref{compare}.
Note that LRA* needs to access all the agents' (but not the dynamic obstacles') intention information and resolve conflicts before taking actions, whereas MAPPER only needs local observations.
We observe that in simple tasks where only a few moving obstacles are around the MAPPER agent, the agent behaves similarly to following the reference path from the global planner. However, when the dynamic obstacle density increases and the reference path is blocked, the MAPPER agent acts aggressively to get out of the surrounding obstacles and then moves towards its goal. The success rate for MAPPER is the highest and is consistently above 0.99 in the various experiment settings.
The MAPPER variant without dynamic obstacle trajectory works well when there are few dynamic obstacles but performs poorly when the complexity of the environment increases. It can be seen that waypoint guidance is an important aspect of the MAPPER algorithm: the variant without it has a low success rate even in a $20\times 20$ grid world with 15 agents and 10 dynamic obstacles.
\section{CONCLUSION}
\label{sec:conclusion}
This paper proposes a decentralized partially observable multi-agent path planning with evolutionary reinforcement learning (MAPPER) method to learn an effective local planning policy in mixed dynamic environments. We model dynamic obstacles' behavior with an image-based representation and decompose the long-range navigation task into many easier waypoint-conditioned sub-tasks. Furthermore, we propose a stable evolutionary training approach that can easily be scaled to large and complex environments while maintaining good convergence properties compared with centralized training methods. The experimental results show that MAPPER outperforms the traditional method LRA* and the learning-based method PRIMAL* in terms of success rate across various experiment settings. However, MAPPER may still collide with other agents or dynamic obstacles in complex environments in order to reach the goal, so a future direction is to investigate safety-critical learning-based planning methods.
\section*{ACKNOWLEDGMENT}
The authors acknowledge the support from the Manufacturing Futures Initiative at Carnegie Mellon University made possible by the Richard King Mellon Foundation.
\newpage
\bibliographystyle{IEEEtran}
\section{Introduction}
Among the different theoretical {\em predictive} approaches to the physics of glass-forming liquids, the mode-coupling theory (MCT) \cite{KobBinder,Goetze} is among the few descriptions rooted at a true microscopic level (see also \cite{ParisiRMP,Schweizer2005JCP}). Indeed, the starting point of MCT is the set of evolution equations for the microscopic conserved fields, wherefrom all physical predictions are derived. However, this sturdy basis does not prevent the MCT from shortcomings of its own, due to the approximations necessary to get tractable equations.
First, it is well-known that the MCT predictions are not fully satisfactory, insofar as a spurious ergodic transition at some temperature $T_c$ is predicted by the theory and never observed in simulations or experiments \cite{KobBinder}. Besides, in the temperature range where the MCT works at best, the fits with experiments/simulations are obtained at the price of a temperature rescaling, whose origin remains elusive. Secondly, the MCT results are somewhat difficult to interpret physically, owing to the fact that the geometrical coordinates are encoded by their Fourier modes. This point becomes even more problematic for temperatures close to $T_c$, for it has been hinted that the diffusion process becomes progressively more local (and activated) when the temperature is lowered \cite{BerthierBiroliRMP,BerthierBook,SchweizerCurrOp2007}. In a Fourier perspective, this locality is translated by finely adjusted relative amplitudes and phases of different modes, a tailoring which is presumably lost by the approximations inherent to the MCT approach.
The analysis of the limitations of MCT (and its connections with spin glass systems) has elicited other approaches which shed new light on the physics of glass-forming liquids: phase space studies, facilitation models, new concepts (point-to-set length, propensity,\ldots), random first-order transition, etc\ldots (see the recent reviews \cite{BaschnagelVarnik,CavagnaReport,BerthierBiroliRMP}). Amidst the flourishing of these alternative approaches, the MCT appears today to some extent isolated (with the noteworthy exceptions of \cite{InhomMCT_PRL} and \cite{ZFGST}). A theoretical framework, able to reconcile on the one hand the modal MCT approach, certainly grasping the physics of moderately supercooled liquids and on the other hand real-space features described by facilitation models, preponderant in strongly supercooled systems, is both lacking and desirable.
A possible path toward this synthesis would be to extend the MCT beyond the ideal theory. Attempts in this direction were undertaken early \cite{GoetzeSjoegrenEMCT,DasMazenkoEMCT}; the avoidance of the MCT transition was sought by considering the momentum field relaxation as a supplementary relaxation channel \cite{KobBinder}. However, these approaches have been seriously challenged in \cite{Catesramaswamy,LiuOppenheim}. The underlying philosophy of these extensions is to recognize that a conserved field could act as a supplementary decay channel (beyond the density fields), through which the system would relax preferentially when the temperature is lowered. This general idea is certainly valuable, although the missing relevant decay channels are probably not only related to the momentum field (Ref. \cite{Catesramaswamy} stressed the fact that this field is irrelevant for glass-forming colloids, where the ergodic transition is nevertheless avoided as well \cite{SchweizerCurrOp2007}). Were it possible to put forward some {\em other} conserved fields, a new route toward an extension of the ideal MCT theory would open.
This paper intends to show that it is possible to devise some new conserved fields, which would be {\em not redundant with the density field in any approximate fluid theory}. From the strict physical point of view, it is obvious that the sole conserved field of, say, a monodisperse colloidal system (for sake of simplicity, we disregard the fact that this system would crystallize) is its density field $\rho(\bm k)=\sum_j \exp(i\bm k\cdot\bm r_j)$ (expressed in Fourier modes). As a result, one cannot enlarge the set of conserved quantities by adding a field which would be conserved by the virtue of a physical property distinct from the mass conservation. For instance, any field depending solely on the microscopic configuration (and not on the velocities) can be related to $\rho(\bm k)$. However, as soon as the knowledge of the density field is not total, due e.g. to an implicit coarse-graining, or a truncation in a hierarchy of dynamical equations, this one-to-one relation between density field and functions over the configurational phase space ceases to be true.
In particular, we will show that thanks to the Voronoi tessellations, new conserved configurational fields naturally emerge. The Voronoi tessellation is a mathematical partition of the physical space and introduces the notion of neighbourhood of a particle, which is both very intuitive and related to the density field by an extremely complicated functional dependence. The outcome of this intricacy is precisely to allow a view on the structural properties which is only partially accounted for by the traditional moments of the density distribution.
In a first part (section \ref{secvol}), a so-called {\em volume field} is defined for each fluid configuration using the Voronoi cells of the configuration. Some fundamental static and dynamical properties of this notion are presented, from which a second conserved field, vectorial and Voronoi based, is deduced. This vectorial field, termed {\em geometrical polarisation}, is investigated in section \ref{geopol}. It is shown that this field naturally couples to the force field, but contrary to it develops a plateau relaxation when the temperature is lowered. This paper should be considered as an introductory work to these volume and polarisation fields, which are sensible candidates for devising a new type of extended MCT (eMCT). We do not aim here to write down and solve this eMCT, a task we postpone for future publication. We just stress that this program, although difficult, is not impossible, insofar as the static correlators needed in the eMCT have explicit expressions in terms of geometrical features of the Voronoi cells.
Our analysis employs equilibrium trajectories from molecular dynamics simulations of a bead-spring model for glass-forming polymer melts. The model and the simulations are explained in Ref. \cite{stefanthese,stefanMCTpaper}. Here we only give a brief summary. We examine a melt made of 3072 oligomer chains of 4 monomers of mass $1$. Nearest-neighbour intra-chain monomers interact via a potential $V_{\rm intra}(r)=\frac{1}{2} k(r-\ell_0)^2$, with $k=1110$ and $\ell_0=0.967$. The other pairs of monomers, including non-nearest-neighbour intra-chain monomers, interact via a Lennard-Jones potential with $\varepsilon=1$, $\sigma=1$, cut off and shifted at $r_c=2.3$. The dynamics is thermalized with a Nos\'e-Hoover thermostat at constant volume. The volume at each temperature is determined so as to maintain a quasi-zero constant pressure. For this model, $T_c\simeq 0.38$ has been determined by usual techniques \cite{stefanthese,stefanMCTpaper,KobBinder}.
\section{The volume fields and the generalized structure factors\label{secvol}}
\subsection{Definitions, Basic properties\label{DBP}}
For a fluid of $N$ particles whose positions are $\bm r_j$ ($j=1\ldots N$), we denote by $U_j$ the volume of the Voronoi cell around the particle $j$ and by $v$ its average. This volume is defined as the set of points closer to $j$ than to any other particle \cite{SpatialTessellations}. The Voronoi cells are convex polyhedrons enclosing the particle $j$.
The Voronoi tessellations (i.e. the partition of space defined by the set of Voronoi cells for a given configuration) have been studied many times in the context of liquids \cite{Rahman,Glotzer_Voronoi,KumarKumaran}: In \cite{Glotzer_Voronoi} for instance, the distribution of volumes and asphericities of Voronoi cells are computed for glass-forming liquids, and display noticeable universal features as well as fluctuations scaling $\propto T^{1/4}$ with respect to the temperature at constant volume.
In dense liquids one expects the fluctuations of $U_j$ to be small as a result of the weak compressibility of the system: A substantial fluctuation of $U_j$, say positive, is likely to occur by shoving the surrounding particles away from the $j$-th particle. But, as noticed in \cite{Glotzer_Voronoi}, the compressibility is not easily related to $\sigma_v^2$, the variance of $U_j$; One does not observe $\chi_T\sim \sigma_v^2/[vk_BT]$. The reason is that during an increase of $U_j$, the test particle is assumed to remain within the inflating volume, whereas $\chi_T$ is related to the fluctuations of the number of particles within a fixed volume: This slight difference does matter for such a small system (for large systems, fluctuations of particle number at constant volume and the reverse are related by a density term, a density which precisely is a strongly fluctuating observable at the level of the particle).
Actually, the fluctuations of $U_j$ have two distinct origins: on the one hand, different (i.e. not superimposable) configurations of the first shell of neighbours give rise to fluctuations of $U_j$. On the other hand, different positions of the tagged particle within its cell induce different values of $U_j$ and hence, fluctuations. The first contribution is linked to the compressibility, whereas the second is not. Although mathematically well defined, this distinction is indeed somewhat artificial, because such a partitioning does not correspond dynamically to well separated timescales.
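As an illustration of how the cell volumes $U_j$ can be extracted from a configuration, the following sketch relies on the scipy Voronoi and convex-hull routines; the function name and the discarding of unbounded boundary cells are our own illustrative choices, not the authors' analysis code:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_cell_volumes(points):
    """Volume U_j of the Voronoi cell of each particle; np.nan is returned
    for the unbounded cells touching the sample's convex hull."""
    vor = Voronoi(points)
    volumes = np.full(len(points), np.nan)
    for j, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue  # unbounded cell: no finite volume
        # Voronoi cells are convex, so the hull of the cell vertices is the cell
        volumes[j] = ConvexHull(vor.vertices[region]).volume
    return volumes

rng = np.random.default_rng(0)
pts = rng.random((400, 3))   # ideal-gas-like configuration in a unit box
U = voronoi_cell_volumes(pts)
v = np.nanmean(U)            # mean Voronoi volume over bounded cells
```

In a production analysis one would use periodic images of the simulation box so that every cell is bounded; here the boundary cells are simply discarded.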
In fig. \ref{autocorr_UU}, we
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{autocorr_UU.eps}}
\caption{Normalized autocorrelation of $U_j$ at different temperatures (note that $T_c\sim0.38$).}
\label{autocorr_UU}
\end{figure}
plot the normalized autocorrelation $\langle [U_j(0)-v][U_j(t)-v]\rangle/\langle [U_j-v]^2\rangle$ at several temperatures. The first rapid decay is related to the fast vibrational motions of all particles with respect to their local metastable equilibrium positions. The subsequent plateau, which develops for low temperatures, is the embodiment of the transient arrest of the dynamics of the density field expressed within the variables $U_j$. This is an obvious statement, but there is however something interesting associated with it: The initial decay of the excess/default of surrounding volume is insufficient to smooth out completely the cell volume fluctuations from particle to particle (otherwise the curves would have nearly vanished), therefore the crossover to a nonzero plateau should account for the structural disorder of the underlying inherent structure (IS). The presence of the positive plateau tells us that this IS is correlated at the level of the local structure to the initial condition it comes from: An initial condition with a positive (resp. negative) fluctuation of $U_j$ is likely to correspond to an IS with also a positive (resp. negative) fluctuation of $U_j$. If one focuses now on the dynamics on a larger timescale (``large'' means here timescales comparable to the beginning of the plateau in Fig. \ref{autocorr_UU}), when it can be envisioned as a hopping from IS to IS \cite{Schroderetal}, an interesting question would be to determine to what extent the residual Voronoi volume field (i.e. the nonzero field associated to the IS) determines the transition to the next IS. For instance, if for an IS $U_j>v$, the choice by the dynamics of the next IS should be biased so as to most probably lessen the value of $U_j$ (for sake of simplicity, we assume that the median value of $U_j$ in the IS is $v$).
This example is certainly a little bit too simple, for if the Voronoi volumes happen to be significant in the inter IS dynamics, this would involve necessarily their distribution over larger regions than just one Voronoi volume, owing to the fact that the transition from one IS to the next typically involves a dozen of particles \cite{Schroderetal}.
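The normalized autocorrelation plotted in Fig. \ref{autocorr_UU} can be estimated from a single stationary time series $U_j(t)$; a minimal sketch (the estimator and variable names are ours, not the authors' code) is:

```python
import numpy as np

def normalized_autocorrelation(u):
    """Estimate <[U(0)-v][U(t)-v]> / <[U-v]^2> from a stationary series u(t)."""
    du = np.asarray(u, dtype=float) - np.mean(u)
    n = len(du)
    # lag-t covariance, normalized by the number of overlapping samples
    acov = np.correlate(du, du, mode="full")[n - 1:] / np.arange(n, 0, -1)
    return acov / acov[0]

rng = np.random.default_rng(1)
white = rng.normal(size=4096)          # uncorrelated surrogate signal
acf = normalized_autocorrelation(white)
```

In practice one would additionally average the estimate over all particles $j$ and over time origins from several independent trajectories.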
\subsection{Volume field}
The Voronoi volumes are obviously not conserved individually, but globally due to the partitioning of space by the Voronoi tessellation. We are thus led to define, for a fluid of $N$ particles whose positions are $\bm r_j$ ($j=1\ldots N$), the volume field by
\begin{align}
\rho_v(\bm r,t)&=v^{-1}\sum_{j=1}^NU_j(t)\delta(\bm r(t)-\bm r_j(t))\label{rhov}
\end{align}
This field has the significant property of being a conserved field, in the sense that the quantity $U_j$ attached to the particle $j$ varies by exchange with the adjacent Voronoi volumes. As already noted, this field is in principle entirely known as soon as the configuration of the particles is known (and vice-versa), but this subordination is actually only formal.
\medskip
From this field we can define three different generalized structure factors. The first two are
\begin{align}
S_v(k)&=N^{-1}\langle\rho_v(-\bm k)\rho_v(\bm k)\rangle\ \ \text{(VSF)}\\
S_i(k)&=N^{-1}\langle\rho(-\bm k)\rho_v(\bm k)\rangle\ \ \text{(ISF)}\\
\text{with }\rho_v(\bm k)&\equiv v^{-1}\sum_{j=1}^NU_j\exp(i\bm k\cdot \bm r_j)
\end{align}
Notice that $\rho_v(\bm k,t)$ is the Fourier transform of $\rho_v(\bm r,t)$ and we will omit the time henceforth in equal time correlation functions, for sake of clarity. The volume structure factor (VSF) $S_v(k)$ is the exact counterpart for $\rho_v$ of the usual structure factor (SF) $S(k)=N^{-1}\langle\rho(-\bm k)\rho(\bm k)\rangle$. The intermixed structure factor (ISF) $S_i(k)$ is the cross-correlation of the fields $\rho(\bm k)$ and $\rho_v(\bm k)$, and is likely to be nonzero since these two fields share exactly the same symmetries.
A third generalized structure factor can be constructed by orthogonalizing $\rho_v(\bm k)$ and $\rho(\bm k)$ with respect to the canonical average:
\begin{align}
\rho_{v\perp}(\bm k)&\equiv \rho_v(\bm k)-\frac{S_i(k)}{S(k)}\rho(\bm k)\\
S_{v\perp}(k)&=N^{-1}\langle \rho_{v\perp}(-\bm k)\rho_{v\perp}(\bm k)\rangle\ \ \text{(OVSF)}\nonumber\\&=S_v(k)-S_i(k)^2/S(k)\label{svperpdef}
\end{align}
where OVSF stands for orthogonalized volume structure factor. $\rho_{v\perp}(\bm k)$ is defined so as to get $\langle\rho_{v\perp}(-\bm k)\rho(\bm k)\rangle=0$ for all $k$.
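A single-configuration estimator of the four structure factors defined above can be sketched as follows (a real measurement averages these estimators over many equilibrium configurations; the function and variable names are illustrative):

```python
import numpy as np

def generalized_structure_factors(positions, volumes, kvecs):
    """Single-configuration estimators of S, S_v, S_i and S_{v,perp}
    following the definitions in the text."""
    N = len(positions)
    w = volumes / volumes.mean()                  # U_j / v
    phases = np.exp(1j * positions @ kvecs.T)     # e^{i k.r_j}, shape (N, nk)
    rho = phases.sum(axis=0)                      # rho(k)
    rho_v = (w[:, None] * phases).sum(axis=0)     # rho_v(k)
    S = (rho.conj() * rho).real / N               # SF
    S_v = (rho_v.conj() * rho_v).real / N         # VSF
    S_i = (rho.conj() * rho_v).real / N           # ISF
    S_vp = S_v - S_i ** 2 / S                     # OVSF, Eq. (svperpdef)
    return S, S_v, S_i, S_vp

rng = np.random.default_rng(2)
pos = rng.random((200, 3))
vol = np.ones(200)                 # equal volumes: rho_v reduces to rho
ks = rng.normal(size=(5, 3))
S, S_v, S_i, S_vp = generalized_structure_factors(pos, vol, ks)
```

The equal-volume case is a useful sanity check: when all $U_j=v$, one has $\rho_v=\rho$, hence $S_v=S_i=S$ and $S_{v\perp}=0$ identically.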
\bigskip
The typical behaviour of the SF, VSF and ISF for our oligomer melt is plotted in figure \ref{fig1_starter}. Two distinct behaviours emerge according to the value of $k$.
For small $k$ spanning from the hydrodynamic range (i.e. the $k\rightarrow 0$ domain where the SF has a well-defined constant plateau value) to the crossover domain (i.e. up to $k\sim 4$), the SF, VSF and ISF behave very differently: Whereas the SF reaches a nonzero compressibility plateau for decreasing values of $k$, the VSF and ISF both vanish, the former $\propto k^4$ and the latter $\propto k^2$. The fact that all volume structure factors vanish in the hydrodynamic limit is not surprising; It corresponds to the (somewhat imprecise) common sense statement according to which the volume cannot fluctuate (this point will be developed further in the next subsection). More puzzling is the different exponent associated with this asymptotic behaviour. This nontrivial point will be analysed in section \ref{geopol}.
\begin{figure}
\centering\resizebox{7cm}{!}{\includegraphics{fig1_starter.eps}}
\caption{(color online) Log-log plot of $S(k)$ (blue solid), $S_v(k)$ (green dashed) and $S_i(k)$ (red dash-dotted). The solid thick black lines highlight the slopes 4 and 2 (respectively, from bottom to top). The inset represents the correlation coefficient of $\rho(\bm k)$ and $\rho_v(\bm k)$.\label{fig1_starter}}
\end{figure}
The second main region in $k$ space concerns the microscopic domain, i.e. for $k\gtrsim 4$. Fig. \ref{fig1_starter} indicates that the three SFs are here strongly alike. The solid and dashed curves in Fig. \ref{fig2_Svperp}, showing $S_v(k)/S(k)$ and $S_i(k)/S(k)$ respectively, make this point more precise: One observes a weak structuration (oscillations) of the signal around 1, out of phase with that of the SF itself (black thin curve, arbitrary ordinate units). This shows that $\rho_v$ and $\rho$ are quite strongly correlated in the microscopic domain, a fact that can be quantified using the correlation coefficient $\chi_0(k)=S_i(k)/\sqrt{S_v(k)S(k)}$. The Cauchy-Schwarz inequality imposes that $-1\leq \chi_0(k)\leq 1$, a value of 1 meaning an exact positive proportionality of $\rho$ and $\rho_v$. The correlation coefficient $\chi_0(k)$ is shown in the inset of Fig. \ref{fig1_starter}, where
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{fig2_Svperp.eps}}
\caption{(color online) Typical shape of $S_v(k)/S(k)$ (solid, left scale), $S_i(k)/S(k)$ (dashed,left scale) and the OVSF $S_{v\perp}(k)$ (dash-dotted, right scale). The thin solid black graph in the bottom shows the SF with arbitrary units for sake of comparison.}
\label{fig2_Svperp}
\end{figure}
the strong correlation between the two variables is confirmed over the whole $k$ range. Roughly speaking, a large part of the information carried by $\rho_v$ is already provided by $\rho$, {\em at the level of the second-order fluctuations}. However, a small share of $\rho_v$ provides information of its own, not accounted for by the SF. The orthogonalized version of $\rho_v$, namely $\rho_{v\perp}$, precisely aims at ``freeing'' the volume density field from all the redundancy of statistical information carried both in $\rho$ and $\rho_v$. As a result, the typical values of the associated structure factor $S_{v\perp}(k)$ (red dash-dotted in Fig. \ref{fig2_Svperp}) are quite small. The OVSF starts from zero for $k\rightarrow 0$, grows $\propto k^4$ (this is easily seen from \myref{svperpdef}), and reaches a plateau, weakly modulated in phase with the SF, with typical values $\sim 3\times 10^{-3}$. The asymptotic plateau value is nothing but the normalized variance of the Voronoi volume $\mcal{V}_n=[\langle U_j^2\rangle-\langle U_j\rangle^2]/\langle U_j\rangle^2$, therefore the typical small values of $S_{v\perp}(k)$ (or equivalently, $1-\chi_0(k)$) mirror the smallness of the available volume fluctuations around the particles in the dense phases (notice that a typical value of the available volume fluctuation is $\sim \sqrt{\mcal{V}_n}\sim 5.4\%$ of the mean Voronoi volume). For sake of comparison, the ideal gas displays a variance $\mcal{V}_n\sim 0.18=(0.42)^2$ \cite{gilbert}.
A natural reflex would be to dismiss the field $\rho_{v\perp}$ (and therefore $\rho_v$ as well), as being unable to provide any valuable physical input, because of the smallness of its typical fluctuations (with respect to $S(k)$ for instance). Such a reflex should be at least deferred if one gives some credence to the mode-coupling theory, because a key criterion for the relevance of a variable as a decay channel is its ability to couple to the generalized force expression within the memory kernel. The nonlinear character of the theory is moreover capable of inducing strong dynamical amplifications of tiny modifications of the control parameters (the strong slowing down of the dynamics for only a small change of temperature for instance).
Furthermore, the MCT actually deals only with normalized variables \cite{kawasaki_MCT_ovdpd}, so their unnormalized level of fluctuations does not play any direct role.
The next paragraph gives a hint at the physical content of this field by considering a specific example.
\subsection{Physical content: An illustration}
Let us consider the virtual spherical volume $V$ (radius $R$) with its center located at $\bm r=0$ (arbitrary origin), drawn into a fluid at equilibrium. We define the following observable:
\begin{align}
\mcal{I}_v&= \int_V [\rho_v(\bm r)-v^{-1}]d^3r=v^{-1}[V_v-V]
\end{align}
which is the fluctuation of $\rho_v(\bm r)$ with respect to its mean value $v^{-1}$, integrated over the sphere. $V_v$ is the volume obtained by the aggregation of all Voronoi cells located inside $V=4\pi R^3/3$. Therefore, the integral $\mcal{I}_v$ is essentially a {\em boundary term}, as exemplified in figure \ref{ex_circle} (the picture is in 2D, and for an ideal gas configuration, for sake of illustration, but we assume a 3D fluid in the discussion). The value of $\mcal{I}_v$ can be read by adding the external shaded region and substracting the internal ones.
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{voronoi_ex_circle_deco.eps}}
\caption{(color online) The field $\rho_v(\bm r)-\langle\rho_v\rangle$, integrated over the disc, measures the mismatch between the area covered by the Voronoi cells of inside particles and the disc area. This integral is the difference between the outside (red) and inside (green) shaded areas.} \label{ex_circle}
\end{figure}
By contrast, a similar integral $\mcal{I}$, where the density field replaces the volume field, is the fluctuation of the particle number inside the sphere, and therefore a bulk term. As a result, the statistical signature of the fluctuations of $\rho_v$ is likely to account for a physics somewhat more local than that of $\rho$. Let us also notice the nonconnected topology of $\mathbb{R}^3\setminus [V_v-V]$, which implies for instance that a diffusing particle initially at $r<R$ cannot diffuse over distances larger than $R$ without coupling to $\mcal{I}_v$. Another example is the following conditionality: As long as $\mcal{I}_v$ does not relax, it prevents the homogenisation of the inside and the outside, and would therefore sustain a heterogeneity if the densities in the two regions were initially different.
\medskip
As $\mcal{I}_v$ is actually a surface term, one expects its fluctuations to be affected by this peculiar dimensionality. Indeed, one has
\begin{align}
\langle\mcal{I}_v^2\rangle&\sur{\sim}{R\rightarrow\infty} 4R^2v^{-1}\int_0^\infty dk k^{-2}S_v(k)\label{IvIv}\\
\langle \mcal{I}_v\mcal{I}\rangle&\sur{\sim}{R\rightarrow\infty} 4R^2v^{-1}\int_0^\infty dk k^{-2}S_i(k)\label{IvI}
\end{align}
To demonstrate this, we begin by noticing that we have omitted so far the singular part that all generalized structure factors have in common with the ordinary SF: For instance, one has $S_v(k)=S_{v,\rm reg}(k)+(2\pi)^3v^{-1}\delta(\bm k)$. Implicitly, the SFs cited in \myref{IvIv} and \myref{IvI} are the regular parts; this convention is kept henceforth. Thanks to the singular part, we arrive at $\langle \mcal{I}_v^2\rangle=v^{-1}\int (d^3k/(2\pi)^3)\, S_v(k)|\Pi(k)|^2$, where $\Pi(k)=4\pi k^{-3}[\sin(kR)-kR\cos(kR)]$ is the Fourier transform of the indicator function of the sphere. Owing to the fact that $S_v(0)=S_i(0)=0$, the main term in the limit of large $R$ is given by the last term of $\Pi(k)$, whence the result (after neglecting the rapidly oscillating part of $\cos^2(kR)$).
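Spelling out the intermediate step (a sketch: only the dominant $kR\cos(kR)$ term of $\Pi(k)$ is kept, $\cos^2(kR)$ is replaced by its mean $1/2$, and the factor $v^{-1}$ comes from the particle density in the field correlator, consistently with Eq.~\myref{IvIv}):
\begin{align*}
\langle \mcal{I}_v^2\rangle
&= v^{-1}\int\frac{d^3k}{(2\pi)^3}\,S_v(k)\,|\Pi(k)|^2
 = \frac{v^{-1}}{2\pi^2}\int_0^\infty dk\,k^2\,S_v(k)\,\Pi(k)^2\\
&\simeq \frac{v^{-1}}{2\pi^2}\int_0^\infty dk\,k^2\,S_v(k)\,\frac{16\pi^2R^2\cos^2(kR)}{k^4}
 \simeq 4R^2v^{-1}\int_0^\infty dk\,k^{-2}\,S_v(k)
\end{align*}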
On a qualitative level, the $R^2$ dependence in \myref{IvIv} is explained by considering the various Voronoi cells mismatches as $\sim R^2$ independent random variables with zero mean. The average of their sum squared is thus $\sim R^2$ as well.
For the result \myref{IvI}, the reasoning is different: A, say, positive fluctuation of density inside the sphere makes the Voronoi cells slightly smaller than those outside. This induces a typical {\em positive} mismatch of the boundary Voronoi cells: The inner particles at the boundary, denser, are typically closer to the sphere frontier than the outer particles. As a result, the frontiers of the Voronoi cells of inside boundary particles tend to overstep the limits of the sphere, since, by construction, a facet between two Voronoi cells is at equal distance from the two particles concerned.
This typical mismatch is, to linear order, proportional to $[N_v-\langle N_v\rangle]/\langle N_v\rangle$, whence $\langle \mcal{I}_v\mcal{I}\rangle$ is proportional to $R^2\times \langle[N_v-\langle N_v\rangle]^2\rangle/\langle N_v\rangle\sim R^2$, as before.
This example makes more precise the intuitive statement according to which the volume should not fluctuate at large lengthscales: We see that indeed the large scale bulk fluctuations are exactly compensated, the remaining ones being squeezed into a surface region not thicker than the typical Voronoi cell size. This exact compensation endows $\mcal{I}_v$ with fluctuating properties very different from those of $\mcal{I}$, and accounted for in the large $R$ limit only if $\lim_{k\rightarrow 0} S_{v,i}(k)=0$. It must however be noted that this example does not enforce any particular functional form for $S_{v}$ and $S_i$ near $k=0$, provided they go to zero. For symmetry reasons, they are at least $\propto k^2$, but Eq. \myref{IvIv} does not hint at why $S_v(k)$ should be $\propto k^4$ in the hydrodynamic limit. This nontrivial behaviour will be analysed in section \ref{geopol}.
\section{Geometrical polarisation} \label{geopol}
What is remarkable in the low $k$ region is the fact that if $S_i(k)$ goes to zero as $k^2$, $S_v(k)$ goes to zero as $k^4$, a fact that is not obvious from e.g. the inspection of formulas \myref{IvIv} and \myref{IvI}. One just notices that $S_v(k)\propto k^4$ removes long wavelength contributions from the integral in \myref{IvIv}, which is qualitatively in accordance with the physical content of $\mcal{I}_v$.
Actually, this property is related to a remarkable and general feature of the Voronoi tessellations, which, to the best of our knowledge, had not been unveiled so far. We will prove in Appendix \ref{App_pol_2} that the total geometrical polarisation of the configuration, defined by
\begin{align}
\bm P&=v^{-1}\sum_iU_i[\bm s_i-\bm r_i] \label{Pdef}
\end{align}
where $\bm s_j$ is the barycenter of the $j$-th Voronoi cell, is independent of the configuration (in a set of configurations connected by continuous transformations). More precisely, this is strictly true only for a system without boundaries (infinite or with a toroidal geometry), for which one has moreover $\bm P=\bm 0$. For systems with boundaries $\bm P$ is constant within subsets of configurations with identical boundary Voronoi cells, and one has just a local conservation in the bulk. We also define the microscopic {\em geometrical polarisation field} by
\begin{align}
\bm p(\bm r)&=\sum_j \bm \tau_j\delta(\bm r-\bm r_j)\\
\bm \tau_j&\equiv v^{-1}U_j(\bm s_j-\bm r_j)\label{taudef}
\end{align}
A typical realization of this field is shown in Fig. \ref{sketch_geopol} for a 2D Voronoi diagram (for sake of clarity) corresponding to a random choice of points with a density 1. Note that a clear anticorrelation of the polarisations arises when two particles are close to each other. This reflects the symmetrical position of two adjacent particles with respect to their common dividing facet.
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{sketch_geopol.eps}}
\caption{(Color online) An example of the geometrical polarisation field for a 2D Poisson-Voronoi (ideal gas) diagram. The arrows represent the vectors $\bm\tau_j$ and have their origins at the particles' locations (red circles). Note that their tips are not at the geometrical centers (centroids) of the Voronoi cells the particles are associated to, because of the factor $U_j/v$ in the definition of $\bm\tau_j$. }\label{sketch_geopol}
\end{figure}
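A 2D sketch of the computation of the $\bm\tau_j$ of Eq. \myref{taudef}, of the kind displayed in Fig. \ref{sketch_geopol}, can be written with the scipy Voronoi routine; the function name and the discarding of unbounded boundary cells are our own illustrative choices:

```python
import numpy as np
from scipy.spatial import Voronoi

def geometrical_polarisations_2d(points):
    """tau_j = (U_j/v)(s_j - r_j) for the bounded cells of a 2D Voronoi
    diagram (2D analogue of the definition in the text); NaN otherwise."""
    vor = Voronoi(points)
    n = len(points)
    areas = np.full(n, np.nan)
    centroids = np.full((n, 2), np.nan)
    for j, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue  # unbounded boundary cell
        poly = vor.vertices[region]
        # order the (convex) cell's vertices counterclockwise around their mean
        d = poly - poly.mean(axis=0)
        poly = poly[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]
        x, y = poly[:, 0], poly[:, 1]
        xn, yn = np.roll(x, -1), np.roll(y, -1)
        cross = x * yn - xn * y
        a = 0.5 * cross.sum()            # signed shoelace area
        areas[j] = abs(a)
        centroids[j] = (np.array([(x + xn) @ cross, (y + yn) @ cross])
                        / (6.0 * a))     # polygon barycenter s_j
    v = np.nanmean(areas)
    return (areas / v)[:, None] * (centroids - points)

rng = np.random.default_rng(3)
pts = rng.random((500, 2))               # Poisson-Voronoi configuration
tau = geometrical_polarisations_2d(pts)
```

As in the Voronoi volume sketch, a production analysis would use periodic images so that all cells are bounded and the global conservation $\sum_j\bm\tau_j=\bm 0$ can be checked directly.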
Two distinct propositions have to be demonstrated. First, assuming that $\bm P$ is indeed conserved and zero for an infinite system, we show that this implies a $k^4$ behaviour of $S_v(k)$ (and subsequently, of $S_{v\perp}(k)$). Second, we show that $\bm P$ is indeed conserved. These two important but technical demonstrations are developed in the Appendices \ref{App_pol_1} and \ref{App_pol_2}, respectively.
\subsection{Static properties of the geometrical polarisation}
The simplest characterization of $\bm\tau_j$ is probably its equilibrium distribution. In fig. \ref{pol_fluct} are shown the distributions of $\tau_{j,x}$, an arbitrary Cartesian component of the polarisation (upper plot), and of its modulus $|\bm\tau_j|$ (lower plot).
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{pol_fluct.eps}}
\caption{(color online) Probability densities of the scaled variables $\tau_x/\sqrt{\langle \tau_x^2\rangle}$ (left) and $|\bm\tau|/\sqrt{\langle\tau_x^2\rangle}$ (right). The $T=0.38$ and $T=1.00$ curves refer to the system defined in the introduction, and the formulas are those of the Maxwell-Boltzmann theory. } \label{pol_fluct}
\end{figure}
These distributions are well described by the Maxwell-Boltzmann (MB) distributions for the velocities components and modulus in a gas, except for the high values of $\tau_x$ (or $|\bm\tau|$), where the simulated distributions are larger than the MB distributions. By conservation of probability, this implies a tiny depletion in the low $\tau$ region, invisible in the semilog representation. This departure from the MB distribution is quasi-absent for the ideal gas, and tends to be reduced for our system when lowering the temperature.
To understand why the distributions of $\tau_x$ and $|\bm\tau|$ comply with the MB distribution, we remark that (in 2 or 3 dimensions)
\begin{align}
\bm\tau_j&=\frac{1}{2v(d+1)}\sum_{i\big/\langle i,j\rangle}r_{ji}S_{ji}\bm s_{ji}\label{centroidformula}
\end{align}
where the sum runs over the particles $i$ sharing a facet with $j$, $r_{ji}=|\bm r_i-\bm r_j|$, $S_{ji}$ is the area of the common facet, and $\bm s_{ji}$ is the vector joining the particle $j$ to the barycenter of the facet $S_{ji}$. This formula comes from the fact that the $j$-th Voronoi cell is the stacking of triangles (2D) or tetrahedra (3D) of which the facets $S_{ji}$ are the bases and the particle $j$ the common apex \cite{SpatialTessellations}; each term of the sum is just the volume of the simplex times the vector joining $j$ to the simplex centroid.
Eq. \myref{centroidformula} shows that in 3 dimensions, $\bm\tau_j$ is typically the sum of $\sim 14$--$15$ terms which are only weakly correlated with each other, for it is well known that the structural correlations in a dense fluid do not extend significantly beyond the first shell of neighbours. As a result, $\tau_{j,x}$ approximately obeys the central limit theorem, and must have an approximately Gaussian distribution. The isotropy of the direction of $\bm\tau$ then leads to the MB distribution for the moduli as well.
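Eq. \myref{centroidformula} can also be checked cell by cell. The following sketch (our own construction, with scipy assumed available) compares, for a single interior cell of a jittered 2D lattice, the direct evaluation $U_j(\bm c_j-\bm r_j)$ with the facet sum $\frac{1}{2(d+1)}\sum_i r_{ji}S_{ji}\bm s_{ji}$ for $d=2$:

```python
# Check (ours) of the simplex decomposition behind the centroid formula,
# in d = 2: U_j (c_j - r_j) = (1/6) * sum_i r_ji * S_ji * s_ji,
# the sum running over the Voronoi facets (edges) of cell j.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
g = np.arange(7, dtype=float)
pts = np.array([(x, y) for x in g for y in g]) + 0.25 * rng.standard_normal((49, 2))
j = 24                                        # central point of the grid: finite cell
vor = Voronoi(pts)

# Left-hand side: cell volume times centroid offset, from the cell polygon.
region = vor.regions[vor.point_region[j]]
assert -1 not in region
verts = vor.vertices[region]
d = verts - pts[j]
verts = verts[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]       # CCW order
x, y = verts[:, 0], verts[:, 1]
xn, yn = np.roll(x, -1), np.roll(y, -1)
cross = x * yn - xn * y
U = 0.5 * cross.sum()
c = np.array([((x + xn) * cross).sum(), ((y + yn) * cross).sum()]) / (6 * U)
lhs = U * (c - pts[j])

# Right-hand side: facet sum. In 2D a facet is an edge: S_ji is its length
# and s_ji joins particle j to the edge midpoint (the facet barycenter).
rhs = np.zeros(2)
for (a, b), vv in zip(vor.ridge_points, vor.ridge_vertices):
    if j not in (a, b) or -1 in vv:
        continue
    i = b if a == j else a
    e1, e2 = vor.vertices[vv[0]], vor.vertices[vv[1]]
    S = np.linalg.norm(e2 - e1)
    s = 0.5 * (e1 + e2) - pts[j]
    rhs += np.linalg.norm(pts[i] - pts[j]) * S * s
rhs /= 2 * (2 + 1)                            # prefactor 1/(2(d+1)) with d = 2

print(lhs, rhs)                               # the two evaluations agree
```

The identity is exact for each cell, because each Voronoi facet lies on the perpendicular bisector of the pair, so the apex-to-facet distance is exactly $r_{ji}/2$.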
The observed departures from the MB distribution for our system (solid and dashed lines in fig. \ref{pol_fluct}) come from the polymeric nature of the fluid, and in particular from the bond length between two adjacent monomers in a chain. This bond length is quite rigid, and slightly shorter than the typical distance between two neighbouring particles. This situation is likely to deplete slightly the occurrence of very small polarisations, and conversely gives rise to extra high-valued polarisations.
The magnitude of the polarisation with respect to the temperature is indicated in table \ref{table}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Temperature & $\delta\tau$ &$\displaystyle\chi_{F\tau}$\\
\hline
$0.38$ & 0.0374 &0.3577\\
$0.50$ & 0.0438 &0.3338\\
$1.00$ & 0.0751 & 0.2190\\
ideal gas & 0.2141 &\\
\hline
\end{tabular}
\caption{$\delta\tau\equiv\sqrt{\langle\bm\tau^2\rangle}/v^{1/3}$ is the standard deviation of the polarisation, normalized by $v^{1/3}$. The third column is $\chi_{F\tau}=\langle \bm F_j\cdot\bm \tau_j\rangle/[\langle \bm F_j^2\rangle\langle \bm \tau_j^2\rangle]^{1/2}$, the correlation coefficient between the total force on a monomer and its polarisation.}
\label{table}
\end{center}
\end{table}
One observes that the polarisation does not exceed a few percent of the typical nearest-neighbour distance, and that it decreases with decreasing temperature. It is worth noticing that the $T\rightarrow 0$ limit of the normalized standard deviation of the polarisation is probably not zero (we neglect here the possibility of a thermodynamic transition), because a vanishing limit would correspond asymptotically to centroidal Voronoi tessellations, i.e. configurations such that each particle occupies the barycenter of its Voronoi cell. We do not see what thermodynamic force could be responsible for such an arrangement (within configurations which would anyway be disordered; centroidal {\em and} disordered Voronoi configurations do exist). On the contrary, one expects for the standard deviation of the polarisation a $T\rightarrow0$ limit corresponding to the (common) value of any inherent structure.
\medskip
Provided one considers only static correlations, a significant correlation between the total force $\bm F_j$ applied on monomer $j$ and the polarisation $\bm\tau_j$ does exist; see table \ref{table} and fig. \ref{corr_angle_force_tau}.
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{corr_ang_tau_force.eps}}
\caption{Probability density function $\rho(\theta)=dP(\theta)/[\sin\theta d\theta]$ of the angle between the polarisation $\bm\tau_j$ and the total force $\bm F_j$ acting on monomer $j$. The plots associated to a temperature refer to the oligomer melt; the ``monomeric LJ'' plot (line+circles) refers to a metastable Lennard-Jones fluid at density $1.0625$ and $T=1.0$. For a non-correlated vector pair, one would have the constant line $\rho(\theta)=1/2$.}
\label{corr_angle_force_tau}
\end{figure}
This is simply related to the fact that if one moves a particle toward another one, the total force and the polarisation react the same way, both ending up antiparallel to the direction of motion. One is therefore tempted to envision $\bm\tau$ merely as a sophisticated ``ghost'' of the force, and thus useless. We will see that this is not true, because the two vectors decorrelate very differently with time: there is more in the geometrical polarisation than just a vector correlated with the force.
\subsection{Autocorrelation of $\bm\tau_j$}
As for the relaxational dynamics of $\bm\tau_j$, Fig. \ref{autocorrtau} reveals a behaviour similar to that observed for $U_j$.
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{autocorr_tautau.eps}}
\caption{(color online) Autocorrelations of various quantities related to the polarisation, for $T=0.38$ (top), $T=0.50$ (middle) and $T=1.00$ (bottom). Solid blue: $\langle \bm\tau_j(0)\cdot\bm\tau_j(t)\rangle/\langle \bm\tau_j^2\rangle$; dashed green: $ \langle \hat{\bm\tau}_j(0)\cdot\hat{\bm\tau}_j(t)\rangle$ with $\hat{\bm\tau}_j=\bm\tau_j/|\bm\tau_j|$; dash-dotted red: $\langle \delta|\bm\tau_j|(0)\delta|\bm\tau_j|(t)\rangle/\langle(\delta|\bm\tau_j|)^2\rangle $ with $\delta|\bm\tau_j|=|\bm\tau_j|-\langle|\bm\tau_j|\rangle$. The thin black curve is the normalized autocorrelation of $U_j$ from Fig. \ref{autocorr_UU}. In the top panel, the blue circles correspond to the rescaled and shifted autocorrelation of the total force acting on a monomer, the zero level being the thin dashed line; this curve shows that the relaxation of the force is much faster than that of the polarisation.}
\label{autocorrtau}
\end{figure}
After a rapid initial decay, a plateau regime is reached, followed by the final $\alpha$ relaxation. What is worth stressing is that the plateau corresponding to the relaxation of the modulus of $\bm\tau_j$ is lower than that of the total and directional relaxations: the modulus of $\bm\tau_j$ relaxes faster, at a pace comparable to that of the volume relaxation. The total and directional relaxations, on the other hand, are much more blocked in the plateau regime. One can probably attribute this behaviour to the fact already touched upon, namely that the polarisation is sensitive to the connectivity in our model. The slower directional relaxation at $T=1$ may thus be attributed to polymeric effects, reflecting the presence of a slower relaxation time (the Rouse time) necessary for the polymer to relax its orientation (actually only a blob relaxation time would be relevant here, but for $N=4$ the blob and Rouse times are one and the same concept). Therefore, the pronounced shoulder at $T=0.50$ and the high plateau at $T=0.38$ of the directional relaxation (or of the total relaxation: it is quite clear from the curves that the relaxation of the total polarisation is impeded mainly by that of its direction) account for a reorientational polymeric relaxation time, already slow under normal conditions, and considerably increased by the slowing down of the monomeric dynamics \footnote{Notice that tailoring the system so as to match the mean distance between intrachain neighbours with that of Voronoi nearest neighbours is just impossible, because such a system would partially crystallise on the monomeric scale (though not with respect to the chain backbone).}. A systematic study of this issue with respect to $N$ would clearly be desirable.
\medskip
As we have already noticed, polarisation and force are structurally correlated. Dynamically speaking, however, their behaviour is very different, insofar as the force cannot remain correlated in the long run (this would indeed imply drifts in the motion of particles); see Fig. \ref{autocorrtau} (top). As for the volume $U_j$ (see the discussion at the end of subsection \ref{DBP}), the interruption of the relaxation of $\bm\tau_j$ (at the very beginning of the correlation plateau) should account for the structural disorder of the inherent structures around which the system fluctuates in this early post-microscopic time regime. Notice that, as far as the IS are concerned, the force field is correspondingly irrelevant, because it is everywhere zero.
\subsection{Correlation polarisation/displacement}
As the polarisation vector $\bm\tau_i$ of the particle $i$ is structurally correlated to the instantaneous total force $\bm F_i$ acting on $i$, necessarily a correlation builds up between the initial value of $\bm\tau_i$ and the displacement $\bm\delta_i(t)=\bm r_i(t)-\bm r_i(0)$ of $i$ in the short time regime. This can be seen in fig. \ref{corrpoldis} (top), where the correlator $\langle \hat{\bm \tau_i}(0)\cdot\hat{\bm \delta_i}(t)\rangle$ is plotted versus time (we note $\widehat{\bm X}=\bm X/|\bm X|$).
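A minimal sketch of this correlator (on synthetic vectors, not simulation data) is simply the mean cosine between the two normalized vector sets:

```python
# Minimal sketch (synthetic data, ours) of the polarisation/displacement
# direction correlator <tau_hat(0) . delta_hat(t)>.
import numpy as np

def direction_correlator(tau0, delta):
    """Mean cosine of the angle between the initial polarisations and the
    displacements, both sets normalized to unit vectors row by row."""
    t = tau0 / np.linalg.norm(tau0, axis=1, keepdims=True)
    d = delta / np.linalg.norm(delta, axis=1, keepdims=True)
    return float(np.mean(np.sum(t * d, axis=1)))

rng = np.random.default_rng(2)
tau = rng.standard_normal((10000, 3))
disp = tau + 2.0 * rng.standard_normal((10000, 3))  # partially aligned toy displacements

print(direction_correlator(tau, tau))    # ~ 1.0 (perfect alignment)
print(direction_correlator(tau, disp))   # positive but well below 1
```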
\begin{figure}
\centering\resizebox{7cm}{!}{\includegraphics{corr_tau_depl.eps}}
\caption{(color online) Top: Correlator $\langle \hat{\bm \tau_i}(0)\cdot\hat{\bm \delta_i}(t)\rangle$ as a function of time for several temperatures ($\hat{\bm X}=\bm X/|\bm X|$). Bottom: Monomeric mean squared displacement for the same temperatures.}
\label{corrpoldis}
\end{figure}
Clearly, a noticeable correlation develops during the ballistic regime (see the mean-square displacement (MSD) in the bottom of Fig. \ref{corrpoldis}), crosses over a maximum, and then decreases. For intermediate times the decrease is temporarily blocked at a plateau, which is the more pronounced the lower the temperature. It is again possible to interpret this interrupted decorrelation within the scenario of an intermediate-timescale dynamics based on the hopping between adjacent IS \cite{Schroderetal}. In the early stages of the plateau relaxation, only a small proportion of particles have actually moved out of their local cage (see Fig. \ref{fractionofmobile}). As a result, a large part of the signal should come from particles which are stuck, rattling back and forth in the cage of their neighbours. The plateau of Fig. \ref{corrpoldis} (top) is thus mainly the imprint of particles having relaxed to a long-lived effective cage associated to an IS.
A striking feature of the late relaxation of the polarisation/displacement correlation is the marked and long-lived anticorrelation occurring at a typical time similar to the $\alpha$ relaxation time. This again is a peculiarity of our model glass-forming liquid of short oligomers: to understand why this is so, consider an initial situation like that depicted in fig. \ref{sketchpol}.
\begin{figure}
\centering\resizebox{7cm}{!}{\includegraphics{sketch_corr_tau_displ.eps}}
\caption{Why the correlator of fig. \ref{corrpoldis} changes sign in the long-time regime; see text for details.}
\label{sketchpol}
\end{figure}
In this figure, the central black monomer is represented with its two intrachain neighbours, and the plane that these three particles define is outlined. If one assumes a locally icosahedral order, a pentagon of neighbouring particles can be drawn in that plane (for the sake of clarity, of the three remaining in-plane neighbours needed to complete the pentagon, only two have been drawn ---with cupped bottoms to highlight their location across the plane). This in-plane pentagon is on average slightly distorted, because the intrachain bond lengths are a bit shorter than the average nearest-neighbour distance. In the figure, the two remaining monomers represent roughly the barycenters of the neighbours located above (solid) and below (dashed) the plane. The configuration has been chosen so that the particles below are typically closer to the central particle than those located above. This fluctuation, plus the systematic distortion of the in-plane pentagon, makes the polarisation associated to that configuration (denoted ``initial'' in the figure) point from bottom to top, tilted to the left. The short-time displacement corresponds to the smoothing of the vertical fluctuation, which is associated with a force fluctuation, whereas no in-plane relaxation takes place preferentially from right to left, because the distortion of the in-plane pentagon is structural. Therefore the short-term displacement is mainly vertical, and its scalar product with the initial polarisation is positive. For longer times, the displacement of the monomer at stake (but on a time scale still smaller than the diffusion time) corresponds to the polymeric relaxation of the various internal degrees of freedom. As a result, the late motion of the monomer corresponds to the exploration of the space defined by the blob of the three neighbours drawn in black, and will therefore be preferentially towards the right, leading to a negative scalar product.
\medskip
The correlator of fig. \ref{corrpoldis} is somewhat too general to describe accurately the relation between polarisation and displacement when the temperature is so low that the dynamics becomes heterogeneous. To get a glimpse of how the displacement decorrelates from the polarisation differentially between mobile and stuck regions, one first defines what a mobile particle is: as a rule, a monomer $i$ is termed `mobile' at time $t$ if $|\bm\delta_i(t)|>r_c=3 \sqrt{\langle \bm r_{\rm pl}^2\rangle}$, where $\langle \bm r_{\rm pl}^2\rangle$ is the plateau value of the monomeric mean-square displacement (this value of $r_c$ corresponds to the usual criterion used to define the subpopulation of mobile particles at the time corresponding to the maximum of the nonergodicity parameter \cite{KobDonatiPRL1997,BaschnagelVarnik}). Among the population of mobile particles, those whose displacement has been provoked by the motion of a neighbouring particle are clearly not expected to sustain a correlation between polarisation and displacement directions. On the contrary, such induced motions are likely to be directed toward the vacancy left by the initially moving particle (string-like motion). Therefore, one is naturally led to distinguish two subpopulations among the mobile particles at time $t$: 1) a minority fraction, termed hereafter the dynamical seeds, which have substantially moved at time $t$ whereas their immediate environment at $t=0$ (the neighbouring particles sharing a Voronoi facet with the particle under consideration) has not, and 2) the remaining ones, whose motion can roughly be considered as induced by the prior motion of a neighbouring monomer. Of course, this distinction is sensible only for sufficiently supercooled systems, where the motion of particles is substantially heterogeneous.
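The classification into dynamical seeds and induced mobiles can be sketched in a few lines (a toy implementation with hand-made neighbour lists; the actual analysis uses the Voronoi neighbourhoods of the simulated configurations):

```python
# Toy sketch (ours) of the classification of mobile particles into
# "dynamical seeds" (mobile, with all initial neighbours immobile)
# and "induced" mobiles (at least one mobile initial neighbour).
import numpy as np

def classify_mobiles(displacement, neighbours, r_c):
    """displacement: (N, dim) array of delta_i(t); neighbours[i]: particles
    sharing a Voronoi facet with i at t = 0; r_c: mobility threshold."""
    mobile = np.linalg.norm(displacement, axis=1) > r_c
    seeds, induced = [], []
    for i in np.flatnonzero(mobile):
        target = induced if any(mobile[j] for j in neighbours[i]) else seeds
        target.append(int(i))
    return mobile, seeds, induced

# hand-made 5-particle example (symmetric neighbour relation: 0-1, 0-3, 1-2, 3-4)
disp = np.array([[0.0, 0, 0], [0.7, 0, 0], [0.6, 0, 0], [0.1, 0, 0], [0.8, 0, 0]])
nbrs = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0, 4], 4: [3]}
mobile, seeds, induced = classify_mobiles(disp, nbrs, r_c=0.5)
print(seeds, induced)   # -> [4] [1, 2]
```

Particle 4 is a seed because its only neighbour (3) is immobile; particles 1 and 2 are mobile with a mobile neighbour, hence classified as induced.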
\begin{figure}
\centering\resizebox{7cm}{!}{\includegraphics{fraction_of_mobile_rc0.5765_T0.38.eps}}
\caption{(color online) Fraction of mobile particles for $T=0.38$ and $r_c=3\sqrt{\langle \bm r_{\rm pl}^2\rangle}=0.5765$ (solid blue). Note that the concept of ``mobile'' and ``immobile'' depends on the time considered. The other curves are the fractions of mobile particles fulfilling an additional criterion: (i) dashed green (``isolated''): mobile particles whose initial neighbours are {\em all} immobile at time $t$. (ii) dash-dotted red (``induced, inter only''): mobile monomers $i$ with one or several mobile neighbours $j$, none of the monomers $j$ belonging to the same molecule as $i$. (iii) dotted pale blue (``induced, inter+intra''): mobile monomers with mobile neighbours, not belonging to the preceding category. (iv) brown circles and solid line (``newly mobile, isolated''): particles of the same breed as (i), with the additional criterion that ``newly mobile'' particles at time $t$ must have become mobile between the last recording time step and $t$ only ---the circles are placed on the time steps at stake. This last curve stops early for technical reasons only.}
\label{fractionofmobile}
\end{figure}
This distinction is illustrated in figure \ref{fractionofmobile}, where the mean fraction of mobile particles is plotted in solid blue. Obviously, this curve increases and saturates at 1 for very long times (not shown). The intermediate behaviour is approximately a power law with an exponent around $2/3$ (this is largely coincidental, for it is not robust with respect to a variation of the choice of $r_c$). Together with this curve is also shown the fraction of dynamical seeds (dashed green curve). After the short-time regime, where this curve is obviously superposed with the total curve, the fraction of dynamical seeds becomes roughly constant for a very long time. This should result from a balance between two fluxes, one providing isolated mobile particles, the other removing particles from this category: on the one hand, some immobile particles within an immobile environment become progressively mobile, probably because they are not too far from a reorganizing region; on the other hand, some dynamical seeds having initiated neighbouring displacements consequently leave the category of isolated mobile particles. For the sake of completeness, Fig. \ref{fractionofmobile} also displays the tiny flux of ``newly mobile'' isolated particles (line+circles), i.e. particles mobile at time $t$ but immobile at all prior recording time steps (these recording time steps are shown by the symbols).
The very large discrepancy between the total fraction of mobile particles and the fraction of isolated mobile particles, as soon as the early microscopic regime is over, witnesses the important fact that most of the subsequent diffusion events occur by cascade, a dynamical seed inducing a neighbouring motion, which in its turn provokes another one, and so on. As our system is made of oligomers, we also tested in Fig. \ref{fractionofmobile} the effect of connectivity on the cascade. The red dash-dotted curve is the fraction of mobile particles having at least one mobile neighbour, none of these mobile neighbours belonging to the same molecule. The light blue dotted curve is the complementary fraction of non-isolated mobile particles, i.e. mobile particles with at least one mobile neighbour belonging to the same molecule. These two curves are initially very close, which indicates that the connectivity enhances from the beginning the probability that a mobile monomer induces the motion of a neighbouring monomer belonging to the same chain. This enhancement is all the more significant as a given monomer typically has 12--14 neighbours, of which at best 3 belong to the same molecule (since our system is made of oligomers of length 4). At late times, the red dash-dotted curve obviously becomes negligible, since the vast majority of mobile particles (i) are not isolated, and (ii) have an environment with multiple, intra- and inter-chain mobile monomers.
\medskip
Coming back to the correlation between polarisation and displacement, it is clear that this correlation disappears at once for all diffusion events triggered by the displacement of a neighbouring particle. As a result, most of the plateau in Fig. \ref{corrpoldis} is built up by stuck particles, which are unable to move beyond the initial release of the force fluctuation into the local basin of the inherent structure associated to the initial condition. A vivid illustration of this is obtained if one analyses the polarisation/displacement correlations of the small fraction of ``newly mobile'' particles at an already considerable time. In fig. \ref{vivid} (top) one sees the angular density $\rho_d(\theta)$
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{newlymobile_anglecorr.eps}}
\caption{(color online) Angular density $\rho_d(\theta)$ of the angle between the initial polarisation $\bm\tau_i(0)$ and the displacement $\bm\delta_i(t)$, at a large time $t\sim 123$ and for various values of $r_c$, computed over all mobile particles (top) and over the subpopulation of ``newly mobile'' particles (bottom).}
\label{vivid}
\end{figure}
(i.e. the probability density divided by $\sin\theta$) of the angle between $\bm\tau_i(0)$ and $\bm\delta_i(t)$, for a large time $t\sim 123$ and various $r_c$. Clearly, if $r_c$ is too high, the `mobile' particles are mobile thanks to the accumulation of several jumps between IS. It is therefore natural to observe a relatively flat angular distribution (the probability excess for angles near $180^\circ$ is again an outcome of the connectivity). For smaller values of $r_c$, the curves mainly encode the already mentioned freezing of the initial relaxational moves due to the temporary caging around the inherent structures. It is worth stressing that we have no explanation for the apparent common value of the different curves at $\theta\sim 90^\circ$.
What is new is the bottom figure, where the same distributions are plotted, but calculated among the tiny subpopulation of ``newly mobile'' particles. Again, if $r_c$ is too large, the ``recent'' substantial moves are actually the result of several IS rearrangements, and the correlation with the initial polarisation direction is lost. But for $r_c\lesssim 0.4$, one observes that the ``fresh'' move of the particle (which occurred here at a time $t\in[82,123]$) has kept a strong correlation with the initial polarisation, despite a quite long waiting time before the move. We checked that the curves associated to the newly mobile particles are, for $r_c\lesssim 0.4$, almost independent of time in the whole plateau region. This result is quite interesting, for it shows that (i) still regions are able to preserve the memory of the initial polarisations for quite long times, and (ii) the displacement of a particle that can be considered as arising from a single dynamical hopping (and not from a series of correlated events) is quite small (here $<0.4$), definitely smaller than the value usually taken to define the subpopulation of mobile particles \cite{KobDonatiPRL1997} (which is $3\sqrt{\langle \bm r_{\rm pl}^2\rangle}\sim 0.58$ in our system).
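The angular densities used throughout this subsection can be estimated by histogramming the angles and dividing out the $\sin\theta$ measure; a minimal sketch (ours, not the production code), validated on isotropic synthetic pairs for which $\rho(\theta)=1/2$:

```python
# Estimation sketch (ours) of the angular density
# rho(theta) = dP / (sin(theta) dtheta), checked on isotropic toy data.
import numpy as np

def angular_density(theta, bins=30):
    hist, edges = np.histogram(theta, bins=bins, range=(0.0, np.pi), density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    return mid, hist / np.sin(mid)            # divide out the solid-angle measure

rng = np.random.default_rng(5)
u = rng.standard_normal((200000, 3))          # isotropic 3D vectors
v = rng.standard_normal((200000, 3))          # independent isotropic partners
cos = np.sum(u * v, axis=1) / (np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
mid, rho = angular_density(np.arccos(np.clip(cos, -1.0, 1.0)))

print(rho[10:20])                             # all close to 1/2, as expected
```

The bins nearest to $\theta=0$ and $\theta=\pi$ are noisy because the $\sin\theta$ weight vanishes there; in practice one inspects the central bins.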
\subsection{Polarisation torque}
The system of vectors $\bm\tau_j$, like the force field, sums up to zero (with the important difference that one cannot decompose each $\bm\tau_j$ into vectors obeying a kind of Newton's third law). Consequently, the polarisation torque defined by
\begin{figure}
\centering\resizebox{6cm}{!}{\includegraphics{autocorr_torque.eps}}
\caption{Normalized autocorrelation of the polarisation torque for $T=0.36$.}
\label{autocorr_torque}
\end{figure}
\begin{align}
\bm{\mcal{M}}&=\sum_j \bm r_j\times \bm\tau_j
\end{align}
does not depend on the choice of the origin. For each configuration, this torque accounts for a global geometrical anisotropy. Under liquid conditions, this anisotropy is reshuffled on a microscopic time scale, but in supercooled states it retains some correlation until the $\alpha$ relaxation. This is shown in Fig. \ref{autocorr_torque} for the temperature $T=0.36$ (the other temperatures are not shown due to a lack of statistics on this global variable). Again, the plateau is associated to the fact that a typical inherent structure has a nonzero torque, and that the hopping from IS minimum to IS minimum only progressively decorrelates the torque from its initial orientation and magnitude. It is likely that this large-scale anisotropy couples to the shear properties of the system, and makes any rheological response on time scales shorter than the $\alpha$ relaxation dependent on the direction of strain.
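The origin independence follows in one line from the conservation of the polarisation: under a shift of the origin by an arbitrary $\bm r_0$,

```latex
\bm{\mcal{M}}' = \sum_j (\bm r_j - \bm r_0)\times\bm\tau_j
             = \bm{\mcal{M}} - \bm r_0\times\sum_j\bm\tau_j
             = \bm{\mcal{M}},
\qquad\text{since}\qquad \sum_j\bm\tau_j = \bm 0 .
```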
\subsection{Polarisation field}
\medskip
As for the volume fields, we also study the autocorrelation functions of the polarisation field $\bm p$. As is usual for, e.g., the current correlation functions of liquid-state physics, we proceed by defining in Fourier space the components of $\bm p(\bm k)$ transverse and longitudinal with respect to the direction $\hat{\bm k}=\bm k/k$:
\begin{align}
p_L(\bm k,t)&\equiv -i(\bm p(\bm k,t)\cdot \hat{\bm k})\\
\bm p_T(\bm k,t)&\equiv \bm p(\bm k,t)-i p_L(\bm k,t)\hat{\bm k}\\
\text{with }\ \bm p(\bm k,t)&\equiv\sum_j\bm\tau_j(t)\exp(i\bm k\cdot\bm r_j(t))
\end{align}
(The factor $-i$ in the definition of $p_L$ is just a convenience for $k$-inversion symmetry.) The (static) autocorrelation functions of these longitudinal and transverse components are defined by
\begin{align}
S_{pL}(k)&=N^{-1}\langle p_L(-\bm k) p_L(\bm k)\rangle\\
S_{pT}(k)&=(2N)^{-1}\langle\bm p_T(-\bm k)\cdot\bm p_T(\bm k)\rangle
\end{align}
and are plotted in Fig. \ref{SpLSpT} (top).
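As a consistency check on these definitions, the longitudinal/transverse split is orthogonal, i.e. $|\bm p|^2=|p_L|^2+|\bm p_T|^2$ and $\bm p_T\cdot\hat{\bm k}=0$; a numerical sketch with toy data (ours, not the simulation) makes this explicit:

```python
# Toy check (ours) that the longitudinal/transverse split of p(k) is
# orthogonal: |p|^2 = |p_L|^2 + |p_T|^2 and p_T . khat = 0.
import numpy as np

rng = np.random.default_rng(3)
N = 500
r = 10.0 * rng.random((N, 3))                 # toy particle positions
tau = 0.04 * rng.standard_normal((N, 3))      # toy geometrical polarisations
k = np.array([0.7, 0.0, 0.0])
khat = k / np.linalg.norm(k)

p = (tau * np.exp(1j * (r @ k))[:, None]).sum(axis=0)  # p(k) = sum_j tau_j e^{i k.r_j}
pL = -1j * (p @ khat)                         # longitudinal component
pT = p - 1j * pL * khat                       # transverse component

print(np.vdot(p, p).real, abs(pL) ** 2 + np.vdot(pT, pT).real)  # equal
```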
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{SpLSpT.eps}}
\caption{(Color online) Top: Longitudinal $S_{pL}(k)$ (solid blue) and transverse $S_{pT}(k)$ (dashed green) fluctuations of the geometrical polarisation $\bm \tau_j$; the function $S_{pL\perp}(k)=N^{-1}\langle p_{L\perp}(-\bm k)p_{L\perp}(\bm k)\rangle$ is also shown (red dash-dotted). The inset shows the equivalent log-log plot to highlight the power law $k^2$ (black thin line). Bottom: Cross-correlations $S_{iL}(k)$ (solid blue) and $S_{ivL}(k)$ (dashed green). The inset shows the log-log plot of their moduli, together with the power laws $k^1$ and $k^3$ (thin top and bottom lines, respectively).}
\label{SpLSpT}
\end{figure}
The scale of the vertical axis, which has the dimension of a squared length, shows that the typical values of $\tau_j$ are only $3$--$4\%$ of the nearest-neighbour distances, which is again a signature of the typically small fluctuations of the local structural properties in the dense-fluid regime. The longitudinal part is rather structured, with a first peak at $k<5$ whose qualitative origin is explained in Fig. \ref{petitschema}
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{petitschemaexplicatif.eps}}
\caption{(Color online) Origin of the first peak in $S_{pL}(k)$. On top is sketched the average position of the particles around a marked (shaded) one; The vertical lines show the boundaries of the corresponding Voronoi cells. At the bottom is shown a position fluctuation of the marked particle; it induces nonzero polarisation vectors (red arrows of arbitrary lengths) in the immediate vicinity of the particle, with a positive correlation between the vectors surrounding the marked particle vector.}
\label{petitschema}
\end{figure}
for a one-dimensional system. The secondary peak at $k=k_c$ is however not accounted for by this simple 1D representation. On the contrary, for a 1D model with excluded volume, the first structural peak corresponds to a minimum of $S_{pL}(k)$ (not shown), which is intuitively understood from Fig. \ref{petitschema}, where a maximum anticorrelation is expected between the $\tau_j$ of nearest neighbours. The secondary peak can thus be related to some structural correlation involving the transverse directions (see Fig. \ref{whycorrelpositive}): let us consider $k\sim k_c$ as fixed. This corresponds to selecting a characteristic length $\sim k_c^{-1}$ as a filter. The particle pairs, say $(j,j')$, which contribute to $S_{pL}(k)$ are those at a distance $\sim k_c^{-1}$, i.e. the nearest-neighbour pairs. From pair to pair, the two polarisation vectors $\bm\tau_j$ and $\bm\tau_{j'}$ fluctuate because of the varying environment. Moreover, large parts of the immediate vicinities of $j$ and $j'$ are only weakly correlated, which induces a relatively weak correlation between $\bm\tau_j$ and $\bm\tau_{j'}$. Nevertheless, a positive correlation along the pair axis is expected, due to a density-dipole effect: if (cf. fig. \ref{whycorrelpositive}) a positive density fluctuation is present on the ``external'' side of $j$, the polarisation $\bm\tau_j$ is typically oriented toward $j'$, since the distance between $j$ and $j'$ is precisely assumed to be the average distance $k_c^{-1}$. But in a fluid the correlations are very short-ranged, which compels the positive density correlation to be immediately surrounded by a negative one (so as to restore the mean density beyond a distance $\sim 2$--$3\ k_c^{-1}$). This density depletion does not affect the distance $j$--$j'$ by assumption, but typically provides the outward vicinity of $j'$ with a slightly lowered density, whence a $\bm\tau_{j'}$ pointing in the {\em same} direction as $\bm\tau_j$.
\begin{figure}
\centering \resizebox{7cm}{!}{\includegraphics{whycorrelpositive.eps}}
\caption{Positive correlation of the polarisations of two nearest-neighbour particles (blue circles). The positive density fluctuation to the left of particle $j$ (dark zone) induces a polarisation $\bm\tau_j$ toward the right; this density fluctuation is surrounded by a negative density correction (light zone), which encompasses $j'$ and yields a polarisation $\bm\tau_{j'}$ pointing in the same direction as $\bm\tau_j$.}
\label{whycorrelpositive}
\end{figure}
This discussion clearly shows that $p_L$ is sensitive to the local details of the density field, and notably to the transmission of correlations due to excluded-volume effects. Moreover, the reasoning clearly involves more than two particles, which attests to the typical many-body nature of the Voronoi-inspired observables.
\medskip
As regards the transverse correlator $S_{pT}$, it shares with $S_{pL}$ a departure $\propto k^2$ at low $k$ (see the inset of Fig. \ref{SpLSpT}, top), which can be shown along lines similar to those developed in appendix \ref{App_pol_1}. In the microscopic domain $k\simeq k_c$, the function reaches a plateau which is only weakly structured; the plateau value is $\langle\bm\tau_j^2\rangle/3$, the same as for $S_{pL}$.
\medskip
The longitudinal polarisation field $p_{L}(\bm k)$ deserves some further comment. First, the relation \myref{Svkapp} does not imply that $S_{pL}(k)$ and $S_{v}(k)/k^2$ are equal in the limit $k\rightarrow0$: these two quantities are indeed proportional to $k^2$, but a priori with different prefactors. However, Eq. \myref{Svkapp} explicitly indicates that $p_{L}(-\bm k)$ can couple to $\rho(\bm k)$ and $\rho_v(\bm k)$, i.e. that
\begin{align}
S_{iL}(k)&\equiv N^{-1}\langle p_L(-\bm k)\rho(\bm k)\rangle\\
S_{ivL}(k)&\equiv N^{-1}\langle p_L(-\bm k)\rho_v(\bm k)\rangle
\end{align}
are both nonzero (this is not the case for the transverse field, which does not couple to $\rho$ or $\rho_v$, owing to incompatible symmetries). They are respectively $\propto k^1$ and $\propto k^3$ for vanishing $k$, and go to zero at high $k$ due to isotropy, as can be seen in Fig. \ref{SpLSpT} (bottom). As for $\rho_v$ and $\rho_{v\perp}$, we define an orthogonalized coordinate $p_{L\perp}$ from $p_L$, and its correlator $S_{pL\perp}(k)$, by
\begin{align}
p_{L\perp}(\bm k)&=p_L(\bm k)-\frac{S_{iL}(k)}{S(k)}\rho(\bm k)\\
S_{pL\perp}(k)&=N^{-1}\langle p_{L\perp}(-\bm k)p_{L\perp}(\bm k)\rangle\nonumber\\
&=S_{pL}(k)-\frac{S_{iL}(k)^2}{S(k)}
\end{align}
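The projection above is a Gram-Schmidt step, and the stated identity, together with the decoupling $\langle p_{L\perp}(-\bm k)\rho(\bm k)\rangle=0$, can be checked on synthetic correlated samples (toy data, ours):

```python
# Synthetic-data check (ours) of the Gram-Schmidt identity
# S_{pL,perp} = S_pL - S_iL^2 / S, and of the decoupling <p_Lperp* rho> = 0.
import numpy as np

rng = np.random.default_rng(4)
M = 100000
rho = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # toy density modes
pL = 0.3 * rho + rng.standard_normal(M) + 1j * rng.standard_normal(M)

S = np.mean((rho.conj() * rho).real)          # plays the role of S(k)
SpL = np.mean((pL.conj() * pL).real)          # plays the role of S_pL(k)
SiL = np.mean(pL.conj() * rho).real           # cross-correlation S_iL(k)

pLperp = pL - (SiL / S) * rho                 # orthogonalized coordinate
SpLperp = np.mean((pLperp.conj() * pLperp).real)

print(SpLperp, SpL - SiL ** 2 / S)            # equal
print(np.mean(pLperp.conj() * rho).real)      # -> 0 (decoupled)
```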
The structure factor $S_{pL\perp}(k)$ is plotted in Fig. \ref{SpLSpT} (top). The orthogonalization procedure has the drastic effect of removing the main part of the structuration peaks. Interestingly, the shoulder at $k\sim 3$ is a remnant of the former large peak, and accounts for the specific three- or higher-point correlation ``content'' of the physics described schematically in fig. \ref{petitschema}. This new characteristic length provided by $S_{pL\perp}$ is specific to the polarisation: such a shoulder is absent in the corresponding structure factor $S_{v\perp}$ associated to the orthogonalized volume field.
\section{Conclusion}
In this paper we have put forward two new fields for the description of particle assemblies, defined thanks to the Voronoi tessellation of the configurations. The first one, the volume field $\rho_v(\bm r)$, associates to each Dirac function at the particle locations a weight proportional to the Voronoi cell volume.
We have shown that the large-scale fluctuations of this field display an anomaly that we proved to be related to a very peculiar feature of Voronoi partitions, not noticed up to now to the best of our knowledge: the geometrical polarisation $\bm\tau_j$ of the cell $j$, a vector accounting roughly for the local anisotropy of the microscopic arrangement, is a conserved vectorial field, i.e. all geometrical polarisations sum up to zero. The associated polarisation field $\bm p(\bm k)$ is in our opinion a promising tool, because it has the unique property of being a conserved field bearing vectorial information about the structure of the system.
We gave thereafter the main properties of the individual geometrical polarisations (we did not dwell too much on the individual Voronoi volumes, because their properties for dense fluids are rather well known \cite{Glotzer_Voronoi}). These vectors are approximately described by a Maxwell-Boltzmann statistics, and are statically somewhat correlated to the force field. In spite of this correlation, the dynamical behaviours of $\bm F$ and $\bm \tau$ are drastically different for supercooled liquids, and we think that these fundamental discrepancies can be traced back to the typical microscopic disorder of the inherent structures around which the supercooled liquids temporarily oscillate during the early microscopic regime (from $t=0$ until the very beginning of the plateau regime). These IS have a nonzero polarisation field, preventing the actual polarisation from decorrelating fully before the $\alpha$ process (these arguments apply to the volume field as well). A natural issue we will address in a future publication is to what extent the diffusion of the system in phase space, on time scales large enough to allow an interpretation in terms of hopping or shifting from IS to IS, is statistically correlated to the residual polarisation and volume fields of the IS.
The processes by which these relaxations take place have been shown to be very peculiar \cite{Schroderetal}, involving for instance highly anisotropic clusters of mobile particles. It may turn out that these clusters are somehow related to the clusters of particles one can readily define from the IS residual polarisation field, by lumping particles together with the rule that two neighbouring particles belong to the same cluster if the polarisation of one points mostly toward the other, or vice-versa. When built from the polarisation field of a configuration, these basic clusters may be thought of as a blurred version of a soft mode, owing to the structural polarisation/force correlation. When built instead from the residual polarisation field of an IS, they could perhaps be correlated to the reaction path in phase space, i.e. to the path followed by the system so as to minimize the barrier from one IS to the next.
Finally, it is worth noting that $\rho_v(\bm k)$ and the longitudinal part of the polarisation field $p_L(\bm k)=-i \bm p(\bm k)\cdot \bm k$ share the same symmetries as the density field itself. As a result, they are potential candidates to devise an extended mode-coupling theory, enlarged so as to consider as relevant variables not only $\rho(\bm k)$ but also the correctly orthogonalized parts of $\rho_v$ and $p_L$. This extension would provide the ideal MCT with degrees of freedom aiming at describing the structure on a more real-space, integrated basis. The actual realization of such an extension relies on the possibility to express frequencies, vertices, etc.\ involving variations of the Voronoi volume and polarisation in terms of geometric quantities amenable to simple numerical calculations. The simplicity of the Voronoi construction allows such explicit simple expressions, an example being eq. \myref{gradU}. Actually, the major challenge for such a program resides mostly in the evaluation of the vertices, which must be precise and simple. For the MCT, these requirements are met thanks to the factorization Ansatz, by which the vertices are simple formulas of the structure factor only. The price to pay for an enrichment of the ideal theory is a relative loss of simplicity, and a certain amount of work remains to be done to make the complexification as manageable as possible.
\section{Appendices}
\subsection{Appendix : The gradient formula for a Voronoi cell}\label{AppMakse}
In this appendix, we show that when two distinct particles $i$ and $j$ in a configuration are ``nearest-neighbours'', that is, their Voronoi cells share a common facet $S_{ij}$ (the symbol $S_{ij}$ denotes either the facet or its area, according to the circumstance), one has
\begin{align}
\nabla_j U_i&=-\frac{S_{ij}}{r_{ij}}\bm s_{ji}\label{nabjUi}
\end{align}
where $r_{ij}=|\bm r_j-\bm r_i|$, and $\bm s_{ji}$ is the vector from the particle $j$ to the barycenter of the facet $S_{ij}$. To prove this formula, we use the following formula \cite{Makse}:
\begin{align}
U_i&=\frac{1}{d}\oint d\Omega [L_i(\bm n)]^d\\
L_i(\bm n)&=\min_{j/\bm n\cdot\hat{\bm r_{ij}}>0}\frac{r_{ij}}{2\bm n\cdot\hat{\bm r}_{ij}}\label{Lidef}
\end{align}
In this formula, valid for a Voronoi cell in any dimension $d\geq 2$, the integration is performed over the angular directions viewed from the position of the particle $i$, $\hat{\bm r}_{ij}$ stands for $\bm r_{ij}/r_{ij}$, $\bm n$ is a unit vector pointing in the direction $\Omega$, and $L_i(\bm n)$ is precisely the distance between $i$ and its Voronoi cell boundary in the direction $\bm n$.
\medskip
We are interested in computing derivatives of $U_i$ with respect to the coordinates. To this end, it is useful to note that
\begin{multline}
\min_{\text{positive items}} \{A,B,C,(\ldots)\}=\\
A \Theta(A) H(B,A)H(C,A)
+B\Theta(B)H(A,B)H(C,B)\\+C\Theta(C)H(A,C)H(B,C)
\end{multline}
with
$H(B,A)\equiv1-\Theta(B)\Theta(A-B)$,
where $\Theta$ is the Heaviside function. Now, if one differentiates with respect to a variable $a$ of which $B$ and $C$ are independent, we get, due to the assumed continuity of the functions
\begin{align}
\partial_a\min_{\text{positive items}} \{A,B,C,(\ldots)\}&=(\partial_a A)\Theta(A) H(B,A)H(C,A)\label{paa}
\end{align}
There is however a proviso with this last formula: we assumed that the variables $A, B$, etc.\ are never zero. When a degeneracy between the particles is possible (exact superposition of two of them), some singularities are likely to show up. As this never happens for particles with excluded volume, we disregard these mathematical limiting cases. Formula \myref{paa} allows a controlled differentiation of $U_i$: we have rigorously
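This Heaviside representation of the minimum is easy to check numerically; the short sketch below (ours) verifies it on random triplets of positive values, with the convention $\Theta(0)=0$ so that the excluded degenerate ties never arise:

```python
import numpy as np

# Quick numerical check (our illustration) of the Heaviside representation of
# the minimum over positive items, for triplets of distinct positive values.
theta = lambda x: np.heaviside(x, 0.0)          # Theta, with Theta(0)=0 (ties excluded)
H = lambda B, A: 1.0 - theta(B) * theta(A - B)  # H(B,A) = 1 - Theta(B)Theta(A-B)

def min3(A, B, C):
    """min{A,B,C} for positive items, via the Theta/H representation."""
    return (A * theta(A) * H(B, A) * H(C, A)
            + B * theta(B) * H(A, B) * H(C, B)
            + C * theta(C) * H(A, C) * H(B, C))

rng = np.random.default_rng(0)
A, B, C = rng.random((3, 1000)) + 0.01          # positive, almost surely distinct
assert np.allclose(min3(A, B, C), np.minimum(np.minimum(A, B), C))
```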
\begin{align}
\bm\nabla_jU_i&=\frac{1}{2^d}\int_{S_{ij}}d\Omega\frac{r_{ij}^{2(d-1)}}{(\bm n\cdot\bm r_{ij})^{d+1}}[2\bm r_{ij}(\bm n\cdot\bm r_{ij})-r_{ij}^2\bm n]
\end{align}
Now we consider only the three dimensional case $d=3$. We have here $d\Omega=(d\bm S\cdot \bm n)/r^2=4dS(\hat{\bm r}_{ij}\cdot\bm n)^3/r_{ij}^2$ whence
\begin{align}
\bm \nabla_j U_i
&=
\frac{1}{2} \int_{S_{ij}}dS\frac{\hat{\bm r}_{ij}(\bm n\cdot \hat{\bm r}_{ij})-\bm n^\parallel}{\hat{\bm r}_{ij}\cdot \bm n}
\end{align}
where $\bm n^\parallel=\bm n-(\bm n\cdot\hat{\bm r}_{ij})\hat{\bm r}_{ij}$ is the component of $\bm n$ parallel to $S_{ij}$. The final step is to ``change the viewpoint'', i.e. replace the unit vector $\bm n$, adapted to a line of sight issued from the particle $i$ toward the surface $S_{ij}$, with the unit vector $\bm n'$ corresponding to a sight issued from $j$. The vector $\bm n'$ is the mirror image of $\bm n$ with respect to the plane containing $S_{ij}$. We get
\begin{align}
\bm\nabla_j U_i&=-\frac{1}{2} \int_{S_{ij}}dS\frac{\hat{\bm r}_{ji}(\bm n'\cdot \hat{\bm r}_{ji})+(\bm n') ^\parallel}{\hat{\bm r}_{ji}\cdot \bm n'}\\
&=-\frac{1}{2} \int_{S_{ij}}dS\frac{2r'\bm n'}{r_{ij}}=-\frac{S_{ij}}{r_{ij}}\bm s_{ji}\label{eq21}
\end{align}
(where $r'\bm n'$ is the running vector from $j$ to a point of $S_{ij}$). It is important to stress that in general $\bm s_{ji}\neq -\bm s_{ij}$, whereas one has obviously $\bm r_{ij}\equiv \bm r_j-\bm r_i=-\bm r_{ji}$. Notice that \myref{eq21} has the simple corollary
\begin{align}
\bm \nabla_j U_i-\bm \nabla_i U_j&=S_{ij}\hat{\bm r}_{ij}
\end{align}
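Formula \myref{nabjUi} can also be verified numerically. The sketch below (our illustration, on a hypothetical random configuration, using scipy's Voronoi and ConvexHull) compares the right-hand side, built from the shared facet's area and barycenter, with a central finite difference of the cell volume $U_i$ with respect to $\bm r_j$:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

# Finite-difference check of nabla_j U_i = -(S_ij/r_ij) s_ji on a random 3D
# configuration; particle i is taken deep in the bulk so its cell is bounded.
rng = np.random.default_rng(3)
pts = rng.random((400, 3))
i = int(np.argmin(np.linalg.norm(pts - 0.5, axis=1)))

def cell_volume(points, idx):
    """Voronoi cell volume U_idx (the cell must be bounded)."""
    vor = Voronoi(points)
    reg = vor.regions[vor.point_region[idx]]
    assert -1 not in reg
    return ConvexHull(vor.vertices[reg]).volume

# find a neighbour j sharing a facet with i, together with the facet vertices
vor = Voronoi(pts)
for (p1, p2), verts in zip(vor.ridge_points, vor.ridge_vertices):
    if i in (p1, p2) and -1 not in verts:
        j, facet = (p2 if p1 == i else p1), vor.vertices[verts]
        break

# facet area S_ij and barycenter: sort the vertices of the (convex, planar)
# polygon by angle around the facet normal r_ij, then fan-triangulate
n = (pts[j] - pts[i]) / np.linalg.norm(pts[j] - pts[i])
t = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
e1 = np.cross(n, t); e1 /= np.linalg.norm(e1); e2 = np.cross(n, e1)
d = facet - facet.mean(axis=0)
P = facet[np.argsort(np.arctan2(d @ e2, d @ e1))]
S_ij, bary = 0.0, np.zeros(3)
for k in range(1, len(P) - 1):
    area = 0.5 * np.linalg.norm(np.cross(P[k] - P[0], P[k + 1] - P[0]))
    S_ij += area
    bary += area * (P[0] + P[k] + P[k + 1]) / 3.0
bary /= S_ij
r_ij = np.linalg.norm(pts[j] - pts[i])
predicted = -(S_ij / r_ij) * (bary - pts[j])     # s_ji = barycenter - r_j

# central finite differences of U_i with respect to the position of particle j
eps, grad = 1e-5, np.zeros(3)
for ax in range(3):
    pp, pm = pts.copy(), pts.copy()
    pp[j, ax] += eps; pm[j, ax] -= eps
    grad[ax] = (cell_volume(pp, i) - cell_volume(pm, i)) / (2 * eps)

assert np.linalg.norm(grad - predicted) <= 1e-3 * np.linalg.norm(predicted)
```

As a sign check, $\bm\nabla_jU_i$ has a positive component along $\hat{\bm r}_{ij}$: moving $j$ away from $i$ enlarges the cell of $i$.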
\subsection{Appendix : The $k^4$ behaviour of $S_v(k)$}\label{App_pol_1}
(We deal here with 3D systems, but the generalization to $d\geq 2$ dimensions is obvious). We consider first a slight variant of $S_v(k)$, by defining $\tilde{S}_v(k)=N^{-1}\langle \tilde{\rho}_v(-\bm k)\tilde{\rho}_v(\bm k)\rangle$ with
\begin{align}
\tilde{\rho}_v(\bm k)&=\rho_v(\bm k)-K(k)v^{-1}\int d^3r e^{i\bm k\cdot \bm r}\\
&=\sum_j e^{i\bm k\cdot\bm r_j}\underbrace{v^{-1}\left(U_j-K(k)\int_{U_j}d^3r e^{i\bm k\cdot [\bm r-\bm r_j]}\right)}_{\equiv A_j(k)}
\end{align}
where $K(k)$ is a function, with the sole requirement that $K(0)=1$, chosen in such a way that $\langle A_j(k)\rangle=0$ for all $k$, that is
\begin{align}
K(k)&=\frac{v}{\left\langle\displaystyle\int_{U_j}d^3r e^{i\bm k\cdot [\bm r-\bm r_j]}\right\rangle}
\end{align}
Notice that $K(0)=1$ as it must, and that the denominator is a kind of generating function for the inertial moments of the Voronoi cell.
It is readily seen that $S_v(k)$ and $\tilde{S}_v(k)$ are identical, except at $k=0$.
$A_j(k)$ has a {\em bona fide} Taylor expansion which is, up to the first order
\begin{align}
A_j(k)=-i\bm k\cdot\bm \tau_j+o(\bm k)\label{eqAA1}
\end{align}
We have besides
\begin{align}
S_v(k)&=N^{-1}\sum_{j,j'}\langle A_j(k)A_{j'}(-k)\exp(i\bm k\cdot[\bm r_j-\bm r_{j'}])\rangle\label{Svkapp}
\end{align}
and, on the assumption that $A_j$ and $A_{j'}$ have a finite correlation length $\xi_v$ (which is obviously fulfilled in the liquid range due to the high structural disorder), one can for $k\rightarrow 0$ make the replacement $\exp(\ldots)\rightarrow 1$: for pairs $(j,j')$ such that $|\bm r_j-\bm r_{j'}|\gg \xi_v$, the signal is negligibly small anyway, the product $A_jA_{j'}$ of zero-mean variables being decoupled. Therefore, one can safely write that
\begin{align}
S_v(k)&\sur{\simeq}{k\rightarrow 0} \frac{k^2N^{-1}}{3}\left\langle\left(\sum_j\bm \tau_j\right)^2\right\rangle=\frac{k^2N^{-1}}{3}\langle\bm P^2\rangle
\end{align}
(where the isotropy of space has been invoked). Therefore, the fact that $S_v(k)$ is $\propto k^4$ is directly related to the vanishing of $N^{-1}\langle \bm P^2\rangle$ in the thermodynamic limit $N,V\rightarrow \infty$ with $V/N=v=\mathrm{const}$. We show in the next appendix that for a finite system with an external boundary $\sim V^{2/3}$, one should expect $\langle \bm P^2\rangle\sim N^{2/3}$ from the bulk conservation of $\bm P$, which completes the asymptotic proof of our result.
\bigskip
It is interesting to consider also the $k^2$ behaviour of $S_i(k)$. First, one can notice that it is easily demonstrated using the Cauchy-Schwarz inequality $|S_i(k)|\leq \sqrt{S_{v}(k)S(k)}$, provided one assumes a regular behaviour of $S_i(k)$ near $k=0$. If however one tries to follow a more explicit route, one faces the fact that $S_i(k)$ for small $k$ has something to do with the large-wavelength density fluctuations (which prevents for instance any small-$k$ expansion of the exponential terms). To see this, we write
\begin{align}
S_i(k)&\sur{\simeq}{k\rightarrow 0}-i\bm k\cdot \left\langle \bm \tau_j\sum_{j'} e^{i\bm k \cdot \bm r_{j'j}}\right\rangle\\
&=k^2\left\langle \sum_{j'}(\bm \tau_j\cdot\bm r_{j'j})\frac{\text{sinc}(kr_{jj'})-\cos(kr_{jj'})}{(kr_{jj'})^2}\right\rangle\label{c20}
\end{align}
where $\bm r_{j'j}=\bm r_j-\bm r_{j'}$ and the second line is obtained via the isotropy of space. It is tempting to take the $k=0$ value inside the average, and obtain the value $\frac 13\langle \sum_{j'}\bm\tau_j\cdot\bm r_{j'j}\rangle$ for the leading $k^2$ coefficient. But this is not correct, because this term is ill-defined as regards convergence. To avoid this divergence, one introduces the conditional probability $P(\Gamma|\bm\tau_j)$ over the entire phase space, $\Gamma=(\bm r_i)_{(i=1,N)}$, and rewrites \myref{c20} as
\begin{align}
S_i(k)&\sur{\simeq}{k\rightarrow 0}k^2v^{-1}\int d^3 r\frac{\text{sinc}(kr)-\cos(kr)}{(kr)^2}\bm r\cdot\left\langle\bm \tau_j g_{\bm \tau_j}(\bm r)\right\rangle_{\bm\tau_j}\\
g_{\bm \tau_j}(\bm r)&=v\int d\Gamma P(\Gamma|\bm\tau_j)\sum_{i\neq j}\delta(\bm r-(\bm r_i-\bm r_j))
\end{align}
where $\langle\cdot\rangle_{\bm \tau_j}$ means an average over the different values of $\bm \tau_j$ (with the associated equilibrium probability). One has $\langle\bm\tau_j\rangle_{\bm\tau_j}=\langle\bm\tau_j\rangle=\bm 0$ on the one hand, and $g_{\bm \tau_j}(\bm r)\rightarrow 1$ for large $|\bm r|$ on the other hand. Assuming as always a finite correlation length in the fluid, one can write
\begin{align}
S_i(k)&\sur{\simeq}{k\rightarrow 0} \frac{k^2v^{-1}}{3}\int d^3r \bm r\cdot \langle \bm\tau_j [g_{\bm \tau_j}(\bm r)-1]\rangle_{\bm \tau_j},\label{coeffk2}
\end{align}
an expression which is now well-defined with a convergent integral.
Qualitatively, one understands that the $k^2$ coefficient is positive by noticing that, given a value of $\bm\tau_j$, the particles surrounding $j$ and close to the axis parallel to $\bm\tau_j$ are typically ``close'' to $j$ if one follows the tail of the vector, and ``far'' from $j$ if one follows the opposite direction. Finally, let us remark that an expression similar to \myref{coeffk2} could be written for the $k^4$ coefficient of $S_v(k)$, but it would be of little interest, for correlators built up with conditional probabilities are awkward to compute and interpret.
\subsection{Appendix : Conservation of the geometrical polarisation}\label{App_pol_2}
(We deal here with 3D systems, but the generalization to $d\geq 2$ dimensions is obvious).
For an infinite system, we will show that $\bm P$, formally defined by Eq. \myref{Pdef}, is conserved. To this end, we use the formula
\begin{align}
\bm \nabla_j U_i&=-\frac{S_{ij}}{r_{ij}}\bm s_{ji}\label{gradU}
\end{align}
demonstrated in appendix \ref{AppMakse}, which is appropriate only if (i) one has $i\neq j$, and (ii) the Voronoi cells of particles $i$ and $j$ share a common facet (otherwise $\bm \nabla_j U_i=\bm 0$). In this formula, $S_{ij}$ is the area of the common facet (notice that this symbol will be used to denote both the facet and its area; if this facet does not exist, $S_{ij}=0$ will be assumed whenever necessary), $r_{ij}=|\bm r_{ij}|=|\bm r_j-\bm r_i|$ and $\bm s_{ji}$ is the vector starting from the particle $j$ and pointing to the centroid of the common facet $S_{ij}$. If $i=j$, one can use the global conservation of the volume and write
\begin{align}
\bm\nabla_jU_j&=-\sum_{i\neq j}\bm \nabla_jU_i
\end{align}
Therefore, one has
\begin{align}
\partial_{z_j}\sum_i U_i\bm r_i&=U_j\bm e_z-\sum_{i\neq j}\bm r_{ji}\frac{S_{ij}}{r_{ij}}(\bm s_{ji}\cdot \bm e_z)
\end{align}
Now, for each facet of the cell $j$, with $\bm r$ a running vector joining the particle $j$ to a point of $S_{ij}$, we have
\begin{align}
\int_{S_{ij}}d\bm S (\bm r\cdot&\bm e_z)=\nonumber\\
&\left\{\begin{array}{l}\displaystyle\frac{\bm r_{ji}}{r_{ji}}\int_{S_{ij}}d\bm S (\bm r\cdot\bm e_z)=\frac{\bm r_{ji}}{r_{ji}}S_{ij}(\bm s_{ji}\cdot\bm e_z)\\
\displaystyle\int_{P_{ji}} d^3r \bm \nabla(\bm r\cdot\bm e_z)-\int_{(\partial P_{ji})\setminus S_{ij}}d\bm S (\bm r\cdot\bm e_z)
\end{array}\right.\label{2f}
\end{align}
The second relation is also
\begin{align}
P_{ji}\bm e_z-\int_{(\partial P_{ji})\setminus S_{ij}}d\bm S (\bm r\cdot\bm e_z)\label{rajoutee}
\end{align}
where $P_{ji}$ is the (volume of the) pyramid with the particle $j$ at its summit and with $S_{ij}$ for base. The second relation comes from $\displaystyle\int_V d^3\tau \bm \nabla \phi=\oint_{\partial V} \phi d\bm S$, a corollary of the Green-Ostrogradsky theorem. As soon as it is realized that $U_j$ is built from the different pyramids $(P_{ji})_i$, a lateral face of one pyramid being actually a lateral face of exactly one other pyramid, the last term of \myref{rajoutee} cancels upon summation over $i$, and we get from \myref{2f} that
\begin{align}
\sum_{i}\frac{\bm r_{ji}}{r_{ji}}S_{ij}(\bm s_{ji}\cdot\bm e_z)=\sum_i{P_{ji}}\bm e_z=U_j\bm e_z
\end{align}
which yields $\partial_{z_j}\sum_i U_i\bm r_i=\bm 0$. As $j$ is arbitrary, as well as the choice of the $z$ coordinate, one concludes that the vector $\sum_iU_i\bm r_i$ is a constant. Besides, the vector $\sum_{i}U_i\bm s_i$ is nothing but the entire volume times a vector pointing to its geometrical center, hence a constant, whence one concludes that $\bm P$ itself is conserved. The isotropy of space moreover imposes $\bm P=\bm 0$.
If the system is finite and large ($N\gg 1$), with ``normal'' boundaries scaling like $N^{2/3}$, $\bm P$ is not necessarily conserved, because the pyramidal structure of the Voronoi cells is no longer fulfilled for the boundary cells. Therefore, one expects fluctuations of $\bm P$ driven by the $\sim N^{2/3}$ boundary cells. If one assumes rapid decorrelations among the boundary polarisation vectors $\bm\tau_i$, one should have $\langle\bm P^2\rangle\sim N^{2/3}$, i.e. very weak fluctuations, and a rapid asymptotic behaviour for growing $N$ (see the preceding appendix).
\bigskip
Finally, the preceding demonstration highlights some useful, non-obvious equalities. For any vector $\bm v$, one has
\begin{align}
U_j\bm v&=\sum_{i\neq j}\hat{\bm r}_{ji}S_{ij}(\bm s_{ji}\cdot\bm v)=\sum_{i\neq j}(\hat{\bm r}_{ji}\cdot\bm v)S_{ij}\bm s_{ji}\label{coroutile}
\end{align}
(where $\hat{\bm r}_{ji}=\bm r_{ji}/r_{ji}$). A corollary of this is
\begin{align}
\forall\ (\bm v,\bm w),\ \ \bm v\cdot\bm w=0\Rightarrow \sum_{i\neq j}(\hat{\bm r}_{ji}\cdot\bm v)S_{ij}(\bm s_{ji}\cdot\bm w)=0
\end{align}
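The identity \myref{coroutile} is local, so it can be checked on a single bounded cell. In the sketch below (ours, on a hypothetical random configuration, with scipy) it is tested in the equivalent tensor form $\sum_{i\neq j}S_{ij}\,\hat{\bm r}_{ji}\otimes\bm s_{ji}=U_j\,\mathbb 1$, which implies both equalities at once:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

# Check of U_j v = sum_i r_hat_ji S_ij (s_ji . v), i.e. of the tensor identity
# sum_i S_ij (r_hat_ji outer s_ji) = U_j * Identity, for one bounded bulk cell.
rng = np.random.default_rng(5)
pts = rng.random((400, 3))
j = int(np.argmin(np.linalg.norm(pts - 0.5, axis=1)))   # a bulk particle

vor = Voronoi(pts)
reg = vor.regions[vor.point_region[j]]
assert -1 not in reg                                     # bounded cell
U_j = ConvexHull(vor.vertices[reg]).volume

def facet_area_bary(facet, normal):
    """Area and barycenter of a convex planar polygon given unordered vertices."""
    t = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(normal, t); e1 /= np.linalg.norm(e1); e2 = np.cross(normal, e1)
    d = facet - facet.mean(axis=0)
    P = facet[np.argsort(np.arctan2(d @ e2, d @ e1))]    # sort by angle in the plane
    S, bary = 0.0, np.zeros(3)
    for k in range(1, len(P) - 1):                       # fan triangulation
        a = 0.5 * np.linalg.norm(np.cross(P[k] - P[0], P[k + 1] - P[0]))
        S += a; bary += a * (P[0] + P[k] + P[k + 1]) / 3.0
    return S, bary / S

T = np.zeros((3, 3))
for (p1, p2), verts in zip(vor.ridge_points, vor.ridge_vertices):
    if j in (p1, p2):
        i = p2 if p1 == j else p1
        r_ji = pts[i] - pts[j]
        rhat = r_ji / np.linalg.norm(r_ji)
        S_ij, bary = facet_area_bary(vor.vertices[verts], rhat)
        T += S_ij * np.outer(rhat, bary - pts[j])        # s_ji = bary - r_j

assert np.allclose(T, U_j * np.eye(3), atol=1e-10)
```

The trace of the identity, $\sum_i S_{ij}\,\hat{\bm r}_{ji}\cdot\bm s_{ji}=\sum_i S_{ij}\,r_{ij}/2=3U_j$, is just the decomposition of the cell into the pyramids $(P_{ji})_i$.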
Since the pioneering work of Schwarzschild \cite{Schwarzschild:1916uq} and Tolman \cite{Tolman:1939jz}, compact objects in General Relativity (GR) are usually modeled using spherically symmetric perfect fluid solutions of the Einstein field equations. With the subsequent discovery of white dwarfs and neutron stars, there has been in the last decades a lot of interest in generating new exact solutions describing spherically symmetric relativistic stars. One should note that various algorithms are known by now to generate new spherically symmetric exact solutions sourced by perfect fluids \cite{Lake:2002bq}, \cite{Martin:2003jc}, \cite{Boonserm:2005ni}. However, as shown in \cite{Delgaty:1998uy}, not all the perfect fluid interior solutions considered so far are physical. Moreover, in modeling realistic compact objects in GR one should consider more sophisticated models, involving deviations from the spherical symmetry and/or from the perfect fluid distribution of the source.
As is well-known, while preserving the requirement of spherical symmetry one can consider more general solutions that are sourced by anisotropic fluids (see for instance \cite{Harko:2002db} - \cite{Herrera:2007kz}). For such anisotropic fluids the radial pressure component $p_r$ is not equal to the components in the transverse directions, $p_t$. There are strong theoretical reasons to believe that in realistic relativistically covariant stellar models the pressures inside the star become anisotropic in the high density regimes \cite{Ruderman:1972aj}. Such anisotropies in the fluid distribution can arise for various reasons: a mixture of two fluid components \cite{Letelier:1980mxb}, the existence of a superfluid phase, the presence of a magnetic field, etc. (for a review see \cite{Herrera:1997plx} and references therein). Other non-trivial examples of anisotropic fluid distributions are provided by bosonic stars (see for instance \cite{Liebling:2012fv} and the references within), by traversable wormholes \cite{Bronnikov:2017kvq}, or by the so-called gravastars \cite{Cattoen:2005he}, which are systems where anisotropic pressures occur naturally. The pressure anisotropy can have significant effects on the structure and properties of stellar models \cite{Karmakar:2007fn}. Anisotropic fluid models of neutron stars could be used to model the so-called magnetars \cite{DT}, a class of neutron stars whose emissions are powered by the decay of their huge magnetic field. For a magnetar the magnetic field strength can reach values as high as $10^{11}$~T, while being even more intense inside the star. There are over $30$ magnetars cataloged by now \cite{Olausen:2013bpa} (for recent reviews of their properties see \cite{Esposito:2018gvp}, also \cite{Woods:2004kb}). This class of objects includes the soft gamma repeaters (SGRs) and the anomalous X-ray pulsars (AXPs).
Analytic non-perturbative solutions in GR describing anisotropic models of magnetars have been constructed in \cite{Yazadjiev:2011ks} and \cite{Stelea:2018cgm}. However, in the presence of very strong magnetic fields, in order to construct more realistic magnetar models one should consider an axially symmetric treatment of the source \cite{Negreiros:2018cjk}. In the absence of rotation, the most general line element describing such systems has the form:
\begin{eqnarray}
ds^2&=&-A(r,\theta)^2dt^2+B(r,\theta)^2dr^2+C(r,\theta)^2d\theta^2+D(r,\theta)^2d\varphi^2.
\label{initialm}
\end{eqnarray}
In general, this line element can be considered as a solution of Einstein equations\footnote{Note that we work using the natural units for which $G=c=1$.} $G_{\mu\nu}=8\pi T_{\mu\nu}^0$ sourced by an anisotropic fluid, which is described by a non-diagonal stress-energy tensor of the form:
\begin{eqnarray}
T_{\mu\nu}^0&=&\rho^0u_{\mu}^0u_{\nu}^0+p_r^0\chi_{\mu}^0\chi_{\nu}^0+p_{\theta}^0\xi_{\mu}^0\xi_{\nu}^0+p_{\varphi}^0\zeta_{\mu}^0\zeta_{\nu}^0+2p_{r\theta}^0\chi_{(\mu}^0\xi_{\nu)}^0,
\label{initialf}
\end{eqnarray}
where $\rho^0$ is the fluid density, $p_r^0$ is the radial pressure, while $p_{\theta}^0$, $p_{\varphi}^0$ and $p_{r\theta}^0$ are transverse components of the fluid pressure. Also $u_{\mu}^0=(-A, 0, 0, 0)$ is the $4$-velocity of the fluid, while $\chi_{\mu}^0=(0, B, 0, 0)$, $\xi_{\mu}^0=(0, 0, C, 0)$ and $\zeta_{\mu}^0=(0, 0, 0, D)$ are spacelike unit vectors in the radial and transverse directions.
The purpose of this paper is to show that starting from any solution (\ref{initialm}) sourced by the stress-energy tensor (\ref{initialf}) one can easily generate the corresponding solutions in Einstein-Maxwell theory that correspond either to an electrically charged metric or to a magnetized solution. In GR the electromagnetic field is described using the anti-symmetric Faraday tensor $F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}$, where $A_{\mu}$ is the vector potential of the electromagnetic field. The Maxwell equations are then written as:
\begin{eqnarray}
\nabla_{\nu}(\star F)^{\mu\nu}&=&0,~~~~\nabla_{\nu}F^{\mu\nu}=4\pi j^{\mu},
\label{maxwell}
\end{eqnarray}
where $\star F_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\gamma\delta}F^{\gamma\delta}$ is the Hodge-dual tensor, while $\epsilon_{\mu\nu\gamma\delta}$ is the Levi-Civita tensor. Also, $j^{\mu}$ is the $4$-current that sources the electromagnetic field. The electromagnetic stress-energy tensor, which enters the Einstein field equations is defined as:
\begin{eqnarray}
T_{\mu\nu}^{em}&=&\frac{1}{4\pi}\left(F_{\mu\gamma}F_{\nu}^{~\gamma}-\frac{1}{4}F_{\gamma\delta}F^{\gamma\delta}g_{\mu\nu}\right).
\end{eqnarray}
For an electrically charged solution, in absence of a magnetic field, the only non-zero component of the electromagnetic potential is $A_t$. On the other hand, the magnetic field of a magnetized star can have both toroidal and poloidal components. However, when one takes into account the toroidal components of the magnetic field then the spacetime geometry (\ref{initialm}) has to be modified and it will include other non-vanishing metric components \cite{gourgoulhon}. Therefore, in our work we shall assume that the toroidal components are zero and the magnetic field is purely poloidal.
The structure of this paper is as follows: in the next section we present the general electrically charged version of the metric in (\ref{initialm}). As an example we show how to obtain the charged Bowers and Liang solution \cite{Bowers}. In section $3$ we construct the general magnetized version of the metric (\ref{initialm}). The final section contains a summary of our work and avenues for further work.
\section{The electrically charged model}
Starting with the metric (\ref{initialm}), a solution of the Einstein equations sourced by an anisotropic fluid described by (\ref{initialf}), one can construct the following metric:
\begin{eqnarray}
ds^2&=&-\frac{A(r,\theta)^2}{\Lambda^2}dt^2+\Lambda^2\big[B(r,\theta)^2dr^2+C(r,\theta)^2d\theta^2+D(r,\theta)^2d\varphi^2\big],
\label{electricm}
\end{eqnarray}
where we defined $\Lambda=1-E_0^2A(r,\theta)^2$, with $E_0$ being a constant. This will be a solution of the Einstein-Maxwell-fluid equations:
\begin{eqnarray}
G_{\mu\nu}&=&8\pi T^{em}_{\mu\nu}+8\pi T_{\mu\nu}^{fluid}
\label{eqfinal}
\end{eqnarray}
together with the Maxwell equations (\ref{maxwell}) if the electromagnetic $4$-vector potential is $A_{\mu}=(A_t, 0, 0, 0)$ with
\begin{eqnarray}
A_t&=&\frac{E_0A(r,\theta)^2}{\Lambda},
\label{finalel}
\end{eqnarray}
while the fluid stress-energy tensor has the form:
\begin{eqnarray}
T_{\mu\nu}^{fluid}&=&(\rho +\sigma_e)u_{\mu}u_{\nu}+p_r\chi_{\mu}\chi_{\nu}+p_{\theta}\xi_{\mu}\xi_{\nu}+p_{\varphi}\zeta_{\mu}\zeta_{\nu}+2p_{r\theta}\chi_{(\mu}\xi_{\nu)}.
\label{finalf}
\end{eqnarray}
Here we defined
\begin{eqnarray}
\rho&=&\frac{\rho^0}{\Lambda^2},~~~p_r=\frac{p_r^0}{\Lambda^2},~~~p_{\theta}=\frac{p_{\theta}^0}{\Lambda^2},~~~p_{\varphi}=\frac{p_{\varphi}^0}{\Lambda^2},~~~p_{r\theta}=\frac{p_{r\theta}^0}{\Lambda^2}.
\end{eqnarray}
Note that $u_{\mu}=\left(-\frac{A}{\Lambda}, 0, 0, 0\right)$ is the $4$-velocity of the fluid, while $\chi_{\mu}=(0, B\Lambda, 0, 0)$, $\xi_{\mu}=(0, 0, C\Lambda, 0)$ and $\zeta_{\mu}=(0, 0, 0, D\Lambda)$ are respectively spacelike unit vectors in the radial and the transverse angular directions.
Finally, the charge density is:
\begin{eqnarray}
\sigma_e&=&2(\rho+p_r+p_{\theta}+p_{\varphi})\frac{E_0^2A(r,\theta)^2}{\Lambda}
\end{eqnarray}
and the electric current is $j_{\mu}=(j_t, 0, 0, 0)$ where:
\begin{eqnarray}
j_t&=&-2(\rho+p_r+p_{\theta}+p_{\varphi})\frac{E_0A(r,\theta)^2}{\Lambda^2}.
\end{eqnarray}
We explicitly checked using Maple \cite{Maple} that the fields given in (\ref{electricm}), (\ref{finalel}), (\ref{finalf}) are an exact solution of the coupled Einstein-Maxwell-fluid system if (\ref{initialm}) and (\ref{initialf}) is an exact solution of the Einstein-fluid equations of motion.
\subsection{The electrically charged version of the Bowers-Liang solution}
As an example of this solution-generating technique, let us consider the charged version of the anisotropic Bowers-Liang solution. This solution, found by Bowers and Liang \cite{Bowers}, corresponds to an anisotropic fluid with a homogeneous density distribution $\rho=\rho^0=constant$. In their work they considered a spherically symmetric relativistic matter distribution and studied the behavior of such systems by incorporating the pressure anisotropy effects into the equation of hydrostatic equilibrium. Their solution is given by (\ref{initialm}) where:
\begin{eqnarray}
A(r,\theta)^2&=&\bigg[\frac{3\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}-\left(1-\frac{2m(r)}{r}\right)^{\frac{h}{2}}}{2}\bigg]^{\frac{2}{h}},~~~~B(r,\theta)^2=\frac{1}{1-\frac{2m(r)}{r}},~~~ C(r,\theta)=r, \\
D(r,\theta)&=&r\sin\theta,~~~\rho^0=\frac{3M}{4\pi R^3},~~~p_r^0=\rho^0\frac{\left(1-\frac{2m(r)}{r}\right)^{\frac{h}{2}}-\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}}{3\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}-\left(1-\frac{2m(r)}{r}\right)^{\frac{h}{2}}},\nonumber\\
\Delta^0&=&p_t^0-p_r^0=\frac{4\pi}{3}Cr^2\frac{(\rho^0+p_r^0)(\rho^0+3p_r^0)}{1-\frac{2m(r)}{r}},\nonumber
\end{eqnarray}
where $h=1-2C$, $m(r)=\frac{4\pi}{3}r^3\rho^0$ and $C$ is the anisotropy parameter. Note that for this solution $p_{\theta}^0=p_{\varphi}^0=p_t^0$, while $p_{r\theta}^0=0$ in (\ref{initialf}).
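For concreteness, the radial pressure profile $p_r^0(r)$ above can be evaluated numerically. In the sketch below (ours; the values of $M$, $R$ and $C$ are arbitrary sample choices, in geometric units $G=c=1$ and well below the critical compactness), the pressure decreases monotonically outward and vanishes at the surface $r=R$:

```python
import numpy as np

# Bowers-Liang radial pressure profile for a homogeneous density distribution.
# Sample values (ours): compactness 2M/R = 0.3, anisotropy parameter C = 0.25.
M, R, C_aniso = 0.15, 1.0, 0.25
h = 1.0 - 2.0 * C_aniso
rho0 = 3.0 * M / (4.0 * np.pi * R**3)          # homogeneous density

def m(r):
    """Mass function m(r) = (4 pi / 3) r^3 rho0, so that m(R) = M."""
    return 4.0 * np.pi * r**3 * rho0 / 3.0

def p_r0(r):
    """Radial pressure of the Bowers-Liang solution."""
    xr = (1.0 - 2.0 * m(r) / r) ** (h / 2.0)
    xR = (1.0 - 2.0 * M / R) ** (h / 2.0)
    return rho0 * (xr - xR) / (3.0 * xR - xr)

r = np.linspace(1e-6, R, 1000)
p = p_r0(r)
# positive central pressure, monotonic decrease, zero surface pressure
assert p[0] > 0
assert np.all(np.diff(p) <= 0)
assert abs(p[-1]) < 1e-10
```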
Then, using the results from section $2$ the electrically charged Bowers-Liang solution will simply be given by (\ref{electricm}) supplemented by (\ref{finalel}) and (\ref{finalf}). The final geometry is still spherically symmetric, while the anisotropic fluid source has the pressures different in radial and transverse directions. Note also that for $C=0$ one obtains the electrically charged interior Schwarzschild solution discussed in \cite{Yazadjiev:2004bg} in a slightly different form, in absence of the dilaton field.
Since at the origin $r=0$ one has
\begin{eqnarray}
\Lambda_0=1-E_0^2\left(\frac{3\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}-1}{2}\right)^{\frac{2}{h}},
\end{eqnarray}
the radial pressure at the origin of the electrically charged Bowers-Liang solution becomes:
\begin{eqnarray}
p_r(0)&=&\frac{\rho^0}{\Lambda_0^2}\frac{1-\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}}{3\left(1-\frac{2M}{R}\right)^{\frac{h}{2}}-1},
\end{eqnarray}
and the critical value of the quantity $\frac{2M}{R}$ for which the central pressure becomes infinite is\footnote{For this value $\Lambda_0\rightarrow 1$, there is no physical critical value of $\frac{2M}{R}$ for which $\Lambda_0=0$.}:
\begin{eqnarray}
\frac{2M}{R}|_{cr}&=&1-\left(\frac{1}{3}\right)^{\frac{2}{h}}.
\end{eqnarray}
The critical value of the ratio $\frac{2M}{R}$ is the same as the critical value of the original Bowers and Liang solution.
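A quick numerical illustration (ours): for $C=0$ ($h=1$), i.e. the isotropic interior Schwarzschild solution, the formula recovers the classical constant-density bound $2M/R=8/9$, while positive anisotropy ($C>0$, $h<1$) pushes the critical compactness toward $1$:

```python
# Critical compactness 2M/R at which the central pressure diverges,
# as a function of h = 1 - 2C (our quick sanity check).
def crit(h):
    return 1.0 - (1.0 / 3.0) ** (2.0 / h)

# h = 1 (C = 0): the classical constant-density bound 2M/R = 8/9
assert abs(crit(1.0) - 8.0 / 9.0) < 1e-12
# h = 0.5 (C = 0.25): a larger critical compactness, 1 - 1/81
assert crit(0.5) > crit(1.0)
```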
If one takes $h=0$ in the electrically charged Bowers-Liang solution one obtains the charged version of the so-called Florides solution \cite{florides}. It corresponds to an anisotropic object with zero radial pressure $p_r=0$, which is sustained only by tangential stresses.
\section{The magnetized solution}
Similarly to the electrically charged case, given a general solution (\ref{initialm}) - (\ref{initialf}) of the Einstein-fluid field equations, one can write down directly the corresponding magnetized solution in the following form:
\begin{eqnarray}
\label{finalmagm}
ds^2&=&\Lambda^2\big[-A(r,\theta)^2dt^2+B(r,\theta)^2dr^2+C(r,\theta)^2d\theta^2\big]+\frac{D(r,\theta)^2}{\Lambda^2}d\varphi^2,\\
A_{\varphi}&=&\frac{B_0D(r,\theta)^2}{\Lambda},~~~\Lambda=1+B_0^2D(r,\theta)^2,\nonumber
\end{eqnarray}
where the stress-energy tensor of the anisotropic fluid is given by:
\begin{eqnarray}
T_{\mu\nu}^{fluid}&=&\rho u_{\mu}u_{\nu}+p_r\chi_{\mu}\chi_{\nu}+p_{\theta}\xi_{\mu}\xi_{\nu}+(p_{\varphi}+\sigma_m)\zeta_{\mu}\zeta_{\nu}+2p_{r\theta}\chi_{(\mu}\xi_{\nu)},
\label{finalfmag}
\end{eqnarray}
with
\begin{eqnarray}
\rho&=&\frac{\rho^0}{\Lambda^2},~~~p_r=\frac{p_r^0}{\Lambda^2},~~~p_{\theta}=\frac{p_{\theta}^0}{\Lambda^2},~~~p_{\varphi}=\frac{p_{\varphi}^0}{\Lambda^2},~~~p_{r\theta}=\frac{p_{r\theta}^0}{\Lambda^2},
\end{eqnarray}
while:
\begin{eqnarray}
\sigma_m&=&-2(\rho-p_r-p_{\theta}+p_{\varphi})\frac{B_0^2D(r,\theta)^2}{\Lambda}
\end{eqnarray}
and the only non-vanishing component of the $4$-current $j_{\mu}$ is:
\begin{eqnarray}
j_{\varphi}&=&2(\rho-p_r-p_{\theta}+p_{\varphi})\frac{B_0D(r,\theta)^2}{\Lambda^2}.
\end{eqnarray}
Finally, $u_{\mu}=\left(-A\Lambda, 0, 0, 0\right)$ is the $4$-velocity of the fluid, while $\chi_{\mu}=(0, B\Lambda, 0, 0)$, $\xi_{\mu}=(0, 0, C\Lambda, 0)$ and $\zeta_{\mu}=(0, 0, 0, \frac{D}{\Lambda})$ are respectively spacelike unit vectors in the radial and the transverse angular directions.
This solution is a direct generalization of the magnetized solutions considered in \cite{Stelea:2018cgm}. One should note that we explicitly checked using Maple \cite{Maple} that (\ref{finalmagm}) and (\ref{finalfmag}) represent a full exact solution of the Einstein-Maxwell-fluid equations (\ref{maxwell}) - (\ref{eqfinal}).
\section{Conclusions}
In this work we presented a simple solution-generating technique that enabled us to construct the electrically charged or the magnetized counterpart of any axially symmetric geometry (\ref{initialm}), sourced by an anisotropic fluid described by a non-diagonal anisotropic stress-energy tensor (\ref{initialf}). As an example, we showed how to derive the charged Bowers and Liang solution. Note that using our method one should be able to construct the charged/magnetized version of every spherically symmetric fluid solution. However, our solution-generating technique can also be successfully applied to more general interior solutions with axial symmetry, as found for instance in \cite{Herrera:2013hm}, \cite{Hernandez-Pastora:2016ctg}.
As avenues for further work, the magnetized solution presented in our paper should be suitable for constructing more realistic models of magnetars, by adding slow rotation in a perturbative way, along the lines of \cite{Hartle:1967he}, \cite{Benhar:2005gi}. Another interesting extension of the present work would involve a study of the effect of the star's anisotropy on the propagation of various fields in this background, along the lines of the studies presented in \cite{Dariescu:2017ima}, \cite{Dariescu:2018dyy}. Work on these matters is in progress and will be presented elsewhere.
\vspace{10pt}
{\Large Acknowledgements}
This work was supported by a grant of the Ministry of Research and Innovation, CNCS - UEFISCDI, project number PN-III-P4-ID-PCE-2016-0131, within PNCDI III.
\section{Introduction}
\subsection{About the problem}
\noindent A hyperbolic cone-structure on an oriented surface $S$ is a geometric structure locally modeled on the hyperbolic plane, with its group of orientation-preserving isometries $\mathrm{PSL}_2\R$. Any hyperbolic structure induces in a natural way a holonomy representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$, that encodes geometric data about the structure; but what can we say about the reverse problem? More precisely:
\begin{center}
\emph{which representations of a surface group into \textnormal{PSL}$_2\mathbb{R}$ are holonomy representations?}
\end{center}
\noindent The reverse problem of recovering a hyperbolic cone-structure from a given representation $\rho$ is arduous and not always possible. In \cite{TA}, Tan gives an example of a representation that does not arise as the holonomy of a hyperbolic cone-structure (see also \ref{me} below). For this reason, we will say that a representation $\rho$ is \emph{geometrizable by a hyperbolic cone-structure} (or briefly \emph{geometrizable}) if it arises as the holonomy of a hyperbolic cone-structure on $S$. For a closed surface $S$ with $\chi(S) < 0$, every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ determines an Euler number $\eu\rho$ (we discuss the Euler number in more detail below, see \ref{s3}). The Euler number $\eu\rho$ satisfies the so-called Milnor-Wood inequality $|\eu\rho|\le -\chi(S)$, and it parametrizes the connected components of the $\mathrm{PSL}_2\R-$character variety $\mathcal{X}(S)$. In \cite{GO88}, Goldman showed that every representation with $|\eu\rho|=-\chi(S)$ arises as the holonomy of a complete hyperbolic structure on $S$. For the other values of $\eu\rho$, it is not yet clear which representations are holonomy representations. As far as we know, it is still an open question whether the set of holonomy representations is dense among representations of Euler class $|k|<-\chi(S)$.\\
\noindent In \cite{FA}, we were interested in purely hyperbolic representations, \emph{i.e.} representations whose image consists only of hyperbolic elements other than the identity. In this work we consider another class of representations of major interest, namely \emph{almost extremal representations}, \emph{i.e.} representations such that $\eu\rho=\pm\big(\chi(S)+1\big)$ (hence the name). We may immediately rule out elementary representations, because they have Euler number zero (see \cite{GO88}). For this reason, in the sequel, we will consider only non-elementary representations.\\
It was conjectured that every almost extremal representation arises as the holonomy of a hyperbolic cone-structure with one cone point of angle $4\pi$. Mathews addressed this problem in the series of papers \cite{MA1},\cite{MA2},\cite{MA3}, extracted from his Honours dissertation \cite{MA4}. In \cite{MA2}, he proves the following theorem (see also \ref{T1} in section \ref{s6} below).
\begin{thmnn}[Mathews 2011]
Let $S$ be a closed surface of genus $g\ge 2$. Then almost every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm\big(\chi(S)+1\big)$, which sends a non-separating curve $\gamma$ on $S$ to an elliptic element, is the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{thmnn}
\noindent Our work starts with this known result. Since a measure can be introduced on the character variety, as we will describe below (see \ref{ss61}), the statement above makes sense. Here we will show the following stronger result.\\
\noindent \textbf{Theorem \ref{mainthm}:} \emph{Let $S$ be a closed surface of genus $g\ge2$. Then every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm \big(\chi(S)+1\big)$, which sends a non-separating simple curve $\gamma$ on $S$ to a non-hyperbolic element is the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.}\\
\noindent By this theorem, the problem of geometrizing almost extremal representations is reduced to finding a simple closed curve with non-hyperbolic holonomy. So far, we do not know under which conditions (if any) a non-Fuchsian representation (not necessarily almost extremal) sends a simple closed curve to a non-hyperbolic element. This problem is known in the literature as the Bowditch question or Bowditch conjecture.\\
\noindent By recent works \cite{MW2} and \cite{MW} of March\'e and Wolff, the Bowditch question is known to be true in the genus two case. In particular, they show that every almost extremal representation (\emph{i.e.} with $\eu\rho=\pm1$) sends a simple curve to a non-hyperbolic element (see \cite[Theorem 1.4]{MW}). However, we do not know a priori whether such a curve is separating or not. In \cite{MA2}, Mathews also showed the following result, very particular to the genus $2$ case.
\begin{thmnn}[Mathews 2011]\label{T2}
Let $S$ be a closed surface of genus $2$, and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm1$. Suppose $\rho$ sends a separating curve $\gamma$ on $S$ to a non-hyperbolic element. Then $\rho$ arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{thmnn}
\noindent Combining the main theorem \ref{mainthm} with \ref{T2} we will derive the following corollary.\\
\noindent \textbf{Corollary \ref{maincor}:} \emph{Let $S$ be a closed surface of genus $2$. Then any representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm1$ is geometrizable by a hyperbolic cone-structure with one cone point of angle $4\pi$.}\\
\noindent Our strategy relies on the existence of a simple closed curve with non-hyperbolic holonomy, but we do not know whether such a curve exists in general. The following question naturally arises.
\begin{qs}
For a general surface, does every representation $\rho$ with Euler number $\eu\rho=\pm(\chi(S) + 1)$ arise as the holonomy of a hyperbolic cone-structure?
\end{qs}
\noindent Recently, during a conversation with the author, Bertrand Deroin announced his proof, in collaboration with Nicolas Tolozan, of the fact that every representation of the fundamental group of a closed and oriented genus $g$ surface with Euler number $\eu\rho=\pm \big(\chi(S)+1\big)$ arises as the holonomy of a hyperbolic cone-structure with a single cone point of angle $4\pi$. \\
\subsection{Structure of the paper} This paper is organized as follows. Section \ref{s2} contains the necessary background material in order to tackle the main parts of this work. This material includes, in particular, the basic definitions about hyperbolic cone-structures and holonomy representations. We discuss the geometry of hyperbolic transformations, the Lie groups $\mathrm{PSL}_2\R$ and $\widetilde{\mathrm{PSL}_2\R}$ and the relationship between traces and commutators.\\
\noindent In section \ref{s3}, we discuss the Euler class, giving both the geometrical and the algebraic definition. Section \ref{s4} contains some generalities about the character variety and, in more detail, the character variety of the punctured torus. We discuss virtually abelian representations and their characterization, and the action of the mapping class group on each stratum of the character variety of the punctured torus. Finally, in section \ref{s6} we prove the main theorem \ref{mainthm} and the corollary \ref{maincor}. In particular, we give a brief description of the character variety of a closed surface of genus $g\ge2$ in \ref{ss61}, and in paragraphs \ref{ss63} and \ref{ss64} we explain why we can remove the ``almost every'' condition from \ref{T1}. Finally, subsections \ref{ss65} and \ref{ss66} contain the proofs of \ref{mainthm} and \ref{maincor} respectively. At the end of the work, we have added an appendix about the flexibility of hyperbolic cone-structures. Unlike the Fuchsian case, there is no bijective correspondence between hyperbolic cone-structures and holonomy representations. More precisely, the same representation $\rho$ may arise as the holonomy of uncountably many non-isomorphic cone-structures on $S$.\\
\noindent \textbf{Acknowledgments.} The main parts of this work were achieved during my visiting period in Heidelberg. I would like to thank Anna Wienhard for her hospitality, and Daniele Alessandrini for useful comments and suggestions about this work. I would like to thank my advisor Stefano Francaviglia for introducing me to this theory and for his constant encouragement. His advice and suggestions have been highly valuable. Finally, I would also like to thank Bertrand Deroin, Maxime Wolff and Julien March\'e for useful comments and remarks about this work. \\
\section{Some hyperbolic geometry}\label{s2}
\noindent Let $S$ be a closed, connected and orientable surface. We will denote by $\mathbb{H}^2$ the hyperbolic plane and by $\mathrm{PSL}_2\R$ its group of isometries acting by M\"obius transformations
$$\mathrm{PSL}_2\R \times \mathbb{H}^2 \to \mathbb{H}^2, \quad \left(\left(\begin{array}{cc}
a & b\\ c & d \\ \end{array} \right), z\right) \mapsto \dfrac{az+b}{cz+d}.$$
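As a quick numerical illustration (ours, not part of the paper), the action above can be checked in a few lines; the matrix and the point below are arbitrary examples, and the assertion verifies the standard identity $\operatorname{Im}(g\cdot z)=\operatorname{Im}(z)/|cz+d|^2$, which shows that the upper half-plane is preserved.

```python
# Illustrative sketch: the Moebius action of a determinant-one real matrix
# on the upper half-plane H^2 = {Im z > 0}.

def mobius(m, z):
    """Apply m = ((a, b), (c, d)) with a*d - b*c = 1 to z via (az+b)/(cz+d)."""
    (a, b), (c, d) = m
    return (a * z + b) / (c * z + d)

g = ((2.0, 1.0), (1.0, 1.0))   # an arbitrary example with det = 2 - 1 = 1
z = 0.5 + 2.0j                 # a point of the upper half-plane
w = mobius(g, z)

# Im(g.z) = Im(z) / |cz + d|^2, so the upper half-plane is preserved:
assert w.imag > 0
assert abs(w.imag - z.imag / abs(1.0 * z + 1.0) ** 2) < 1e-12
```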
\subsection{Hyperbolic cone-structures} We are going to define the main structure we are interested in, that is, \emph{hyperbolic cone-structures}. For our purposes, we only need to define hyperbolic cone-structures in dimension $2$, though the following definition has obvious generalizations to higher dimensions and to other types of geometries. The curious reader may consult \cite{CHK} for further details.
\begin{defn}[Hyperbolic cone-structure] A \emph{hyperbolic cone-structure} $\sigma$ on a $2$-manifold $S$ is the datum of a triangulation of $S$ and a metric, such that
\begin{itemize}
\item[1] the link of each simplex is piecewise linearly homeomorphic to a circle, and
\item[2] the restriction of the metric to each simplex is isometric to a geodesic simplex in hyperbolic space.
\end{itemize}
\end{defn}
\noindent Hence a $2-$dimensional hyperbolic cone-structure is a surface obtained by piecing together geodesic triangles
in $\mathbb{H}^2$. The definition clearly includes open surfaces and surfaces with possibly geodesic boundary. \\
\noindent Any interior point $p$ of $S$ has a neighborhood locally isometric to $\mathbb{H}^2$, except possibly at some vertices of the triangulation, around which the angles sum to $\theta\neq 2\pi$. Such points are called \emph{cone points}. The neighborhood of a cone point is isometric to a wedge of angle $\theta$ in the hyperbolic plane, with sides glued (that is, a cone). The angle $\theta$ is called the cone angle at $p$ and, letting $\theta = 2(k+1)\pi$, we define the number $k$ as \emph{the order} \textsf{ord}$(p)$ of the cone point at $p$. If $S$ has boundary then this boundary will be piecewise geodesic. There may be vertices on the boundary around which the angles sum to $\theta\neq \pi$. Such points are called \emph{corner points} and the value of $\theta$ is the corner angle. Letting $\theta = \pi(1+2s)$, then $s$ is the order of the corner point. In such a case a corner point has a neighborhood isometric to a wedge of angle $\theta$ in $\mathbb{H}^2$ (without sides glued). Singular points of $\sigma$ on $S$ are cone or corner points, whereas all other points are called \emph{regular points}. Note that a cone angle may be any positive real number; in particular, it can be more than $2\pi$ for interior points or greater than $\pi$ for boundary points. In the sequel, we will only consider closed surfaces whose cone points have order $k\in\Bbb N$. \\
\noindent We note that a complete hyperbolic structure $\sigma_0$ on $S$ can be seen as a hyperbolic cone-structure where all points are regular. Cone points may be considered as points on which the curvature is concentrated; however, topology imposes limits on the allowable cone angles of a $2-$dimensional hyperbolic cone-structure, which can be deduced from the Gau\ss-Bonnet theorem. Precisely, we have the following result.
\begin{prop}
Let $S$ be a compact, connected and orientable surface. Any hyperbolic cone-structure $\sigma$ on $S$ with cone and corner points $p_1,\dots, p_n$ having orders $k_1,\dots,k_n$ respectively satisfies the following relation
\begin{equation}
\label{gb}
\chi(S)+\sum_{i=1}^n k_i <0, \text{ where } k_i=\textsf{\emph{ord}}(p_i).
\end{equation}
Indeed, $2\pi$ times the left-hand side equals the opposite of the hyperbolic area of $S$.
\end{prop}
\proof
By definition, $\sigma$ is the datum of a triangulation $\tau$ such that any simplex is isometric to a geodesic triangle on the hyperbolic plane. In particular cone and corner points are vertices of $\tau$.\\
Suppose $S$ is closed. Multiplying both sides of the relation \eqref{gb} by $2\pi$, it can be rewritten in the following way
\[ 2\pi\chi(S)-\sum_{i=1}^n \big(2\pi-\theta_{i}\big)<0.
\] Around any vertex $p$ the cone angle could be:
\[ \theta_p=
\begin{sy}
2\pi \quad \text{ if }p\text{ is regular},\\
\theta_i \quad \text{ if }p \text{ is a cone point }.
\end{sy}
\]
The Euler characteristic of $S$ can be computed by the well-known formula $\chi(S)=V-E+F$, where $V,E,F$ are the numbers of vertices, edges and faces respectively. Since $\tau$ is a triangulation, $2E=3F$, and the formula becomes $2\chi(S)=2V-F$. Since any simplex of $\tau$ is a geodesic triangle, and the hyperbolic area of a triangle with angles $\alpha,\beta,\gamma$ is $\pi-\alpha-\beta-\gamma$, we may deduce that $\pi F>\sum_{p} \theta_p$, where the sum runs over all vertices of $\tau$. Hence, as regular vertices contribute zero to the last sum,
\[ 2\pi\chi(S)=2\pi V-\pi F<2\pi V-\sum_{p} \theta_p=\sum_{i=1}^n \big(2\pi-\theta_i\big).
\] If $S$ has geodesic boundary we double $S$ (identifying corner points) to get a closed surface $S'$. Notice that the previous argument applies word-by-word to $S'$, even if some points are neither regular nor cone points of angle $2k\pi$ for some $k$. Hence
\[ 2\pi\chi(S')-\sum_{q\in S'} \big(2\pi-\theta_q\big)<0.
\] By symmetry we get the desired result. \qedhere
\endproof
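As a concrete instance of \eqref{gb}, added here for illustration, consider the structures appearing in the main theorem: a closed surface of genus $g\ge2$ with a single cone point of angle $4\pi$, i.e. of order $k=1$. The constraint holds, and the Gau\ss-Bonnet equality determines the area:

```latex
\[
  \chi(S)+k=(2-2g)+1=3-2g<0 \quad\text{for } g\ge 2,
  \qquad
  \mathrm{Area}(S,\sigma)=-2\pi\bigl(\chi(S)+k\bigr)=2\pi(2g-3).
\]
```

For $g=2$ this gives area $2\pi$.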
\subsection{Holonomy representation} Let $\widetilde{S}$ be the universal cover of $S$ and let $\pi:\widetilde{S}\longrightarrow S$ be the covering projection. A hyperbolic cone-structure $\sigma$ on $S$ can be lifted to a hyperbolic cone-structure $\widetilde{\sigma}$ on the universal cover $\widetilde{S}$.
\begin{defn} Let $\sigma$ be a hyperbolic cone-structure on $S$ and $\widetilde{\sigma}$ the lifted hyperbolic cone-structure on $\widetilde{S}$. A \emph{developing map} $\textsf{dev}_\sigma:\widetilde{S}\longrightarrow \mathbb{H}^2$ for $\sigma$ is a smooth orientation-preserving map, with isolated critical points and such that its restriction to any simplex on $\widetilde{S}$ is an isometry.
\end{defn}
\noindent Developing maps always exist and are essentially unique; that is, two developing maps for a given structure $\sigma$ differ by post-composition with a M\"obius transformation. Explicitly, a developing map can be constructed starting from a geodesic simplex $\widetilde{s_0}$ of $\widetilde{\sigma}$. Since it is isometric to a geodesic triangle $T_0\subset \mathbb{H}^2$, there exists an isometry $\varphi_0: \widetilde{s_0}\longrightarrow T_0\subset\mathbb{H}^2$. Let $\widetilde{s_1}$ be another simplex which is adjacent to $\widetilde{s_0}$; that is, $\widetilde{s_1}$ shares an edge with $\widetilde{s_0}$. Then the isometry $\varphi_1: \widetilde{s_1}\longrightarrow T_1\subset\mathbb{H}^2$ may be adjusted by a M\"obius transformation so as to agree with $\varphi_0$ on the overlap, gluing to give a map $\widetilde{s_0}\cup\widetilde{s_1}\longrightarrow \mathbb{H}^2$. Iterating this procedure, in the limit we obtain a developing map for $\sigma$. \\
\noindent Basically, any developing map gives a way to read the geometry of $\sigma$ on the hyperbolic plane; hence post-composing a developing map $\textsf{dev}_\sigma$ for $\sigma$ with any element of the group $\mathrm{PSL}_2\R$ (the group of orientation-preserving isometries) does not change the information encoded in the developed image.
\begin{rmk}
For hyperbolic cone-structures, the developing map $\textsf{dev}$ turns out to be a branched map. Branch points are given by cone points of the hyperbolic cone-structure $\widetilde{\sigma}$ on $\widetilde{S}$. Around them, the developing map fails to be a local homeomorphism and the local degree coincides with the order of the cone point.
\end{rmk}
\noindent The developing map $\textsf{dev}_\sigma:\widetilde{S}\longrightarrow \mathbb{H}^2$ of a hyperbolic cone-structure $\sigma$ also has an equivariance property with respect to the action of $\pi_1S$ on $\widetilde{S}$. For any element $\gamma$, the composition map $\textsf{dev}_\sigma\circ \gamma$ is another developing map for $\sigma$. Thus there exists an element $g\in\mathrm{PSL}_2\R$ such that
\[ g\circ\textsf{dev}_\sigma =\textsf{dev}_\sigma\circ \gamma
\] The map $\gamma\longmapsto g$ defines a homomorphism $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$, which is called the \emph{holonomy representation}. The representation $\rho$ depends on the choice of the developing map; however, different choices produce conjugate representations. Hence it makes sense to consider the conjugacy class of $\rho$, which is usually called the \emph{holonomy of the structure}.
\begin{defn}
Let $S$ be a closed surface of genus $g\ge 2$. A representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ is said to be \emph{Fuchsian} if it arises as the holonomy of a complete hyperbolic structure on $S$. In particular, these representations turn out to be faithful and discrete.
\end{defn}
\noindent Goldman shows in \cite{GO88} that any Fuchsian representation arises as the holonomy of a unique complete hyperbolic structure, that is, a hyperbolic structure without cone points. However, the picture changes completely as soon as we consider non-complete hyperbolic structures.\\
\noindent Although any hyperbolic cone-structure $\sigma$ on a $2-$manifold $S$ induces a holonomy representation by standard arguments, the reverse problem of recovering a hyperbolic geometry starting from a given representation $\rho$ is much more arduous and not always possible, as shown in the following example.
\begin{ex}
\label{me} The following example is a generalization of Tan's counterexample (see \cite{TA}); which was given for a surface of genus $3$.\\
\noindent Let $S$ be a genus $g$ surface, obtained by attaching $h$ handles to a surface of genus $g-h$, where $g-h\ge2$. We define a representation $\rho$ in the following way: $\rho$ is discrete and faithful on the original surface, and trivial on each handle we have attached. In this way $\rho(\pi_1S)$ is a discrete subgroup of $\mathrm{PSL}_2\R$ and the quotient $\mathbb{H}^2/\rho(\pi_1S)$ is a genus $g-h$ surface. However, $\rho$ cannot be the holonomy of a hyperbolic cone-structure on $S$.\\
\noindent Suppose by contradiction that $S$ admits a hyperbolic cone-structure $\sigma$ with holonomy $\rho$, and consider its developing map $\textsf{dev}_\sigma:\widetilde{S}\longrightarrow \mathbb{H}^2$.
Since $\textsf{dev}_\sigma$ is a $\big(\pi_1S,\rho(\pi_1S)\big)-$equivariant map, it descends to a branched map
\[ f:S\longrightarrow \ql{\rho\big(\pi_1S\big)}{\mathbb{H}^2}
\] Consider now the induced map on fundamental groups. This is the same map induced by the map that pinches each attached handle to a point, hence the map $f$ is homotopic to a pinching map of degree one. Since any branched cover of degree one is just a homeomorphism, we have found a contradiction: $\rho$ cannot be the holonomy of a hyperbolic cone-structure.
\end{ex}
\noindent Hence the following definition makes sense.
\begin{defn}
A representation $\rho:\pi_1S \longrightarrow \mathrm{PSL}_2\R$ is said to be \emph{geometrizable by a hyperbolic cone-structure} if it arises as the holonomy of a hyperbolic cone-structure $\sigma$ on $S$. Equivalently, a representation is geometrizable if there exists a possibly branched developing map $\textsf{dev}:\widetilde{S}\longrightarrow \mathbb{H}^2$ which is $\rho$-equivariant.
\end{defn}
\noindent Of course, Fuchsian representations are geometrizable by a unique complete hyperbolic structure, whereas elementary representations are never geometrizable by a hyperbolic cone-structure (see \ref{R22}).
\subsection{Geometry of hyperbolic transformations} In the sequel we shall need to consider the effect of composing several isometries, so for the remainder of this section we collect some lemmata about commutators. We begin with the following lemma by Goldman (see \cite[Lemma 3.4.5]{GO03}).
\begin{lem}\label{L0125}
Let $g,h\in\mathrm{PSL}_2\R$ be two isometries. Then the following are equivalent
\begin{itemize}
\item $g,h$ are hyperbolic and their axes cross,
\item \emph{Tr}$[g,h]<2$.
\end{itemize}
\end{lem}
\begin{rmk}
Note that although $g,h$ are only defined up to sign in $\mathrm{SL}_2\R$, the commutator is a well-defined element of $\mathrm{SL}_2\R$, and has a well-defined trace (see also \ref{ss25}).
\end{rmk}
\begin{proof}
Assuming Tr$[g,h]<2$, we first show that both $g$ and $h$ must be hyperbolic. If $g$ were elliptic, up to conjugation we may assume that $g\in \text{SO}_2\mathbb{R}$ is a rotation of angle $\theta$ and, writing $h=\left(\begin{array}{cc} a & b\\ c & d \\ \end{array} \right)$, a straightforward computation shows that Tr$[g,h]=2 +\sin^2\theta\, (a^2+b^2+c^2+d^2-2)\ge 2$, since $a^2+b^2+c^2+d^2-2=(a-d)^2+(b+c)^2\ge0$ when $ad-bc=1$. The same holds if $g$ is parabolic, so $g$ must be hyperbolic. The same argument shows that $h$ must be hyperbolic as well.\\
The second step is to show that Tr$[g,h]<2$ if and only if $\textsf{Axis}(g)$ and $\textsf{Axis}(h)$ cross. Up to conjugation we may assume that the fixed points of $g$ are $\pm 1$ and that the fixed points of $h$ are $r,\infty$. Then we write Tr$[g,h]$ as a function of $r$, and it is easy to see that Tr$[g,h]<2$ if and only if $-1<r<1$.
\end{proof}
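The normal forms used in the proof can also be checked numerically. The following sketch (ours, not from the paper) realizes $g$ with fixed points $\pm1$ and $h$ with fixed points $r,\infty$ as determinant-one matrices, and verifies that $\mathrm{Tr}[g,h]<2$ exactly when $-1<r<1$, for a couple of sample values.

```python
# Numerical illustration of the criterion: Tr[g,h] < 2 exactly when the
# axes of the hyperbolic elements g and h cross.  Normal forms as in the
# proof: g fixes +-1 on the boundary, h fixes r and infinity; det = 1.
import math

def mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):
    # inverse of a determinant-one 2x2 matrix
    return [[m[1][1], -m[0][1]], [-m[1][0], m[0][0]]]

def comm_trace(g, h):
    # trace of the commutator [g,h] = g h g^{-1} h^{-1}
    c = mul(mul(g, h), mul(inv(g), inv(h)))
    return c[0][0] + c[1][1]

def hyp_axis_pm1(t):
    # hyperbolic element with axis from -1 to +1, translation length 2t
    return [[math.cosh(t), math.sinh(t)], [math.sinh(t), math.cosh(t)]]

def hyp_axis_r_inf(lam, r):
    # hyperbolic element z -> lam*(z - r) + r, fixed points r and infinity
    s = math.sqrt(lam)
    return [[s, r * (1.0 - lam) / s], [0.0, 1.0 / s]]

g = hyp_axis_pm1(1.0)
assert comm_trace(g, hyp_axis_r_inf(4.0, 0.0)) < 2.0  # -1 < r < 1: axes cross
assert comm_trace(g, hyp_axis_r_inf(4.0, 3.0)) > 2.0  # |r| > 1: axes disjoint
```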
\noindent Before considering the other cases, we list some lemmata about the fixed point(s) of a commutator when $g,h$ are hyperbolic and their axes cross. We denote by $g^+$ and $g^-$ the attracting and repelling fixed points of a hyperbolic transformation $g$.
\begin{lem}\label{L126}
Suppose $g,h$ are hyperbolic and $\emph{Tr}[g,h]<-2$, so that their axes intersect and $[g,h]$ is hyperbolic. Then $\textsf{\emph{Axis}}[g,h]$ does not intersect the axis of $g$ or $h$. Moreover, the fixed points of $[g,h]$ lie on the segment of the circle at infinity between $g^+$ and $h^+$: $[g,h]^+$ is closer to $g^+$ and $[g,h]^-$ is closer to $h^+$.
\end{lem}
\noindent We have also two similar results when $[g,h]$ is parabolic or elliptic.
\begin{lem}\label{L127}
Suppose $g,h$ are hyperbolic and $\emph{Tr}[g,h]=-2$, so that $[g,h]$ is parabolic. Then $\textsf{Fix}[g,h]$ lies on the segment of the circle at infinity between $g^+$ and $h^+$. The sense of rotation is clockwise if the segment from $g^+$ to $h^+$ has the clockwise orientation, otherwise the sense is counterclockwise.
\end{lem}
\begin{lem}\label{L128}
Suppose $g,h$ are hyperbolic and $-2<\emph{Tr}[g,h]<2$, so that $[g,h]$ is elliptic. Then $\textsf{Fix}[g,h]$ lies in the region determined by $\textsf{\emph{Axis}}(g)$ and $\textsf{\emph{Axis}}(h)$ which is bounded by the arc on the circle at infinity between $g^+$ and $h^+$. The sense of rotation is clockwise if the segment from $g^+$ to $h^+$ has the clockwise orientation, otherwise the sense is counterclockwise.
\end{lem}
\noindent These lemmata may be proved by a direct computation. Here we offer the following proof by Matelski \cite{MT}, which is more elegant and revealing. The first steps of the proof of \ref{L126} are the same as those of the proofs of \ref{L127} and \ref{L128}, hence we merge the proofs of these lemmata into a single one and then discuss case by case.
\begin{proof}[Proofs of lemmata \ref{L126}, \ref{L127} and \ref{L128}.]
Let $2\lambda_g,2\lambda_h$ be the translation distances of $g,h$, let $p\in \mathbb{H}^2$ be the point of intersection of the axes of $g$ and $h$ and let $e\in\mathrm{PSL}_2\R$ be the half turn around $p$. Then we have $ege=g^{-1}$, and the same holds for $h$; further, $he$ preserves $\textsf{Axis}(h)$ but reverses its sense. Thus $he$ is an elliptic element of $\mathrm{PSL}_2\R$; let $q$ be its fixed point, and observe that $q\in\textsf{Axis}(h)$ lies between $p$ and $h^+$ at a distance $\lambda_h$ from $p$. Now consider $ghe$: we have that $(ghe)^2=gh(ege)(ehe)=ghg^{-1}h^{-1}=[g,h]$. So $ghe$ is a hyperbolic, parabolic or elliptic transformation according as $[g,h]$ is hyperbolic, parabolic or elliptic respectively. Let $l_1$ be the perpendicular line from $q$ to $\textsf{Axis}(g)$, and denote by $r$ its foot. Let $l_2$ be the perpendicular line to $l_1$ passing through $q$. Let $s$ be the point along $\textsf{Axis}(g)$ between $r$ and $g^+$ at a distance $\lambda_g$ from $r$. Finally, let $l_3$ be the line passing through $s$ perpendicular to $\textsf{Axis}(g)$. Denote by $R_{l_i}$ the reflection with respect to the line $l_i$. Then we have $he=R_{l_1}R_{l_2}$ and $g=R_{l_3}R_{l_1}$, so $ghe=R_{l_3}R_{l_2}$. Now we have the following trichotomy.
\begin{itemize}
\item[1] The axes $l_2,l_3$ do not intersect in $\mathbb{H}^2$ nor in the boundary at infinity. In this case $[g,h]$ is hyperbolic and $\textsf{Axis}[g,h]$ is the common perpendicular of $l_2$ and $l_3$. In particular, the fixed points are in the desired order, and this concludes the proof of \ref{L126}.
\item[2] If we are not in the first case, the axes cross. In particular, if $l_2,l_3$ intersect at infinity, then $[g,h]$ is parabolic and the fixed point is given by the intersection point. Moreover, the fixed point is in the desired position, and this concludes the proof of \ref{L127}.
\item[3] Finally, if $l_2,l_3$ intersect at a point $o$, then $[g,h]$ is elliptic with fixed point $o$. By construction the fixed point lies in the desired region of $\mathbb{H}^2$, and this concludes the proof of \ref{L128}. \qedhere
\end{itemize}
\end{proof}
\noindent We conclude with two further lemmata that summarize the remaining cases.
\begin{lem}\label{L0129}
Let $g$ be a parabolic element and let $h$ be any transformation. Suppose that $g,h$ have no common fixed point. Then $[g,h]$ is hyperbolic.
\end{lem}
\begin{lem}\label{L01210}
Let $g$ be an elliptic transformation with rotation angle $2\theta$ and let $p$ be its fixed point. Let $h$ be any transformation not fixing $p$. Then $[g,h]$ is hyperbolic.
\end{lem}
\noindent We do not report here the proofs of lemmata \ref{L0129} and \ref{L01210}, which can be found in \cite[Chapter 7]{B}.
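For \ref{L0129}, however, the key computation can be sketched directly in the standard matrix model; we record it here for convenience (it is our sketch, not the argument of \cite{B}). With $g$ a parabolic fixing $\infty$ and $h$ an arbitrary element of determinant one,

```latex
\[
  g=\left(\begin{array}{cc} 1 & t\\ 0 & 1 \\ \end{array}\right),\qquad
  h=\left(\begin{array}{cc} a & b\\ c & d \\ \end{array}\right),\quad ad-bc=1
  \quad\Longrightarrow\quad
  \operatorname{Tr}[g,h]=2+t^{2}c^{2}.
\]
```

Since $c=0$ precisely when $h$ fixes $\infty$, the absence of a common fixed point gives $\operatorname{Tr}[g,h]>2$, so $[g,h]$ is hyperbolic.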
\subsection{The Lie groups $\mathrm{PSL}_2\R$ and $\widetilde{\mathrm{PSL}_2\R}$}\label{ss25} Geometrically, $\mathrm{PSL}_2\R$ is an open solid torus homeomorphic to $\mathbb{H}^2\times \mathbb{S}^1$. Indeed, we may identify $\mathrm{PSL}_2\R$ with the unit tangent bundle UT$\mathbb{H}^2$, which is homeomorphic to $\mathbb{H}^2\times \Bbb S^1$. However, this identification depends on a preliminary choice of a basepoint $(p_0,u_0)\in\mathbb{H}^2\times \Bbb S^1$, hence it is not canonical in general. More precisely, we may associate to any element $g\in\mathrm{PSL}_2\R$ the point $\big(g(p_0),g'(u_0)\big)$, and simple arguments show that this correspondence is well defined and bijective. \\
\noindent Of course we have $\pi_1(\mathrm{PSL}_2\R)\cong \Bbb Z$, and the universal cover $\widetilde{\mathrm{PSL}_2\R}$ is naturally identified with $\mathbb{H}^2\times \mathbb{R}$. By classical covering theory, $\widetilde{\mathrm{PSL}_2\R}$ may also be seen as the set of paths $\big\{ c:[0,1]\longrightarrow \mathrm{PSL}_2\R \big\}=\big\{ c:[0,1]\longrightarrow \text{UT}\mathbb{H}^2 \big\}$ starting from the basepoint, considered up to homotopy. Roughly speaking, any element of the universal cover may be seen as a path with a unit tangent vector attached to each point, changing continuously, regardless of where it starts, because the basepoint is arbitrary. By construction, the projection of $c\in\widetilde{\mathrm{PSL}_2\R}$ to $\mathrm{PSL}_2\R$ is the unique isometry sending the unit tangent vector at $c(0)$ to the unit tangent vector at $c(1)$.\\
\noindent Any element $c\in\widetilde{\mathrm{PSL}_2\R}$ is elliptic, parabolic or hyperbolic according as its projection is. The identity element lifts to an infinite cyclic subgroup generated by $\textbf{z}$, namely the center of $\widetilde{\mathrm{PSL}_2\R}$, which is isomorphic to $\mathbb{Z}$. In particular, these lifts correspond to the paths starting and ending at the basepoint $(p_0,u_0)$ of the following form
\[ c(t)=(p_0,e^{2nt\pi i})
\] for some $n\in\Bbb Z$. With this notation $\textbf{z}=(p_0,e^{2t\pi i})$.\\
\noindent Any element $g\in\mathrm{PSL}_2\R$ has infinitely many lifts that differ by a power of \textbf{z}. However, we may wonder if there is a nicest lift of $g$ in some sense, and the answer turns out to be positive if $g$ is hyperbolic or parabolic. \\
\noindent Suppose $g$ is hyperbolic, hence it is a translation along its axis \textsf{Axis}$(g)$ by some distance $d$. Then there exists a unique one parameter subgroup $c:\mathbb{R}\longrightarrow \mathrm{PSL}_2\R$ (with a little abuse of notation) such that $c(t)$ is a hyperbolic translation along \textsf{Axis}$(g)$ of distance $|t|d$. In particular $c(0)=\text{id}$ and $c(1)=g$.
Its restriction to $[0,1]$ gives a unique path in $\mathrm{PSL}_2\R$ which we define as the \emph{preferred or simplest} lift of $g$.\\
\noindent A similar argument works also for parabolic isometries. Indeed if $g$ is parabolic then it translates along a horocircle $h$ by some distance $d$ with respect to the Euclidean metric induced by the hyperbolic one on $h$. As above there exists a unique one parameter subgroup $c:\mathbb{R}\longrightarrow \mathrm{PSL}_2\R$ such that $c(t)$ is a parabolic translation along $h$ of distance $|t|d$ and $c(0)=\text{id}$ and $c(1)=g$. Again its restriction to $[0,1]$ gives a unique path in $\mathrm{PSL}_2\R$ which we consider as preferred lift.\\
\noindent On the other hand, the situation changes dramatically when we consider elliptic elements. If $g$ is an elliptic isometry, then there are infinitely many one parameter subgroups $c:\mathbb{R}\longrightarrow\mathrm{PSL}_2\R$ with $c(1)=g$, and this is reflected by the fact that the lift of any one of them contains the cyclic subgroup generated by \textbf{z}, \emph{i.e.} the center of $\widetilde{\mathrm{PSL}_2\R}$. Thus a simplest lift does not exist; instead there are two natural candidates, namely the simplest counterclockwise lift $c_1$ and the simplest clockwise lift $c_{-1}$.\\
\noindent We denote the sets of simplest lifts of hyperbolic and parabolic elements by Hyp$_0$ and Par$_0$ respectively. For every hyperbolic element $c\in\widetilde{\mathrm{PSL}_2\R}$ there exists a unique $m\in\mathbb{Z}$ such that $\textbf{z}^{-m}c\in$Hyp$_0$; thus we define Hyp$_m=\textbf{z}^m$Hyp$_0$. In the same way, Par$_m=\textbf{z}^m$Par$_0$.
\begin{rmk}
We may further divide Par$_m$ into two subsets, namely Par$_m^-$ and Par$_m^+$, of parabolic elements which are clockwise and counterclockwise rotations about a point at infinity respectively. This distinction arises because a clockwise rotation about a point at infinity is never conjugate to a counterclockwise one in $\mathrm{PSL}_2\R$ (even though they are conjugate in $\mathrm{PSL}_2\C$).
\end{rmk}
\noindent Finally, we define Ell$_1$ as the set of simplest counterclockwise lifts of elliptic elements in $\mathrm{PSL}_2\R$. Similarly, Ell$_{-1}$ is the set of simplest clockwise lifts of elliptic elements in $\mathrm{PSL}_2\R$. For any $m>0$ we define, as in the other cases, Ell$_m$ as $\textbf{z}^{m-1}$Ell$_1$ and Ell$_{-m}$ as $\textbf{z}^{-m+1}$Ell$_{-1}$. Note that there is no set Ell$_0$; instead we have Ell$_1$ = \textbf{z}Ell$_{-1}$.
\subsection{Relationship between trace and commutators} We finally consider commutators of elements in $\widetilde{\mathrm{PSL}_2\R}$, and we briefly explain the relation with their traces. The type of an isometry of $\mathbb{H}^2$ is characterized by its trace. A similar characterization holds also for elements in $\widetilde{\mathrm{PSL}_2\R}$. Since $\widetilde{\mathrm{PSL}_2\R}$ is the universal cover of $\mathrm{PSL}_2\R$, it also covers $\mathrm{SL}_2\R$; hence the notion of \emph{trace} is well-defined in $\widetilde{\mathrm{PSL}_2\R}$.
\begin{lem}\label{L0133}
Let $\widetilde{\emph{Tr}}$ be the composition of the covering projection $\widetilde{\mathrm{PSL}_2\R}\longrightarrow \mathrm{SL}_2\R$ with the trace function \emph{Tr}$:\mathrm{SL}_2\R\longrightarrow \mathbb{R}$. Then it is continuous and
\begin{itemize}
\item[1] $\widetilde{\emph{Tr}}(\textbf{\emph{z}}^n)=2(-1)^n$
\item[2] $\widetilde{\emph{Tr}}(\emph{Par}_n)=2(-1)^n$
\item[3] $\widetilde{\emph{Tr}}(\emph{Hyp}_n)$ is the open interval $]2,\infty[$ if $n$ is even or the open interval $]-\infty,-2[$ if $n$ is odd.
\end{itemize}
\end{lem}
\noindent The proof of this result may be found in \cite{MA3}. We now consider commutators in $\mathrm{PSL}_2\R$. Since different lifts of any element $g\in\mathrm{PSL}_2\R$ differ by powers of $\textbf{z}$, the following lemma immediately follows.
\begin{lem}
Let $g,h\in \mathrm{PSL}_2\R$. Then $[g, h]$ has a well-defined lift to $\widetilde{\mathrm{PSL}_2\R}$. That is, any two couples
of lifts $\widetilde{g}_1$, $\widetilde{h}_1$ and $\widetilde{g}_2$, $\widetilde{h}_2$ satisfy $\Big[\widetilde{g}_1,\widetilde{h}_1\Big]=\Big[\widetilde{g}_2,\widetilde{h}_2\Big]$.
\end{lem}
\begin{proof}
Let $\widetilde{g}_2=\textbf{z}^n\widetilde{g}_1$ and $\widetilde{h}_2=\textbf{z}^m\widetilde{h}_1$. Since $\textbf{z}$ commutes with every element of $\widetilde{\mathrm{PSL}_2\R}$ we notice that
\[ \Big[\widetilde{g}_1,\widetilde{h}_1\Big]=\Big[\widetilde{g}_2,\widetilde{h}_2\Big]
\] as desired.
\end{proof}
\noindent Even if the lift of a commutator $[g,h]$ is well-defined, it may differ from the simplest lift. More precisely, its simplest lift belongs to Hyp$_0$, but for any couple of lifts $\widetilde{g}$, $\widetilde{h}$ there exists an integer $n$ such that
\[ \Big[\widetilde{g},\widetilde{h}\Big]=\textbf{z}^n\widetilde{[g,h]}
\] The previous lemma says that this integer does not depend on the choice of the lifts, and the next proposition tells us all the possible values of $n$. We state it without proof; proofs can be found in \cite{GO88, MA4, MA3, MI, WW}.
\begin{prop}\label{P0135}
Let $g,h\in \mathrm{PSL}_2\R$. Then the lift of $[g,h]$ is well-defined and
\[ \Big[\widetilde{g},\widetilde{h}\Big]\in \{1\}\cup\Big(\bigcup_{n=-1}^1 \text{\emph{Hyp}}_n\cup\text{\emph{Ell}}_n\Big) \cup \text{\emph{Par}}_0\cup \text{\emph{Par}}_{-1}^+\cup\text{\emph{Par}}_1^-
\] where \emph{Ell}$_0$ is the empty set for convenience.
\end{prop}
\noindent Combining \ref{L0133} with \ref{P0135} we get the following corollary.
\begin{cor}\label{C0136}
Let $g,h\in \mathrm{PSL}_2\R$ then
\begin{itemize}
\item[1] $\text{\emph{Tr}}[g,h]>2 \Longrightarrow [g,h]\in \text{\emph{Hyp}}_0,$
\item[2] $\text{\emph{Tr}}[g,h]=2 \Longrightarrow [g,h]\in \text{\emph{Par}}_0,$
\item[3] $\text{\emph{Tr}}[g,h]\in]-2,2[ \ \Longrightarrow [g,h]\in \text{\emph{Ell}}_{-1}\cup\text{\emph{Ell}}_{1},$
\item[4] $\text{\emph{Tr}}[g,h]=-2 \Longrightarrow [g,h]\in \text{\emph{Par}}_{-1}^+\cup\text{\emph{Par}}_{1}^-,$
\item[5] $\text{\emph{Tr}}[g,h]<-2 \Longrightarrow [g,h]\in \text{\emph{Hyp}}_{-1}\cup\text{\emph{Hyp}}_{1}.$
\end{itemize}
\end{cor}
\vspace{5mm}
\section{Euler class of representations}\label{s3}
\noindent Throughout this section, $S$ will be a compact surface of genus $g$. To every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ we may naturally associate an $\mathbb{R}\mathbb{P}^1-$bundle $\mathcal{F}_\rho$ over $S$ equipped with a flat connection. Explicitly $\mathcal{F}_\rho$ is obtained as the quotient of $\widetilde{S}\times\mathbb{R}\mathbb{P}^1$ by the diagonal action of $\pi_1S$; \emph{i.e.} for any $\gamma\in\pi_1S$ and $(p,z)\in \widetilde{S}\times \mathbb{R}\mathbb{P}^1$ we have $\gamma\cdot (p,z)=\big(\gamma.p, \rho(\gamma)(z)\big)$. The Euler class $e(\rho)$ of $\rho$ arises naturally as an obstruction to finding a global section of this bundle.
\subsection{Geometric definition of the Euler class} Suppose $S$ is closed. Let $\tau$ be a topological triangulation; then a section $s_0$ can easily be found on the $0-$skeleton by choosing an element of $\mathbb{R}\mathbb{P}^1$ above every vertex. This section can be extended to a section $s_1$ over the $1-$skeleton by joining the values of $s_0$ by paths in $\mathbb{R}\mathbb{P}^1$. Since $\pi_1(\mathbb{R}\mathbb{P}^1)=\Bbb Z$ there are infinitely many extensions of $s_0$ up to homotopy. Over any $2-$cell $T$, the section over the $1-$skeleton defines a $\mathbb{R}\mathbb{P}^1-$vector field along $\partial T$, hence a map $\mathfrak{s}_T:\partial T\longrightarrow \mathbb{R}\mathbb{P}^1$ whose degree $d_T$ corresponds to the number of times the vector field spins along $\partial T$. Assigning to every $2-$cell the integer $d_T$ gives a $2-$cochain $e(\rho)$.\\
\noindent In determining $e(\rho)$ we made several choices, such as the triangulation $\tau$ and the $1-$section over the $1-$skeleton. Adjustment by a $2-$coboundary corresponds to altering the amount of spin chosen along each particular edge, hence the cohomology class of this $2-$cochain does not depend on the choice of the $1-$section. Moreover it can be seen that this cohomology class does not depend on the cellular decomposition of our surface $S$. Thus $e(\rho)$ defines a class in $H^2(S,\mathbb{Z})$, called the \emph{Euler class} of $\rho$ (or of $\mathcal{F}_\rho$).
\noindent Since $H^2(S,\mathbb{Z})\cong \mathbb{Z}$ we can associate to $e(\rho)$ the integer $\eu\rho$ using the Kronecker pairing. We call $\eu\rho$ the \emph{Euler number} associated to $\rho$.
\begin{lem}\label{L321}
The Euler number satisfies the following equality
\[ \eu\rho=\sum_{T\in \tau} d_T.
\]
\end{lem}
\proof
Let $[S]$ be the fundamental class of $S$, that is, a generator of $H_2(S,\mathbb{Z})$. Writing $[S]=[T_1]+\cdots+[T_n]$, we get
\[ \eu\rho=e(\rho)[S]=\sum_{T\in\tau} e(\rho)[T]=\sum_{T\in\tau} d_T
\] where the last equality holds by definition of $e(\rho)$.
\endproof
\noindent In \cite{WW} Wood, building on earlier work of Milnor \cite{MI}, showed that the Euler number satisfies the following inequality, known as the Milnor--Wood inequality:
\[ |\eu\rho|\le -\chi(S).
\]
\noindent The equality holds exactly when the representation is Fuchsian, that is faithful and discrete, and such representations always arise as the holonomy of a unique complete hyperbolic structure.
\begin{thm}[Goldman \cite{GO88}]\label{T031}
Let $S$ be a closed orientable surface with $\chi(S) < 0$, and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$. Then $\rho$ is the holonomy of a complete hyperbolic structure on $S$ if and only if $\mathcal{E}(\rho)=\pm\chi(S)$.
\end{thm}
\noindent Now suppose $\rho$ is a geometrizable representation, that is $\rho$ is the holonomy of a hyperbolic cone-structure on $S$. Let $p_1,\dots,p_n$ be the cone points of orders $k_1,\dots, k_n$, respectively. The following formula relates the Euler number of $\rho$ with the Euler characteristic and the orders of the cone points.
\begin{prop}\label{P324}
Let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation which is the holonomy of a hyperbolic cone-structure on a closed surface $S$. Then the Euler number satisfies the identity
\[ \mathcal{E}(\rho)=\pm\bigg(\chi(S)+\sum_{i=1}^n k_i\bigg)
\] where the sign depends on the orientation of $S$.
\end{prop}
\begin{proof}
Among the different proofs in the literature we use the following argument of Mathews \cite{MA2}. Let $\tau$ be a hyperbolic triangulation such that every cone point is a vertex, so that we have a simplicial decomposition of $S$ into hyperbolic triangles. There is a $\mathbb{R}\mathbb{P}^1-$vector field $V$ on $S$ with one singularity for every vertex, edge and face of $S$. The indices of the singularities are $1+k_i$ at any vertex (recall that for regular points $k=0$), $-1$ on every edge, and $1$ on every face. Counting as in the Poincar\'e--Hopf theorem, the sum of the indices of the singularities equals $\chi(S)+\sum k_i$. \\
\noindent Now perturb the vector field so that the singularities lie off the $1-$skeleton. The number of times the vector field spins around a triangle $T\in\tau$ is then equal to the sum of the indices of the singular points of $V$ inside $T$, or its negative, depending on whether the orientation induced by $\textsf{dev}$ agrees with the orientation induced by the fundamental class $[S]$. For now, assume these orientations agree; otherwise all the cohomology classes must be multiplied by $-1$. The spin of $V$ around any triangle $T\in\tau$ is in turn equal to the degree of the map $\mathfrak{s}_T:\partial T\longrightarrow \mathbb{R}\mathbb{P}^1$ defined above. By \ref{L321} the sum of all the indices of the singular points is equal to $\eu\rho$, hence
\[ \eu\rho=\pm\Big(\chi(S)+\sum_{i=1}^n k_i\Big). \qedhere
\]
\end{proof}
\begin{rmk}
As expected, if $\rho$ is the holonomy of a hyperbolic structure on $S$ without cone points, then every point of $S$ is regular and we recover the equality $\mathcal{E}(\rho)=\pm\chi(S)$.
\end{rmk}
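\noindent As an illustrative sanity check of the formula in \ref{P324} (a computation of ours, not taken from the literature), consider a closed surface $S$ of genus $2$ with a single cone point of order $k_1=1$. Then
\[ \mathcal{E}(\rho)=\pm\big(\chi(S)+k_1\big)=\pm(-2+1)=\mp 1,
\] which is consistent with the Milnor--Wood inequality $|\mathcal{E}(\rho)|\le-\chi(S)=2$.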
\begin{rmk}\label{R22}
If $\sigma$ is a hyperbolic cone-structure on $S$, the Gau\ss-Bonnet condition implies that the Euler number is never zero. Since the Euler number of an elementary representation is always zero (see \cite{GO88}), elementary representations never arise as the holonomy of a hyperbolic cone-structure on $S$.
\end{rmk}
\begin{rmk}\label{R23}
The Euler number of a representation that is the holonomy of a hyperbolic cone-structure is negative because the developing map of a hyperbolic cone-structure is assumed to be orientation-preserving.
\end{rmk}
\noindent Suppose now $S$ has boundary. We may define the \emph{relative Euler class} in the same way described above, but we first need to fix a trivialization over the boundary. In the case of a surface without boundary it does not matter how we extend the $0-$section along the $1-$skeleton: since each edge belongs to two faces, different choices cancel each other out. When $S$ has boundary, it again does not matter how we extend the $0-$section over edges lying in the interior of the surface; however each boundary edge belongs to only one face, so there the choice does matter. Hence the right thing to do is to fix a trivialization along the boundary, that is a $1-$section, and extend it to a $1-$section over the whole $1-$skeleton.\\
\noindent Let $\gamma\subset \partial S$ be a boundary component and suppose that $\rho(\gamma)$ is not elliptic. A special trivialization along $\gamma$ is the datum of a section $\mathfrak{s}:\gamma\longrightarrow \mathcal{F}_\rho \vert_\gamma$ defined by following a fixed point of $\rho(\gamma)$ in $\mathbb{R}\mathbb{P}^1$ along $\gamma$ using the flat connection of the $\mathbb{R}\mathbb{P}^1-$bundle. Note that a special trivialization exists whenever $\rho(\gamma)$ is non-elliptic, and it does not depend on the choice of the fixed point.\\
\noindent Thus the relative Euler class is a $2-$cochain $e(\rho,\mathfrak{s})\in H^2(S,\partial S,\mathbb{Z})$, and it measures the obstruction to extending the special trivialization along the boundary over $S$. In the same way, the \emph{relative Euler number} is an integer $\eur{\rho}{\mathfrak{s}}$ defined using the Kronecker pairing, and the Milnor--Wood inequality holds as well (for further details see \cite{GO88}).
\begin{defn}
Let $S$ be a compact connected orientable surface with boundary. We define \emph{Fuchsian} those representations $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ such that $|\eur{\rho}{\mathfrak{s}}|= -\chi(S)$ with respect to the special trivialization $\mathfrak{s}$.
\end{defn}
\noindent As in the closed case, Fuchsian representations arise as the holonomy of a complete hyperbolic structure on $S$. Precisely, we have the following result, which was proved by Goldman in \cite{GO88} when $S$ has boundary with hyperbolic holonomy, and more generally in the non-compact case by Mathews in \cite{MA2} and by Burger, Iozzi and Wienhard in \cite{BIW}.
\begin{thm}\label{ecswb}
Let $S$ be a compact connected orientable surface with $\chi(S)<0$, and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$. If $S$ has boundary, assume $\rho$ takes each boundary curve to a non-elliptic element, so that the relative Euler class $\eur{\rho}{\mathfrak{s}}$ is well-defined. Then $\rho$ is the holonomy of a complete hyperbolic structure on $S$ with totally geodesic or cusped boundary components (respectively, according to whether each boundary curve is taken by $\rho$ to a hyperbolic or a parabolic element) if and only if $\eur{\rho}{\mathfrak{s}}= \pm\chi(S)$.
\end{thm}
\noindent Let $S$ be a surface (possibly with boundary) and decompose $S$ into pieces, \emph{i.e.} subsurfaces, such that the relative Euler number is well-defined for every piece. Then the Euler number of $\rho$ can be computed in terms of the relative Euler numbers of the pieces with respect to the special trivialization along the boundary. More precisely we have the following lemma, whose proof is immediate.
\begin{lem}\label{L334}
Let $\mathcal{F}_\rho$ be a $\mathbb{R}\mathbb{P}^1-$bundle over $S$ with holonomy $\rho$, and let $\{l_k\}$ be a finite family of disjoint simple closed curves in $S$ containing the boundary curves of $S$. Let $\overline{\mathfrak{s}}$ be a section of $\mathcal{F}_\rho$ defined on $\{l_k\}$. Denote by $\{C_j\}$ the family of closures of the connected components of $S\setminus \{l_k\}$; then
\[ \eur{\rho}{\overline{\mathfrak{s}}_{|\partial C}}= \sum_{j} \eur{\rho_{C_j}}{\overline{\mathfrak{s}}_{|\partial C_j}}
\]
\end{lem}
\begin{proof}
It is sufficient to observe that the spins along any common boundary cancel out so that the relative Euler class is additive.
\end{proof}
\subsection{Algebraic definition of the Euler class} There is also an algebraic interpretation of the (possibly relative) Euler class. Let $S$ be a surface of genus $k$ with $n$ boundary components ($n$ possibly zero) and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation such that $\rho(c_i)$ is not elliptic for every $i \in \underline{n}$ (if any).\\
\noindent Let $p$ be a base point in $S$, and let $\widetilde{p}$ be a lift of $p$ in its universal cover, then the fundamental group $\pi_1(S,p)$ has the following presentation
\[ \big\langle a_1,b_1,\dots,a_k,b_k,c_1,\dots,c_n \mid [a_1,b_1]\dots[a_k,b_k]c_1\dots c_n=1\big\rangle,
\] which defines a fundamental $(4k+n)-$gon in $S$ based at $p$. Set $g_i=\rho(a_i)$, $h_i=\rho(b_i)$ and, with a slight abuse of notation, $c_i=\rho(c_i)$.\\
\noindent Let $\big(p_0,u_0\big)$ be a basepoint in UT$\mathbb{H}^2\cong\mathbb{H}^2\times\mathbb{R}\mathbb{P}^1$ and draw geodesics between the points which are joined by an edge in $S$, starting from $p_0$. This gives a $(4k+n)-$gon in $\mathbb{H}^2$ that may be concave, have self-intersections, or, even worse, be degenerate. We may think of the point $\big(p_0,u_0\big)$ as a $0-$section over $p$: indeed the projection to the second factor gives an element of $\mathbb{R}\mathbb{P}^1$ that we take as the $0-$section over $p$. We now extend it to a $1-$section in $S$ in the following way. First notice that there is a bijective correspondence between the edges of the fundamental $(4k+n)$-gon in $S$ and the edges of the polygon in $\mathbb{H}^2$ defined above. We begin by extending the $0-$section to a $1-$section along $a_1$; the corresponding edge in $\mathbb{H}^2$ is the geodesic segment between $p_0$ and $\rho(a_1)(p_0)$. Consider the points $\big(p_0,u_0\big)$ and $\big(g_1(p_0),g_1'(u_0)\big)$, where $g_1=\rho(a_1)$; then any lift $\widetilde{g_1}$ of $g_1$ gives a unique path in UT$\mathbb{H}^2$ (up to homotopy relative to the endpoints) of tangent vectors between these endpoints. We take as $1-$section along $a_1$ the projection to the second factor of such a path in UT$\mathbb{H}^2$. We can play the same game for the other edges to define a $1-$section over the $1-$skeleton (where along any boundary edge $c_i$ we consider the section given by the special lift of $\rho(c_i)$). Moving anticlockwise around the polygon in $S$, we now obtain a loop in UT$\mathbb{H}^2$ which is represented by
\[ [\widetilde{g_1},\widetilde{h_1}]\dots[\widetilde{g_k},\widetilde{h_k}]\widetilde{c}_1\dots \widetilde{c}_n
\] where $\widetilde{g_i}=\widetilde{\rho}(a_i)$ and $\widetilde{h_i}=\widetilde{\rho}(b_i)$ are arbitrary lifts of $g_i,h_i$ and $\widetilde{c_i}=\widetilde{\rho}(c_i)$ are the simplest lifts in $\widetilde{\mathrm{PSL}_2\R}$. \\
\noindent Since $[a_1,b_1]\dots[a_k,b_k]c_1\dots c_n=1$, the above product is equal to $\textbf{z}^m$ for some $m\in\Bbb Z$. Geometrically, $m$ is the number of times the tangent vector field spins around the fundamental $(4k+n)-$gon in $S$. We have the following result.
\begin{prop}
Let $S$ be an orientable surface with $\chi(S) < 0$. Let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation, and let $\pi_1(S)$ have the presentation given above, where no $c_i$ is elliptic. The (possibly relative) Euler class $e(\rho)$ takes the fundamental class $[S]$ to the integer $m\in\mathbb{Z}$ such that the well-defined lift of the relator
\[ [\widetilde{g_1},\widetilde{h_1}]\dots[\widetilde{g_k},\widetilde{h_k}]\widetilde{c}_1\dots \widetilde{c}_n \in \widetilde{\mathrm{PSL}_2\R}
\] is equal to \textbf{\emph{z}}$^m$.
\end{prop}
\noindent Hence the Euler number of a representation $\rho: \pi_1S\longrightarrow \mathrm{PSL}_2\R$ also measures the obstruction to lifting it to a representation in $\widetilde{\mathrm{PSL}_2\R}$. In particular, a representation $\rho$ lifts to a representation in $\widetilde{\mathrm{PSL}_2\R}$ if and only if there exists a nowhere zero section of the associated $\mathbb{R}\mathbb{P}^1-$bundle with holonomy $\rho$.
\begin{rmk}
If $S$ has boundary, a representation $\rho$ lifts to a representation in $\widetilde{\mathrm{PSL}_2\R}$ if and only if there exists a nowhere zero section of the associated $\mathbb{R}\mathbb{P}^1-$bundle with holonomy $\rho$ with respect to the special trivialization $\mathfrak{s}$ along the boundary.
\end{rmk}
\noindent In the sequel we will work with the punctured torus, and we will make strong use of the following result.
\begin{prop}\label{PTEC}
Let $S$ be a punctured torus and $\rho:\pi_1S\longrightarrow\mathrm{PSL}_2\R$ be a representation such that the relative Euler class is well-defined. Then
\begin{itemize}
\item[1-] \emph{Tr}$[g,h]\le-2$ if and only if $\eur{\rho}{\mathfrak{s}}=-1$,
\item[2-] \emph{Tr}$[g,h]\ge 2$ if and only if $\eur{\rho}{\mathfrak{s}}=0$,
\end{itemize}
where $\mathfrak{s}$ is the special trivialization along the boundary.
\end{prop}
\begin{proof}
Suppose first Tr$[g,h]\le-2$; then $[g,h]\in$ Hyp$_{\pm 1}$ $\cup$ Par$_1^-$ $\cup$ Par$_{-1}^+$ by \ref{C0136}. We may suppose without loss of generality that $[g,h]\in$ Hyp$_{-1}$ $\cup$ Par$_{-1}^+$, because the other case occurs by reversing the orientation of $S$. Since $[g,h]c=1$ in $\mathrm{PSL}_2\R$, we have $c^{-1}=[g,h]$, so its simplest lift satisfies $\widetilde{c}^{-1}\in$ Hyp$_0\cup$ Par$_0^+$ and $\widetilde{c}^{-1}=\textbf{z}[g,h]$, thus $[g,h]\widetilde{c}=\textbf{z}^{-1}$ and $\eur{\rho}{\mathfrak{s}}=-1$.\\
\noindent Now suppose Tr$[g,h]\ge2$; then $[g,h]\in$ Hyp$_0$ $\cup$ Par$_0$ $\cup$ $\{1\}$ by \ref{C0136}.
Since $[g,h]c=1$ in $\mathrm{PSL}_2\R$, we have $c^{-1}=[g,h]$, hence $[g,h]\widetilde{c}=1$. Thus $\eur{\rho}{\mathfrak{s}}=0$.
\end{proof}
\text{}\\
\section{Geometry and algebra of punctured torus}\label{s4}
\noindent Throughout this chapter, let $H$ be a punctured torus. We prefer to use the letter $H$ here, instead of $S$ or $T$, because in the sequel it will be useful to think of the punctured torus as a \emph{handle} attached to a surface of genus lower than that of the original surface $S$.
\subsection{Generalities about the character variety}\label{ss41} We start with some general facts about the character variety. Let $S$ be any surface; the representation variety \textsf{Hom}$(\pi_1S,\mathrm{SL}_2\R)$ is defined as the set of all homomorphisms $\rho:\pi_1S\longrightarrow \mathrm{SL}_2\R$. Fix a presentation of $\pi_1S$; then we may associate to each generator a matrix in $\mathrm{SL}_2\R$ in such a way that the matrices satisfy the relations. Considering the entries of the matrices as coordinate variables, the set \textsf{Hom}$(\pi_1S,\mathrm{SL}_2\R)$ may be seen as the solution set of a system of polynomial equations, hence it is a closed algebraic variety. In general this variety has singularities.\\
The \emph{character} $\chi_\rho$ of a representation $\rho$ is the function Tr$\circ\rho:\pi_1S\longrightarrow \mathbb{R}$ given by Tr$\circ\rho(\alpha)=$Tr$\big(\rho(\alpha)\big)$. By using well-known trace relations, it is possible to see that the function $\chi_\rho$ is determined by its values at only finitely many elements $\alpha_1,\dots, \alpha_n$ (see for instance \cite{CS}). We may define a function $T:$\textsf{Hom}$(\pi_1S,\mathrm{SL}_2\R)\longrightarrow \mathbb{R}^n$ that sends any representation
\[ \rho\longmapsto \Big(\text{Tr}\big(\rho(\alpha_1)\big), \dots, \text{Tr}\big(\rho(\alpha_n)\big) \Big)
\] and we define the \emph{character variety} to be the image of this function, that is $\mathcal{X}(S) = T\Big(\textsf{Hom}(\pi_1S,\mathrm{SL}_2\R)\Big)$. If $S$ is the punctured torus, the character variety of representations in $\mathrm{PSL}_2\R$ can be obtained as an obvious quotient, and it will be described below. The case of closed surfaces will be considered in \ref{ss61}.
\begin{rmk}\label{rmkact}
There is an action of $\mathrm{SL}_2\R$ on the representation space \textsf{Hom}$(\pi_1S,\mathrm{SL}_2\R)$ by conjugation. The quotient space may be thought of as the moduli space of flat principal $\mathrm{SL}_2\R-$bundles over $S$. In general the quotient space has singularities; however, away from the singularities, it may be identified with the character variety.\\
\end{rmk}
\subsection{Characters of the punctured torus representations} In this paragraph we deal with the characters of representations $\rho:\pi_1H \longrightarrow \mathrm{PSL}_2\R$ without considering geometric structures. Let $p\in H$ be a basepoint for the fundamental group and let $(\alpha,\beta)$ be a basis.\\
\noindent Any representation $\rho:\pi_1H \longrightarrow \mathrm{PSL}_2\R$ is uniquely determined by the images $\rho(\alpha)$ and $\rho(\beta)$. A representation into $\mathrm{PSL}_2\R$ obviously lifts to $\mathrm{SL}_2\R$, and we have two choices for each of the lifts of $\rho(\alpha)$ and $\rho(\beta)$. For now consider $\rho$ as a representation into $\mathrm{SL}_2\R$ and denote $\rho(\alpha) = g$ and $\rho(\beta) = h$.\\
\noindent The character of $\rho$ is determined by the values of Tr$\circ\rho$ at finitely many elements of $\pi_1H$. For the punctured torus with $\pi_1H=\langle\alpha,\beta\rangle$, it is sufficient to consider only the three elements $\alpha,\beta,\alpha\beta$. Any word $w$ of $\pi_1H$ may be written in terms of $\alpha,\beta$ and their inverses, and the trace of $\rho(w)$ can be expressed as a polynomial in $(x, y, z) = (\text{Tr}g, \text{Tr} h, \text{Tr} gh)$. In our case we have the important relation
\[ \text{Tr }[g, h] =\text{ Tr}^2g+\text{ Tr}^2h+\text{ Tr}^2 gh-\text{ Tr}g\text{ Tr}h\text{ Tr}gh-2
\] and hence we define the polynomial
\[ k(x,y,z)=x^2 +y^2 +z^2 -xyz-2
\]
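\noindent The trace identity above is easy to verify numerically. The following sketch (an illustration of ours, not part of the original argument; the helper names are ours) checks it on random pairs in $\mathrm{SL}_2\R$ using NumPy.

```python
import numpy as np

def k(x, y, z):
    # Goldman's commutator-trace polynomial
    return x**2 + y**2 + z**2 - x*y*z - 2

def commutator(g, h):
    return g @ h @ np.linalg.inv(g) @ np.linalg.inv(h)

def random_sl2(rng):
    # random 2x2 matrix normalized to determinant 1
    m = rng.normal(size=(2, 2))
    d = np.linalg.det(m)
    while abs(d) < 0.1:  # avoid nearly singular samples
        m = rng.normal(size=(2, 2))
        d = np.linalg.det(m)
    if d < 0:
        m[0] *= -1.0  # flip one row so the determinant becomes positive
        d = -d
    return m / np.sqrt(d)

rng = np.random.default_rng(0)
for _ in range(1000):
    g, h = random_sl2(rng), random_sl2(rng)
    x, y, z = np.trace(g), np.trace(h), np.trace(g @ h)
    # Tr[g,h] = x^2 + y^2 + z^2 - xyz - 2
    assert np.isclose(np.trace(commutator(g, h)), k(x, y, z))
```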
\noindent Two irreducible representations $\rho_1$ and $\rho_2$ define the same triple $(x, y, z)$ if and only if they are conjugate. In this case the triple $(\text{Tr}g, \text{Tr} h, \text{Tr} gh)$ determines the pair $g,h\in\mathrm{SL}_2\R$ uniquely up to conjugacy.\\
A representation $\rho$ is said to be \emph{reducible} if its image, acting by linear transformations on $\mathbb{C}^2$, leaves a line of $\mathbb{C}^2$ invariant. Of course, irreducible representations are those which are not reducible. As pointed out by Mathews in \cite{MA1}, for an irreducible representation $\rho$ it is possible to deduce all the geometry of $g$ and $h$ considered as isometries of the hyperbolic plane.\\
\noindent The set of all triples $(x,y,z) = (\text{Tr}g, \text{Tr} h, \text{Tr} gh)$ is the character variety $\mathcal{X}(H)$ of the punctured torus $H$. In \cite[Theorem 4.3]{GO03}, Goldman described the character variety $\mathcal{X}(H)$.
\begin{thm}[Goldman]
Given $(x, y, z)\in\mathbb{R}^3$, there exist $g, h\in\mathrm{SL}_2\R$ such that $(x,y,z) = (\text{\emph{Tr}}g, \text{\emph{Tr}} h, \text{\emph{Tr}} gh)$
if and only if
\[ k(x,y,z)=x^2+y^2+z^2-xyz-2\ge 2
\] or at least one of $|x|,|y|,|z|$ is greater than $2$.
\end{thm}
\noindent For representations $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$, the character variety may be described starting from the character variety of representations into $\mathrm{SL}_2\R$. There are four different ways to lift the couple $\rho(\alpha),\rho(\beta)$ to $\mathrm{SL}_2\R$, which are related by sign changes. Thus we simply take the character variety $\mathcal{X}(H)$ of representations into $\mathrm{SL}_2\R$ modulo the equivalence relation
(x,y,z) \sim (-x,-y,z) \sim (-x,y,-z) \sim (x,-y,-z)
\]
induced by these four possible lifts.\\
\begin{rmk}
The notion of reducibility still makes sense in $\mathrm{PSL}_2\R$. Indeed any element of $\mathrm{PSL}_2\R$ acts on $\mathbb{C}^2$ via linear transformations up to a reflection in the origin, hence it acts on the Riemann sphere $\mathbb{C}\mathbb{P}^1$. Thus the idea of an invariant line still makes sense.
\end{rmk}
\begin{rmk}
Also for representations in $\mathrm{PSL}_2\R$, the value $k(x,y,z)=$Tr$[g,h]$ is well-defined, even though the values of $x,y,z$ are ambiguous!
\end{rmk}
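\noindent This well-definedness is a one-line computation: each of the four lifts flips an even number of signs among $x,y,z$, and $k$ is invariant under such flips. The following sketch (ours, with an arbitrarily chosen sample triple) makes the check explicit.

```python
def k(x, y, z):
    # commutator-trace polynomial k(x,y,z) = x^2 + y^2 + z^2 - xyz - 2
    return x**2 + y**2 + z**2 - x*y*z - 2

def lifts(x, y, z):
    # the four characters coming from the four lifts to SL(2,R):
    # negating one of the two lifted generators flips two signs at once
    return [(x, y, z), (-x, -y, z), (-x, y, -z), (x, -y, -z)]

# a sample (hypothetical) character triple: all four lifts give the same k
values = [k(*t) for t in lifts(1.2, -0.7, 3.5)]
```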
\noindent Points with $k(x, y, z) = 2$ describe reducible representations, which include also abelian representations, as shown by the following lemma.
\begin{lem}\label{redrep}
A representation $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ $($or $\mathrm{SL}_2\R)$ is reducible if and only if the character of $\rho$ is such that $k(x,y,z)=2$.
\end{lem}
\noindent From now on we will deal only with irreducible representations. Points with $k(x, y, z) \neq2$ describe irreducible representations, hence each such point corresponds precisely to a conjugacy class of representations. For any $t\neq 2$, we define the \emph{relative character variety} $\mathcal{X}_t(H) = k^{-1}(t) \cap \mathcal{X}(H)$ as the space of all representations (up to conjugacy) with Tr$[g, h] = t$.
\subsection{Virtually abelian representations} In this section we consider a special type of representation, namely \emph{virtually abelian representations}. We dedicate an entire paragraph to them because virtually abelian representations will play a crucial role in the next section.
\begin{defn}
A representation $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ is said to be virtually abelian if its image contains an abelian subgroup of finite index.
\end{defn}
\noindent Consider the following subset of $\mathbb{R}^3$:
\[ V =\Big( \{0\} \times \{0\} \times \big(\mathbb{R} \setminus [-2,2]\big) \Big) \cup \Big( \{0\} \times \big(\mathbb{R} \setminus [-2,2]\big) \times \{0\} \Big) \cup \Big( \big(\mathbb{R}\setminus[-2,2]\big)\times\{0\}\times\{0\} \Big).
\] Of course $V\subset \mathcal{X}(H)$. The following result gives a complete characterization of this type of representations.
\begin{lem}
Let $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ be a representation and let $(\alpha,\beta)$ be any basis of $\pi_1H$. Then $\rho$ is virtually abelian (but not abelian) if and only if $(\text{\emph{Tr}}g, \text{\emph{Tr}} h, \text{\emph{Tr}} gh)\in V$, where $g=\rho(\alpha)$ and $h=\rho(\beta)$.
\end{lem}
\noindent We refer to \cite[Lemma 4.9]{MA1} for the proof. This type of representations also has a nice geometric description, given by the following lemma.
\begin{lem}
With the same notation as in the previous lemma, a representation $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ is virtually abelian if and only if two of $\{g,h,gh\}$ are half-turns about distinct points $q_1,q_2$ and the third is a non-trivial translation along the unique axis passing through $q_1$ and $q_2$.
\end{lem}
\begin{proof}
The sufficiency follows immediately. Indeed half-turns have trace $0$ and a non-trivial translation has trace greater than $2$ in magnitude, hence if $\{g,h,gh\}$ are isometries of the required type, the triple $(\text{Tr}g, \text{Tr} h, \text{Tr} gh)$ lies in $V$.\\
We need to show the necessity, and we may suppose that Tr$g=0$, Tr$h=0$ and $|$Tr$gh|>2$, since the other cases are similar.
Hence $g$ and $h$ are half-turns about two points $q_1,q_2$. If $q_1=q_2$ then $gh=\mathrm{id}$, that is Tr$gh=\pm2$, a contradiction. Since the points $q_1,q_2$ are distinct, there is a unique geodesic line $q_1q_2$ passing through them. Both $g$ and $h$ preserve this line, reversing its orientation. Of course the composition $gh$ also preserves the line $q_1q_2$, maintaining its orientation (because the orientation is reversed twice). Since $gh\neq\mathrm{id}$, we conclude that it is a non-trivial translation along $q_1q_2$.
\end{proof}
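\noindent A concrete instance of this lemma can be checked in the upper half-plane model. The sketch below (an illustration of ours; the trace-zero matrix form of a half-turn about $i\cdot a$ on the imaginary axis is assumed) exhibits two half-turns whose product is a hyperbolic translation, so that the resulting character lies in $V$.

```python
import numpy as np

def half_turn(a):
    # rotation by pi about the point i*a of the upper half-plane,
    # written as a trace-zero matrix in SL(2,R)
    return np.array([[0.0, a], [-1.0 / a, 0.0]])

g, h = half_turn(1.0), half_turn(3.0)
gh = g @ h
x, y, z = np.trace(g), np.trace(h), np.trace(gh)
# x = y = 0 while |z| = 1/3 + 3 > 2: the product gh is a hyperbolic
# translation along the imaginary axis, so (x, y, z) lies in V
```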
\noindent What makes virtually abelian representations interesting for our purposes is the following characterization theorem, which was proved by Mathews in \cite{MA1}.
\begin{thm}\label{vat}
Let $H$ be a punctured torus and let $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ be a representation. The following are equivalent:
\begin{mi}{1em}
\begin{enumerate}
\item[1] $\rho$ is the holonomy representation of a hyperbolic cone-structure on $H$ with geodesic boundary, except for at most one corner point, and no interior cone points;
\item[2] $\rho$ is not virtually abelian.
\end{enumerate}
\end{mi}
\end{thm}
\noindent In particular, the corner angles of the hyperbolic cone-structures on the punctured torus described by this theorem range over all of $]0,3\pi[$. Indeed this can be easily checked using the Gau\ss-Bonnet theorem.\\
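\noindent To spell out the Gau\ss-Bonnet check (a sketch of ours, using the standard formula for compact hyperbolic surfaces with piecewise-geodesic boundary): for a hyperbolic structure on $H$ with geodesic boundary and one corner point of angle $\theta$ we have
\[ \mathrm{Area}(H) = -2\pi\chi(H) + (\pi - \theta) = 2\pi+\pi-\theta=3\pi-\theta,
\] and the positivity of the hyperbolic area forces $0<\theta<3\pi$.\\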
\subsection{The action of MCG$(H)$ on the character variety} We finally consider the action of \textsf{Aut}$(\pi_1H)$ on the character variety. The group \textsf{Aut}$(\pi_1H)$ acts simply transitively on the set of bases of $\pi_1H$. This is equivalent to considering the effect of a change of basis $(\alpha_1,\beta_1) \longrightarrow (\alpha_2,\beta_2)$ on a representation $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ by pre-composition. The underlying geometry remains unchanged under the action of \textsf{Aut}$(\pi_1H)$, even though the character of the representation may change. Since the trace is invariant under conjugation, this action descends to an action of
\[ \textsf{Out}( \pi_1H) = \frac{\textsf{Aut}(\pi_1H)}{\textsf{Inn}(\pi_1H)}\cong \text{MCG}(H).
\]\\ Here MCG$(H)$ is the mapping class group of $H$, \emph{i.e.} the group of homeomorphisms of $H$ up to isotopy. We have the following theorem by Nielsen, see \cite{NI}, \cite{GO03} and \cite{MKS}.
\begin{thm}[Nielsen]
An automorphism $\psi$ of $\pi_1H=\langle\alpha,\beta\rangle$ takes $[\alpha,\beta]$ to a conjugate of itself or its inverse.
\end{thm}
\begin{rmk}
A similar result holds also for closed surfaces, even though it fails for other surfaces with boundary (see \cite{NI} and \cite{JS}).
\end{rmk}
\begin{rmk}
Viewing the punctured torus as the quotient of the Euclidean plane by the action of two linearly independent translations, with a lattice of points removed, we may easily see that the group MCG$(H)$ is isomorphic to GL$_2\mathbb{Z}$.
\end{rmk}
\noindent By Nielsen's theorem $[\alpha_1,\beta_1]$ is conjugate to $[\alpha_2,\beta_2]^{\pm1}$, hence Tr$[g_1,h_1] =$ Tr$[g_2,h_2]$ and $k(x_1,y_1,z_1) = k(x_2, y_2, z_2)$. That is, the triples $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ lie on the same level set of the polynomial $k$. Hence the action of the mapping class group MCG$(H)$ of the punctured torus preserves the level set $k(x,y,z)=t$, that is, it preserves the relative character variety $\mathcal{X}_t(H)$ for any $t$.\\
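\noindent This invariance can be tested on a concrete basis change. The sketch below (an illustration of ours, with an arbitrarily chosen pair of matrices) implements the move $(\alpha,\beta)\mapsto(\alpha,\alpha\beta)$, which by the standard trace relation Tr$(AB)+$Tr$(AB^{-1})=$Tr$A\,$Tr$B$ acts on characters by $(x,y,z)\mapsto(x,z,xz-y)$.

```python
import numpy as np

def k(x, y, z):
    return x**2 + y**2 + z**2 - x*y*z - 2

def char(g, h):
    return np.trace(g), np.trace(h), np.trace(g @ h)

# an arbitrary sample pair in SL(2,R) (both determinants equal 1)
g = np.array([[2.0, 1.0], [1.0, 1.0]])
h = np.array([[1.0, 0.0], [1.5, 1.0]])

x, y, z = char(g, h)          # character of the basis (alpha, beta)
x2, y2, z2 = char(g, g @ h)   # character of the basis (alpha, alpha*beta)
# the move acts by (x, y, z) -> (x, z, x*z - y), and k is unchanged
```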
\noindent When $t>2$ every representation is irreducible and a character corresponds uniquely to a conjugacy class of representations. Goldman proved in \cite{GO03} that there are only two different types of characters in $\mathcal{X}_t(H)$, namely:
\begin{mi}{1em}
\begin{enumerate}
\item \emph{Pants representations}: that is, $(x,y,z)$ is the character of a discrete representation $\rho:\pi_1H\longrightarrow \mathrm{PSL}_2\R$, which may be regarded as the holonomy of a complete hyperbolic structure on a pair of pants. In particular there are no elliptics in the image of $\rho$. In this case, let $\overline{\rho}:\pi_1H\longrightarrow \mathrm{SL}_2\R$ be any lift of $\rho$; then, up to a change of basis of $\pi_1H$, we may suppose that the character $\big(\text{Tr}\overline{\rho}(\alpha),\text{Tr}\overline{\rho}(\beta),\text{Tr}\overline{\rho}(\alpha\beta)\big)$ lies in the octant $]-\infty, -2]^3$.
\item \emph{Representations with elliptics}: that is, $(x,y,z)$ is equivalent to another character with some coordinate in the interval $]-2,2[$. In this case $(x,y,z)$ is the character of a representation $\rho$ which sends a simple closed curve to an elliptic transformation. We denote this subset by $\Omega_t$.
\end{enumerate}
\end{mi}
\noindent The subset of pants representations in $\mathcal{X}_t(H)$ consists of a disjoint union of wandering domains, which appear as soon as $t\ge 18$. Such domains arise from the intersection of $\mathcal{X}_t(H)$ with the Fricke space $\mathcal{F}(P)$ of a pair of pants $P$. We may observe that MCG$(H)\cong \textsf{Out}(\pi_1H)$ preserves $\Omega_t$ and its complement in $\mathcal{X}_t(H)$.\\
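\noindent The threshold $t=18$ may be checked by hand, recalling that (with the standard sign conventions) $k(x,y,z)=x^2+y^2+z^2-xyz-2$: on the octant $]-\infty,-2]^3$, where pants characters live, we have for instance
\[ \frac{\partial k}{\partial x}=2x-yz\le-4-4<0,
\] so $k$ increases as any coordinate decreases below $-2$; hence the minimum of $k$ over the octant is attained at the corner, where $k(-2,-2,-2)=4+4+4+8-2=18$. In particular pants characters only appear on level sets with $t\ge18$.\\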
\noindent For any $t$, there is a smooth symplectic structure on the space $\mathcal{X}_t(H)$, \emph{i.e.} a $2-$form $\omega_t$ which is closed and non-degenerate (see \cite{GO84} for further details). Since the relative character variety $\mathcal{X}_t(H)$ is a closed $2-$dimensional subspace of $\mathcal{X}(H)$, the symplectic $2-$form $\omega_t$ is also an area form. The action of MCG$(H)$ is somewhat bizarre: its behaviour changes as soon as we consider different ranges of the value $t$ (see \cite[Main Theorem]{GO03}). In particular when $t>2$ we have the following theorem.
\begin{thm}[Goldman \cite{GO03}]
For any $t>2$, the action of \emph{MCG}$(H)$ on $\Omega_t$ is ergodic.
\end{thm}
\begin{rmk}
We note that $\Omega_t$ coincides with the whole relative character variety for $2<t<18$.\\
\end{rmk}
\section{Geometrizable representations with $\eu\rho=\pm\big(\chi(S)+1\big)$}\label{s6}
\noindent Throughout this section let $S$ be a closed surface of genus $g\ge2$, and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm\big(\chi(S)+1\big)$. In this section we are going to prove the following theorem.
\begin{thm}\label{mainthm}
Let $S$ be a closed surface of genus $g\ge3$. Then every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm \big(\chi(S)+1\big)$, which sends a non-separating simple curve $\gamma$ on $S$ to a non-hyperbolic element, is the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{thm}
\noindent Finally, combining the previous result \ref{mainthm} together with \cite[Theorem 1.4]{MW}, we get the following stronger result in the genus two case.
\begin{cor}\label{maincor}
Let $S$ be a closed surface of genus $2$. Then any representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm1$ is geometrizable by a hyperbolic cone-structure with one cone point of angle $4\pi$.
\end{cor}
\noindent Before continuing with the geometrization problem, some preliminaries about the character variety of closed surfaces are in order.
\subsection{Brief overview of the character variety of a closed surface}\label{ss61} In subsection \ref{ss41} we stated some generalities about the representation and character varieties for a general surface. After that, we described the character variety of the punctured torus in more detail. Now we describe the character variety of a generic closed surface of genus $g\ge2$. The representation variety describes all homomorphisms $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$, and it turns out to be a closed algebraic set. For a closed surface $S$ of genus $g\ge2$ the representation variety \textsf{Hom}$\big(\pi_1S,\mathrm{PSL}_2\R\big)$ is not connected. If we vary a representation continuously, the Euler number $\eu\rho$ changes continuously but, since it is an integer, it remains constant. For closed surfaces, Goldman classified the components of \textsf{Hom}$\big(\pi_1S,\mathrm{PSL}_2\R\big)$ completely.
\begin{thm}[Goldman \cite{GO88}]\label{T311}
Let $S$ be a closed surface of genus $g$ at least $2$. Then the space \emph{\textsf{Hom}}$(\pi_1S,\mathrm{PSL}_2\R)$ has $4g-3$ connected components which are parametrised by the Euler number.
\end{thm}
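\noindent The count $4g-3$ may be recovered from the Milnor--Wood inequality, which asserts that $|\eu\rho|\le-\chi(S)=2g-2$ for any representation; every integer value in this range is attained, and the components are in bijection with the admissible values of the Euler number, so that
\[ \#\big\{k\in\mathbb{Z} : |k|\le 2g-2\big\}=2(2g-2)+1=4g-3.
\]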
\noindent As remarked in \ref{rmkact}, there is an action of $\mathrm{PSL}_2\R$ on the representation space \textsf{Hom}$\big(\pi_1S,\mathrm{PSL}_2\R\big)$ by conjugation. This action preserves the connected components; hence the quotient space still has $4g-3$ connected components parametrized by the Euler number $\eu\rho$, and it may be identified (away from the singularities) with the character variety $\mathcal{X}(S)$.\\
\noindent Singular points correspond to elementary representations. Since the action of $\mathrm{PSL}_2\R$ preserves the subset of non-elementary representations, we may consider the subset \textsf{Hom}$^{\text{ne}}\big(\pi_1S,\mathrm{PSL}_2\R\big)$ of all non-elementary representations. Without elementary representations, the quotient space $\mathcal{X}^\text{ne}(S)$ turns out to be a smooth manifold of dimension $6g-6$. It supports a smooth symplectic structure, \emph{i.e.} a $2-$form $\omega_S$ which is closed and non-degenerate, outside the singular locus. By taking an appropriate power of $\omega_S$, we obtain a volume form on $\mathcal{X}(S)$, hence a measure $\mu_S$.
\begin{rmk}
Viewing $\mathcal{X}(S)$ as a subset of some $\mathbb{R}^{2n}$, away from the singularities $\omega_S^n$ is some multiple of the standard Euclidean volume form; in particular, $\mu_S$ is absolutely continuous with respect to the Lebesgue measure.
\end{rmk}
\noindent We call \emph{extremal components} those components parametrized by $|\eu\rho|=-\chi(S)$. These components turn out to be two diffeomorphic copies of the Teichm\"uller space. Similarly, we call \emph{almost extremal components} those components parametrized by $|\eu\rho|=-\big(\chi(S)+1\big)$. Any representation $\rho$ in these components is called an \emph{almost extremal representation}.\\
\noindent The curious reader may see \cite{GO88} for further details about the representation variety and \cite{GO84} for more details about the symplectic structure on $\mathcal{X}(S)$.\\
\subsection{Preliminary discussion} We turn back to the geometrization problem. Let $S$ be a closed surface of genus $g\ge2$, and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm\big(\chi(S)+1\big)$. In \cite{MA2}, Mathews proved the following result.
\begin{thm}[Mathews]\label{T1}
Let $S$ be a closed surface of genus $g\ge 2$. Then almost every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm\big(\chi(S)+1\big)$, which sends a non-separating curve $\gamma$ on $S$ to an elliptic, is the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{thm}
\noindent By Goldman's theorem \ref{T311}, the set of representations with $\eu\rho=\pm\big(\chi(S)+1\big)$ is formed by two connected components of the representation space, hence characters of these representations form two connected components of character variety $\mathcal{X}(S)$. Since the singular locus in $\mathcal{X}(S)$ is given by only elementary representations and since they have Euler class zero, it follows that these components are smooth and the non-degenerate $2-$form $\omega_S$ is well-defined everywhere and defines a measure $\mu_S$.\\
\noindent Let us denote by $E$ the subset of all representations with $\eu\rho=\pm\big(\chi(S)+1\big)$ that send a simple non-separating curve to an elliptic. Then the claim of Theorem \ref{T1} may be restated in the following way:
\begin{quote}
\emph{let $S$ be a closed surface of genus $g\ge 2$. Then almost every representation $\rho$ in E arises as the holonomy of a hyperbolic cone-structure with a single cone point of angle $4\pi$.}
\end{quote}
\noindent Now, two simple questions may naturally arise.\\
\SetLabelAlign{center}{\null\hfill\textbf{#1}\hfill\null}
\begin{enumerate}[leftmargin=1.75em, labelwidth=1.3em, align=center, itemsep=\parskip]
\item[\bf 1.]\label{O1} \emph{How big is $E$? That is, what is the measure of the set $E$?} Let $k$ be an integer such that $|k|\le -\chi(S)$, denote by $\mathcal{M}^k$ the $k^{\text{th}}$ connected component of the character variety $\mathcal{X}(S)$ and by $\mathcal{NH}^k$ the subset of $\mathcal{M}^k$ of all representations that send a simple closed curve (which may be separating or not) to a \emph{non-hyperbolic} element of $\mathrm{PSL}_2\R$. Finally, we denote, as usual, the genus of $S$ by $g$. In \cite{MW}, the authors showed that $E$ has full measure in $\mathcal{NH}^k$ if $(g,k)\neq(2,0)$. In the genus $2$ case, the subset $\mathcal{NH}^{\pm1}$ coincides with $\mathcal{M}^{\pm1}$ (see \cite[Theorem 1.4]{MW}); however, there is no guarantee that a non-Fuchsian representation sends a simple non-separating curve to an elliptic. So far it is unknown whether $\mathcal{NH}^k$ coincides with $\mathcal{M}^{k}$ for surfaces of genus greater than $2$ (regardless of the value of $k$).
\item[\bf 2.] \emph{Where does the ``almost every'' condition come from?} In the sequel we will consider separately those representations $\rho$ satisfying the following condition: there is a handle $H\subset S$ and a basis $(\alpha,\beta)$ of $\pi_1H$ such that the induced representation $\rho_H:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ is virtually abelian. In this case, we will say that $\rho$ contains a virtually abelian pair (see also Definition \ref{rvap} below). These representations turn out to be pathological in the sense explained in Subsection \ref{ss63} below. We may note that any such representation sends a simple non-separating curve to an elliptic, hence any such representation belongs to $E$. Let $B$ be the subset of all representations that contain a handle $H$ such that the induced representation $\rho_H$ is virtually abelian. In \cite[Proposition 6.3]{MA2}, Mathews proved that $\mu_S(B)=0$, \emph{i.e.} $B$ is a subset of measure zero in $E$. As we shall see below, Mathews' strategy does not apply to representations in $B$; that is, his theorem holds for \emph{``almost every''} representation in $E$. However, these representations also arise as the holonomy of a hyperbolic cone-structure with a single cone point of angle $4\pi$, as we show in \ref{sss651}.
\end{enumerate}
\subsection{Representations with virtually abelian pairs are problematic}\label{ss63} Let us start with the following definition.
\begin{defn}\label{rvap}
We say that a representation contains a \emph{virtually abelian pair} if there are two simple non-separating closed loops with intersection number one whose images under $\rho$ are two elliptic elements of order $2$ with different fixed points. In this case, the commutator of the two images is a hyperbolic transformation along the axis passing through their fixed points.
\end{defn}
\noindent In order to understand the issues with this kind of representations, we need to explain the proof of Theorem \ref{T1}. Let $q$ be any point on $S$ and let $\rho:\pi_1(S,q)\longrightarrow \mathrm{PSL}_2\R$ be any representation with $\eu\rho=-1$ and suppose it sends a non-separating curve $\alpha$ to an elliptic. Starting from $\alpha$ we may find a separating curve $\gamma$ that splits $S$ into two pieces, namely we may cut off the handle $H$ containing $\alpha$ from $S$.\\
\noindent More precisely: let $\beta$ be a closed non-separating curve such that $i(\alpha,\beta)=1$ and denote by $\gamma$ their commutator. Since $\alpha$ is elliptic, by Lemma \ref{L01210} the commutator $\gamma$ has hyperbolic holonomy; in particular Tr$\rho(\gamma)>2$ by \ref{L0125}. Split $S$ along $\gamma$, denote by $H$ the handle containing $\alpha$ and by $\Sigma$ the remaining part of $S$. Consider their fundamental groups. We need to take basepoints $q_H\in H$, $q_\Sigma\in\Sigma$ and $q\in S$. There is no harm in taking $q_H$ and $q_\Sigma$ on the boundary of $H$ and $\Sigma$ respectively, whereas the basepoint $q$ may be taken anywhere on $S$. Consider the inclusion $\jmath_H:\pi_1(H,q_H)\hookrightarrow \pi_1(S,q_H)$. Let $\delta$ be a path joining $q$ with $q_H$; then we have the following isomorphism
\[
\begin{aligned}
\xi_H:\pi_1(S,q_H)&\longrightarrow \pi_1(S,q)\\
\sigma &\longmapsto \delta\sigma\delta^{-1}
\end{aligned}
\]
\noindent We define $\rho_H:\pi_1(H,q_H)\longrightarrow \mathrm{PSL}_2\R$ as the composition of the maps defined above
\[ \rho_H:\pi_1(H,q_H)\longrightarrow \pi_1(S,q_H)\longrightarrow \pi_1(S,q) \longrightarrow \mathrm{PSL}_2\R.
\]
\noindent Considering the other inclusion $\jmath_\Sigma:\pi_1(\Sigma,q_\Sigma)\hookrightarrow \pi_1(S,q_\Sigma)$ and applying the same procedure, we may define a representation $\rho_\Sigma:\pi_1(\Sigma,q_\Sigma)\longrightarrow \mathrm{PSL}_2\R$ in the same way. Since $\rho(\gamma)$ is hyperbolic, the relative Euler numbers $\eur{\rho_H}{\mathfrak{s}}$ and $\eur{\rho_\Sigma}{\mathfrak{s}}$ are well-defined with respect to the special trivialization defined along $\gamma$. We may immediately note that $\eur{\rho_H}{\mathfrak{s}}=0$ with respect to the special trivialization along the boundary $\gamma$ by Proposition \ref{PTEC}. Hence we have localized the deficiency of the Euler class of $\rho$, and, by additivity, the relative Euler class of the representation induced on the other piece is extremal.\\
\noindent Since $\eur{\rho_\Sigma}{\mathfrak{s}}=\chi(\Sigma)$, the representation $\rho_\Sigma$ is the holonomy of a complete hyperbolic structure with totally geodesic boundary by Theorem \ref{ecswb}.\\
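\noindent The arithmetic behind this localization is the additivity of the relative Euler class. Taking, say, $\eu\rho=\chi(S)+1$ (the other sign is analogous) and recalling that $\chi(H)=-1$, we get
\[ \eur{\rho_\Sigma}{\mathfrak{s}}=\eu\rho-\eur{\rho_H}{\mathfrak{s}}=\big(\chi(S)+1\big)-0=\chi(S)-\chi(H)=\chi(\Sigma),
\] which is exactly the extremal value for the piece $\Sigma$.\\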
\noindent Suppose $\rho$ does not contain virtually abelian pairs; in particular the representation $\rho_H$ is not virtually abelian and, by Theorem \ref{vat}, it is the holonomy of a hyperbolic cone-structure $\sigma_H$ on $H$ with geodesic boundary, except for at most one corner point of angle $\theta$, and no cone points inside $H$. Since Tr$\rho(\gamma)>2$ the angle of the corner point is greater than $2\pi$ and does not exceed $3\pi$ (see \cite[Proposition 5.8]{MA1}).\\
\noindent The basic idea of Mathews' proof is to find a hyperbolic cone-structure on $\Sigma$ with geodesic boundary, except for at most one corner point of angle $\theta_1\in]\pi,2\pi[$, with no cone points inside $\Sigma$ and holonomy $\rho_\Sigma$, that fits together with a hyperbolic cone-structure on $H$ with one corner point of angle $\theta_2=4\pi-\theta_1\in ]2\pi,3\pi[$ and holonomy $\rho_H$. If these structures exist, we may identify the corner points and then glue the structures along their boundaries. Topologically the resulting surface turns out to be the original surface $S$; geometrically we get $S$ endowed with a hyperbolic cone-structure with one cone point of angle $4\pi$ and holonomy $\rho$.\\
\noindent In order to find a hyperbolic cone-structure $\sigma_\Sigma$ on $\Sigma$ with a corner point of angle $\theta_1$, we wish to truncate a flare inside the convex core of $\Sigma$. Unfortunately, such a truncation cannot be made too far inside the convex core, but we may cut inside the collar of the geodesic boundary. We recall that the collar width $w(t)$ depends only on the trace $t$ of $\rho(\gamma)$ and it may be computed by the following formula
\[ \sinh w(t)=\frac{1}{\sinh\Big(\frac{d(t)}{2} \Big)} \quad \text{ where } d(t)=2\cosh^{-1}\Big(\frac{t}{2}\Big) \text{ is the translation distance}.
\] Let $p$ be a point inside the collar, consider the geodesic representative of $\gamma$ based at $p$ and cut along it to obtain a hyperbolic cone-structure on $\Sigma$ with one corner point of angle $\theta_1$. The developed image $\widehat{p}$ of $p$ lies inside the $w(t)-$neighbourhood of the axis of $\rho(\gamma)$. Using classical notions of hyperbolic geometry we may see that the magnitude of $\theta_1$ depends only on the distance of the point $\widehat{p}$ from the axis of $\rho(\gamma)$; that is, the distance of $p$ from the geodesic boundary of the (unique) complete hyperbolic structure on $\Sigma$ with holonomy $\rho_\Sigma$. Hence the possible values of $\theta_1$ are bounded from above by some value $\theta_{\text{max}}$ which depends on the width of the collar and on the value $t$ of the trace of $\rho(\gamma)$. \\
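\noindent As a concrete illustration of the collar formula: for $t=3$ we have $\sinh\big(d(3)/2\big)=\sqrt{(3/2)^2-1}=\sqrt{5}/2$, hence
\[ \sinh w(3)=\frac{2}{\sqrt{5}}, \qquad w(3)=\log\sqrt{5}\approx0.80.
\] Note also that $w(t)\longrightarrow\infty$ as $t\to2^+$ and $w(t)\longrightarrow0$ as $t\to\infty$: the shorter the boundary geodesic, the wider its collar, hence the farther from the boundary we are allowed to cut.\\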
\noindent What remains to do is to find a hyperbolic cone-structure on $H$ with one corner point of angle $4\pi-\theta_1$ that fits together with $\Sigma$ endowed with $\sigma_\Sigma$. Although Theorem \ref{vat} ensures the existence of a hyperbolic cone-structure on $H$ with holonomy $\rho_H$, we do not know a priori whether $\rho_H$ may be the holonomy of a hyperbolic cone-structure such that the angle of the corner point lies in the range $[4\pi-\theta_{\text{max}}, 3\pi[$.
\begin{rmk}
The condition that $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ does not contain a virtually abelian pair is necessary. Indeed, when $\rho$ contains a virtually abelian pair the representation $\rho_H$ turns out to be virtually abelian, and such a representation does not arise as the holonomy of a hyperbolic cone-structure on $H$ by Theorem \ref{vat}.
\end{rmk}
\noindent Following Mathews we give the following definition.
\begin{defn}
Let $H$ be a punctured torus, and let $(\alpha,\beta)$ be a basis for $\pi_1(H,q)$, where $q$ is a point on the boundary of $H$. Let $\rho : \pi_1(H,q)\longrightarrow \mathrm{PSL}_2\R$ be a representation with Tr$[g,h]>2$, where $g=\rho(\alpha)$ and $h=\rho(\beta)$. For any point $p$ in $\mathbb{H}^2$, we define $\mathcal{P}(g,h;p)$ to be the (possibly degenerate) hyperbolic pentagon given by the following closed polygonal path
\[ p\to [g^{-1},h^{-1}](p) \to h(p) \to gh(p) \to h^{-1}gh(p)\to p
\]
\end{defn}
\noindent With the same notation we may state the following lemma (whose proof may be found in \cite[Lemma 3.4]{MA1}).
\begin{lem}
Let $\rho_H:\pi_1(H,q)\longrightarrow \mathrm{PSL}_2\R$ be a representation. The representation $\rho_H$ is the holonomy of a branched hyperbolic structure on $H$ with no interior cone points and at most one corner point if and only if there exist a free basis $(\alpha,\beta)$ of $\pi_1(H,q)$ and a point $p\in\mathbb{H}^2$ such that $\mathcal{P}(g,h;p)$ is a non-degenerate pentagon bounding an immersed open disc in $\mathbb{H}^2$.
\end{lem}
\noindent Consider the representation $\rho_H$ and the basis $(\alpha,\beta)$ of $\pi_1(H,q)$ given by the construction above. Finding a hyperbolic cone-structure on $H$ with holonomy $\rho_H$ that fits with the hyperbolic cone-structure on $\Sigma$ means finding a basis $(\alpha',\beta')$ (possibly different from the given one!) and a point $p$ inside the $w(t)-$neighbourhood of the axis of $\rho_H\big([\alpha',\beta']\big)$ such that the pentagon $\mathcal{P}(g',h';p)$ is non-degenerate and bounds an immersed open disc in $\mathbb{H}^2$, where $g'=\rho_H(\alpha')$ and $h'=\rho_H(\beta')$.
\begin{rmk}
We recall that a change of basis of $\pi_1(H,q)$ changes the character of $\rho_H$, but it has no effect on the geometry of $H$. This reasoning may be extended to the entire surface $S$. Changing the basis of the handle $H$, we change the presentation of the fundamental group $\pi_1(S,q)$ but, as in the case of the punctured torus, the change of basis does not affect the geometry of $S$. Hence it does no harm to change the basis of $\pi_1(H,q)$ in order to find a \emph{good} hyperbolic cone-structure with holonomy $\rho_H$.
\end{rmk}
\noindent With this spirit, we give the following definition.
\begin{defn}\label{gr}
Consider a punctured torus $H$, with a basis $(\alpha,\beta)$ for $\pi_1(H,q)$, where $q$ is a point on the boundary of $H$, and a representation $\rho : \pi_1(H,q)\longrightarrow \mathrm{PSL}_2\R$ with Tr$[g,h]>2$, where $g=\rho(\alpha)$ and $h=\rho(\beta)$. Define $\rho$ to be $\varepsilon-$good for a specified orientation of $H$ if there exists a basis $(\alpha',\beta')$, of the same orientation as $(\alpha,\beta)$, and a point $p$ at distance less than $\varepsilon$ from \textsf{Axis}$\rho([\alpha',\beta'])$, such that the pentagon $\mathcal{P}(g',h';p)$ is non-degenerate, bounds an embedded disc, and is of the specified orientation.
\end{defn}
\noindent We will say that a character is $\varepsilon-$good for a specified orientation of $H$ if it is the character of an $\varepsilon-$good representation. Note that since Tr$[g,h]>2$ the representation $\rho$ is irreducible, so any character corresponds to a unique conjugacy class of representations. Thus
\[ \text{a character is }\varepsilon-\text{good } \iff \text{ all corresponding representations are }\varepsilon-\text{good}.
\] We define $\varepsilon-$\emph{bad} representations (characters) to be those representations (characters) which are not $\varepsilon-$good. We define \emph{bad} representations to be those representations which are $\varepsilon-$bad for every $\varepsilon$. \\
\noindent By Theorem \ref{vat}, any non-virtually abelian representation is $\varepsilon-$good for some $\varepsilon$, whereas virtually abelian representations are $\varepsilon-$bad for every $\varepsilon$, as we see below in Lemma \ref{L439}. In particular they are $w(t)-$bad for any $t>2$. On the other hand a non-virtually abelian representation may be $w(t)-$bad even if it is good for other values of $\varepsilon$. Indeed the plain ``goodness'' condition is weaker than the $w(t)-$goodness condition, because in the latter case the point $p$ must lie within a specified distance of \textsf{Axis}$\rho([\alpha',\beta'])$. In the next section we will show that for any $t>2$ the \emph{only} $w(t)-$bad representations are the virtually abelian ones. So far we have the following result.
\begin{lem}\label{L439}
Let $\rho:\pi_1(H,q)\longrightarrow \mathrm{PSL}_2\R$ be a virtually abelian representation. Then $\rho$ is $\varepsilon-$bad for any $\varepsilon$.
\end{lem}
\noindent We recall for convenience the following characterization of virtually abelian representations. Let $G\subset \mathrm{PSL}_2\R$ be a subgroup generated by two elements $g,h$; then $G$ is virtually abelian (but not abelian) if and only if two of $\{g,h,gh\}$ are half-turns about points $q_1\neq q_2\in \mathbb{H}^2$ and the third is a non-trivial translation along the geodesic through $q_1$ and $q_2$. In particular, their commutator is also a translation along the same axis.
\begin{proof}
We may suppose without loss of generality that $g,h$ are elliptics of order two and $gh$ is hyperbolic. If we take $p$ in \textsf{Axis} $gh$, then all vertices of $\mathcal{P}(g,h;p)$ lie on the axis and the pentagon does not bound a disc. Thus we may suppose $p$ lies outside the axis; in particular the points $p$, $gh(p)$ and $[g,h](p)$ lie on the same side and at the same distance from the axis, whereas the points $hgh^{-1}(p)$ and $h(p)$ lie on the other side. Since the segment $p\to[g,h](p)$ lies between \textsf{Axis} $gh$ and the point $gh(p)$, the pentagon $\mathcal{P}(g,h;p)$ does not bound a disc.
\end{proof}
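\noindent An explicit matrix model (the normalization here is just a convenient choice) may help fix ideas: for $\lambda>1$, take the half-turns about the points $i$ and $\lambda^2i$ of the imaginary axis,
\[ g=\begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}, \qquad h=\begin{pmatrix} 0 & -\lambda^{2}\\ \lambda^{-2} & 0 \end{pmatrix}, \qquad gh=\begin{pmatrix} -\lambda^{-2} & 0\\ 0 & -\lambda^{2} \end{pmatrix},
\] so that $gh$ is a hyperbolic translation along the imaginary axis and, since $g^2=h^2=\mathrm{id}$ in $\mathrm{PSL}_2\R$, the commutator $[g,h]=(gh)^2=\mathrm{diag}(\lambda^{-4},\lambda^{4})$ is a translation along the same axis with Tr$[g,h]=\lambda^{4}+\lambda^{-4}>2$. The corresponding character is $(x,y,z)=(0,0,\lambda^{2}+\lambda^{-2})$, up to the signs of the lifts.\\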
\subsection{Bad representations are virtually abelian}\label{ss64} It is natural to ask if there are $w(t)-$bad representations which are not virtually abelian. Let $t>2$ be a fixed real number. We consider the following subset of the relative character variety $\mathcal{X}_t(H)$:
\begin{mi}{1em}
\begin{itemize}
\item $\Omega_t$ the subset of characters of representations taking some simple closed curve to an elliptic;
\item $B_t$ the set of $w(t)-$bad characters in $\Omega_t$ (where $w(t)$ is the quantity defined above). It turns out to be closed and nowhere dense in $\Omega_t$; and
\item $V_t$ the closed subset of characters of virtually abelian representations in $\Omega_t$.
\end{itemize}
\end{mi}
\noindent By Lemma \ref{L439} the inclusion $V_t\subset B_t$ holds. We recall for convenience that the symplectic $2-$form $\omega$ on $\mathcal{X}(H)$ induces a symplectic form $\omega_t$ on any level set $\mathcal{X}_t(H)$. In particular, since $\mathcal{X}_t(H)$ is $2-$dimensional, $\omega_t$ turns out to be an area form, hence defines a measure $\mu_t$, which is invariant with respect to the action of the mapping class group MCG$(H)$. Finally, since we are considering representations $\rho$ that send a simple non-separating curve to an elliptic element, any representation $\rho_H$ defined as above is a representation with elliptics; hence it belongs to the open set $\Omega_t$.
\begin{prop}\emph{\cite[Proposition 6.2]{MA2}}\label{aerg}
For all $t>2$, $\mu_t(B_t)=0$ where $\mu_t$ is the measure induced by $\mu_H$ on the level set $\mathcal{X}_t(H)$. That is:
$\mu_t-$almost every character in $\Omega_t$ is $w(t)-$good.
\end{prop}
\noindent The proof of this proposition relies on the following idea. It is always possible to construct explicitly a representation $\rho^\star:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ which is $w(t)/2-$good for any $t>2$; let $(x^\star,y^\star,z^\star)$ be its character. Any small perturbation of this character in $\mathcal{X}_t(H)$ is still the character of a $w(t)/2-$good representation. The set $V$ of such perturbations turns out to be an open subset of $\mathcal{X}_t(H)$ of positive measure and, since the action of $\Gamma=$MCG$(H)$ is ergodic on $\Omega_t$, the claim follows because invariant sets have null, or conull, measure. For further details, we refer the reader to \cite{MA4, MA2}. We are going to show the following result.
\begin{prop}\label{bev}
For all $t>2$, $B_t=V_t$. Equivalently: if $\rho$ is a $w(t)-$bad representation, then it is virtually abelian.
\end{prop}
\noindent \textbf{Notation:} for the sake of readability and simplicity we will slightly abuse notation by identifying a character $(x,y,z)$ with the conjugacy class of representations having it as character. Indeed, under our assumptions, any representation we are considering is irreducible, hence any character corresponds to a unique conjugacy class of representations.\\
\noindent Let $\rho_0\in \mathcal{X}_t(H)$ be a non-virtually abelian representation; by Theorem \ref{vat} it is the holonomy of a hyperbolic cone-structure on $H$ with geodesic boundary, except for at most one corner point, and no interior cone points. In particular we can find a basis $(\alpha,\beta)$ for $\pi_1(H,q)$ and a point $p\in\mathbb{H}^2$ such that the pentagon $\mathcal{P}(g,h;p)$ bounds a disc in $\mathbb{H}^2$, where $g=\rho_0(\alpha)$ and $h=\rho_0(\beta)$. Clearly $\rho_0$ is a $d-$good representation, where $d$ is the distance of $p$ from the axis of $\rho_0([\alpha,\beta])$. If $d<w(t)$ there is nothing to prove, hence suppose that $\rho_0$ is not $w(t)-$good, \emph{i.e.} $w(t)-$bad, in particular $w(t)/2-$bad. Hence for any basis $(\alpha,\beta)$ and any point $p$ at distance less than $w(t)/2$ from the axis of $\rho_0([\alpha, \beta])$, the pentagon $\mathcal{P}(g,h;p)$ does not bound a disc in $\mathbb{H}^2$. Fix a particular basis $(\alpha',\beta')$ and a particular point $p'$ at distance less than $w(t)/2$ from the axis of $\rho_0([\alpha', \beta'])$, and define the quantity $\xi(\alpha',\beta';p')$ as the maximal radius of a Euclidean ball centered at $\rho_0$ such that any representation $\rho$ inside this ball satisfies the following condition
\[
\mathcal{P}(\rho(\alpha'),\rho(\beta');p') \text{ bounds a disc if and only if the pentagon } \mathcal{P}(g',h';p') \text{ does}
\]
\noindent The quantity $\xi(\alpha',\beta';p')$ depends on the choices of the basis and the point; and we define the following
\[ \xi=\inf_{\substack{(\alpha,\beta) \text{ basis of } \pi_1(H,q); \\ p \in w(t)/2-\text{neighbourhood of \textsf{Axis}}\rho_0([\alpha,\beta])}} \xi(\alpha,\beta;p)
\]
\noindent From now on, throughout this subsection, we will say that a representation $\rho$ is \emph{$\varepsilon-$good with respect to a fixed basis} $(\alpha,\beta)$ if there is a point $p$ at distance less than $\varepsilon$ from the axis of $\rho\big([\alpha,\beta]\big)$ such that the pentagon $\mathcal{P}(\rho(\alpha),\rho(\beta);p)$ bounds a disc. Being good with respect to a fixed basis is an open condition. Indeed, for any sufficiently small perturbation $p'$ of the point $p$ the shape of $\mathcal{P}(\rho(\alpha),\rho(\beta);p)$ does not change, \emph{i.e.} $\mathcal{P}(\rho(\alpha),\rho(\beta);p')$ still bounds a disc. Conversely, being \emph{bad} with respect to a fixed basis is a closed condition, \emph{i.e.} the set of points $p$ such that $\mathcal{P}(\rho(\alpha),\rho(\beta);p)$ does not bound a disc turns out to be a closed subset $U\subseteq\mathbb{H}^2$. For any point $p\in \partial U$, the pentagon $\mathcal{P}(\rho(\alpha),\rho(\beta);p)$ is self-intersecting, but whereas some perturbations of $p$ produce another self-intersecting pentagon, other perturbations produce non-degenerate pentagons.
\begin{minipage}{\linewidth}
\begin{minipage}{0.5\linewidth}
\begin{figure}[H]
\includegraphics[height=70]{but}
\caption{A self-intersecting pentagon when $p\in \partial U$. In this situation, small
perturbations of the point $p$ in judicious directions produce a non-degenerate pentagon,
whereas other perturbations produce self-intersecting pentagons like the one in the picture on the
right. The same holds if we perturb the character and keep the point $p$ fixed.}
\end{figure}
\end{minipage}
\hspace{0.01\linewidth}
\begin{minipage}{0.5\linewidth}
\begin{figure}[H]
\includegraphics[height=75]{but2}
\caption{A self-intersecting pentagon when $p\notin \partial U$. In this situation, every
sufficiently small perturbation of the point $p$, in any direction, produces a degenerate
pentagon. The same holds if we perturb the character and keep the point $p$ fixed.}
\end{figure}
\end{minipage}
\end{minipage}
\text{}\\
\begin{rmk}
Since $\rho_0$ is $w(t)/2-$bad, the pentagon $\mathcal{P}(\rho_0(\alpha),\rho_0(\beta);p)$ is self-intersecting for any basis $(\alpha,\beta)$ and any point $p$ in the $w(t)/2-$neighbourhood of the axis of $\rho_0\big([\alpha,\beta]\big)$. Following the previous remark, any small perturbation $p'$ of $p$ maintains the shape of $\mathcal{P}(\rho_0(\alpha),\rho_0(\beta);p)$, \emph{i.e.} $\mathcal{P}(\rho_0(\alpha),\rho_0(\beta);p')$ does not bound a disc for any $p'$ sufficiently close to $p$.
\end{rmk}
\begin{rmk}
Let $p\notin \partial U$, and consider the pentagon $\mathcal{P}(\rho_0(\alpha),\rho_0(\beta);p)$. Since $\rho_0$ is not virtually abelian, any small perturbation of the character of $\rho_0$ preserves the shape of the pentagon.
\end{rmk}
\noindent We have the following lemma.
\begin{lem}\label{vachac}
$\xi=0$ if and only if $\rho_0$ is virtually abelian.
\end{lem}
\begin{proof}[Proof of Lemma \ref{vachac}]
If $\rho_0$ is virtually abelian, then almost every small perturbation of $\rho_0$ is $w(t)-$good; hence $\xi=0$. Conversely, suppose $\xi=0$; fix a basis $(\alpha, \beta)$ of $\pi_1(H,q)$ and define
\[
\xi(\alpha,\beta)=\inf_{\substack{p \in w(t)/2-\text{neighbourhood of \textsf{Axis}}\rho_0([\alpha,\beta])}} \xi(\alpha,\beta;p).
\] The function $\xi(\alpha,\beta;p)$ depends continuously only on the distance of the point $p$ from $\textsf{Axis}\,\rho_0([\alpha,\beta])$, and $\xi(\alpha,\beta)$ turns out to be the minimum value. Suppose there is a particular basis $(\alpha,\beta)$ such that $\xi(\alpha,\beta)=0$; then we may see that $\rho_0$ is virtually abelian. Indeed, $\rho_0$ is $w(t)/2-$bad with respect to $(\alpha,\beta)$ (because it is $w(t)-$bad), and any small perturbation of the character changes the shape of the pentagon $\mathcal{P}(\rho_0(\alpha),\rho_0(\beta);p)$, where $p$ is a point at which the minimum is attained.\\
\noindent Suppose now that $\xi(\alpha,\beta)>0$ for every basis $(\alpha,\beta)$ of $\pi_1(H,q)$. For any basis, consider the open Euclidean ball $B_{\xi(\alpha,\beta)}(\rho_0)$. Then any representation inside $B_{\xi(\alpha,\beta)}(\rho_0)$ is $w(t)/2-$bad with respect to the basis $(\alpha,\beta)$, because $\rho_0$ is. By ergodicity, we claim that almost every representation is $w(t)/2-$bad, because $\mu_t\big( B_{\xi(\alpha,\beta)}(\rho_0)\big)>0$ for any basis. Indeed, consider the following subspace
\[ \mathcal{B}=\bigcap_{(\alpha,\beta) \text{ basis of } \pi_1(H,q)} \text{MCG}(H)\cdot B_{\xi(\alpha,\beta)}(\rho_0).
\] This is a subspace of $\Omega_t$ of full measure, because it is a countable intersection of subsets of full measure. We may easily note that
\[ \rho \in \mathcal{B} \iff \rho \text{ is } w(t)/2-\text{bad for any basis }(\alpha',\beta')\in\text{ MCG}(H)\cdot \big\{ \text{conjugacy class of basis of } \pi_1(H,q)\big\},
\] but since the action of MCG$(H)$ is transitive on the set $\{ \text{conjugacy class of basis of } \pi_1(H,q)\}$, then $\rho$ is $w(t)/2-$bad with respect to any basis, hence $\rho\in\mathcal{B}$ if and only if it is $w(t)/2-$bad. Since $\mathcal{B}$ has full measure, we get a contradiction with \ref{aerg}. Hence there necessarily exists a basis $(\alpha,\beta)$ such that $\xi(\alpha,\beta)=0$, and then $\rho_0$ is virtually abelian.
\end{proof}
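\begin{rmk}
For completeness, the full-measure claim for $\mathcal{B}$ used in the proof above can be spelled out as follows: there are countably many bases, and each orbit $\text{MCG}(H)\cdot B_{\xi(\alpha,\beta)}(\rho_0)$ is an invariant set of positive measure, so ergodicity gives
\[
\mu_t\big(\Omega_t\setminus\mathcal{B}\big)\;\le\;\sum_{(\alpha,\beta)}\mu_t\Big(\Omega_t\setminus \text{MCG}(H)\cdot B_{\xi(\alpha,\beta)}(\rho_0)\Big)\;=\;0.
\]
\end{rmk}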
\begin{proof}[Proof of proposition \ref{bev}]
Since $\rho_0$ is not virtually abelian by assumption, the quantity $\xi$ must be strictly positive by \ref{vachac}. In particular, any representation inside the Euclidean ball $B_\xi(\rho_0)$ is $w(t)-$bad. By ergodicity, almost every character in $\Omega_t$ is $w(t)-$bad; hence a contradiction. Thus $\rho_0$ is $w(t)-$good and $B_t=V_t$, and this concludes the proof of \ref{bev}.
\end{proof}
\subsection{Proof of Theorem \ref{mainthm}}\label{ss65} We divide the proof of \ref{mainthm} into two parts. In the first one, we consider representations $\rho$ whose image contains a virtually abelian pair and we show that any such representation arises as the holonomy of a hyperbolic cone-structure with a single cone point of angle $4\pi$.
\subsubsection{Representations with virtually abelian pairs}\label{sss651} In this paragraph, we consider representations with virtually abelian pairs. We have seen in the previous section \ref{ss63} that this kind of representation is problematic in the sense that Mathews' proof does not apply to it. However, using different geometrical techniques, we are going to prove the following proposition.
\begin{prop}\label{GVAR}
Let $S$ be a surface of genus $g\ge2$ and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm\big(\chi(S)+1\big)$ whose image contains a virtually abelian pair. Then $\rho$ is geometrizable by a hyperbolic cone-structure with a cone point of angle $4\pi$.
\end{prop}
\begin{proof}[Proof of proposition \ref{GVAR}]
Up to changing the orientation of $S$, we may suppose that $\eu\rho=\chi(S)+1$. We divide the proof into three lemmata. In the first one we show the existence of a simple closed separating curve $\gamma$ dividing $S$ into a punctured torus $H$ and a surface with boundary $\Sigma$ such that the induced representation $\rho_1:\pi_1H\longrightarrow \mathrm{PSL}_2\R$ is virtually abelian. In order to do this, we fix a base point $q\in S$ for the fundamental group $\pi_1(S,q)$.
\begin{lem}
There exists a simple separating curve with hyperbolic holonomy such that the induced representation on $H$ is virtually abelian.
\end{lem}
\begin{proof}
Since $\rho$ contains virtually abelian pairs, there are two simple non-separating curves $\alpha_1$ and $\beta_1$ based at $q$ such that $g_1=\rho(\alpha_1)$ and $h_1=\rho(\beta_1)$ are elliptic of order $2$ and their intersection number is one. Their commutator $\gamma$ is a simple separating curve with hyperbolic holonomy and divides $S$ into two pieces, one of which is a punctured torus. We define $H$ as the punctured torus containing $\alpha_1$ and $\beta_1$, and define $\rho_1$ to be the representation of $\pi_1(H,q_1)$ induced via $\rho$, where $q_1$ is the point on the boundary of $H$ that coincides with $q$ on the overall surface. By construction, $\rho_1$ is virtually abelian.
\end{proof}
\noindent Let $\Sigma$ be the second piece and define $\rho_2$ to be the induced representation of $\pi_1(\Sigma, q_2)$. Let $\alpha_2,\beta_2,\dots,\alpha_g,\beta_g$ be a basis for $\pi_1(\Sigma, q_2)$ so that $[\alpha_2,\beta_2]\cdots[\alpha_g,\beta_g]$ is homotopic to $\gamma$ but traversed in the opposite direction with respect to $[\alpha_1,\beta_1]$.
\begin{lem}
The representation $\rho_2:\pi_1(\Sigma,q_2)\longrightarrow \mathrm{PSL}_2\R$ is Fuchsian.
\end{lem}
\begin{proof}
Consider the bases for $\pi_1(H,q_1)$ and $\pi_1(\Sigma,q_2)$ defined above. Since $[\alpha_1,\beta_1]=\gamma$ and $[\alpha_2,\beta_2]\cdots[\alpha_g,\beta_g]=\gamma^{-1}$, the fundamental group of $S$ has the following standard presentation
\[ \pi_1S=\big\langle \alpha_1,\beta_1,\dots,\alpha_g,\beta_g \text{ }|\text{ } [\alpha_1,\beta_1]\cdots[\alpha_g,\beta_g]=\text{id}\big\rangle
\]
\noindent In terms of hyperbolic transformations, the relation above becomes $\rho([\alpha_1,\beta_1]\cdots[\alpha_g,\beta_g])=\text{id}$. It lifts to the relation $\rho([\alpha_1,\beta_1])\cdots\rho([\alpha_g,\beta_g])=-\text{id}$ in $\mathrm{SL}_2\R$ because $\eu\rho=\chi(S)+1$. Since $\rho(\alpha_1)=\rho_1(\alpha_1)$ is elliptic, the trace of $\rho([\alpha_1,\beta_1])$ is greater than $2$ by \ref{L0125} and \ref{L01210}; hence $\eur{\rho_1}{\mathfrak{s}}=0$ by \ref{PTEC}. By additivity, the relative Euler class $\eur{\rho_2}{\mathfrak{s}}=\chi(\Sigma)$, that is, $\rho_2$ is a Fuchsian representation by \ref{ecswb} and it is the holonomy of a complete hyperbolic structure with geodesic boundary on $\Sigma$.
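\noindent Explicitly, using the additivity of the relative Euler class and $\chi(H)=-1$ for the punctured torus, this reads
\[
\eur{\rho_1}{\mathfrak{s}}+\eur{\rho_2}{\mathfrak{s}}=\eu\rho=\chi(S)+1,
\qquad\text{hence}\qquad
\eur{\rho_2}{\mathfrak{s}}=\chi(S)+1=\chi(S)-\chi(H)=\chi(\Sigma).
\]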
\end{proof}
\begin{figure}[!h]
\centering
\includegraphics[height=250pt]{pic2}
\caption[]{Fundamental domain of a representation with a virtually abelian pair on a surface of genus $2$}\label{pic2}
\end{figure}
\begin{lem}
There exists a fundamental domain for $\rho$. It turns out to be a pentagon, namely a degenerate octagon with four sides aligned.
\end{lem}
\begin{proof}
\noindent We now construct a fundamental domain for $\rho$. Let $p$ be a point on $\textsf{Axis}\,\rho_2(\gamma)$. Since $\rho_2$ is Fuchsian, we may start from $p$ to define a fundamental domain for $\rho_2$ such that the sum of all inner angles is exactly $\pi$; it turns out to be a $(4g-3)$-gon in $\mathbb{H}^2$. Observe that $\rho_2(\gamma)p\in\textsf{Axis}\,\rho_2(\gamma)$, so the entire segment joining $p$ and $\rho_2(\gamma)p$ lies on the axis of $\rho_2(\gamma)$. Now we use the representation $\rho_1$ to divide this segment into four smaller pieces so that the sum of all interior angles is exactly $4\pi$.
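\noindent One way to read this construction is through a simple angle count: after gluing, the identified vertices of the degenerate polygon carry total angle
\[
\underbrace{\pi}_{\text{inner angles of the }(4g-3)\text{-gon}}\;+\;\underbrace{3\pi}_{\text{angles at the four subdivision points}}\;=\;4\pi,
\]
which is exactly the cone angle required at the single cone point.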
\end{proof}
\noindent We finally glue the corresponding sides using $\rho$ to obtain a closed surface of genus $g$ endowed with a hyperbolic cone-structure with exactly one cone point of angle $4\pi$, and this concludes the proof of \ref{GVAR}.
\end{proof}
\noindent By Theorem \ref{T1} and Proposition \ref{GVAR}, we get the following result
\begin{quote}
\emph{let $S$ be a closed surface of genus $g\ge 2$. Then every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm\big(\chi(S)+1\big)$, which sends a non-separating curve $\gamma$ on $S$ to an elliptic element, arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.}
\end{quote}
\begin{figure}
\centering
\subfloat[][\emph{Punctured torus with totally geodesic boundary}]
{\includegraphics[width=.35\textwidth]{bglued}} \qquad \quad
\subfloat[][\emph{Final surface}]
{\includegraphics[width=0.6\textwidth]{glued}} \\
\caption{Geometric interpretation of \ref{GVAR} in the case of genus two. Gluing the four sides as shown in picture $(\text{A})$, the marked points are identified to a single point of angle $4\pi$. The final surface turns out to be a closed surface of genus two endowed with a hyperbolic cone-structure.}
\label{fig:subfig}
\end{figure}
\subsubsection{Representations with a parabolic non-separating curve} In this paragraph we show that if a representation $\rho$ sends a simple non-separating closed curve to a parabolic element, then, under suitable conditions, it sends a simple closed non-separating curve to an elliptic. First of all, we show that no representation sends a simple closed curve to the identity.
\begin{lem}\label{L443}
Let $S$ be a surface of genus $g\ge2$ and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation such that $\eu\rho=\pm\big(\chi(S)+1\big)$. Then no simple closed loop is sent to the identity.
\end{lem}
\begin{proof}
Up to changing the orientation of $S$, we suppose that $\eu\rho=\chi(S)+1$. Suppose that $\alpha$ is a simple curve such that $\rho(\alpha)=\text{id}$. If $\alpha$ is a non-separating curve, then let $\beta$ be any non-separating simple curve such that $i(\alpha,\beta)=1$, and denote by $\gamma$ their commutator. Of course $\rho(\gamma)=\text{id}$. Hence we may suppose that $\rho$ sends a separating simple curve $\gamma$ to the identity, and split $S$ into a punctured torus $H$ and a subsurface $\Sigma$ by cutting along $\gamma$. Consider their fundamental groups and define $\rho_H$ and $\rho_\Sigma$ as the representations induced by $\rho$ as described in section \ref{ss63}. The relative Euler numbers are well-defined because $\rho(\gamma)=\text{id}$, and by additivity $\eur{\rho_H}{\mathfrak{s}}+\eur{\rho_\Sigma}{\mathfrak{s}}=\eu\rho=\chi(S)+1$. Then $\rho_\Sigma$ is the holonomy of a complete hyperbolic structure on $\Sigma$ with totally geodesic or cusped boundary (see \ref{ecswb}). On the other hand, the holonomy of the boundary $\gamma$ must then be hyperbolic or parabolic; a contradiction.\qedhere
\end{proof}
\noindent Suppose $\rho$ sends a non-separating simple curve $\alpha$ to a parabolic element, let $\beta$ be a simple curve such that $i(\alpha,\beta)=1$ and denote by $\gamma$ their commutator; by the previous lemma, $\beta$ and $\gamma$ have non-trivial holonomy. If $h=\rho(\beta)$ is elliptic, we are done by \ref{T1}. Hence we may assume that $h$ is a parabolic or hyperbolic transformation. Since $g=\rho(\alpha)$ is a parabolic transformation, it might share a fixed point with $h$. In this case the commutator $\rho(\gamma)$ turns out to be a parabolic transformation by lemma \ref{L0129}. The following lemma shows that we can always find a non-separating curve $\beta$ such that $i(\alpha,\beta)=1$ and $h=\rho(\beta)$ does not share any fixed point with $\rho(\alpha)$.
\begin{lem}
Let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm\big(\chi(S)+1\big)$. Suppose $\rho$ sends a non-separating curve $\alpha$ to a parabolic element. Then there exists a simple non-separating curve $\beta$ such that $i(\alpha,\beta)=1$ and \textsf{\emph{Fix}}$(\rho(\alpha))$ $\cap$ \textsf{\emph{Fix}}$(\rho(\beta))=\emptyset$. In particular $\rho\big([\alpha,\beta]\big)$ is hyperbolic.
\end{lem}
\begin{proof}
Let $\alpha$ and $\beta'$ be two simple curves and suppose that $\rho(\alpha)$ and $\rho(\beta')$ share a fixed point $q$ at the boundary at infinity. Since $\rho$ is non-elementary (because it has non-trivial Euler number), there is a simple curve $\xi$ such that
\begin{mi}{1em}
\begin{enumerate}
\item[$\bullet$] $\xi$ does not meet $\alpha$ and $\beta'$ and
\item[$\bullet$] $q$ is not a fixed point for $\rho(\xi)$.
\end{enumerate}
\end{mi}
\noindent Take $\xi$ with the orientation such that $\beta=\beta'\xi$ is homotopic to a simple curve; then $\rho(\beta)$ does not fix $q$ because $\rho(\xi)$ does not, and $i(\alpha,\beta)=1$ by construction.
\end{proof}
\noindent Thus we may assume that $g=\rho(\alpha)$ and $h=\rho(\beta)$ have no common fixed point, and their commutator is hyperbolic by \ref{L0129}. We have the following result.
\begin{prop}\label{NSP}
Let $S$ be a surface of genus $g\ge2$ and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm\big(\chi(S)+1\big)$. Suppose $\rho$ sends a non-separating simple closed loop $\alpha$ to a parabolic element and there exists a simple closed curve $\beta$ such that $i(\alpha,\beta)=1$ and \textsf{\emph{Fix}}$(\rho(\alpha))$ $\cap$ \textsf{\emph{Fix}}$(\rho(\beta))=\emptyset$. Then $\rho$ arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{prop}
\begin{proof}[Proof of proposition \ref{NSP}]
\noindent Let $q$ be a point on $S$ and consider $\pi_1(S,q)$; that is, we may consider that all curves are based at $q$. Let $\alpha$ be a non-separating curve with parabolic image and $\beta$ a simple non-separating curve such that $i(\alpha,\beta)=1$ and \textsf{Fix}$(\rho(\alpha))$ $\cap$ \textsf{Fix}$(\rho(\beta))=\emptyset$. Define $\gamma$ to be their commutator. Since $\gamma$ is a simple closed separating curve, it splits $S$ into two pieces; let $H$ be the one containing $\alpha$. Of course $H$ is a handle, and it also contains the curve $\beta$, with $\gamma$ as its boundary component. Let $\rho_H:\pi_1(H,q_1)\longrightarrow \mathrm{PSL}_2\R$ be the representation of $\pi_1(H,q_1)$ induced by $\rho$, where $q_1$ is a point on the boundary that coincides with $q$ on the overall surface. The trace of $\rho_H(\gamma)$ is greater than $2$ by \ref{L0125}, hence the relative Euler class $\eur{\rho_H}{\mathfrak{s}}=0$ by \ref{PTEC}. In particular, the representation $\rho_H$ can be:
\begin{mi}{1em}
\begin{enumerate}
\item[\bf 1.] a representation with elliptics, or
\item[\bf 2.] a pants representation.
\end{enumerate}
\end{mi}
\noindent In the first case, the representation $\rho$ arises as the holonomy of a hyperbolic cone-structure by \ref{T1}. In the second case, $\rho_H$ is the holonomy of a complete hyperbolic structure on a pair of pants. Since $\rho_H$ is not a virtually abelian representation, it arises as the holonomy of a hyperbolic cone-structure on $H$, without interior cone points and with at most one corner point at $q$ of angle $\theta_1>2\pi$. In particular $\rho_H$ is $w(t)-$good, where $t$ is the trace of $\rho_H(\gamma)$; hence there exist a particular basis $(\alpha',\beta')$ of $\pi_1(H,q)$ and a point $p$ at distance less than $w(t)$ from the axis of $\rho_H\big([\alpha',\beta']\big)$ such that the pentagon $\mathcal{P}\big(\rho_H(\alpha'),\rho_H(\beta');p\big)$ bounds a disc. Note that the value of $\theta_1$ depends only on the distance $\delta$ of $p$ from the axis of $\rho_H\big([\alpha',\beta']\big)$. Define $\Sigma$ as the closure of $S\setminus H$, and let $\rho_\Sigma$ be the representation induced by the inclusion $\pi_1(\Sigma, q)\hookrightarrow \pi_1(S,q)$. We may note that $\eur{\rho_\Sigma}{\mathfrak{s}}=\chi(S)+1$ by the additivity of the Euler number. Hence $\rho_\Sigma$ is the holonomy of a complete hyperbolic structure on $\Sigma$ with totally geodesic boundary. Take a point $q_2$ at distance $\delta$ from the geodesic boundary and consider the geodesic representative of the boundary based at $q_2$. It turns out to be a piecewise geodesic boundary with a single corner point of angle $\theta_2=4\pi-\theta_1<2\pi$. Finally, the hyperbolic cone-structure on $H$ may be glued to the one on $\Sigma$ along their piecewise geodesic boundaries, identifying the corner points. This gives a hyperbolic cone-structure on $S$ with a single cone point of angle $4\pi$, hence the desired result.
\end{proof}
\noindent By Theorem \ref{T1} and Propositions \ref{GVAR} and \ref{NSP} we get the following\\
\begin{quote}
\emph{let $S$ be a closed surface of genus $g\ge 2$. Then every representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm\big(\chi(S)+1\big)$, which sends a non-separating curve $\gamma$ on $S$ to a non-hyperbolic element, arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$,}\\
\end{quote}
\noindent that is Theorem \ref{mainthm}.
\subsection{The case of surfaces of genus $2$}\label{ss66} From now on, let $S$ be a closed surface of genus $2$ and let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm1$. Up to changing the orientation of $S$, we may suppose that $\eu\rho=-1$. As above, we denote by $\mathcal{M}^{-1}$ the connected component of the character variety $\mathcal{X}(S)$ of all representations with $\eu\rho=-1$.\\
\noindent Recently, March\'e and Wolff proved the following result in \cite[Theorem 1.4]{MW}.
\begin{thm}
Any representation $\rho \in \mathcal{M}^{-1}$ sends a simple closed curve to a non-hyperbolic element.
\end{thm}
\noindent By their theorem, we have the following possibilities:
\begin{mi}{2em}
\SetLabelAlign{center}{\null\hfill\textbf{#1}\hfill\null}
\begin{enumerate}[leftmargin=1.75em, labelwidth=1.3em, align=center, itemsep=\parskip]
\item[\bf 1.] $\rho$ sends a simple curve to the identity;
\item[\bf 2.] $\rho$ sends a separating simple curve $\gamma$ to an elliptic element;
\item[\bf 3.] $\rho$ sends a separating simple curve $\gamma$ to a parabolic element;
\item[\bf 4.] $\rho$ sends a non-separating simple curve $\gamma$ to an elliptic element;
\item[\bf 5.] $\rho$ sends a non-separating simple curve $\gamma$ to a parabolic element.\\
\end{enumerate}
\end{mi}
\noindent In fact, case $\bf 1$ does not occur by \ref{L443}. In \cite{MA2}, Mathews gives the following result, very specific to the genus $2$ case.
\begin{thm}[Mathews 2011]\label{T2}
Let $S$ be a closed surface of genus $2$. Let $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ be a representation with $\eu\rho=\pm1$. Suppose $\rho$ sends a separating curve $\gamma$ to a non-hyperbolic element. Then $\rho$ arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$.
\end{thm}
\noindent By the previous result, cases $\bf 2$ and $\bf 3$ of the list above are completely covered. Theorem \ref{T1} together with \ref{GVAR} implies that any representation $\rho$ which sends a simple non-separating curve to an elliptic arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$. Finally, by \ref{NSP} any representation that sends a simple non-separating curve to a parabolic arises as the holonomy of a hyperbolic cone-structure on $S$ with one cone point of angle $4\pi$. Hence we have the following
\begin{quote}
\emph{let $S$ be a closed surface of genus $2$. Then any representation $\rho:\pi_1S\longrightarrow \mathrm{PSL}_2\R$ with $\eu\rho=\pm1$ arises as the holonomy of a hyperbolic cone-structure on $S$ with a single cone point of angle $4\pi$},
\end{quote}
\noindent that is our main corollary \ref{maincor}.
\section{Introduction}
\label{sec:intro}
The magnetic field holds a central position in solar research, underlying phenomena such as sunspots, coronal loops, and prominences, as well as spectacular events like flares and coronal mass ejections (CMEs). It is commonly accepted that the energy released by a solar flare (usually up to the order of $10^{32}$~erg during major events) must come from the magnetic field of the active region, since all the other possible energy sources are completely inadequate \citep{Priest1987Book}. A quantitative understanding of explosive solar phenomena such as flares and CMEs therefore requires knowledge of the amount of free magnetic energy and of its temporal variation during the events. Most flare models attribute the rapid conversion of energy from the magnetic field into its kinetic and thermal counterparts to magnetic reconnection \citep{Priest2002,Shibata2011}. Locating where magnetic reconnection is prone to happen and produce a flare requires the three-dimensional (3D) coronal magnetic field, from which the important topological and geometrical features that are favorable sites for reconnection, e.g., null points, separatrices, or more commonly quasi-separatrix layers \citep{Priest2002,Titov2002,Longcope2005}, can then be found. Unfortunately, the 3D magnetic field in the corona is very difficult to observe directly, although information on the 3D geometrical configuration of the field lines can be partially reconstructed by stereoscopy, using coronal loops observed at different aspect angles in the EUV and X-ray wavelengths \citep[see the living review by][]{Aschwanden2011}. Up to the present, routine direct measurements of the solar magnetic field are mainly restricted to the solar surface, i.e., the photosphere \citep[there are only a few cases available with measurements of the chromospheric and coronal fields, e.g.,][]{Solanki2003,Lin2004}.
With the observed magnetic field on the photosphere in hand, there are
several ways to study the evolution of 3D magnetic field in the
corona. One of them is the well-known model of field extrapolation
from the magnetogram, especially, the nonlinear force-free (NLFF)
field extrapolation
\citep{Wiegelmann2008,Schrijver2008,Derosa2009}. As the solar corona
is dominated by the magnetic field environment with very small plasma
$\beta$ (the ratio of gas pressure to magnetic pressure), the force-free model is usually valid and serves as a good approximation for the
low corona (but above the photosphere) in a near-static state. A
variety of numerical codes have been developed to implement the
force-free field extrapolation in the past decade
\citep[e.g.,][]{Wheatland2000,Wiegelmann2004,Amari2006,Jiang2012apj}. These
methods have been applied to analyze the magnetic structures of the
active regions, the electric current distributions, energy budget of
the eruptions, etc.
\citep{Regnier2006,Guo2008,Thalmann2008,Jing2010,Valori2012,Sun2012},
with success made, such as reproducing the field lines comparable with
the observed coronal loops \citep[e.g.,][]{Wiegelmann2012} and
extrapolating complex flux rope which is believed to be associated
with the filament channel \citep[e.g.,][]{Canou2010}. However, it
should be noted that the success is still limited when applied to
realistic solar data \citep{Schrijver2008,Derosa2009,Schrijver2009},
which is mainly because of the intrinsic non-force-freeness in the
field close to the photosphere \citep{Metcalf1995}. Thus the observed
data can generally not provide a consistent boundary condition for the
model based on an exact force-free assumption, and some {\it ad hoc}
preprocessing (to {\it remove} the force in the raw magnetogram) is
usually made to prepare the vector magnetograms for the extrapolation
codes \citep{Wiegelmann2006b}.
Another method is to use a data-driven magnetohydrodynamics (MHD) model, which is more general than the force-free one
\citep{Wu2006,Wu2009,Wang2008ApJ,Jiang2010,Fan2011,Fan2012RAA}. This
is because in the MHD model, nonlinear dynamic interactions of the
magnetic field and plasma flow field are treated in a self-consistent
way, in which the near force-free state of the coronal magnetic field
is included. A first data-driven MHD model was developed by
\citet{Wu2006} for simulating the evolution of active regions. In
their original work, the initial setup of the model is established by
seeking an MHD equilibrium starting from an arbitrarily prescribed
plasma and a potential magnetic field based on the {\it Solar and
Heliospheric Observatory (SOHO)}/MDI magnetogram at a given
time. Then a time series of MDI magnetograms observed afterward was continuously input at the bottom boundary to drive the above field to respond to the changes on the photosphere. In particular, the
procedure of continuously feeding observed data on the bottom boundary
is made to be self-consistent by a projected-characteristic method
\citep[e.g.,][]{Nakagawa1981,Wu1987}.
Under ideal or strict conditions, this dynamic process of the data-driven model can indeed be regarded as the evolution of the corona. However, in reality, there are still many difficulties and problems in using a data-driven MHD model to study active-region evolution. The first is the lack of observations of the photospheric plasma parameters, such as the surface flow velocity,
\citep{Abbett2004,Welsch2004}. This is especially essential by
regarding that at the photosphere the magnetic field may be dominated
by the dense plasma (with high $\beta$), and the field lines anchored
in the photosphere can usually be considered as line-tied by the
photospheric plasma because of the high electric conductivity
\citep{Priest1987Book,Mikic1994,Solanki2006}. This means that the
field-line footpoints are passively advected by the plasma flow which
itself is induced in the convection zone below. Without the
information of the surface flow, response of the coronal field lines
driven by photospheric footpoint-motion cannot be fully followed. This
encourages people to recover the photospheric flow velocity from the
time-varying magnetograms by using local correlation tracking
technique or similar methods \citep[e.g.,
see][]{Chae2001,Welsch2004,Demoulin2009}. The second problem comes
from the cadence of the observed data, which is generally too low for a data-driven model that needs a highly continuous data flow. To address
this problem, \citet{Wu2006} simply used a time-linear interpolation
on the 96-minute cadence MDI magnetograms to provide the data needed
at each time step \citep[about 6-second used by][]{Wu2006} of the
model. This obviously over-simplifies the real evolution of the photospheric field, which is highly nonlinear in time, but it may be the only choice one can make\footnote{This problem can be alleviated now
by using the recently available data recorded by HMI on-board the
new observatory {\it SDO}, which has a higher data cadence.}. In
view of these two problems, it may be more practical to construct
an independent MHD equilibrium for each of the magnetograms and
consider these successive equilibria as the continuous time-evolution
of the corona, as done by \citet{Wu2009,Fan2011}.
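The local-correlation-tracking idea mentioned above can be sketched as follows. This is a toy, integer-pixel version for illustration only (window handling, apodization, the synthetic data, and the pixel scale and cadence values are assumptions on our part, not the implementation of the cited works):

```python
# Minimal sketch of local correlation tracking (LCT): estimate the displacement
# of a feature between two successive magnetograms by locating the shift that
# maximizes their cross-correlation.
import numpy as np

def lct_shift(img1, img2, max_shift=3):
    """Integer-pixel shift (dy, dx) that best aligns img2 back onto img1."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Undo a candidate shift of img2 and measure the overlap with img1.
            shifted = np.roll(np.roll(img2, -dy, axis=0), -dx, axis=1)
            c = np.sum(img1 * shifted)
            if c > best:
                best, best_shift = c, (dy, dx)
    return best_shift

# Synthetic test: a Gaussian "magnetic feature" displaced by (1, 2) pixels.
y, x = np.mgrid[0:32, 0:32]
blob = lambda cy, cx: np.exp(-((y - cy)**2 + (x - cx)**2) / 8.0)
m1, m2 = blob(16, 16), blob(17, 18)

dy, dx = lct_shift(m1, m2)
# Velocity = pixel shift * pixel scale / cadence (placeholder values).
pixel_km, cadence_s = 360.0, 720.0
vy, vx = dy * pixel_km / cadence_s, dx * pixel_km / cadence_s
print(dy, dx, vy, vx)
```

Real LCT implementations correlate many small apodized subwindows to obtain a flow map rather than a single displacement, but the peak-finding principle is the same.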
For the third problem, it is difficult to couple the photospheric and
the coronal plasma in a single model because of the highly stratified
plasma, whose parameters, i.e., the density and temperature, change drastically by several orders of magnitude within an extremely
thin layer (the chromosphere and transition region) above the
photosphere due to some as-yet-unknown coronal heating process. A realistic model with input magnetic field data observed on the photosphere is required to describe the behavior of the magnetic
field in this stratified environment with plasma $\beta$ varying from
$> 1$ (the photosphere) to $\ll 1$ (the corona). However, this
greatly challenges the numerical scheme and the computational resources needed to treat the transition region; additionally, one may need to
incorporate the complicated thermodynamic processes of the real
corona, such as the thermal conduction and radiative losses
\citep[e.g., see models by][]{Abbett2007,Fang2010}. We note that in
the works of \citet{Wu2006,Wu2009,Wang2008ApJ,Fan2011}, only the
photospheric or near-photosphere plasma is considered in the models
and thus these models are mainly used for studying the evolution of
photospheric parameters, such as the plasma flow, the Poynting flux,
the current helicity and some other non-potential parameters at the
photosphere level. The evolution of the 3D coronal magnetic field, on
the other hand, was rarely studied by using these models because of
the unjustified high-$\beta$ and dense plasma environment. This is due
to the reason mentioned above that a coupled modeling of the
photospheric and coronal fields is still computationally prohibitive.
In this work we will use the data-driven MHD model to study the 3D
coronal field under a low plasma-$\beta$ condition. The numerical
model is developed following our previous work \citep{Jiang2011},
which has been devoted to a validation of the CESE--MHD method for
reconstructing the 3D coronal fields using a semi-analytic force-free
field solution proposed by \citet{Low1990}. We will study the 3D
magnetic field of active region NOAA AR 11117 and its evolution around the time of a small C-class flare that occurred on 2010 October 25,
observed by {\it SDO}/AIA with a time-series of vector-magnetograms
recorded by {\it SDO}/HMI. While the 3D magnetic field of the same
active region has been studied by \citet{Sun2010} and
\citet{Tadesse201211117} using the NLFFF model, this is the first
study in which we apply the CESE--MHD model to realistic solar data. Similarly,
assuming that the evolution of the coronal magnetic field in active
region can be described by successive equilibria
\citep[e.g.,][]{Regnier2006,Wu2009,Sun2012,Tadesse201211117}, we use
each vector-magnetogram of the data set to get a snapshot MHD
equilibrium and study the temporal evolution of the field by a series
of these equilibria. This method is justified by considering that the
evolution of the active region, driven by the photospheric motion with
flow speed on the order of several km~s$^{-1}$, is sufficiently slow
compared with the speed of the coronal magnetic field relaxing to
equilibrium, which is up to thousands of km~s$^{-1}$
\citep{Antiochos1987,Seehafer1994}. This assumption also holds for the object of the present study, AR 11117, which shows no major changes of the
magnetic field in the chosen time period.
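The separation of timescales invoked above can be illustrated by the following short script (not part of the model; the length scale is an assumed typical active-region size, and the velocities are the order-of-magnitude figures quoted in the text):

```python
# Rough comparison of boundary-driving vs. coronal-relaxation timescales.
# Order-of-magnitude assumptions: photospheric flows of a few km/s,
# coronal relaxation (Alfven) speeds of ~10^3 km/s.

L = 1.0e5          # assumed active-region length scale [km] (~100 Mm)
v_phot = 3.0       # photospheric driving speed [km/s]
v_relax = 1.0e3    # coronal relaxation speed [km/s]

tau_drive = L / v_phot    # timescale of boundary driving [s]
tau_relax = L / v_relax   # timescale of coronal relaxation [s]

ratio = tau_drive / tau_relax
print(f"driving time  ~ {tau_drive:.0f} s")
print(f"relaxing time ~ {tau_relax:.0f} s")
print(f"separation    ~ {ratio:.0f}x")  # driving is far slower than relaxation
```

With these numbers the corona relaxes two to three orders of magnitude faster than the boundary evolves, which is what justifies treating the evolution as a sequence of quasi-static equilibria.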
The remainder of the paper is organized as follows. In
Section~\ref{sec:model} we give a brief description of the CESE--MHD
model. The magnetic field data used to drive the model are described in
Section~\ref{sec:data}. The modeling result for the AR 11117 is
presented in Section~\ref{sec:res}, including a qualitative inspection
of the 3D magnetic configurations, a topological analysis of the field at the flare site, and a study of the magnetic energy budget and current distribution. Finally, we draw conclusions and give
some outlooks for future work in Section~\ref{sec:conclude}.
\section{The Data-Driven CESE--MHD Model}
\label{sec:model}
In a nutshell, what we intend to solve is a set of MHD equilibria, each of which is consistent with a snapshot of the magnetic field observed on
the photosphere. We thus start from an arbitrarily initial field,
e.g., a potential or linear force-free field with a plasma, input the vector magnetogram at the bottom of the model to drive the system
away from its initial state and then let the system relax to a new
equilibrium. The numerical model follows our previous work
\citep{Jiang2011}. Since the computation is focused on the magnetic
field and its dynamics with plasma in the low corona, here we use a
simplified solar atmosphere with a low plasma $\beta$ and a uniform
constant temperature. Thus the numerical scheme needs only to handle
the plasma density $\rho$, the flow velocity $\vec v$ and the magnetic
field $\vec B$. The MHD equations are written as follows:
\begin{eqnarray}
\label{eq:main_equ}
\frac{\partial \rho}{\partial t}+\nabla\cdot (\rho\vec v) &=& 0,
\nonumber \\
\rho\frac{D\vec v}{D t} &=& -\nabla p+\vec J\times \vec B+\rho\vec g
+ \nabla\cdot(\nu\rho\nabla\vec v)-\nu_{f}\rho\vec v,
\nonumber \\
\frac{\partial\vec B}{\partial t} &=& \nabla\times(\vec v\times\vec B).
\end{eqnarray}
In these equations, $\vec J$ is the electric current density; $p$ is the gas pressure, given by $p=\rho R T_{0}$, where $R$ is the gas constant and $T_{0}$ is the constant temperature; $\vec g$ is the solar gravity, assumed to be constant at its photospheric value since we simulate the low corona up to a height of about $100$~Mm above the photosphere. A
small kinematic viscosity $\nu$ with a value of $\sim \Delta
x^{2}/\Delta t$ ($\Delta x$, $\Delta t$ are respectively the grid
spacing and the time step in the numerical scheme) is added for
consideration of numerical stability.
Different from the equations used in \citet{Jiang2011}, here we include
an additional frictional force $-\nu_{f}\rho\vec v$ to deal with the
problem that, at some places near the magnetogram (i.e., the bottom),
the plasma velocity is prone to be accelerated to extremely high values
by very large gradients or by uncertainties intrinsic to the observed
data. This is because the observed magnetograms are very intermittent,
usually showing a large number of small-scale polarities and even
apparent discontinuities, and these features cannot be adequately
resolved at the grid resolution. We find that such problems can
severely restrict the time step and slow the relaxation of the entire
system, even making the computation unmanageable. It should be noted
that including the friction force is only an {\it ad hoc} choice, made
for numerical reasons when dealing with the original data.
Alternatively, one could smooth the original magnetograms beforehand to
remove noise and decrease large gradients in the raw data. This,
however, may erase some of the important parasitic polarities around
the major sunspots and may also change the locations of the polarity
inversion lines (PILs), which could affect the analysis of the local
field configurations responsible for small-scale energy dissipation
near the photosphere (e.g., the small flare in the present study). In
addition, if the vertical component of the magnetogram is modified,
there is magnetic flux loss and the energy content of the field may be
affected \citep{Metcalf2008}. Although the field at the coronal base
ought to be smoother than the photospheric field because of the field
expansion from the high-$\beta$ to the low-$\beta$ region, how such
smoothing should be modeled remains problematic. It is therefore
prudent not to smooth the original magnetograms, and we instead use the
frictional force to control the above-mentioned problem in the
numerical computation. We have tried different values of the frictional
coefficient $\nu_{f}$ and adopt an optimized value of $\nu_{f} =
1/(50\Delta t)$, which keeps the plasma flow at a reasonable level,
i.e., the flow speed is suppressed below the maximum Alfv\'en speed but
is not too small. Our tests show that adjusting $\nu_{f}$ affects the
MHD relaxation process but yields almost the same final solution.
Finally, no explicit resistivity is included in the magnetic induction
equation, since numerical diffusion can produce topological changes of
the field when necessary.
The above equation system~(\ref{eq:main_equ}) is solved by our
CESE--MHD code \citep{Jiang2010}. The CESE method deals with the 3D
governing equations in a way substantially different from traditional
numerical methods (e.g., finite-difference or finite-volume
schemes). Its key principle, and a conceptual leap of the method, is to
treat space and time as one entity. By introducing the conservation
element (CE) and the solution element (SE) as the vehicles for
calculating the spacetime flux, the CESE method enforces conservation
laws both locally and globally in their natural spacetime unity
form. Compared with many other numerical schemes, the CESE method
achieves higher accuracy at the same mesh resolution and offers simple
mathematics and coding free of any type of Riemann solver or
eigendecomposition. For more detailed descriptions of the CESE method
for MHD simulations, including the multi-method control of the
well-known $\nabla\cdot\mathbf{B}$ numerical errors, please refer to
our previous work, e.g., \citet{Feng2006,Feng2007}, \citet{Jiang2010}
and \citet{Jiang2011}. We use a non-uniform grid within the framework
of a block-structured, distributed-memory parallel computation. The
grid configuration is depicted in {Figure}~\ref{fig:grid}. Specifically,
the whole computational volume is divided into blocks of different
spatial resolution, and the blocks are evenly distributed among the CPU
processors. In the $x$--$y$ plane, i.e., the plane parallel to the
photosphere, the blocks have the same resolution. In the vertical
direction, the block resolution decreases with height: near the
photosphere the grid spacing matches the resolution of the magnetogram,
while at the top of the model box it is increased by a factor of
four. As shown by {Figure}~\ref{fig:grid}, at a height of only 10~Mm the
magnetic field has become far less intermittent, i.e., much smoother
than at the photosphere. This non-uniform mesh thus affects the
computational accuracy little compared with a uniform mesh, while
saving significant computational resources.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{model_grid}
\caption{The configuration of the computational grid. The entire
volume is divided into blocks and each block has $8\times 8\times
8$ cells. In the left panel, two slices through the volume are
plotted to show the structure of the blocks, which are outlined by
the black grid lines; the bottom contour map represents $B_{z}$ on
the photosphere and the curved lines show the potential field
lines. The right panels show the 2D contour images of $B_{z}$
sliced at $z=0$ and $z=10$~Mm (locations in the 3D grid are shown
by the arrows).}
\label{fig:grid}
\end{figure*}
The initial configuration of the simulation consists of a potential
field matching the vertical component of the magnetogram and a plasma
in hydrostatic equilibrium in the solar gravitational field. The
potential field is obtained by a Green's function method
\citep[e.g.,][]{Metcalf2008}. The plasma density is given by $\rho(z)
= \rho_{0}\exp(-z/H)$, where $H = RT_{0}/g$ is the pressure scale
height and $z=0$ denotes the photosphere. The nondimensionalization of
the parameters is the same as in \citet{Jiang2011}, and
{Figure}~\ref{fig:params} shows a typical configuration of the parameters
along a vertical line through the computational volume in a
strong-field region. It is noteworthy that the plasma $\beta$ can be
large in relatively weak-field regions, so the intrinsic force in the
vector magnetogram can be self-consistently balanced by the plasma
during the MHD relaxation. This is unlike the force-free model which,
as mentioned in Section~\ref{sec:intro}, generally cannot deal with the
observed data directly. The boundary conditions are also very similar
to those used in \citet{Jiang2011}: the bottom boundary is fed with
the observed vector magnetogram incrementally over tens of Alfv\'en
times until the observed data are fully matched, and all other
boundaries are set by non-reflecting boundary conditions. In addition,
the flow velocity at the bottom is set by extrapolation from the
neighboring inner grid, which helps to increase the communication
between the magnetogram and the computational volume \citep{Valori2007}.
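As an illustrative sketch of the initial state described above, the isothermal hydrostatic density profile and the resulting Alfv\'en speed and plasma $\beta$ can be written as follows. All numerical values here are assumed for illustration only; they are not the values of the actual nondimensionalized model, which follows \citet{Jiang2011}.

```python
import numpy as np

# Assumed cgs values, for illustration only (not the model's actual units).
R_GAS = 8.31e7   # gas constant, erg g^-1 K^-1 (assumed)
T0 = 1e6         # uniform coronal temperature, K (assumed)
G_SUN = 2.74e4   # photospheric gravity, cm s^-2
RHO0 = 1e-15     # base density at z = 0, g cm^-3 (assumed)

# Pressure scale height H = R T0 / g, as in the text.
H = R_GAS * T0 / G_SUN

def hydrostatic_state(z_cm, b_gauss):
    """Isothermal hydrostatic density rho(z) = rho0 exp(-z/H), plus the
    Alfven speed and plasma beta for a given field strength (Gaussian units)."""
    rho = RHO0 * np.exp(-z_cm / H)
    p = rho * R_GAS * T0                      # p = rho R T0
    v_alfven = b_gauss / np.sqrt(4 * np.pi * rho)
    beta = 8 * np.pi * p / b_gauss**2
    return rho, v_alfven, beta
```

At fixed field strength, the density (and hence $\beta$) decays with height while the Alfv\'en speed rises, in qualitative agreement with the profiles of {Figure}~\ref{fig:params}.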
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{params}
\caption{Typical configurations of the magnetic field strength $B$,
the Alfv\'en speed $V_{\rm A}$ and the plasma $\beta$ along a
vertical line through the computation volume.}
\label{fig:params}
\end{figure}
\section{Data}
\label{sec:data}
Active region NOAA AR 11117 was observed by SDO from 2010 October 20
to 2010 November 2, mainly during Carrington Rotation 2102. On 2010
October 25 it crossed the central meridian of the solar disk at a
latitude of $22^{\circ}$, as shown in the full-disk HMI and AIA
images ({Figure}~\ref{fig:hmi_fulldisk}). On this date solar activity
was dominated by AR 11117, with many small B-class flares observed,
and near the end of the day a C2.3-class flare occurred. NOAA records
indicate that the event began in soft X-rays (SXRs), detected by the
GOES (Geostationary Operational Environmental Satellite) 15 satellite
at 22:06 UT, reaching a peak at 22:12 UT and ending at 22:18 UT (see
{Figure}~\ref{fig:goes}). As observed by AIA (see
{Figure}~\ref{fig:AIA_compare}), the central part of the active region
shows distinct brightenings at the flare peak time, and the flare is
confined to a rather low altitude without inducing major changes in
the coronal loops or any eruption.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{hmi_fulldisk}
\caption{Full-disk {\it SDO}/HMI line-of-sight (LoS) magnetogram
  (left) and full-disk {\it SDO}/AIA 171~{\AA} image (right). Both
  images were obtained at the same time, 22:12 UT on 2010 October 25,
  and have been co-aligned. AR 11117 is outlined by the white
  rectangle in the images.}
\label{fig:hmi_fulldisk}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{GOES}
\caption{GOES soft X-ray flux from 20:00 UT to 24:00 UT on 2010
  October 25 in the wavelength range of 1--8~{\AA}. The horizontal
  dotted line indicates the lower threshold of the C flare class and
  the vertical dotted line indicates the peak time of the flux.}
\label{fig:goes}
\end{figure*}
We select a set of vector magnetograms of AR 11117 taken by HMI
around the flare peak time with a cadence of roughly half an hour. The
data are de-rotated to the disk center, the field vectors are
transformed to heliographic coordinates with the projection effect
removed, and the maps are finally remapped to a local Cartesian
coordinate system using a Lambert equal-area projection. For details
of the processing of the HMI vector magnetograms, please refer to
\url{http://jsoc.stanford.edu/jsocwiki/VectorMagneticField}.
Specifically, six magnetograms taken at 21:00, 21:36, 22:00, 22:12,
22:36, and 23:00 UT, respectively, are used for the
simulation. {Figure}~\ref{fig:magnetogram} shows examples of the vector
magnetograms before and at the flare peak time (the gray image shows
the vertical component $B_{z}$ and the arrows indicate the transverse
field). There are four regions with field strength greater than
1000~G concentrated in areas of about 10 arcsec square, which manifest
themselves as the four main sunspots observed in the AIA 4500~{\AA}
(white light) image. Strong shear of the transverse field can be seen
near the image center, with the vectors almost parallel to the PIL
(see the regions where the color of the vectors changes while their
directions remain nearly the same). The original resolution of the
magnetogram is about 0.5~arcsec ($\sim 360$~km) per pixel, and we bin
the data to 1~arcsec~pix$^{-1}$ for our simulation with a field of
view of $256\times 256$~arcsec$^{2}$ ($184\times 184$~Mm$^{2}$). The
height of the computational box is set to 160~arcsec ($115$~Mm). To
reduce the influence of the side and top boundaries, the following
analysis of the results is performed on a subvolume of $200\times
128\times 100$~arcsec$^{3}$ centered in the full computational domain.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{magnetogram}
\caption{Vector magnetograms of AR 11117 at 22:00 and 22:12 UT. The
  gray images represent $B_{z}$ with a saturation level of $\pm
  500$~G. The tangential fields are shown by the vectors (plotted at
  every third pixel) in blue in the positive $B_{z}$ region and in
  red in the negative $B_{z}$ region.}
\label{fig:magnetogram}
\end{figure*}
\section{Results}
\label{sec:res}
\subsection{Comparison with AIA Loops}
The high-resolution coronal loops observed by {\it SDO}/AIA at
171~{\AA} give us a proxy of the magnetic field-line geometry (see the
left column of {Figure}~\ref{fig:AIA_compare}) and also a good
constraint on the magnetic field model. In the middle column of
{Figure}~\ref{fig:AIA_compare} we show some selected magnetic field
lines of the model results. In these images, the yellow lines
represent the magnetic field lines and the background contours outline
the vertical component of the magnetogram. For a visual comparison
with the observed coronal loops, we plot the figures side-by-side with
the AIA 171~{\AA} images observed at the same time. The field lines
are selected roughly according to the visible bright loops, and the
viewing angle of the MHD results is co-aligned with the AIA image. As
an overview of the figures shows, the simulated field lines resemble
the observed loops quite well, especially in the central region of the
AR where the field lines are strongly sheared, meaning that the field
there is far from potential. The potential fields at each time are
shown in the third column of {Figure}~\ref{fig:AIA_compare}. Compared
with the potential field lines, the MHD field lines exhibit some
twist, although not strong, implying the existence of field-aligned
currents (i.e., currents along the field lines). There are features
well reproduced by the MHD model that the potential model fails to
recover, for example, the structures indicated by the white arrows in
the figure. Reconstructing these features, which obviously requires
the field-line connectivities to deviate from those of the initial
potential field, demonstrates that our model can indeed recover the 3D
magnetic topology implied in the observed transverse field.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{AIA_compare}
\caption{Comparison of the modeled field lines with {\it SDO}/AIA
  171~{\AA} images for AR 11117. The left column shows the AIA images
  and the middle column shows selected magnetic field lines from the
  MHD results; the potential field lines are plotted in the last
  column. The field lines in all panels are traced from the same
  footpoints on the photosphere, and color contours of the
  photospheric $B_{z}$ are plotted in the background.}
\label{fig:AIA_compare}
\end{figure*}
From this comparison we conclude that, within this AR, the MHD model
gives much better results than the potential field model. Although
during this time interval some small changes can be recognized in the
loops and the MHD field lines, it is difficult to find any variation
in the magnetic topology around the time of the flare from 22:00 to
22:12. This suggests that the flare-related reconnection takes place
on a rather small scale and at a low height near the photosphere (as
indicated by the analysis of magnetic topology in the following
section). In the AIA image at 22:36, two groups of loops can clearly
be seen that are much brighter than the others, and the MHD model
appears to fail to reproduce the group in the north (i.e., the loops
indicated by the black arrow in the image; note that this group of
loops is difficult to find in all the other AIA images in the
figure). This may be because the central part of the active region is
very dynamic after the flare, as hot plasma from the chromosphere
`evaporates' into the post-flare loops, and thus cannot be well
described by the quasi-static state we have sought.
\subsection{Topological Analysis of the Flare Location}
It has long been thought that flares are likely to take place in
regions with strong variation of the field-line connectivity
\citep[e.g.,][]{Mandrini1995,Demoulin1997}. Such regions are called
quasi-separatrix layers (QSLs), which generalize the concept of
magnetic separatrices, where the field-line linkage (or connectivity)
is discontinuous \citep{Priest1995,Demoulin1996}. To locate the QSLs
in the 3D coronal field, \citet{Titov2002} introduced a so-called
{\it squashing factor} ($Q$) to quantify the change of the field
linkage based on the field-line mapping. For the coronal field, the
mapping is defined from one photospheric footpoint $(x,y)$ of a given
field line to the other photospheric footpoint $(X(x,y),Y(x,y))$,
which is also called the magnetically conjugate
footpoint\footnote{Here we need not distinguish the footpoints of the
  positive and negative polarities, since $Q$ is designed to take the
  same value at the conjugate footpoints of the same field line
  \citep{Titov2002}.}. The squashing factor is then given by
\begin{equation}
\label{eq:Q}
Q = \frac{a^{2}+b^{2}+c^{2}+d^{2}}{|ad-bc|}
\end{equation}
where
\begin{equation}
a = \frac{\partial X}{\partial x},\ \
b = \frac{\partial X}{\partial y},\ \
c = \frac{\partial Y}{\partial x},\ \
d = \frac{\partial Y}{\partial y}.
\end{equation}
Producing a map of the $Q$ factor is a robust way to find the
topological elements (both the QSLs and the separatrices) in the 3D
magnetic field, but its calculation is computationally intensive,
because field lines must be traced not only from each point but also
from its neighboring points to estimate the derivatives of the
field-line mapping. We thus use the following algorithm: first, the
field line through each grid point $(i,j)$ on the bottom surface is
traced either forward or backward, and the location of the other
(conjugate) footpoint is denoted by $(X(i,j),Y(i,j))$; then, at each
grid point, a centered difference involving its four neighboring grid
points $(i-1,j),(i+1,j),(i,j-1),(i,j+1)$ is used to approximate the
elements needed for $Q$, i.e.,
\begin{eqnarray}
\label{eq:abcd}
a = \frac{X(i+1,j)-X(i-1,j)}{2\Delta x},\nonumber\\
b = \frac{X(i,j+1)-X(i,j-1)}{2\Delta y},\nonumber\\
c = \frac{Y(i+1,j)-Y(i-1,j)}{2\Delta x},\nonumber\\
d = \frac{Y(i,j+1)-Y(i,j-1)}{2\Delta y}.
\end{eqnarray}
where $\Delta x$ and $\Delta y$ are the grid spacings. To avoid the
numerical uncertainties of tracing field lines through very small
structures near the photosphere (i.e., structures smaller than the
grid resolution), we raise the bottom surface of the computation by
three pixels (about 2~Mm) above the photosphere. This might smooth out
some very fine structures in the $Q$ map, but the basic topological
features remain, since they depend mainly on the large-scale current
distribution \citep{Titov1999}. As suggested by \citet{Titov2002}, it
is also useful to compute the expansion--contraction degree ($K$),
defined as the ratio of the normal components of the magnetic field at
the two ends of a field line. While the factor $K$ serves to locate
the QSLs in a similar way to $Q$, it is much simpler to compute, and
may be more reliable since its computation is free of the
finite-difference errors of {Equation}~(\ref{eq:abcd}).
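The centered-difference evaluation of $Q$ from {Equations}~(\ref{eq:Q}) and (\ref{eq:abcd}) can be sketched as follows. The field-line tracing that produces the conjugate-footpoint maps $X(i,j)$ and $Y(i,j)$ is assumed to have been done elsewhere and is not shown; `numpy.gradient` applies exactly the centered difference of {Equation}~(\ref{eq:abcd}) at interior points.

```python
import numpy as np

def squashing_factor(X, Y, dx=1.0, dy=1.0):
    """Squashing factor Q (Titov et al. 2002) from 2D conjugate-footpoint
    maps X(i,j), Y(i,j) obtained by field-line tracing."""
    # Centered differences of the footpoint mapping (Equation abcd).
    a = np.gradient(X, dx, axis=0)   # dX/dx
    b = np.gradient(X, dy, axis=1)   # dX/dy
    c = np.gradient(Y, dx, axis=0)   # dY/dx
    d = np.gradient(Y, dy, axis=1)   # dY/dy
    # Q = (a^2 + b^2 + c^2 + d^2) / |ad - bc|  (Equation Q).
    return (a**2 + b**2 + c**2 + d**2) / np.abs(a * d - b * c)
```

For the identity mapping ($X=x$, $Y=y$, i.e., no squashing at all) this yields $Q=2$ everywhere, the theoretical minimum, which is a convenient sanity check before applying a threshold such as $Q \gg 2$ to delineate the QSLs.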
Considering that the magnetic field is nearly steady in time, we
compute the QSLs for a single frame only. {Figure}~\ref{fig:FlareBP}
depicts the $Q$ and $K$ maps (panels (b) and (d)) for the magnetic
field at 22:12 and compares them with the AIA image at the same
time. We use a logarithmic scale since the squashing factor abruptly
becomes very large inside the QSLs (e.g., \citet{Titov2002} defined a
QSL as a region with $Q \gg 2$). Note that there are data gaps in the
maps where the field lines are open, i.e., where the ends of the
lines reach the side or top boundaries of the computational volume. As
shown by the $Q$ and $K$ maps, the structures associated with abrupt
changes of the data, i.e., the QSLs, are consistent between the two
maps. The whole structure of the $Q$ map is rather complicated and may
deserve a comprehensive study; here we focus on the relation of the
flare location (outlined by the dashed rectangle in the AIA image) to
the QSLs. Indeed, the flare location clearly coincides with a QSL in
which the squashing factor reaches $\sim 10^{3}$ (see the region in
the dashed rectangle in the $Q$ map). Why, then, does this subregion
show such a strong change of magnetic connectivity? In the same
figure, we show the vector magnetogram (panel (e)) and the field lines
(panel (f)) in the same, but slightly larger, subregion outlined by
the black rectangle in panel (c). The vector magnetogram and the field
lines reveal that underlying the flare region a bald patch (BP) is
located at the central portion of a long PIL (highlighted by the thick
white line in panels (e) and (f)). By definition, a BP is a portion of
a PIL with $(\vec B\cdot \nabla)B_{z} > 0$, which means the horizontal
field at the PIL crosses from the negative to the positive $B_{z}$
polarity \citep{Titov1993,Bungey1996}, just opposite to the normal
case. In the middle of the BP the transverse field is nearly parallel
to the PIL, suggesting that it is not a single BP but is fragmented
into two parts. A BP can also be defined as the locations where the
magnetic field is tangent to the photosphere, and the continuous set
of field lines that graze the surface at the BP forms two separatrix
surfaces which separate three different topological regions. The
separatrix field lines are shown in {Figure}~\ref{fig:FlareBP}(f) and
with 3D views in {Figure}~\ref{fig:FlareBP3D}. Several studies have
demonstrated that BPs can be correlated with flares and even CMEs
through the BP separatrices, in which strong current sheets can be
formed by photospheric motions or flux emergence and trigger
reconnection
\citep[e.g.,][]{Aulanier1998,Fletcher2001,Mandrini2002,WangTJ2002}. The
topological analysis of the flare site thus suggests that the AR 11117
flare can also be interpreted as a {\it bald-patch flare}
\citep{Aulanier1998,Delannee1999}, and may provide evidence in favor
of reconnection along the BP separatrices. The heights of the apexes
of the BP separatrices are about 2--3~Mm, meaning that the flare
happened rather low, near the photosphere. However, how the current
sheet is formed and how the reconnection is triggered remain unclear,
and further study relying on data of higher resolution and cadence may
be needed. As further evidence of a BP co-spatial with the flare, a
curved dark feature near the flare location, indicated by the arrows
in the AIA image (see panel (a) of {Figure}~\ref{fig:FlareBP}), has the
same shape as the field lines near the right end of the BP (indicated
by the arrows in panel (f)). This can be well explained by the dip of
the field lines just above the BP, which can support dense cold plasma
against gravity in the same way as filaments.
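A minimal sketch of how the BP condition $(\vec B\cdot\nabla)B_{z} > 0$ can be checked on a discretized photospheric vector magnetogram is given below. On the PIL $B_{z}=0$, so only the horizontal terms $B_{x}\,\partial B_{z}/\partial x + B_{y}\,\partial B_{z}/\partial y$ contribute. The sign-change test used here as a PIL proxy is a crude illustration, not the procedure actually used in the paper.

```python
import numpy as np

def bald_patch_mask(bx, by, bz, dx=1.0, dy=1.0):
    """Flag pixels adjacent to the PIL that satisfy the bald-patch
    condition (B . grad) Bz > 0 (Titov et al. 1993; Bungey et al. 1996).
    bx, by, bz are 2D photospheric field arrays on a uniform grid."""
    dbz_dx = np.gradient(bz, dx, axis=0)
    dbz_dy = np.gradient(bz, dy, axis=1)
    # Horizontal directional derivative; the Bz dBz/dz term vanishes on
    # the PIL where Bz = 0.
    directional = bx * dbz_dx + by * dbz_dy
    # Crude PIL proxy: Bz changes sign relative to an x- or y-neighbor.
    # (np.roll wraps at the edges, so the boundary rows/columns may be
    # spuriously flagged; only the interior is meaningful here.)
    sign_change = (bz * np.roll(bz, 1, axis=0) < 0) | \
                  (bz * np.roll(bz, 1, axis=1) < 0)
    return sign_change & (directional > 0)
```

With a horizontal field crossing the PIL from negative to positive $B_{z}$ the mask fires; reversing the horizontal field (the normal, arcade-like crossing) leaves it empty.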
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{FlareBP}
\caption{(a) The AIA 171~{\AA} image at 22:12, with contour lines
  plotted for the LoS photospheric field at $\pm 1000$~G and the
  dashed box showing the location of the flare with brightened
  loops. (b) The squashing factor $Q$ in logarithmic scale; contours
  of the photospheric $B_{z}$ are likewise plotted at $\pm 1000$~G and
  the dashed box outlines the flare location. (c) The $B_{z}$ map with
  the flare location outlined by a black box, which is enlarged in
  panel (e). (d) Same as (b) but for the expansion--contraction degree
  $K$. (e) The vector magnetogram at the flare location; the thick
  white line denotes the bald patch (BP). (f) Examples of the field
  lines (i.e., the BP separatrix field lines) that are tangent to the
  photosphere at the BP (the thick white line).}
\label{fig:FlareBP}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{FlareBP3D}
\caption{Different 3D views of the BP-separatrix field lines plotted
in panel (f) of {Figure}~\ref{fig:FlareBP}. The BP is denoted by the
thick white lines. The $z$--axis scale is doubled for a better
view of the field lines.}
\label{fig:FlareBP3D}
\end{figure*}
\subsection{Energy and Current}
In order to quantify the change of the field in the time series, we
computed a set of parameters, summarized in
Table~\ref{tab:table1}. They include the total unsigned magnetic flux
$|\Phi|_{\rm tot} = \int_{S}|B_{z}| {\ \mathrm d} S$ (where $S$ represents the
photosphere), the total unsigned current $|I|_{\rm tot} =
\int_{S}|j_{z}| {\ \mathrm d} S$ (i.e., the integral of the unsigned vertical
current over the photosphere), the total energy $E_{\rm tot} =
\frac{1}{8\pi}\int_{V}B^{2} {\ \mathrm d} V$, the potential energy $E_{\rm pot}$,
the free energy $E_{\rm free} = E_{\rm tot}-E_{\rm pot}$, and the
ratio of free energy to potential energy. All these parameters are
important for characterizing the evolution of the coronal magnetic
field. The first four parameters, i.e., the magnetic flux, the
current, and the total and potential energies, all keep increasing
with time in spite of the small flares. This is because the energy
injected by emerging magnetic flux exceeds the energy released by the
flares \citep[e.g.,][]{Regnier2005,He2011}. The total and potential
energies are on the order of $10^{32}$~erg, a typical energy content
of a medium-sized active region. Because of the non-potentiality of
the field, the total energy $E_{\rm tot}$ is always higher than that
of the potential field, which holds the energy-minimum state for a
given magnetic flux distribution on the photosphere. It is commonly
believed that the free magnetic energy plays a fundamental role in
flares, because the source of the flare energy must be magnetic and
only a fraction of the total magnetic energy, i.e., the free energy,
can be converted to the kinetic energy and radiation of the flare
\citep{Priest2002}. Our computation shows that the free energy is on
the order of $10^{31}$~erg (close to $10^{32}$~erg), which seems
sufficient to power a moderate flare, and this energy initially
increased like the total and potential energies before the C-class
flare started at 22:06 UT. One should bear in mind that even the free
energy is only partially involved in flares, since the field after a
flare is still non-potential and nonlinear \citep[e.g.,
see][]{Schrijver2009}. The energy released by the flare ought to be
quantified by the change in free energy from immediately before to
immediately after the flare. Although the total energy increased even
during the flare interval, the free energy dropped as expected at
22:12, i.e., the peak time of the flare, by a small amount of about
$1.7\times 10^{30}$~erg. It has been estimated that the largest
flares, up to X-class, release energies on the order of
$10^{32}$~erg \citep[e.g.,][]{Priest1981Book,Priest1987Book}. Thus, by
a rough estimate, the decrease of the free energy from pre- to
post-flare is indeed adequate to power this minor flare, which
requires only a few percent of the energy of the largest
class. Nevertheless, caution is needed when estimating the energy
budget of the flare from the drop in free energy in our modeling,
since many aspects of the model and of the specific approach may
influence the results. We will discuss this in the conclusion
section. In the last column of Table~\ref{tab:table1} we give the mean
vector deviation of the field $\vec b$ at each time with respect to
the field $\vec B$ at 21:00,
\begin{equation}
e_{\rm m} = \frac{1}{M}\sum_{i}\frac{|\vec b_{i}-\vec
B_{i}|}{|\vec B_{i}|}
\end{equation}
where $i$ runs over all the pixels of the computational volume and $M$
is the total number of pixels. As a metric monitoring the numerical
variation of the field with time, the very low values of $e_{\rm m}$
again show that the changes of the field are rather small.
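The energy integrals and the deviation metric $e_{\rm m}$ defined above can be sketched as follows for fields discretized on a uniform grid. The arrays of shape $(3, n_x, n_y, n_z)$ and the cell volume `dV` are illustrative conventions, not the actual data layout of the model.

```python
import numpy as np

def energy_metrics(B, B_pot, dV=1.0):
    """Total, potential, and free magnetic energy (Gaussian units,
    E = sum B^2 dV / 8pi) from field arrays of shape (3, nx, ny, nz)."""
    e_tot = np.sum(B**2) * dV / (8 * np.pi)
    e_pot = np.sum(B_pot**2) * dV / (8 * np.pi)
    return e_tot, e_pot, e_tot - e_pot

def mean_vector_deviation(b, B):
    """e_m = (1/M) sum_i |b_i - B_i| / |B_i| between two snapshots.
    Assumes |B_i| > 0 everywhere (true in practice for a coronal field)."""
    diff = np.sqrt(np.sum((b - B)**2, axis=0))
    norm = np.sqrt(np.sum(B**2, axis=0))
    return np.mean(diff / norm)
```

For identical snapshots $e_{\rm m}=0$, matching the first row of Table~\ref{tab:table1}; the free energy is positive whenever the field carries any non-potential component over the whole volume.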
\begin{table*}[htbp]
\centering
\begin{tabular}{llllllll}
\hline
\hline
Time & $|\Phi|_{\rm tot}$[$10^{22}$~Mx] & $|I|_{\rm tot}$[$10^{13}$~A] &
$E_{\rm tot}$[$10^{32}$~erg] & $E_{\rm pot}$[$10^{32}$~erg]
& $E_{\rm free}$[$10^{31}$~erg] & $E_{\rm free}/E_{\rm pot}$ &
$e_{\rm m}$\\
\hline
21:00 &1.60 & 4.77 & 4.80 & 4.04 & 7.66 & 0.19 &0.00\\
21:36 &1.63 & 4.80 & 4.95 & 4.18 & 7.69 & 0.18 &0.09\\
22:00 &1.65 & 4.84 & 5.05 & 4.27 & 7.78 & 0.18 &0.11\\
22:12 &1.66 & 4.92 & 5.09 & 4.33 & 7.61 & 0.18 &0.10\\
22:36 &1.68 & 4.96 & 5.21 & 4.41 & 7.95 & 0.18 &0.11\\
23:00 &1.70 & 5.05 & 5.28 & 4.50 & 7.79 & 0.17 &0.13\\
\hline
\end{tabular}
\caption{Variation of the parameters with the evolution of the field;
  see text for details.}
\label{tab:table1}
\end{table*}
In addition to the global energy content, we can also study the
spatial distribution of the magnetic free energy, i.e., the locations
of the free energy storage. As an example, for the magnetic field at
22:00 we computed the vertical integral of the free energy
\begin{equation}
E_{\rm free}(x,y) = A\int \frac{\vec B^{2}-\vec B_{\rm pot}^{2}}
{8\pi} {\ \mathrm d} z
\end{equation}
where $ A = {\ \mathrm d} x {\ \mathrm d} y$, and plotted the distribution of $E_{\rm
  free}(x,y)$ in the horizontal plane ({Figure}~\ref{fig:Efree_xy}). Summing
$E_{\rm free}(x,y)$ over the images gives the total free energy listed
in Table~\ref{tab:table1}. In the left image of
{Figure}~\ref{fig:Efree_xy}, the contour lines show the vertically
integrated current density $\int |\vec J| {\ \mathrm d} z$, color-coded with the
strength of the integrated current (increasing from black to white);
in the right image the strongest regions of the photospheric $B_{z}$
are outlined by contour lines ($\pm 1000$~G). It can clearly be seen
that the distribution of the free energy is largely co-spatial with
that of the current. This is easily understood, because the coronal
free energy (or non-potential energy) is actually stored in the
current-carrying field (where the non-potentiality is strong). On the
other hand, as shown by the right image, the concentrations of free
energy are not generally spatially correlated with those of the
strongest magnetic flux. It should be noted that in the image there
are some places with negative values of the vertically integrated free
energy. This is physically valid, since there is no restriction that
the energy density (and thus the energy of any sub-volume) must always
be greater than that of the potential field, although a non-potential
field must have a global energy content greater than that of the
potential field \citep[e.g.,][]{Mackay2011}. In
{Figure}~\ref{fig:Efree_zline} we plot the horizontal surface integrals
of the total energy, the potential energy, and the free energy, e.g.,
\begin{equation}
E_{\rm free}(z) = {\ \mathrm d} z\int\frac{\vec B^{2}-\vec B_{\rm pot}^{2}}
{8\pi} {\ \mathrm d} x {\ \mathrm d} y
\end{equation}
as functions of the height $z$. The total and potential energies are
predominantly located near the photosphere, where the magnetic field
strength is high, whereas the free energy (the red curve) resides
mainly above the photosphere, in a range of 5~Mm to 30~Mm with its
maximum at about 10~Mm. Interestingly, near the photosphere the free
energy is negative, with a minimum at the photosphere, indicating that
the observed vector field has a lower surface energy content than the
potential field. This is, however, not surprising, since as noted
above any sub-volume energy content of the non-potential field may be
lower than the potential energy.
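Both decompositions of the free energy, the column map $E_{\rm free}(x,y)$ and the height profile $E_{\rm free}(z)$, can be computed from the same discretized energy-density difference, as the following sketch shows (uniform grid and array shape $(3, n_x, n_y, n_z)$ assumed for illustration):

```python
import numpy as np

def free_energy_maps(B, B_pot, dx=1.0, dy=1.0, dz=1.0):
    """Vertical column map E_free(x,y) and horizontal-layer profile
    E_free(z) of the free-energy density (B^2 - B_pot^2) / 8pi.
    Individual entries may be negative; only the global sum must be
    non-negative."""
    e_density = (np.sum(B**2, axis=0) - np.sum(B_pot**2, axis=0)) / (8 * np.pi)
    e_xy = e_density.sum(axis=2) * dx * dy * dz      # integrate over z
    e_z = e_density.sum(axis=(0, 1)) * dx * dy * dz  # integrate over x, y
    return e_xy, e_z
```

By construction the two maps are consistent: summing either over its remaining axes recovers the total free energy of Table~\ref{tab:table1}.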
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{Efree_xy}
\includegraphics[width=0.48\textwidth]{Efree_xy1}
\caption{The images show the vertical integral of the free energy,
  i.e., the locations of free energy storage. The contour lines in
  the left panel represent the vertically integrated current density
  $\int |\vec J| {\ \mathrm d} z$, color-coded with the strength of the
  integrated current (increasing from black to white). The contour
  lines in the right panel represent $B_{z}$ on the photosphere at
  $\pm 1000$~G.}
\label{fig:Efree_xy}
\end{figure*}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{Efree_zline}
\caption{Variation of the horizontal surface integrals of the magnetic
  energies along the $z$-axis. Note that the left vertical axis
  (black) indicates values of $E_{\rm pot}$ and $E_{\rm tot}$, and
  the right vertical axis (red) indicates values of $E_{\rm free}$.}
\label{fig:Efree_zline}
\end{figure}
Electric current can characterize the non-potentiality of the field;
e.g., patterns of strong current concentration may serve as a
proxy for non-potential structures (e.g., sigmoids) in the
corona \citep{Schrijver2008,Archontis2009,Sun2012}. In particular,
current structures are regions where reconnection may occur,
converting magnetic energy into thermal energy and heating, and thus
creating hot emission. In {Figure}~\ref{fig:AIA_304} we give examples of
synthetic images of the current, computed by vertical
integration of $J^{2}$ (i.e., $\int_{z}J^{2} {\ \mathrm d} z$, see
\citet{Archontis2009}), compared with the AIA 304 {\AA}
images. Since $J^{2}$ is proportional to the Joule heating
rate, it serves as a very rough proxy for the hot emission. As can be
seen in the figure, the strong current regions are indeed coincident
with the regions of high emission intensity. However, the result
does not show any intense current associated with the flare
site. This may be because the current sheet in the BP separatrices is
very thin and cannot be resolved at the present grid resolution.
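Schematically, the synthetic image is a column integration of the squared current density over the grid. A simple sketch, assuming $\vec J$ is stored as a component array on a uniform grid (names and layout are illustrative, not the model's actual data structures):

```python
import numpy as np

def synthetic_current_image(J, dz):
    """Column-integrated squared current density, int J^2 dz, used as a
    very rough proxy for Joule heating and hence hot emission.

    J : array of shape (3, nx, ny, nz) with the current density
    components; dz : uniform vertical cell size.
    Returns an (nx, ny) synthetic image.
    """
    J_squared = np.sum(J**2, axis=0)        # |J|^2 at every grid point
    return np.sum(J_squared, axis=-1) * dz  # integrate along the z-axis
```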
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{AIA_304}
\caption{The left column shows the AIA 304 {\AA} image and the right column
the synthetic current image, computed as the vertical integral of $J^{2}$
in the MHD model.}
\label{fig:AIA_304}
\end{figure*}
\section{Conclusions}
\label{sec:conclude}
In this work, we have applied the data-driven CESE--MHD model to
investigate the 3D magnetic field of AR 11117 around the time of a
C-class confined flare that occurred on 2010 October 25. Like
field extrapolation methods, our model is designed to focus on the
magnetic field, but its nonlinear dynamic interactions with the plasma and
a finite gas pressure (characterized by the plasma $\beta$) are also included,
although in simplified form. Assuming that the dynamic evolution of the
coronal magnetic field can be approximated by successive equilibria,
we have solved a time sequence of MHD equilibria based on a set of
vector magnetograms of AR 11117 taken by {\it SDO}/HMI around the
time of the flare. By analyzing the computed 3D magnetic field along with
the observations, we obtain the following results:
\begin{enumerate}
\item The model has qualitatively reproduced the basic structures of
  the magnetic field, as supported by the visual similarity between
  the model field lines and the {\it SDO}/AIA loops, which shows that the
  coronal field can indeed be well characterized by MHD
  equilibria most of the time. The magnetic field is strongly non-potential,
  with strong local shear and some twist compared with the
  potential model. There are also some loops that the MHD model fails to
  recover, but only at times very close to the onset of
  the flare. This means that the magnetic field is rather dynamic when
  the energy is suddenly released on a time-scale far shorter than
  that of Alfv\'enic relaxation.
\item The magnetic configuration changes very little during the
  studied time interval of two hours, and the flare-related
  reconnection takes place on a rather small scale and at low heights near
  the photosphere. The topological analysis reveals that the small
  flare is correlated with a BP, and the energy dissipation can be
  understood as reconnection associated with the BP
  separatrices. However, no intense current is found at the flare
  site related to the BP separatrices. This may be because the
  current sheet associated with the separatrices is very thin and
  cannot be resolved at the present grid resolution. Further study
  exploiting the full resolution and high-cadence observations is
  required to explain how the BP flare is activated, e.g., where the
  current sheet is formed and how the reconnection is triggered.
\item Because of the continuous flux emergence, the total unsigned
  magnetic flux and the current through the photosphere keep increasing
  (though very slightly) in spite of the flare. Although the evolution of the
  total magnetic energy exhibits the same tendency as that of the
  total magnetic flux, the free energy summed over the computational
  volume drops when the flare occurs, indicating that some of the
  non-potential energy is released by the flare. Our computation shows
  that the free energy loss is on the order of
  $10^{30}$~erg, which is adequate to power a minor C-class flare.
\end{enumerate}
In summary, our model captures the basic features of the 3D magnetic
field of the target active region both qualitatively and
quantitatively, and the results also give some hints on the trigger
mechanism of the flare. Nevertheless, we remind the reader that the
results, especially the quantitative ones, should be interpreted
with caution because they can be influenced by many uncertainties in
the modeling. Such uncertainties also exist in other models of a
similar kind, for example NLFF modeling, where even for the same
model different codes may produce very inconsistent results
\citep{Schrijver2008,Derosa2009}. The uncertainties may first come
from measurement errors in the HMI magnetogram data. For example,
\citet{Sun2012} estimated that the free energy content could be
affected by several percent by the spectropolarimetric noise in the
magnetogram; even such a small error is significant for the present case, in
which the flare may only release a very small fraction of the free
energy. It should be noted that in the NLFF model the systematic error
can be greater because of the force-free assumption and the
preprocessing and smoothing of the original data. Although our model
does not suffer from such preprocessing-related problems, systematic
uncertainties can still arise from the simplified configuration of the
solar atmosphere, the boundary conditions, and the interpolation of the data
from the original non-uniform grid to a uniform grid when computing
the parameters. In particular, the use of a globally low-$\beta$ plasma
is far from the realistic case, in which the solar atmosphere is highly
stratified with much larger gas pressure near the
photosphere. Furthermore, the assumption of a static
magnetic field is invalidated by the onset of the flare, which can
render the field lines highly dynamic and make our computation
unreliable, as discussed in the comparison of the MHD results with the
AIA images. This is a much more fundamental problem (than the others
mentioned above) encountered by any extrapolation of the magnetic field
with static or quasi-static models.
Several aspects of the model merit future improvement. The capability
to adaptively resolve small-scale structures can hopefully be
realized with the aid of the adaptive-mesh-refinement
technique. Exploiting additional observations, such as the surface flows
computed by LCT-type methods, can further constrain the model and
provide important information on the realistic dynamic evolution of
the magnetic field. A more physics-based thermodynamic model of the
solar atmosphere with a stratified temperature will also be considered
to couple the photosphere and corona, in order to model the behavior
of the magnetic field in a highly stratified and inhomogeneous plasma
with $\beta$ ranging from $>1$ to $\ll 1$.
\acknowledgments
The work is jointly supported by the 973 program under grant
2012CB825601, the Chinese Academy of Sciences (KZZD-EW-01-4), the
National Natural Science Foundation of China (41031066, 40921063,
40890162, and 41074122), and the Specialized Research Fund for State
Key Laboratories. Data are courtesy of NASA/SDO and the AIA and HMI
science teams. Special thanks go to our anonymous reviewer for
valuable suggestions for the improvement of the paper.
\section{Introduction}
\label{Sec:Intro}
Our nearest neighbouring large spiral galaxy, the Andromeda galaxy, also known as \object{M~31}\ or \object{NGC~224}, is an ideal target for an X-ray source population study of a galaxy similar to the Milky Way. Its proximity \citep[distance 780 kpc,][]{1998AJ....115.1916H,1998ApJ...503L.131S} and the moderate Galactic foreground absorption \citep[\hbox{$N_{\rm H}$} = 7\hcm{20}, ][]{1992ApJS...79...77S} allow a detailed study of source populations and individual sources.
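At this distance, the conversion between an observed flux and the source luminosity follows the standard $L_X = 4\pi d^2 F_X$ relation for an isotropic emitter; a small sketch (the example flux value is arbitrary):

```python
import math

KPC_IN_CM = 3.0857e21  # centimetres per kiloparsec

def luminosity(flux, distance_kpc=780.0):
    """Convert an observed flux (erg cm^-2 s^-1) into a luminosity
    (erg s^-1), assuming an isotropic source at the given distance."""
    d_cm = distance_kpc * KPC_IN_CM
    return 4.0 * math.pi * d_cm**2 * flux

# At 780 kpc, a flux of ~1.4e-15 erg cm^-2 s^-1 corresponds to a
# luminosity of ~1e35 erg s^-1.
```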
After early detections of \object{M~31}\ with X-ray detectors mounted on rockets \citep[\eg\ ][]{1974ApJ...190..285B} and the {\it Uhuru} satellite \citep[][]{1974ApJS...27...37G}, the imaging X-ray optics flown on the {\it Einstein}\ X-ray observatory permitted the resolution of individual X-ray sources in \object{M~31} for the first time. In the entire set of {\it Einstein}\ imaging observations of \object{M~31}, \citet[][hereafter TF91]{1991ApJ...382...82T} found 108 individual X-ray sources brighter than $\sim6.4$\ergs{36}, of which 16 sources showed variability \citep{1979ApJ...234L..45V,1990ApJ...356..119C}.
In July 1990, the bulge region of \object{M~31}\ was observed with the {\it ROSAT}\ High Resolution Imager (HRI) for $\sim 48$\,ks.\@ \citet[][hereafter PFJ93]{1993ApJ...410..615P} reported 86 sources brighter than $\sim1.8$\ergs{36} in this observation. Of the {\it ROSAT}\ HRI sources located within 7\farcm5 of the nucleus, 18 sources were found to vary when compared to previous {\it Einstein}\ observations and about three of the sources may be ``transients''.
Two deep PSPC (Position Sensitive Proportional Counter) surveys of \object{M~31}\ were performed with {\it ROSAT}, the first in July 1991 \citep[][hereafter SHP97]{1997A&A...317..328S}, the second in July/August 1992 \citep[][hereafter SHL2001]{2001A&A...373...63S}. In total 560 X-ray sources were detected in the field of \object{M~31}; of these, 491 sources were not detected in previous {\it Einstein}\ observations. In addition, a comparison with the results of the {\it Einstein}\ survey revealed long term variability in 18 sources, including 7 possible transients. Comparing the two {\it ROSAT}\ surveys, 34 long term variable sources and 8 transient candidates were detected. The derived luminosities of the detected \object{M~31}\ sources ranged from 5\ergs{35} to 5\ergs{38}.\@ Another important result obtained with {\it ROSAT}\ was the establishment of supersoft sources (SSSs) as a new class of \object{M~31}\ X-ray sources \citep[\textit{cf.}\ ][]{1999A&A...344..459K} and the identification of the first SSS with an optical nova in \object{M~31}\ \citep{2002A&A...389..439N}.
\citet{2000ApJ...537L..23G} reported on first observations of the nuclear region of \object{M~31}\ with {\it Chandra}. They found that the nuclear source has an unusual X-ray spectrum compared to the other point sources in \object{M~31}. \citet{2002ApJ...577..738K} report on eight {\it Chandra}\ ACIS-I observations taken between 1999 and 2001, which cover the central $\sim 17\hbox{$^\prime$}\!\times\!17\hbox{$^\prime$}$ region of \object{M~31}. They detected 204 sources, of which $\sim$50\% are variable on timescales of months and 13 sources were classified as transients. \citet{2002ApJ...578..114K} detected 142 point sources ($L_X=2\!\times\!10^{35}$ to 2\ergs{38} in the 0.1--10\,keV band) in a 47\,ks {\it Chandra}/HRC observation of the central region of \object{M~31}. A comparison with a {\it ROSAT}\ observation taken 11\,yr earlier, showed that 46$\pm$26\% of the sources with $L_X>5$\ergs{36} are variable. Three different \object{M~31}\ disc fields, consisting of different stellar population mixtures, were observed by {\it Chandra}. \citet{2002ApJ...570..618D} investigated bright X-ray binaries (XRBs) in these fields, while \citet{2004ApJ...610..247D} examined the populations of supersoft sources (SSSs) and quasisoft sources (QSSs), including observations of the central field. Using {\it Chandra}\ HRC observations, \citet{2004ApJ...609..735W} measured the mean fluxes and long-term time variability of 166 sources detected in these data. \citet{2007A&A...468...49V} used {\it Chandra}\ data to examine the low mass X-ray binaries (LMXBs) in the bulge of \object{M~31}. Good candidates for LMXBs are the so-called transient sources. Studies of transient sources in \object{M~31}\ are presented in numerous papers, e.\,g.~\citet{2006ApJ...643..356W}, \citet[][ hereafter TPC06]{2006ApJ...645..277T}, \citet{2005ApJ...632.1086W}, \citet[][ hereafter WGM06]{2006ApJ...637..479W}, and \citet{2008A&A...489..707V}.
Using {XMM-Newton}\ and {\it Chandra}\ data, \citet{2004ApJ...616..821T} detected 43 X-ray sources coincident with globular cluster candidates from various optical surveys. They studied their spectral properties, time variability and log\,N-log\,S relations.
\citet{2001A&A...378..800O} used {XMM-Newton}\ Performance Verification observations to study the variability of X-ray sources in the central region of \object{M~31}. They found 116 sources brighter than a limiting luminosity of 6\ergs{35} and examined the $\sim60$ brightest sources for periodic and non-periodic variability. At least 15\% of these sources appear to be variable on a time scale of several months. \citet{2003A&A...411..553B} used {XMM-Newton}\ to study the X-ray binary RX J0042.6+4115 and suggested it as a Z-source. \citet{2006ApJ...643..844O} studied the population of SSSs and QSSs with {XMM-Newton}.
Recently, \citet{2008ApJ...676.1218T} reported the discovery of 217\,s pulsations in the bright persistent SSS XMMU~J004252.5+411540.\@ \citet[][hereafter SBK2009]{2009A&A...495..733S} presented the results of a complete spectral survey of the 335 X-ray point sources they detected in five {XMM-Newton}\ observations located along the major axis of \object{M~31}.\@ They obtained background subtracted spectra and lightcurves for each of the 335 X-ray sources. Sources with more than 50 source counts were individually spectrally fitted. In addition, they selected 18 HMXB candidates, based on a power law photon index of $0.8\!\le\!\Gamma\!\le\!1.2$.
\citet[][ hereafter PFH2005]{2005A&A...434..483P} prepared a catalogue of \object{M~31}\
point-like X-ray sources analysing all observations available at that time in the {XMM-Newton}\ archive which overlap at least in part with the optical $\mathrm{D}_{25}$ extent of the galaxy.
In total, they detected 856 sources. The central part of the galaxy was covered four times with a separation of the observations of about half a year starting in June 2000. PFH2005 only gave source properties derived from an analysis of the combined observations of the central region. Source identification and classification were based on hardness ratios and correlations with sources in other wavelength regimes. In follow-up work, (i) \citet[][]{2005A&A...430L..45P} searched for X-ray burst sources in globular cluster (GlC) sources and candidates and identified two X-ray bursters and a few more candidates, while (ii) \citet[][ hereafter PFF2005]{2005A&A...442..879P} searched for correlations with optical novae. They identified 7 SSSs and 1 symbiotic star from the catalogue of PFH2005 with optical novae, and identified an additional {XMM-Newton}\ source with an optical nova. This work was continued and extended to archival {\it Chandra}\ HRC-I and ACIS-I observations by \citet[][ hereafter PHS2007]{2007A&A...465..375P}.
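Hardness ratios of the kind used for such classification are computed from count rates in adjacent energy bands, typically as $HR = (B_2 - B_1)/(B_2 + B_1)$ with Gaussian error propagation. A minimal sketch (the function and argument names are our own, not those of any pipeline):

```python
import math

def hardness_ratio(rate_soft, rate_hard, err_soft, err_hard):
    """Hardness ratio HR = (H - S)/(H + S) between two adjacent energy
    bands, with the propagated 1-sigma uncertainty."""
    total = rate_soft + rate_hard
    hr = (rate_hard - rate_soft) / total
    hr_err = 2.0 * math.sqrt((rate_hard * err_soft)**2 +
                             (rate_soft * err_hard)**2) / total**2
    return hr, hr_err
```

By construction, $HR$ lies between $-1$ (all counts in the soft band) and $+1$ (all counts in the hard band).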
\citet[][hereafter SPH2008]{2008A&A...480..599S} presented a time variability analysis of all of the \object{M~31}\ central sources.
They detected 39 sources not reported at all in PFH2005. 21 sources were detected in the July 2004 monitoring observations of the low mass X-ray binary RX J0042.6+4115 (PI Barnard), which became available in the meantime. Six sources, which were classified as ``hard" sources by PFH2005, show distinct time variability and hence are classified as XRB candidates in SPH2008. The SNR classifications of three other sources from PFH2005 had to be rejected due to the distinct time variability found by SPH2008. \citet{2009A&A...500..769H} reported on the first two SSSs ever discovered in the \object{M~31}\ globular cluster system, and \citet{2009A&A...498L..13H} discussed the very short supersoft X-ray state of the classical nova M31N 2007-11a. A comparative study of supersoft sources detected with {\it ROSAT}, {\it Chandra}\ and {XMM-Newton}, examining their long-term variability, was presented by \citet{2010AN....331..212S}.
An investigation of the log\,N-log\,S relation of sources detected in the 2.0--10.0\,keV range will be presented in a forthcoming paper (Stiele et al. 2011 in prep.). In that work, the contribution of background objects and the spatial dependence of the log\,N-log\,S relations for \object{M~31}\ sources are studied.
In this paper we report on the large {XMM-Newton}\ survey of \object{M~31}, which for the first time covers the entire $\mathrm{D}_{25}$ ellipse of \object{M~31}\ down to a limiting luminosity of $\sim$\oergs{35} in the 0.2--4.5\,keV band. In Sect.\,\ref{sec:obsana} information about the observations used is provided. The analysis of the data is presented in Sect.\,\ref{Sec:analys}. Sect.\,\ref{Sec:coim} presents the combined colour image of all observations used. The source catalogue of the deep {XMM-Newton}\ survey of \object{M~31}\ is described in Sect.\,\ref{Sec:srccat}.
The results of the temporal variability analysis are discussed in Sect.\,\ref{Sec:var}. Cross-correlations with other \object{M~31}\ X-ray catalogues are discussed in Sect.\,\ref{Sec:CrossX-ray}, while Sect.\,\ref{SEC:CCow} discusses cross-correlations with catalogues at other wavelengths. Our results related to foreground stars and background sources in the field of \object{M~31}\ are presented in Sect.\,\ref{Sec:fgback}.
Individual source classes belonging to M31 are discussed in Sect.\,\ref{Sec:Srcsm31}.
We draw our conclusions in Sect.\,\ref{Sec:Concl}.
\begin{table*}
\scriptsize
\begin{center}
\caption{A selection of important X-ray surveys of \object{M~31}.}
\begin{tabular}{lcrcll}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
Paper & S$^{+}$ & \#ofSrc$^{*}$ & \multicolumn{1}{c}{L$_{X}^{\dagger}$} & field & comments \\
& & & erg cm$^{-2}$ s$^{-1}$& \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
\protect{\citet[][TF91]{1991ApJ...382...82T}} & E & 108 & $6.4\!\times\!10^{36}$--$1.3\!\times\!10^{38}$ & entire set of {\it Einstein}\ & 16 sources showed variability\\
& & & (0.2--4\,keV) &imaging observations & \\
\protect{\citet[][PFJ93]{1993ApJ...410..615P}} & R (HRI) & 86 & $\ga1.8\!\times\!10^{36}$ & bulge region & 18 sources variable; $\sim$3 transients\\
& & & (0.2--4\,keV) & &\\
\protect{\citet{1997A&A...317..328S,2001A&A...373...63S}} & R (PSPC) & 560 & $5\!\times\!10^{35}$--$5\!\times\!10^{38}$ &
whole galaxy & two deep surveys\\
(SHP97, SHL2001) & & & (0.1--2.4\,keV) & & 491 sources not detected with {\it Einstein} \\
& & & & & 11 sources variable, 7 transients compared to {\it Einstein} \\
& & & & & 34 sources variable, 8 transients between {\it ROSAT}\ surveys \\
\protect{\citet{2001A&A...378..800O}} & X & 116 & $\ga6\!\times\!10^{35}$ & centre & examined the $\sim60$ brightest sources for variability\\
& & & (0.3--12\,keV) & & \\
\protect{\citet{2002ApJ...577..738K}} & C (ACIS-I) & 204 & $\ga2\!\times\!10^{35}$ & central $\sim 17\hbox{$^\prime$}\!\times\!17\hbox{$^\prime$}$ & observations between 1999 and 2001\\
& & & (0.3--7\,keV) & & $\sim$50\% of the sources are variable, 13 transients\\
\protect{\citet{2002ApJ...578..114K}} & C (HRC) & 142 & $2\!\times\!10^{35}$--$2\!\times\!10^{38}$ & centre & one 47\,ks observation; 46$\pm$26\% of the sources \\
& & & (0.1--10\,keV) & & with $L_X>5$\ergs{36} are variable\\
\protect{\citet{2002ApJ...570..618D}} & C (ACIS-I/S) & 28 & $5\!\times\!10^{35}$--$3\!\times\!10^{38}$ & 3 disc fields & bright X-ray binaries\\
& & & (0.3--7\,keV) & & \\
\protect{\citet{2004ApJ...610..247D}} & C (ACIS-S S3) & 33 & & 3 disc fields + centre & supersoft sources and quasisoft sources\\
\protect{\citet{2004ApJ...609..735W}} & C (HRC) & 166 & $1.4\!\times\!10^{36}$--$5\!\times\!10^{38}$ & major axis + centre & $\ga$25\% showed significant variability\\
& & & (0.1--10\,keV) & & \\
\protect{\citet{2004ApJ...616..821T}} & C, X & 43 & $\sim10^{35}$--$\sim10^{39}$ & major axis + centre & globular cluster study \\
& & & (0.3--10\,keV) & & \\
\protect{\citet[][PFH2005]{2005A&A...434..483P}} & X & 856 & $4.4\!\times\!10^{34}$--$2.8\!\times\!10^{38}$ & major axis + centre & source catalogue\\
& & & (0.2--4.5\,keV) & & \\
\protect{\citet[][PFF2005]{2005A&A...442..879P}} & C, R, X & 21 & $\sim10^{35}$--$\sim10^{38}$ & centre & correlations with optical novae\\
& & & (0.2--1\,keV) & & \\
\protect{\citet{2006ApJ...643..844O}} & C, X & 42 & $6\!\times\!10^{35}$--$\sim10^{39}$ & major axis + centre & supersoft sources and quasisoft sources\\
& & & (0.2--2\,keV) & & \\
& & & (0.3--10\,keV) & & \\
\protect{\citet[][PHS2007]{2007A&A...465..375P}} & C, X & 46 & $\sim10^{35}$--$\sim10^{38}$ & centre & correlations with optical novae\\
& & & (0.2--1\,keV) & & \\
\protect{\citet{2007A&A...468...49V}} & C & 263 & $5\!\times\!10^{33}$--$1.5\!\times\!10^{38}$ & bulge region & low mass X-ray binary study\\
& & & (0.5--8\,keV) & & \\
\protect{\citet[][SPH2008]{2008A&A...480..599S}} & X & 39 & $7\!\times\!10^{34}$--$6\!\times\!10^{37}$ & centre & re-analysis of archival and new 2004 observations\\
& & 300 & $4.4\!\times\!10^{34}$--$2.8\!\times\!10^{38}$ & & time variability analysis; 149 sources with a significance\\
& & & (0.2--4.5\,keV) & & for variability $>$3; 6 new X-ray binary candidates,\\
& & & & & 3 supernova remnant classifications were rejected\\
\protect{\citet[][SBK2009]{2009A&A...495..733S}} & X & 335 & $\sim10^{34}$--$\sim10^{39}$ & 5 fields along & background subtracted spectra and lightcurves for\\
& & & (0.3--10\,keV) & major axis & each source; 18 HMXB candidates, selected from their\\
& & & & & power law photon index\\
\protect{\citet{2010AN....331..212S}} & X & 40 & & whole galaxy & supersoft sources; comparing {\it ROSAT}, {\it Chandra}\ and\\
& & & & & {XMM-Newton}\ catalogues\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:VarSNRs1}
\end{center}
Notes:\\
$^{ +~}$: X-ray satellite(s) on which the study is based: E for {\it Einstein}, R for {\it ROSAT}, C for {\it Chandra}, and X for {XMM-Newton}\ (EPIC)\\
$^{ *~}$: Number of sources\\
$^{ \dagger~}$: observed luminosity range in the indicated energy band, assuming a distance of 780\,kpc to \object{M~31}
\normalsize
\end{table*}
\section{Observations}
\label{sec:obsana}
Figure~\ref{fig:deepsurveyfields} shows the layout of the individual {XMM-Newton}\ observations over the field of \object{M~31}. The observations of the ``Deep {XMM-Newton}\ Survey of \object{M~31}'' (PI Pietsch) mainly point at the outer parts of \object{M~31}, while the area along the major axis is covered by archival {XMM-Newton}\ observations (PIs Watson, Mason, Di Stefano).
To treat all data in the same way, we re-analysed all archival {XMM-Newton}\ observations of \object{M~31}, which were used in \citet{2005A&A...434..483P}.\@ In addition we included an {XMM-Newton}\ target of opportunity (ToO) observation of source CXOM31~J004059.2+411551 and the four observations of source RX J0042.6+4115 (PI Barnard).
All observations of the ``Deep {XMM-Newton}\ Survey of \object{M~31}'' and the ToO observation were taken between June 2006 and February 2008.\@ All other observations were available via the {XMM-Newton}\ Data Archive\footnote{\url{http://xmm.esac.esa.int/xsa/}} and were taken between June 2000 and July 2004.
The journal of observations is given in Table~\ref{tab:observations}. It includes the \object{M~31}\ field name (Column~1), the identification number (2) and date (3) of the observation, and the pointing direction (4, 5), while Col.~6 contains the systematic offset (see Sect.\,\ref{SubSec:AstCorr}). For each EPIC camera, the filter used and the exposure time after screening for high background are given (see Sect.\,\ref{sec:Screening}).
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/fields_new.ps}}
\caption{A deep optical image of \object{M~31}\ (private communication: V.~Burwitz) overplotted with the {XMM-Newton}\ fields of the survey. The area covered by individual EPIC observations is approximated by circles with 14 arcmin radius. Fields observed in the ``Deep {XMM-Newton}\ Survey of \object{M~31}'' are marked with thicker lines. For presentation purposes, the ToO observation and the observations of RX J0042.6+4115 are omitted.}
\label{fig:deepsurveyfields}
\end{figure}
\begin{table*}
\scriptsize
\begin{center}
\caption[]{{XMM-Newton}\ log of the {\em Deep Survey} and archival \object{M~31}\ observation overlapping with the optical $D_{25}$ ellipse.\label{tab:observations}}
\begin{tabular}{llllrrrlrlrlr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{2}{c}{M 31 field} & \multicolumn{1}{c}{Obs. id.} &\multicolumn{1}{c}{Obs. dates} &
\multicolumn{2}{c}{Pointing direction} & \multicolumn{1}{c}{Offset~$^*$} & \multicolumn{2}{c}{EPIC PN} &
\multicolumn{2}{c}{EPIC MOS1} & \multicolumn{2}{c}{EPIC MOS2} \\
\noalign{\smallskip}
& & & & \multicolumn{2}{c}{RA/Dec (J2000)} & \multicolumn{1}{c}{}
& \multicolumn{1}{c}{Filter$^{+}$} & \multicolumn{1}{c}{$T_{exp}^{\dagger}$}
& \multicolumn{1}{c}{Filter$^{+}$} & \multicolumn{1}{c}{$T_{exp}^{\dagger}$}
& \multicolumn{1}{c}{Filter$^{+}$} & \multicolumn{1}{c}{$T_{exp}^{\dagger}$}\\
\noalign{\smallskip}
\multicolumn{2}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} &
\multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} &
\multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} &
\multicolumn{1}{c}{(10)} & \multicolumn{1}{c}{(11)} & \multicolumn{1}{c}{(12)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Centre 1 & (c1) & 0112570401 & 2000-06-25 & 0:42:36.2 & 41:16:58 & $-1.9,+0.1$ & medium & 23.48(23.48) & medium & 29.64(29.64) &medium & 29.64(29.64) \\
Centre 2 & (c2) & 0112570601 & 2000-12-28 & 0:42:49.8 & 41:14:37 & $-2.1,+0.2$ & medium & 5.82( 5.82) & medium & 6.42( 6.42) &medium & 6.42( 6.42) \\
Centre 3 & (c3) & 0109270101 & 2001-06-29 & 0:42:36.3 & 41:16:54 & $-3.2,-1.7$ & medium & 21.71(21.71) & medium & 23.85(23.85) &medium & 23.86(23.86) \\
N1 & (n1) & 0109270701 & 2002-01-05 & 0:44:08.2 & 41:34:56 & $-0.3,+0.7$ & medium & 48.31(48.31) & medium & 55.68(55.68) &medium & 55.67(55.67) \\
Centre 4 & (c4) & 0112570101 & 2002-01-06/07 & 0:42:50.4 & 41:14:46 & $-1.0,-0.8$ & thin & 47.85(47.85) & thin & 52.87(52.87) &thin & 52.86(52.86) \\
S1 & (s1) & 0112570201 & 2002-01-12/13 & 0:41:32.7 & 40:54:38 & $-2.1,-1.7$ & thin & 46.75(46.75) & thin & 51.83(51.83) &thin & 51.84(51.84) \\
S2 & (s2) & 0112570301 & 2002-01-24/25 & 0:40:06.0 & 40:35:24 & $-1.1,-0.3$ & thin & 22.23(22.23) & thin & 24.23(24.23) &thin & 24.24(24.24) \\
N2 & (n2) & 0109270301 & 2002-01-26/27 & 0:45:20.0 & 41:56:09 & $-0.3,-1.5$ & medium & 22.73(22.73) & medium & 25.22(25.22) &medium & 25.28(25.28) \\
N3 & (n3) & 0109270401 & 2002-06-29/30 & 0:46:38.0 & 42:16:20 & $-2.3,-1.7$ & medium & 39.34(39.34) & medium & 43.50(43.50) &medium & 43.63(43.63) \\
H4 & (h4) & 0151580401 & 2003-02-06 & 0:46:07.0 & 41:20:58 & $+0.3,+0.0$ & medium & 10.14(10.14) & medium & 12.76(12.76) &medium & 12.76(12.76) \\
RX~1 & (b1)$^{\ddagger}$ & 0202230201 & 2004-07-16 & 0:42:38.6 & 41:16:04 & $-1.3,-1.2$ & medium & 16.32(16.32) & medium & 19.21(19.21) &medium & 19.21(19.21) \\
RX~2 & (b2) & 0202230301 & 2004-07-17 & 0:42:38.6 & 41:16:04 & $-1.0,-0.9$ & medium & 0.0(0.0) & medium & 0.0(0.0) &medium & 0.0(0.0) \\
RX~3 & (b3)$^{\ddagger}$ & 0202230401 & 2004-07-18 & 0:42:38.6 & 41:16:04 & $-1.7,-1.5$ & medium & 12.30(12.30) & medium & 17.64(17.64) &medium & 17.68(17.68) \\
RX~4 & (b4)$^{\ddagger}$ & 0202230501 & 2004-07-19 & 0:42:38.6 & 41:16:04 & $-1.4,-1.8$ & medium & 7.94(7.94) & medium & 10.12(10.12) &medium & 10.13(10.13) \\
S3 & (s3) & 0402560101 & 2006-06-28 & 0:38:52.8 & 40:15:00 & $-3.1,-3.0$ & thin & 4.99(4.99) & medium & 6.96(6.96) &medium & 6.97(6.97)\\
SS1 & (ss1) & 0402560201 & 2006-06-30 & 0:43:28.8 & 40:55:12 & $-4.4,-3.7$ & thin & 14.07(9.57) & medium & 24.56(10.65) &medium & 24.58(10.66) \\
SN1 & (sn1) & 0402560301 & 2006-07-01 & 0:40:43.2 & 41:17:60 & $-2.7,-1.5$ & thin & 41.23(35.42) & medium & 47.60(39.40) &medium & 47.64(39.44) \\
SS2 & (ss2) & 0402560401 & 2006-07-08 & 0:42:16.8 & 40:37:12 & $-1.2,-1.3$ & thin & 21.64(9.92) & medium & 25.59(11.04) &medium & 25.64(11.05) \\
SN2 & (sn2) & 0402560501 & 2006-07-20 & 0:39:40.8 & 40:58:48 & $-0.8,-0.7$ & thin & 48.79(21.45) & medium & 56.13(23.85) &medium & 56.17(23.86) \\
SN3 & (sn3) & 0402560701 & 2006-07-23 & 0:39:02.4 & 40:37:48 & $-0.9,-2.0$ & thin & 23.80(15.43) & medium & 28.02(17.16) &medium & 28.04(17.17) \\
SS3 & (ss3) & 0402560601 & 2006-07-28 & 0:40:45.6 & 40:21:00 & $-1.8,-1.7$ & thin & 27.77(20.22) & medium & 31.92(22.49) &medium & 31.94(22.5) \\
S2~& (s21) & 0402560801 & 2006-12-25 & 0:40:06.0 & 40:35:24 & $-1.6,-0.7$ & thin & 39.12(39.12) & medium & 45.19(45.19) &medium & 45.21(45.21) \\
NN1 & (nn1) & 0402560901 & 2006-12-26 & 0:41:52.8 & 41:36:36 & $-1.5,-1.5$ & thin & 37.9(37.9) & medium & 43.08(43.08) &medium & 43.1(43.1) \\
NS1 & (ns1) & 0402561001 & 2006-12-30 & 0:44:38.4 & 41:12:00 & $-1.0,-1.3$ & thin & 45.11(45.11) & medium & 50.9(50.9) &medium & 50.93(50.93) \\
NN2 & (nn2) & 0402561101 & 2007-01-01 & 0:43:09.6 & 41:55:12 & $-0.0,-1.2$ & thin & 41.73(41.73) & medium & 46.45(46.45) &medium & 46.47(46.47) \\
NS2 & (ns2) & 0402561201 & 2007-01-02 & 0:45:43.2 & 41:31:48 & $-2.3,-1.7$ & thin & 34.96(34.96) & medium & 40.55(40.55) &medium & 40.58(40.58) \\
NN3 & (nn3) & 0402561301 & 2007-01-03 & 0:44:45.6 & 42:09:36 & $-1.4,-0.7$ & thin & 31.04(31.04) & medium & 34.81(34.81) &medium & 34.81(34.81) \\
NS3 & (ns3) & 0402561401 & 2007-01-04 & 0:46:38.4 & 41:53:60 & $-2.1,+0.3$ & thin & 39.41(39.41) & medium & 45.50(45.50) &medium & 45.52(45.52) \\
N2~& (n21) & 0402561501 & 2007-01-05 & 0:45:20.0 & 41:56:09 & $-2.6,-1.3$ & thin & 37.18(37.18) & medium & 41.98(41.98) &medium & 42.03(42.03) \\
SS1 & (ss11) & 0505760201 & 2007-07-22 & 0:43:28.8 & 40:55:12 & $-2.5,-2.6$ & thin & 30.07(23.90) & medium & 34.01(26.70) &medium & 34.02(26.72) \\
S3~& (s31) & 0505760101 & 2007-07-24 & 0:38:52.8 & 40:15:00 & $-1.8,-1.0$ & thin & 21.86(15.74) & medium & 24.74(17.65) &medium & 24.74(17.65) \\
CXOM31& (sn11)$^{\diamond}$ & 0410582001 & 2007-07-25 & 0:40:59.2 & 41:15:51 & $-1.2,-0.3$ & thin & 11.27(11.27) & medium & 14.01(14.01) &medium & 14.02(14.02) \\
SS3 & (ss31) & 0505760401 & 2007-12-25 & 0:40:45.6 & 40:21:00 & $-1.0,+0.1$ & thin & 23.56(22.82) & medium & 28.18(25.8) &medium & 28.2(25.82) \\
SS2 & (ss21) & 0505760301 & 2007-12-28 & 0:42:16.8 & 40:37:12 & $+1.3,-0.1$ & thin & 35.28(35.28) & medium & 40.00(40.00) &medium & 40.01(40.01) \\
SN3 & (sn31) & 0505760501 & 2007-12-31 & 0:39:02.4 & 40:37:48 & $-1.6,-1.3$ & thin & 24.26(24.26) & medium & 28.77(28.77) &medium & 28.78(28.78) \\
S3 & (s32) & 0511380101 & 2008-01-02 & 0:38:52.8 & 40:15:00 & $-1.7,-3.3$ & thin & 38.31(38.31) & medium & 44.92(44.92) &medium & 44.95(44.95) \\
SS1 & (ss12) & 0511380201 & 2008-01-05 & 0:43:28.8 & 40:55:12 & $-0.9,-1.4$ & thin & 8.85( 8.85) & medium & 11.28(11.28) &medium & 11.29(11.29) \\
SN2 & (sn21) & 0511380301 & 2008-01-06 & 0:39:40.8 & 40:58:48 & $-0.2,-0.4$ & thin & 24.79(24.79) & medium & 29.28(29.28) &medium & 29.29(29.29) \\
SS1 & (ss13) & 0511380601 & 2008-02-09 & 0:43:28.8 & 40:55:12 & $-0.8,-1.8$ & thin & 13.35(13.35) & medium & 15.07(15.07) &medium & 15.08(15.08)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{center}
\scriptsize{Notes:\\
$^{ *~}$: Systematic offset in RA and Dec in arcsec determined from correlations with 2MASS, USNO-B1, LGGS and {\it Chandra}\ catalogues \\
$^{ +~}$: All observations in full frame imaging mode\\
$^{ {\dagger}~}$: Exposure time in units of ks after screening for high
background used for detection, for colour image in brackets\\
$^{ {\ddagger}~}$: Combination of the three observations is called b (see text), RX denotes RX J0042.6+4115 \\
$^{ {\diamond}~}$: CXOM31 denotes CXOM31~J004059.2+411551}
\normalsize
\end{table*}
\section{Data analysis}
\label{Sec:analys}
In this section, the basic concepts of the X-ray data reduction and source detection processes are described.
\subsection{Screening for high background}
\label{sec:Screening}
The first step was to exclude times of increased background, due to soft proton flares. Most of these times are located at the start or end of an orbit window. We selected good time intervals (GTIs) -- intervals where the intensity was lower than a certain threshold -- using 7--15\,keV light curves constructed from source-free regions of each observation.
The GTIs with PN and MOS data were determined from the higher statistic PN light curves. Outside the PN time coverage, GTIs were determined from the combined MOS light curves. For each observation, the limiting thresholds for the count rate were adjusted individually; this way we avoided cutting out short periods (up to a few hundred seconds) of marginally increased background. Short periods of low background, which were embedded within longer periods of high background, were omitted. For most observations, the PN count rate thresholds were 2--8\,cts\,ks$^{-1}$\,arcmin$^{-2}$.
As many of the observations were affected by strong background flares, the net exposure which can be used for our analysis was strongly reduced.
The GTIs of the various observations ranged over 6--56\,ks,
apart from observation b2 (ObsID 0202230301) which had to be rejected, because it showed high background throughout the observation. The exposures for all three EPIC instruments are given in Cols. 8, 10 and 12 of Table~\ref{tab:observations}.\@ The observations obtained during the summer visibility window of \object{M~31}\ were affected more strongly by background radiation than those taken during the winter window. The most affected observations of the deep survey were reobserved.
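The count-rate thresholding used for the GTI selection can be sketched as follows. This is an illustrative Python reconstruction, not the actual reduction script: the bin times, rates and threshold are invented, and the additional smoothing of short embedded high/low periods described above is omitted for brevity.

```python
# Illustrative sketch (not the SAS implementation) of good-time-interval
# selection: keep intervals of a 7--15 keV background light curve whose
# count rate lies below a chosen threshold. All numbers are hypothetical.

def select_gtis(times, rates, threshold, bin_width):
    """Return a list of (start, stop) intervals where rate < threshold."""
    gtis = []
    start = None
    for t, r in zip(times, rates):
        if r < threshold:
            if start is None:
                start = t          # open a new good interval
        elif start is not None:
            gtis.append((start, t))  # close it when the rate rises
            start = None
    if start is not None:            # close a trailing good interval
        gtis.append((start, times[-1] + bin_width))
    return gtis

# Example: a soft proton flare raises the rate mid-observation.
times = [0, 100, 200, 300, 400, 500]                # s
rates = [1.2, 1.5, 9.0, 8.5, 1.4, 1.1]              # cts ks^-1 arcmin^-2
print(select_gtis(times, rates, threshold=2.0, bin_width=100))
# -> [(0, 200), (400, 600)]
```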
After screening for times of enhanced particle background, the second step was to examine the influence of solar wind charge exchange. This was done by producing soft energy ($<\!2$\,keV) background light curves. These light curves varied for only 10 observations, for which additional screening was necessary.
The screening of enhanced background due to solar wind charge exchange was applied only for the creation of colour images, to prevent the affected observations from appearing in the mosaic image with a tinge of red. The screening was not used for source detection.
The third and last step includes the study of the background due to detector noise. The processing chains take into account all known bad or hot pixels and columns and flag the affected pixels in the event lists. We selected data with {\tt (FLAG \& 0xfa0000)=0}, excluded rows and columns near edges, and searched by eye for additional warm or hot pixels and columns in each observation.
To avoid background variability over the PN images, we omitted the energy range from 7.2--9.2\,keV where strong fluorescence lines cause higher background in the outer detector area \citep{2004SPIE.5165..112F}.
An additional background component can occur during the EPIC PN offset map calculation.
If this period is affected by high particle background, the offset calculation will lead to a slight underestimate of the offset in some pixels, which can then result in blocks of pixels ($\approx 4\!\times\!4$) with enhanced low energy signal.\footnote{See also \url{http://xmm2.esac.esa.int/docs/documents/CAL-TN-0050-1-0.ps.gz}} These blocks will be found by the {\tt SAS} detection tools and appear as sources with extremely soft spectra (so-called supersoft sources). To reduce the number of false detections in this source class, we decided to include the task {\tt epreject} in {\tt epchain}, which locates the pixels with a slightly underestimated offset and corrects them. To ensure that {\tt epreject} produces reliable results, difference images of the event lists obtained with and without {\tt epreject} were created. Only events with energies above 200\,eV were used. We checked whether {\tt epreject} removed all pixels with an enhanced low energy signal. Only in observation ns1 does the difference image still show a block of pixels with enhanced signal. As this block is also visible at higher energies (PHA$>30$), it cannot be corrected with {\tt epreject}. Additionally, we ascertained that almost all pixels not affected during the offset map calculation have a value consistent with zero in the difference images, with two exceptions discussed in Sect.\,\ref{Sec:srccat}.
\subsection{Images}
\label{Sec:Images}
For each observation, the data were split into five energy bands: (0.2--0.5)\,keV, (0.5--1.0)\,keV, (1.0--2.0)\,keV, (2.0--4.5)\,keV, and (4.5--12)\,keV. For the PN data, we used only single-pixel events (PATTERN\,$=$\,0) in the first energy band, while for the other bands, single-pixel and double-pixel events were selected (PATTERN\,$\le$\,4). In the MOS cameras, single-pixel to quadruple-pixel events (PATTERN\,$\le$\,12) were used. We created images, background images and exposure maps (with and without vignetting correction) for PN, MOS\,1 and MOS\,2 in each of the five energy bands and masked them for the acceptable detector area. The image bin size is 2\hbox{$^{\prime\prime}$}. The same procedure was applied in our previous \object{M~31} and M~33 studies \citep[PFH2005 and][]{2004A&A...426...11P}.
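The band and PATTERN selection for PN can be illustrated with the following sketch, which splits a hypothetical event list into the five bands. The event tuples and the helper function are our own constructions for illustration, not part of the {\tt SAS} pipeline.

```python
# Sketch of the PN event selection into energy bands (keV) with the
# PATTERN cuts quoted in the text: singles only (PATTERN == 0) in band 1,
# singles + doubles (PATTERN <= 4) in the other bands. The event list
# (energy, pattern) is a made-up example.

BANDS = [(0.2, 0.5), (0.5, 1.0), (1.0, 2.0), (2.0, 4.5), (4.5, 12.0)]

def pn_band_filter(events):
    """Split PN events into the five bands, applying the PATTERN rules."""
    selected = [[] for _ in BANDS]
    for energy, pattern in events:
        for i, (lo, hi) in enumerate(BANDS):
            if lo <= energy < hi:
                max_pattern = 0 if i == 0 else 4
                if pattern <= max_pattern:
                    selected[i].append((energy, pattern))
                break
    return selected

events = [(0.3, 0), (0.3, 2), (1.5, 4), (5.0, 4), (5.0, 12)]
counts = [len(b) for b in pn_band_filter(events)]
print(counts)  # [1, 0, 1, 0, 1]
```

The double event at 0.3\,keV and the PATTERN\,=\,12 event at 5\,keV are rejected, mirroring the cuts stated above.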
To create background images, the {\tt SAS} task {\tt eboxdetect} was run in local mode, in which it determines the background from the surrounding pixels of a sliding box, with box sizes of $5\times5$, $10\times10$ and $20\times20$ pixels (10\hbox{$^{\prime\prime}$}$\times$10\hbox{$^{\prime\prime}$}, 20\hbox{$^{\prime\prime}$}$\times$20\hbox{$^{\prime\prime}$} and 40\hbox{$^{\prime\prime}$}$\times$40\hbox{$^{\prime\prime}$}). The detection threshold is set to {\tt likemin\,=\,15}, which is a good compromise between cutting out most of the sources and leaving sufficient area to derive the appropriate background. For the background calculation, a two dimensional spline is fitted to a rebinned and exposure corrected image (task {\tt esplinemap}). The number of bins used for rebinning is controlled by the parameter {\tt nsplinenodes}, which is set to 16 for all but the observations of the central region, where it was set to 20 (maximum value). For PN, the background maps contain the contribution from the ``out of time (OoT)" events.
\subsection{Source detection}
\label{Sec:SrcDet}
For each observation, source detection was performed simultaneously on 5 energy bands for each EPIC camera, using the XMM-{\tt SAS} detection tasks {\tt eboxdetect} and {\tt emldetect}, as such fitting provides the most statistically robust measurements of the source positions by including all of the data. This method was also used to generate the 2XMM catalogue \citep[cf][]{2009A&A...493..339W}. In the following we describe the detection procedure used.
The source detection procedure consists of two consecutive detection steps. An initial source list is created with the task {\tt eboxdetect} (\textit{cf.}\ Sect.\,\ref{Sec:Images}). To select source candidates down to a low statistical significance level, a low likelihood threshold of four was used at this stage. The background was estimated from the previously created background images (see Sect.\,\ref{Sec:Images}).
This list is the starting point for the XMM-{\tt SAS} task {\tt emldetect} (v.~4.60.1).\@ The {\tt emldetect} task performs a Maximum Likelihood fit of the distribution of source counts \citep[based on the Cash C-statistic approach;][]{1979ApJ...228..939C}, using a point spread function model obtained from ray tracing calculations. If $P$ is the probability that a Poissonian fluctuation in the background is detected as a spurious source, the likelihood of the detection is then defined as $\mathcal{L}=-\ln\left( P \right)$.\footnote{This is a simplified description as {\tt emldetect} transforms the derived likelihoods to equivalent likelihoods, corresponding to the case of two free parameters. This allows comparison between detection runs with different numbers of free parameters.} The fit is performed simultaneously in all energy bands for all three cameras by summing the likelihood contribution of each band and each camera. Sources exceeding the detection likelihood threshold in the full band (combination of the 15 bands) are regarded as detections; the catalogue is thus full band selected.
The detection threshold used is 7, as in PFH2005. Some other parameters differ from the values used in PFH2005, as in this work a parameter setting optimised for the detection of extended sources was used (G.~Lamer; private communication). The parameters in question are the event cut-out ({\tt ecut\,=\,30.0}) and the source selection radius ({\tt scut\,=\,0.9}) for multi-source fitting, the maximum number of sources into which one input source can be split ({\tt nmulsou\,=\,2}), and the maximum number of sources that can be fitted simultaneously ({\tt nmaxfit\,=\,2}).\@ Multi-PSF fitting was performed in a two stage process for objects with a detection likelihood larger than ten. All of the sources were also fitted with a convolution of a $\beta$-model cluster brightness profile \citep[][]{1976A&A....49..137C} with the {XMM-Newton}\ point spread function, in order to detect any possible extension in the detected signal. Sources which have a core radius significantly larger than the PSF are flagged as extended. The free parameters of the fit were the source location, the source extent and the source counts in each energy band of each telescope.
To derive the X-ray flux of a source from its measured count rate, one uses the so-called energy conversion factors (ECF):
\begin{equation}
\mathrm{Flux}=\frac{\mathrm{Rate}}{\mathrm{ECF}}
\end{equation}
These factors were calculated using the detector response; they depend on the filter used, the energy band in question, and the spectrum of the source. As we wanted to apply the conversion factors to all sources found in the survey, we assumed a power law model with photon index $\Gamma\!=\!1.7$ and the Galactic foreground absorption of $N_{\mathrm{H}}\!=\!7\times10^{20}$\,cm$^{-2}$ \citep[][see also PFH2005]{1992ApJS...79...77S} to be the universal source spectrum for the ECF calculation.
The ECFs (see Table~\ref{tab:ECFvalues}) were derived with {\tt XSPEC}\footnote{\url{http://heasarc.gsfc.gov/docs/xanadu/xspec}}(v~11.3.2) using response matrices (V.7.1) available from the {XMM-Newton}\ calibration homepage\footnote{\url{http://xmm2.esac.esa.int/external/xmm_sw_cal/calib/epic_files.shtml}}. As all necessary corrections of the source parameters (\eg\ vignetting corrections) were included in the image creation and source detection procedure\footnote{especially in the {\tt emldetect} task}, the \emph{on axis} ECF values were derived \citep[\textit{cf.}\ ][]{2009A&A...493..339W}. The fluxes determined with the ECFs given in Table~\ref{tab:ECFvalues} are absorbed (\ie\ observed) fluxes and hence correspond to the observed count rates, which are derived in the {\tt emldetect} task.
During the mission lifetime, the MOS energy distribution behaviour has changed. Near the nominal boresight positions, where most of the detected photons hit the detectors, there has been a decrease in the low energy response of the MOS cameras \citep{2006ESASP.604..925R}. To take this effect into account, different response matrices for observations obtained before and after the year 2005 were used (see Table~\ref{tab:ECFvalues}).
\begin{table}
\begin{center}
\caption{Count rate to energy conversion factors.
The ECFs used for observations obtained before revolution 534 are marked with ``OLD".}
\begin{tabular}{lrrrrrr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{Detector} & \multicolumn{1}{c}{Filter} & \multicolumn{1}{c}{B1} & \multicolumn{1}{c}{B2} & \multicolumn{1}{c}{B3} &
\multicolumn{1}{c}{B4} & \multicolumn{1}{c}{B5} \\
\noalign{\smallskip}
& & \multicolumn{5}{c}{$(10^{11}\mathrm{cts\,cm^2\,erg^{-1}})$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
EPIC PN & thin & $11.33$ & $8.44$ & $5.97$ & $1.94$ & $0.58$ \\
& medium & $10.05$ & $8.19$ & $5.79$ & $1.94$ & $0.58$ \\
EPIC MOS\,1 & thin & $2.25$ & $1.94$ & $2.06$ & $0.76$ & $0.14$ \\
& medium & $2.07$ & $1.90$ & $2.07$ & $0.75$ & $0.15$ \\
EPIC MOS\,2 & thin & $2.29$ & $1.98$ & $2.09$ & $0.78$ & $0.15$ \\
& medium & $2.06$ & $1.90$ & $2.04$ & $0.75$ & $0.15$ \\
EPIC MOS\,1 & thin & $2.59$ & $2.04$ & $2.12$ & $0.76$ & $0.15$ \\
OLD & medium & $2.33$ & $1.98$ & $2.09$ & $0.76$ & $0.15$ \\
EPIC MOS\,2 & thin & $2.58$ & $2.04$ & $2.13$ & $0.76$ & $0.15$ \\
OLD & medium & $2.38$ & $1.99$ & $2.09$ & $0.75$ & $0.16$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{tab:ECFvalues}
\end{center}
\normalsize
\end{table}
For most sources, band 5 just adds noise to the total count rate. If converted to flux, this noise often dominates the total flux due to the small ECF.\@ To avoid this problem we calculated count rates and fluxes for detected sources in the ``XID" (0.2--4.5)\,keV band (bands 1 to 4 combined). While for most sources this is a good solution, for extremely hard or soft sources there may still be bands
just adding noise. This may lead to rate and flux errors that falsely suggest a lower source significance. A similar effect occurs in the combined rates and fluxes if a source is detected primarily by one instrument (\eg\ soft sources in PN).
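The conversion of the equation above can be illustrated with the PN/thin ECF values of Table~\ref{tab:ECFvalues}; the count rates in this sketch are invented, and summing bands B1--B4 mirrors the XID-band choice just described.

```python
# Sketch of the count-rate-to-flux conversion Flux = Rate / ECF, using
# the on-axis PN/thin ECF values from the table (10^11 cts cm^2 erg^-1).
# The band count rates below are invented for illustration.

ECF_PN_THIN = {"B1": 11.33e11, "B2": 8.44e11, "B3": 5.97e11,
               "B4": 1.94e11, "B5": 0.58e11}

def rate_to_flux(rate, ecf):
    """Convert a count rate (cts/s) into an observed flux (erg cm^-2 s^-1)."""
    return rate / ecf

# XID (0.2--4.5 keV) flux: sum the band fluxes of B1--B4, skipping the
# noisy B5 band as described in the text.
rates = {"B1": 1.0e-2, "B2": 2.0e-2, "B3": 1.5e-2, "B4": 0.5e-2}  # cts/s
xid_flux = sum(rate_to_flux(rates[b], ECF_PN_THIN[b]) for b in rates)
```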
Sources are entered in the XMM\ LP-total\ catalogue from the observation in which the highest source detection likelihood is obtained (either combined or single observations). For variable sources this means that the source properties given in the XMM\ LP-total\ catalogue (see Sect.\,\ref{Sec:srccat} and Table~5) are those observed during their brightest state.
We rejected spurious detections in the vicinity of bright sources. In regions with a highly structured background, the {\tt SAS} detection task {\tt emldetect} registered some extended sources; we also rejected these ``sources" as spurious detections. In an additional step we checked whether an object had visible contours in at least one image out of the five energy bands, taking into account the point-like or extended nature determined with {\tt emldetect}. In this way, ``sources" that are background fluctuations not fully modelled in the background images were identified and removed. In addition, objects located on hot pixels, or on bright pixels at the rim or in the corners of the individual CCD chips (which were missed during the background screening), were recognised and excluded from the source catalogue, especially if they were detected with a likelihood larger than six in one detector only.
To allow for a statistical analysis, the source catalogue only contains sources detected by the {\tt SAS} tasks {\tt eboxdetect} and {\tt emldetect} as described above, \ie\ the few sources that were not detected by the analysis program, despite being visible on the X-ray images, have not been added by hand as it was done in previous studies (SPH2008; PFH2005).
To classify the source spectra, we computed four hardness ratios. The hardness ratios and errors are defined as:
\begin{equation}
\mathrm{HR}i = \frac{B_{i+1} - B_{i}}{B_{i+1} + B_{i}}\; \mbox{and}\;\; \mathrm{EHR}i = 2 \frac{\sqrt{(B_{i+1} EB_{i})^2 + (B_{i} EB_{i+1})^2}}{(B_{i+1} + B_{i})^2},
\label{Eq:hardr}
\end{equation}
for {\it i} = 1 to 4, where $B_{i}$ and $EB_{i}$ denote count rates and corresponding errors in energy band {\it i}.
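Eq.~\ref{Eq:hardr} can be transcribed directly; the band rates and errors in this sketch are invented example values.

```python
import math

# Direct transcription of the hardness ratio definition:
#   HR  = (B_hi - B_lo) / (B_hi + B_lo)
#   EHR = 2 * sqrt((B_hi*EB_lo)^2 + (B_lo*EB_hi)^2) / (B_hi + B_lo)^2
# Band count rates and errors below are hypothetical.

def hardness_ratio(b_lo, b_hi, eb_lo, eb_hi):
    """Return (HR, EHR) for two adjacent band count rates and their errors."""
    s = b_hi + b_lo
    hr = (b_hi - b_lo) / s
    ehr = 2.0 * math.sqrt((b_hi * eb_lo) ** 2 + (b_lo * eb_hi) ** 2) / s ** 2
    return hr, ehr

hr, ehr = hardness_ratio(b_lo=0.02, b_hi=0.06, eb_lo=0.004, eb_hi=0.006)
print(round(hr, 3), round(ehr, 4))  # 0.5 0.0839
```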
\subsection{Astrometrical corrections}
\label{SubSec:AstCorr}
To obtain astrometrically-corrected positions for the sources of the five central fields we used the {\tt SAS}-task {\tt eposcorr} with {\it Chandra}\ source lists \citep{2002ApJ...577..738K,2002ApJ...578..114K,2004ApJ...609..735W}.\@
For the other fields we selected sources from the USNO-B1 \citep{2003AJ....125..984M}, 2MASS \citep{2006AJ....131.1163S} and Local Group Galaxy Survey \citep[LGGS; ][]{2006AJ....131.2478M} catalogues\footnote{For the remainder of the subsection we will call all three catalogues ``optical catalogues" for easier readability, although the 2MASS catalogue is an infrared catalogue.}.
\subsubsection{Astrometry of optical/infrared catalogues}
In a first step, we examined the agreement between the positions given by the various optical catalogues.\footnote{From the LGGS catalogue only sources brighter than 21\,mag were used in order to be comparable to the brightness limit of the USNO-B1 catalogue.} A close examination of the shifts obtained showed significant differences between the positions given in the individual catalogues. In summary, between the USNO-B1 and LGGS catalogues we found an offset of $-$0\farcs197 in R.A.\ and 0\farcs067 in Dec\footnote{the offset in declination is negligible}, and between the USNO-B1 and 2MASS catalogues we found an offset of $-$0\farcs108 in R.A.\ and 0\farcs204 in Dec. We chose the USNO-B1 catalogue as a reference, since it covers the entire field observed in the Deep {XMM-Newton}\ survey and, in addition, provides values for the proper motion of the optical sources.
Since the optical catalogues, as well as the Deep {XMM-Newton}\ catalogue, are composed of individual observations of sub-fields of \object{M~31}, we searched for systematic drifts in the positional zero points from region to region. However no systematic offsets were found.
Finally, we applied the corrections found to the sources in the LGGS and 2MASS catalogues, to bring all catalogues to the USNO-B1 reference frame.
The offsets found between the USNO-B1 and 2MASS catalogues can be explained by the independent determination of the astrometric solutions for these catalogues. Given that the positions provided in the LGGS catalogue are corrected with respect to the USNO-B1 catalogue \citep[see][]{2006AJ....131.2478M}, the offset found in right ascension was totally unexpected and cannot be explained.
\subsubsection{Corrections of the X-ray observations}
From the positionally corrected catalogues, we selected sources which either correlate with globular clusters from the Revised Bologna Catalogue \citep[V.3.4, January 2008; ][]{2004A&A...416..917G,2005A&A...436..535G,2006A&A...456..985G,2007A&A...471..127G} or with foreground stars, characterised by their optical to X-ray flux ratio \citep{1988ApJ...326..680M} and their hardness ratio \citep[see source selection criteria given in Table~\ref{Tab:class} and][]{2008A&A...480..599S}. For sources selected from the USNO-B1 catalogue, we used the proper motion corrected positions. We then used the {\tt SAS}-task {\tt eposcorr} to derive the offset of the X-ray aspect solution. Four observations did not have enough optical counterparts to apply this method. The lack of counterparts is due to the very short exposure times resulting after the screening for high background (obs.~s3, ss12, ss13) and the location of the observation (obs.~sn11). In these cases, we used bright persistent X-ray sources, which we correlated with another observation of the same field. We checked for any residual systematic uncertainty in the source positions and found it to be well characterised by a conservative $1\sigma$ value of 0\,\farcs5.\@ This uncertainty is due to positional errors of the optical sources as well as inaccuracy in the process of the determination of the offset between optical and X-ray sources, and is called systematic positional error. The appropriate offset, given in Col.~6 of Table~\ref{tab:observations}, was applied to the event file of each pointing, and images and exposure maps were then reproduced with the corrected astrometry.\\
Fields that were observed at least twice are treated in a special way, which is described in the following section.
\subsection{Multiple observations of selected fields}
The fields that were observed more than once were the central field, the fields pointed at RX J0042.6+4115\footnote{The combination of observations b1, b3 and b4 is called b.}, two fields located on the major axis of \object{M~31}\ (S2, N2) and all fields of the ``Large Survey" located in the southern part of the galaxy (SS1, SS2, SS3, S3, SN3, SN2, SN1).\@ To reach higher detection sensitivity we merged the images, background images and exposure maps of observations which have the same pointing direction and were obtained with the same filter setting. Subsequently, source detection, as described in Sect.~\ref{Sec:SrcDet}, was repeated on the merged data. For the S2 field, there are two observations with different filter settings. In this case, source detection was performed simultaneously on all 15 bands of both observations, \ie\ on 30 bands simultaneously. The N2 field was treated in the same way. For the central field, images, background images and exposure maps of observations c1, c2 and c3 were merged. These merged data were used together with the data of observation c4 to search for sources simultaneously; in this way it was possible to take into account the different ECFs for the different filters. One field was observed twice with slightly different pointing directions in observations sn1 and sn11; simultaneous source detection was used for these observations as well.
\subsection{Variability calculation}
\label{Sec:DefVar}
To examine the time variability of each source listed in the total source catalogue, we determined the XID flux at the source position in each observation or at least an upper limit for the XID flux. We used the task {\tt emldetect}
with fixed source positions when calculating the total flux. To get fluxes and upper limits for all sources in the input list we set the detection likelihood threshold to 0.
A starting list was created from the full source catalogue, containing only the identification number and position of each source located in the field examined. To give correct results, the task {\tt emldetect} has to process the sources from the brightest to the faintest. We therefore first had to order the sources in each observation by detection likelihood. For sources not visible in the observation in question, we set the detection likelihood to 0. This list was used as input for a first {\tt emldetect} run, which yielded an output list in which a detection likelihood was allocated to every source. For a final examination of the sources in order of detection likelihood, a second {\tt emldetect} run was necessary.
We only accepted XID fluxes for detections $\ge$ 3 $\sigma$; otherwise we used a 3 $\sigma$ upper limit.
To compare the XID fluxes between the different observations, we calculated the significance of the difference
\begin{equation}
S=\frac{F_{\mathrm{max}}- F_{\mathrm{min}}}{\sqrt{\sigma_{\mathrm{max}}^2+\sigma_{\mathrm{min}}^2}}
\end{equation}
and the ratio of the XID fluxes $V=F_{\mathrm{max}}/F_{\mathrm{min}}$, where $F_{\mathrm{max}}$ and $F_{\mathrm{min}}$ are the maximum and minimum (or upper limit) source XID flux, and $\sigma_{\mathrm{max}}$ and $\sigma_{\mathrm{min}}$ are the errors of the maximum and minimum flux, respectively. This calculation was not performed whenever $F_{\mathrm{max}}$ was an upper limit. Finally, the largest XID flux of each source was derived, excluding upper limits.
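The two variability measures defined above can be sketched as follows; the XID fluxes and errors in the example are invented.

```python
import math

# Sketch of the variability measures defined in the text:
#   S = (F_max - F_min) / sqrt(sigma_max^2 + sigma_min^2)
#   V = F_max / F_min
# Flux values below (erg cm^-2 s^-1) are hypothetical.

def variability(f_max, s_max, f_min, s_min):
    """Return (significance S, flux ratio V) for max/min XID fluxes."""
    sig = (f_max - f_min) / math.sqrt(s_max ** 2 + s_min ** 2)
    ratio = f_max / f_min
    return sig, ratio

S, V = variability(f_max=5.0e-14, s_max=0.5e-14,
                   f_min=1.0e-14, s_min=0.3e-14)
print(round(S, 2), V)  # 6.86 5.0
```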
\subsection{Spectral analysis}
To extract the X-ray spectrum of individual sources, we selected an extraction region and a corresponding background region which was at least as large as the source region, was located on the same CCD at a similar off axis angle as the source, and did not contain any point sources or extended emission. For EPIC PN, we only accepted single-pixel events for the spectra of supersoft sources, while for all other spectra single and double-pixel events were used. For the EPIC-MOS detectors, single-pixel through to quadruple-pixel events were always used. Additionally, we only kept events with FLAG\,$=$\,0 for all three detectors. For each extraction region, we produced the corresponding response matrix files and ancillary response files.
For each source, the spectral fit was obtained by fitting all three EPIC spectra simultaneously, using the tool {\tt XSPEC}.\@ For the absorption, we used the {\tt TBabs} model, with abundances from \citet{2000ApJ...542..914W} and photoelectric absorption cross-sections from \citet{1992ApJ...400..699B} with a new He cross-section based on \citet{1998ApJ...496.1044Y}.\@
\subsection{Cross correlations}
\label{Sec:CrossCorr_Tech}
Sources were regarded as correlating if their positions overlapped within their 3$\sigma$ (99.73\%) positional errors, defined as \citep{2009A&A...493..339W}:
\begin{equation}
\Delta\mathrm{pos}\le3.44\times\sqrt{\sigma_{\mathrm{stat}}^2 + \sigma_{\mathrm{syst}}^2}+3\times\sigma_{\mathrm{ccat}}
\label{Eq:Cor}
\end{equation}
where $\sigma_{\mathrm{stat}}$ is the statistical and $\sigma_{\mathrm{syst}}$ the systematic error of the X-ray sources detected in the present study. The statistical error was derived by {\tt emldetect}.\@ The determination of the systematic error is described in Sect.\,\ref{SubSec:AstCorr}; we use a value of 0\,\farcs5 for all sources. The positional error of the sources in the catalogue used for cross-correlation is given by $\sigma_{\mathrm{ccat}}$. The values of $\sigma_{\mathrm{ccat}}$ (68\% error) used for the different X-ray catalogues can be found in Table~\ref{Tab:XrayRefCat}.\@ Exceptions to Eq.~\ref{Eq:Cor} are sources that are listed in more than one catalogue or that are resolved into multiple sources with {\it Chandra}.\@ The first case is restricted to catalogues with comparable spatial resolution and hence positional uncertainty.
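Eq.~\ref{Eq:Cor} reduces to a simple acceptance test; in the sketch below, all values are in arcsec, the input numbers are invented, and the default $\sigma_{\mathrm{syst}}=0\,\farcs5$ follows the text.

```python
import math

# Sketch of the positional match criterion (all quantities in arcsec):
#   Delta_pos <= 3.44 * sqrt(sigma_stat^2 + sigma_syst^2) + 3 * sigma_ccat
# The example values are hypothetical.

def positions_match(delta_pos, sigma_stat, sigma_ccat, sigma_syst=0.5):
    """True if two sources overlap within their 3-sigma positional errors."""
    limit = (3.44 * math.sqrt(sigma_stat ** 2 + sigma_syst ** 2)
             + 3.0 * sigma_ccat)
    return delta_pos <= limit

print(positions_match(delta_pos=2.0, sigma_stat=0.4, sigma_ccat=0.2))  # True
print(positions_match(delta_pos=5.0, sigma_stat=0.4, sigma_ccat=0.2))  # False
```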
To identify the X-ray sources in the field of \object{M~31}\ we searched for correlations with catalogues in other wavelength regimes. The {XMM-Newton}\ source catalogue was correlated with the following catalogues and public data bases:
\begin{description}
\item [Globular Clusters:] Bologna Catalogue \citep[V.3.5, March 2008; ][$\sigma_{\mathrm{ccat}}=0.\arcsec2$; RBV~3.5]{2004A&A...416..917G,2005A&A...436..535G,2006A&A...456..985G,2007A&A...471..127G,2009A&A...508.1285G}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec2$]{2009AJ....137...94C}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec5$]{2009AJ....138..770H}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec2$]{2008PASP..120....1K}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec2$]{2007PASP..119....7K}, \citet[][]{2005PASP..117.1236F}, \citet[][$\sigma_{\mathrm{ccat}}=1\hbox{$^{\prime\prime}$}$]{1993PhDT........41M}
\item [Novae:] Nova list of the \object{M~31}\ Nova Monitoring Project\footnote{\url{http://www.mpe.mpg.de/~m31novae/opt/m31/M31_table.html}} ($\sigma_{\mathrm{ccat}}$ is given for each individual source), PHS2007, \citet{2010AN....331..187P}
\item [Supernova Remnants:] \citet[][19 srcs]{1980A&AS...40...67D}, \citet[][967 srcs]{1992A&AS...92..625W} and \citet[][58 srcs]{1993A&AS...98..327B}, \citet[][233 srcs]{1995A&AS..114..215M}; An X-ray source is considered as correlating with a SNR, if the X-ray source position (including 3$\sigma$ error) lies within the extent given for the SNR.
\item [Radio Catalogues:] \citet[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{2005ApJS..159..242G}, \citet[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{2004ApJS..155...89G}, \citet[][$\sigma_{\mathrm{ccat}}=3\hbox{$^{\prime\prime}$}$]{2008AJ....136..684K}, \citet[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{1990ApJS...72..761B}, NVSS \citep[NRAO/VLA Sky Survey\footnote{\url{http://www.cv.nrao.edu/nvss/NVSSlist.shtml}};][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{1998AJ....115.1693C}
\item [H {\small II} Regions, H $\alpha$ Catalogue:] \citet[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{1992A&AS...92..625W}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec2$]{2007AJ....134.2474M}
\item [Optical Catalogues:] USNO-B1 \citep[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{2003AJ....125..984M}, Local Group Galaxy Survey \citep[LGGS; ][$\sigma_{\mathrm{ccat}}=0.\arcsec2$]{2006AJ....131.2478M}
\item [Infrared catalogues:] 2MASS \citep[][$\sigma_{\mathrm{ccat}}$ is given for each individual source]{2006AJ....131.1163S}, \citet[][$\sigma_{\mathrm{ccat}}=0.\arcsec8$, for Table~2: $\sigma_{\mathrm{ccat}}=0.\arcsec5$]{2008ApJ...687..230M}
\item [Data bases:] the SIMBAD catalogue\footnote{\url{http://simbad.u-strasbg.fr/simbad}} (Centre de Donn\'ees astronomiques de Strasbourg; hereafter SIMBAD), the NASA Extragalactic Database\footnote{\url{http://nedwww.ipac.caltech.edu}} (hereafter NED)
\end{description}
\begin{table}
\begin{center}
\caption{X-ray source catalogues used for cross-correlation and the positional errors used}
\begin{tabular}{lrlr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{X-ray catalogue$^{\ddagger}$} & \multicolumn{1}{l}{$\sigma_{\mathrm{ccat}}^{\dagger}$} & \multicolumn{1}{l}{X-ray catalogue$^{\ddagger}$} & \multicolumn{1}{l}{$\sigma_{\mathrm{ccat}}^{\dagger}$}\\
\hline\noalign{\smallskip}
PFH2005 & $*$ & DKG2004 & 0\,\farcs3 \\
SPH2008 & $*$ & WNG2006 & 0\,\farcs3 \\
SHP97 & $*$ & VG2007 & 0\,\farcs4 \\
SHL2001 & $*$ & OBT2001 & 3\hbox{$^{\prime\prime}$} \\
PFJ93 & $*$ & O2006 & 1\hbox{$^{\prime\prime}$} \\
TF91 & $*$ & SBK2009 & 3\hbox{$^{\prime\prime}$}$^{+}$ \\
Ka2002 & 0\,\farcs3 & D2002 & 0\,\farcs5 \\
KGP2002 & $*$ & TP2004 & 1\hbox{$^{\prime\prime}$} \\
WGK2004 & 1\hbox{$^{\prime\prime}$}$^{+}$ & ONB2010 & 1\hbox{$^{\prime\prime}$} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:XrayRefCat}
\end{center}
Notes:\\
$^{ {\dagger}~}$: $*$ indicates that the catalogue provides $\sigma_{\mathrm{ccat}}$ values for each source individually\\
$^{ +~}$: value taken from indicated paper\\
$^{ {\ddagger}~}$: TF91: \citet{1991ApJ...382...82T}, PFJ93: \citet{1993ApJ...410..615P}, SHP97: \citet{1997A&A...317..328S}, SHL2001: \citet{2001A&A...373...63S}, OBT2001: \citet{2001A&A...378..800O}, D2002: \citet{2002ApJ...570..618D}, KGP2002: \citet{2002ApJ...577..738K}, Ka2002: \citet{2002ApJ...578..114K}, WGK2004: \citet{2004ApJ...609..735W}, DKG2004: \citet{2004ApJ...610..247D}, TP2004: \citet{2004ApJ...616..821T}, PFH2005: \citet{2005A&A...434..483P}, O2006: \citet{2006ApJ...643..844O}, WNG2006: \citet{2006ApJ...643..356W}, VG2007: \citet{2007A&A...468...49V}, SPH2008: \citet{2008A&A...480..599S}, SBK2009: \citet{2009A&A...495..733S}, ONB2010: \citet{2010ApJ...717..739O}
\normalsize
\end{table}
\section{Colour image}
\label{Sec:coim}
Figure~\ref{Fig:cimage} shows the combined, exposure corrected EPIC PN, MOS\,1 and MOS\,2 RGB (red-green-blue) mosaic image of the Deep Survey and archival data. The colours represent the X-ray energies as follows: red: 0.2--1.0\,keV, green: 1.0--2.0\,keV and blue: 2.0--12\,keV. The optical extent of \object{M~31}\ is indicated by the $\mathrm{D_{25}}$ ellipse and the boundary of the observed field is given by the green contour. The image is smoothed with a 2D-Gaussian of 20\hbox{$^{\prime\prime}$}\ FWHM. In some observations, individual noisy MOS\,1 and MOS\,2 CCDs are omitted.
The images have not been corrected for the background of the detector or for vignetting.\\
The colour of the sources reflects their class. Supersoft sources appear in red. Thermal SNRs and foreground stars are orange to yellow. ``Hard" sources (background objects, mainly AGN, and X-ray binaries or Crab-like SNRs) are blue to white.
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{pics/M31_colourimage_th21.ps}
\caption{Combined EPIC PN, MOS\,1 and MOS\,2 RGB image of the Deep \object{M~31}\ Survey including archival data. The optical extent of \object{M~31}\ is indicated by the $\mathrm{D_{25}}$ ellipse and the boundary of the observed field is given by the green contour. The central region, marked with the yellow square, is shown in higher resolution in the upper right corner. For more details see Sect.\,\ref{Sec:coim}.
\label{Fig:cimage}}
\end{figure*}
Logarithmically scaled {XMM-Newton}\ EPIC low background images made up of the combined images from the PN, MOS\,1 and MOS\,2 cameras in the (0.2--4.5) keV XID band for each \object{M~31}\ observation can be found in the Appendix. The images also show X-ray contours, and the sources from the XMM\ LP-total\ catalogue are marked with boxes.
\section{Source catalogue (XMM\ LP-total)}
\label{Sec:srccat}
The source catalogue of the Deep {XMM-Newton}\ survey of \object{M~31}\ (hereafter XMM\ LP-total\ catalogue) contains 1\,897 X-ray sources. Of these sources 914 are detected for the first time in X-rays.
The source parameters are summarised in Table~5, which gives the source number (Col.~1), detection field from which the source was entered into the catalogue (2), source position (3 to 9) with $3\sigma$ (99.73\%) uncertainty radius (10), likelihood of existence (11), integrated PN, MOS\,1 and MOS\,2 count rate and error (12,13) and flux and error (14,15) in the (0.2--4.5) keV XID band, and hardness ratios and errors (16--23). Hardness ratios are calculated only for sources for which at least one of the two band count rates has a significance greater than $2\sigma$. Errors are the properly combined statistical errors in each band and can extend beyond the range of allowed values of hardness ratios as defined previously ($-1.0$ to 1.0; Eq.~\ref{Eq:hardr}). The ``Val'' parameter (Col.~24) indicates whether the source is within the field of view (true or false, ``T'' or ``F'') in the PN, MOS\,1 and MOS\,2 detectors, respectively.
Table~5 also gives the exposure time (25), source existence likelihood (26), the count rate and error (27, 28) and the flux and error (29, 30) in the (0.2--4.5)\,keV XID band, and hardness ratios and errors (31--38) for the EPIC PN. Columns 39 to 52 and 53 to 66 give the same information corresponding to Cols.\ 25 to 38, but for the EPIC MOS\,1 and MOS\,2 instruments. Hardness ratios for the individual instruments were again screened as described above. From the comparison between the hardness ratios derived from the integrated PN, MOS\,1 and MOS\,2 count rates (Cols. 16--23) and the hardness ratios from the individual instruments (Cols. 31--38, 45--52 and 59--66), it is clear that the combined count rates from all instruments yielded a significantly larger fraction of hardness ratios above the chosen significance threshold.
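The screening rule above can be sketched as follows. This is an illustrative Python sketch, assuming the usual definition HR$_i = (B_{i+1}-B_i)/(B_{i+1}+B_i)$ of Eq.~\ref{Eq:hardr} and standard Gaussian error propagation for the ``properly combined statistical errors''; the function name is ours.

```python
import math

def hardness_ratio(b1, e1, b2, e2):
    """Hardness ratio for two adjacent energy bands with count rates
    b1, b2 and 1-sigma errors e1, e2. Returns (None, None) if neither
    band rate is significant at the 2-sigma level, mirroring the screen
    described in the text. The ratio follows the usual definition
    HR = (B2 - B1)/(B2 + B1); the Gaussian error-propagation form below
    is our assumption for the 'properly combined statistical errors'."""
    if b1 < 2.0 * e1 and b2 < 2.0 * e2:
        return None, None                    # below the significance screen
    total = b1 + b2
    hr = (b2 - b1) / total
    ehr = 2.0 * math.sqrt((b2 * e1) ** 2 + (b1 * e2) ** 2) / total ** 2
    return hr, ehr
```

A ratio built from two insignificant band rates is suppressed, which illustrates why the combined PN, MOS\,1 and MOS\,2 rates yield more hardness ratios above the chosen threshold than the per-instrument rates.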
Column~67 shows cross correlations with published \object{M~31}\ X-ray catalogues (\textit{cf.}\ Sect.\,\ref{Sec:CrossCorr_Tech}). We discuss the results of the cross correlations in Sects.\,\ref{Sec:fgback} and \ref{Sec:Srcsm31}.
In the remaining columns of Table~5,
we give information extracted from the USNO-B1, 2MASS and LGGS catalogues (\textit{cf.}\ Sect.\,\ref{Sec:CrossCorr_Tech}). The information from the USNO-B1 catalogue (name, number of objects within search area, distance, B2, R2 and I magnitude of the brightest\footnote{in B2 magnitude} object) is given in Cols.~68 to 73.\@ The 2MASS source name, number of objects within search area, and the distance can be found in Cols.~74 to 76. Similar information from the LGGS catalogue is given in Cols.~77 to 82 (name, number of objects within search area, distance, V magnitude, V-R and B-V colours of the brightest\footnote{in B magnitude} object).\@ To improve the reliability of source classifications we used the USNO-B1 B2 and R2 magnitudes to calculate
\begin{equation}
\log\left(\frac{f_{\mathrm{x}}}{f_{\mathrm{opt}}}\right) = \log\left(f_{\mathrm{x}}\right) + \frac{m_{\mathrm{B2}} + m_{\mathrm{R2}}}{2\times2.5} + 5.37,
\label{Eq:fxopt}
\end{equation}
and the LGGS V magnitude to calculate
\begin{equation}
\log\left(\frac{f_{\mathrm{x}}}{f_{\mathrm{opt}}}\right) = \log\left(f_{\mathrm{x}}\right) + \frac{m_{\mathrm{V}}}{2.5} + 5.37,
\label{Eq:fxvopt}
\end{equation}
following \citet[][ see Cols. 83--86]{1988ApJ...326..680M}.
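As an illustration, Eqs.~\ref{Eq:fxopt} and \ref{Eq:fxvopt} can be evaluated as follows (Python sketch; the function names are ours):

```python
def log_fx_fopt_usno(log_fx, m_b2, m_r2):
    """X-ray-to-optical flux ratio following Maccacaro et al. (1988),
    using the mean of the USNO-B1 B2 and R2 magnitudes (Eq. fxopt)."""
    return log_fx + (m_b2 + m_r2) / (2.0 * 2.5) + 5.37

def log_fx_fopt_lggs(log_fx, m_v):
    """The same ratio using the LGGS V magnitude (Eq. fxvopt)."""
    return log_fx + m_v / 2.5 + 5.37
```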
The X-ray sources in the XMM\ LP-total\ catalogue are identified or classified based on properties in X-rays (HRs, variability, extent) and of the correlated objects in other wavelength regimes (Cols.\, 87 and 88 in Table~5).\@ For classified sources the class name is given in angled brackets. Identification and classification criteria are summarised in Table~\ref{Tab:class}, which provides, for each source class (Col.\,1), the classification criteria (2), and the numbers of identified (3) and classified (4) sources.
The hardness ratio criteria are based on model spectra. Details on the definition of these criteria can be found in Sect.\,6 of PFH2005. As we have no clear hardness ratio criteria to discriminate between XRBs, Crab-like supernova remnants (SNRs) or AGN, we introduced a $<$hard$>$ class for those sources. If such a source shows strong variability (i.\,e.\ V$\ge$10) on the examined time scales, it is likely to be an XRB. Compared with SPH2008,
the HR2 selection criterion for SNRs was tightened (from HR2$<\!-0.2$ to HR2$+$EHR2$<\!-0.2$) to exclude questionable SNR candidates from the class of SNRs. If we applied the former criterion to the survey data, $\sim$35 sources would be classified as SNRs in addition to those listed in Table~\ref{Tab:class}. Most of the 35 sources are located outside the D$_{25}$ ellipse, and none of them correlates with an optically identified SNR, a radio source, or an H{\small II} region. In addition, the errors in HR2 are of the same order as the HR2 values. It is therefore very likely that these sources belong to other classes, since the strip $-0.3\!<$HR2$<$0 is populated by foreground stars, XRBs, background objects, and candidates for these three classes.
Outcomes of the identification and classification processes are discussed in detail in Sects.\,\ref{Sec:fgback} and \ref{Sec:Srcsm31}.
The last column (89) of Table~5 contains the {XMM-Newton}\ source name as registered to the IAU Registry. Source names consist of the acronym XMMM31 and the source position as follows: XMMM31~Jhhmmss.s+ddmmss, where the right ascension is given in hours~(hh), minutes~(mm) and seconds~(ss.s) truncated to tenths of a second, and the declination is given in degrees~(dd), arc minutes~(mm) and arc seconds~(ss) truncated to arc seconds, for equinox 2000. In the following, we refer to individual sources by their source number (Col.\,1 of Table~5), which is prefixed with ``\hbox{N$^{\underline{o}}$}".
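The naming scheme can be sketched as follows (illustrative Python; note that seconds and arc seconds are truncated, not rounded):

```python
def xmm_m31_name(ra_deg, dec_deg):
    """Build an IAU-style XMMM31 source name from equinox-2000
    coordinates in decimal degrees. Seconds of right ascension are
    truncated to 0.1 s and arc seconds of declination to whole arc
    seconds, as described in the text (illustrative sketch)."""
    # RA: decimal degrees -> hours/minutes/seconds, truncated to 0.1 s
    ra_tenths = int(ra_deg / 15.0 * 3600.0 * 10)   # tenths of a second
    hh, rest = divmod(ra_tenths, 3600 * 10)
    mm, tenths = divmod(rest, 60 * 10)
    ss = tenths / 10.0
    # Dec: decimal degrees -> degrees/arcmin/arcsec, truncated to 1"
    sign = '+' if dec_deg >= 0 else '-'
    arcsec = int(abs(dec_deg) * 3600.0)
    dd, rest = divmod(arcsec, 3600)
    dm, ds = divmod(rest, 60)
    return f"XMMM31 J{hh:02d}{mm:02d}{ss:04.1f}{sign}{dd:02d}{dm:02d}{ds:02d}"
```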
Of the 1\,897 sources, 1\,247 can only be classified as $<$hard$>$ sources, while 123 sources remain without classification. Two of them (\hbox{N$^{\underline{o}}$}\ 482, \hbox{N$^{\underline{o}}$}\ 768) are highly affected by optical loading; both ``X-ray sources" coincide spatially with very bright optical foreground stars (USNO-B1 R2 magnitudes of 6.76 and 6.74 respectively). The spectrum of source \hbox{N$^{\underline{o}}$}\ 482 is dominated by optical loading. This becomes evident from the hardness ratios which indicate an SSS. For \hbox{N$^{\underline{o}}$}\ 768 the hardness ratios would allow a foreground star classification. The obtained count rates and fluxes of both sources are affected by the usage of {\tt epreject}, which neutralises the corrections applied for optical loading. Therefore residuals are visible in the difference images created from event lists obtained with and without {\tt epreject}. As we cannot exclude the possibility that some of the detected photons are true X-rays -- especially for source \hbox{N$^{\underline{o}}$}\ 768 --, we decided to include them in the XMM\ LP-total\ catalogue, but without a classification.
\begin{table*}
\addtocounter{table}{+1}
\scriptsize
\begin{center}
\caption{Summary of identifications and classifications.}
\begin{tabular}{llrr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{Source class} &
\multicolumn{1}{c}{Selection criteria} &
\multicolumn{1}{c}{identified} &
\multicolumn{1}{c}{classified} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
fg Star & ${\rm log}({{f}_{\rm x} \over {f}_{\rm opt}})\!<\!-1.0$ and HR2$-$EHR$2\!<\!0.3$ and HR3$-$EHR$3\!<\!-0.4$ or not defined & 40 & 223 \\
AGN & Radio source and not classification as SNR from HR2 or optical/radio & 11 & 49 \\
Gal & optical id with galaxy & 4 & 19 \\
GCl & X-ray extent and/or spectrum & 1 & 5\\
SSS & HR$1\!<\!0.0$, HR2$-$EHR$2\!<\!-0.96$ or HR2 not defined, HR3, HR4 not defined & & 30 \\
SNR & HR$1\!>\!-0.1$ and HR2$+$EHR$2\!<\!-0.2$ and not a fg Star, or id with optical/radio SNR & 25 & 31 \\
GlC & optical id & 36 & 16\\
XRB & optical id or X-ray variability & 10 & 26 \\
hard & HR2$-$EHR$2\!>\!-0.2$ or only HR3 and/or HR4 defined, and no other classification& & 1\,247 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\normalsize
\label{Tab:class}
\end{center}
\end{table*}
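The purely hardness-ratio-based screens of Table~\ref{Tab:class} can be sketched as follows. This is a simplified, illustrative Python sketch: classes that require optical/radio counterparts, variability or X-ray extent (fg Star, AGN, Gal, GCl, GlC, XRB), as well as the fg-Star veto on SNR candidates, are beyond its scope.

```python
def classify_from_hr(hr, ehr):
    """Hardness-ratio-only screen following the summary table of
    classification criteria; hr/ehr map the ratio index (1-4) to HR_i
    and its error, with undefined ratios simply absent. Classes needing
    counterparts, variability or extent are omitted in this sketch."""
    def d(i):                                # is HR(i) defined?
        return i in hr
    # SSS: HR1 < 0, HR2 - EHR2 < -0.96 (or HR2 undefined), HR3/HR4 undefined
    if d(1) and hr[1] < 0.0 and not d(3) and not d(4) and \
            (not d(2) or hr[2] - ehr[2] < -0.96):
        return 'SSS'
    # SNR: HR1 > -0.1 and HR2 + EHR2 < -0.2 (fg-Star veto omitted here)
    if d(1) and hr[1] > -0.1 and d(2) and hr[2] + ehr[2] < -0.2:
        return 'SNR'
    # hard: HR2 - EHR2 > -0.2, or only the hard ratios HR3/HR4 defined
    if (d(2) and hr[2] - ehr[2] > -0.2) or \
            (not d(1) and not d(2) and (d(3) or d(4))):
        return 'hard'
    return None
```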
\subsection{Flux distribution}
\label{Sec:flux_dist}
The faintest source (\hbox{N$^{\underline{o}}$}\ 526) has an XID band flux of 5.8\ergcm{-16}.\@ The source with the highest XID flux (\hbox{N$^{\underline{o}}$}\ 966, XID band flux of 3.75\ergcm{-12}) is located in the centre of \object{M~31}\ and identified as a Z-source LMXB \citep{2003A&A...411..553B}. This source has a mean absorbed XID luminosity of 2.74\ergs{38}.\@
Figure~\ref{Fig:XIDfluxdist} shows the distribution of the XID (0.2--4.5\,keV) source fluxes, \ie\ the number of sources per flux bin. From the inlay we see that the number of sources starts to decrease in the bin from 2.4 to 2.6\ergcm{-15}. This XID flux roughly determines the completeness limit of the survey and corresponds to an absorbed 0.2--4.5\,keV limiting luminosity of $\sim\!2$\ergs{35}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/XID_Fluxdist_new_V5.ps}}
\caption{Distribution of the source fluxes in the 0.2--4.5\,keV (XID) band. The diagrams show the number of sources at each flux bin, plotted versus the flux, using logarithmic scales. The inlay shows the number of sources for XID fluxes smaller than 5\ergcm{-15}, on linear scales.
The blue histogram gives the distribution of sources classified or identified as either SSSs, SNRs, XRBs or GlCs.
}
\label{Fig:XIDfluxdist}
\end{figure}
Previous X-ray studies \citep[][and references therein]{2004ApJ...609..735W} noted a lack of bright sources ($L_{\mathrm{X}}\!\ga$\oergs{37}; 0.1--10\,keV) in the northern half of the disc compared to the southern half. This finding is not supported by the present study. Excluding the pointings to the centre of \object{M~31}, we found in the remaining observations 13 sources brighter than $L_{\mathrm{X\,abs}}\!\ga$\oergs{37} in each hemisphere.\footnote{The luminosity is based on XID fluxes. Using the total 0.2--12\,keV band, the north--south symmetry persists (23 sources in the northern half and 24 in the southern half).}\@ The reason our survey does not support the earlier results is that we found several bright sources in the outer regions of the northern half of the disc, which were not covered by \citet[][and references therein]{2004ApJ...609..735W}. In the central field of \object{M~31}, a total of 41 sources brighter than $L_{\mathrm{X}}\!\ga$\oergs{37} (0.2--4.5\,keV) were found.
Figure~\ref{Fig:brightS} shows the spatial distribution of the bright sources. Striking features are the two patches located north and south of the centre. The southern one seems to point roughly in the direction of M~32 (\hbox{N$^{\underline{o}}$}\ 995), while the northern one ends in the globular cluster B\,116 (\hbox{N$^{\underline{o}}$}\ 947). However, there is no association with any known spatial structure of \object{M~31}, such as the spiral arms.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/spdist_brsrc_new3.ps}}
\caption{{XMM-Newton}\ Deep Survey image over plotted with sources that have an absorbed 0.2--4.5\,keV luminosity larger than \oergs{37}. Striking features are the two patches located north and south of the centre. The central region (same as in Fig.\,\ref{Fig:cimage}) is shown with higher resolution in the upper right corner.}
\label{Fig:brightS}
\end{figure}
\subsection{Exposure map}
\label{Sec:ExpMap}
Figure~\ref{Fig:ExpMap} shows the exposure map
used to create the colour image of all {XMM-Newton}\ Large Survey and archival observations (Fig.\,\ref{Fig:cimage}). The combined MOS exposure was weighted by a factor of 0.4 before being added to the PN exposure. Note, however, that this map does not exactly represent the exposures used in source detection, since overlapping regions were not combined there.
From Fig.\,\ref{Fig:ExpMap} we see that the exposure for most of the surveyed area is rather homogeneous. Exceptions are the central area, overlapping regions and observation h4.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/expmap_th_new.ps}}
\caption{Exposure map of all fields of the XMM\ LP-total\ catalogue. For details see Sect.\,\ref{Sec:ExpMap}.}
\label{Fig:ExpMap}
\end{figure}
\subsection{Hardness ratio diagrams}
We plot X-ray colour/colour diagrams based on the HRs (see Fig.\,\ref{Fig:HR_diagrams}). Sources are plotted as dots if the error in both contributing HRs is below 0.2. Classified and identified sources are plotted as symbols in all cases. A symbol plotted over a dot therefore marks a source of that class with well-defined HRs.
From the HR1-HR2 diagram (upper panel in Fig.\,\ref{Fig:HR_diagrams}) we note that the class of SSSs is the only one that can be defined based on hardness ratios alone. In the part of the HR1-HR2 diagram that is populated by SNRs, most of the foreground stars and some background objects and XRBs are also found.
Foreground star candidates can be selected from the HR2-HR3 diagram (middle panel in Fig.\,\ref{Fig:HR_diagrams}), where most of them are located in the lower left corner. The HR3-HR4 diagram (lower panel in Fig.\,\ref{Fig:HR_diagrams}) does not help to disentangle the different source classes. Thus, we need additional information from correlations with sources in other wavelengths or on the source variability or extent to be able to classify the sources.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/HR1_HR2_clean_optcorr_V6n.ps}}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/HR2_HR3_clean_optcorr_V6n.ps}}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/HR3_HR4_clean_optcorr_V6n.ps}}
\caption{Hardness ratios of sources detected by {XMM-Newton}\ EPIC.\@ Sources with HR errors smaller than 0.20 on both HR$(i)$ and HR$(i+1)$ are shown as dots. Foreground stars and candidates are marked as big and small stars, AGN and candidates as big and small crosses, background galaxies and galaxy clusters as big ``X" and their candidates as small ``X", SSS candidates as triangles, SNRs and candidates as big and small octagons, GlCs and XRBs as big squares and their candidates as small squares.}
\label{Fig:HR_diagrams}
\end{figure}
\subsection{Extended sources}
\label{Sec:ExtSrcs}
The XMM\ LP-total\ catalogue contains 12 sources which are fitted as extended, with a likelihood of extension larger than 15.\@ This value was chosen to minimise the number of spurious detections of extended sources (H.~Brunner, private communication) while keeping all sources that are clearly seen to be extended in the X-ray images. A convolution of a $\beta$-model cluster brightness profile \citep{1976A&A....49..137C} with the {XMM-Newton}\ point spread function was used to determine the extent of the sources (\textit{cf.}\ Sect.\,\ref{Sec:SrcDet}). This model describes the brightness profile of galaxy clusters as
\begin{equation}
f\left(x,y\right)=\left(1+\frac{\left(x-x_0\right)^2+\left(y-y_0\right)^2}{r_{\rm{c}}^2}\right)^{-3/2},
\end{equation}
where $r_{\rm{c}}$ denotes the core radius; this is also the extent parameter given by {\tt emldetect}.
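For illustration, the profile above can be evaluated as follows (Python sketch; the exponent $-3/2$ corresponds to the common choice $\beta=2/3$):

```python
def beta_model(x, y, x0, y0, r_c):
    """Surface-brightness profile of the equation above (a beta model
    with beta = 2/3): f = (1 + r^2/r_c^2)^(-3/2), where r_c is the core
    radius, i.e. the extent parameter reported by emldetect."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (1.0 + r2 / r_c ** 2) ** -1.5
```

At the core radius ($r = r_{\rm c}$) the profile has dropped to $2^{-3/2} \approx 0.35$ of its central value.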
Table~\ref{Tab:ExtSrcs} gives the source number (Col.~1), likelihood of detection (2), the extent found (3) and its associated error (4) in arcsec, the likelihood of extension (5), and the classification of the source (6, see Sect.\,\ref{SubSec:Gal_GCl_AGN}) for each of the 12 extended sources. Additional comments taken from Table~5 are provided in the last column.
\begin{table*}
\scriptsize
\begin{center}
\caption{Extended sources in the XMM\ LP-total\ catalogue}
\begin{tabular}{rrrrrrrcl}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{SRC} & \multicolumn{1}{l}{DET\_ML}& \multicolumn{1}{l}{EXT$^{+}$}& \multicolumn{1}{l}{EEXT$^{+}$}& \multicolumn{1}{l}{EXT\_ML}& \multicolumn{1}{l}{XFLUX$^{*}$}& \multicolumn{1}{l}{XEFLUX$^{*}$}& \multicolumn{1}{l}{class}& \multicolumn{1}{l}{comment$^{\dagger}$}\\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)}& \multicolumn{1}{c}{(3)}& \multicolumn{1}{c}{(4)}& \multicolumn{1}{c}{(5)}& \multicolumn{1}{c}{(6)}& \multicolumn{1}{c}{(7)}& \multicolumn{1}{c}{(8)}& \multicolumn{1}{c}{(9)}\\
\hline\noalign{\smallskip}
141 & 65.08 & 11.22 & 1.29 & 23.68 & 1.45 & 0.20 & $<$GCl$>$ & GLG127(Gal), 37W 025A (IR, RadioS; NED) \\
199 & 275.16 & 17.33 & 1.05 & 174.73 & 4.31 & 0.29 & $<$hard$>$ & \\
252 & 222.05 & 14.64 & 1.12 & 81.60 & 4.40 & 0.49 & $<$GCl$>$ & 5 optical objects in error box \\
304 & 299.75 & 15.10 & 0.92 & 133.62 & 2.20 & 0.18 & $<$GCl$>$ & B242 [CHM09]; RBC3.5: $<$GlC$>$ \\
442 & 33.76 & 11.60 & 1.71 & 15.44 & 1.62 & 0.28 & $<$hard$>$ & \\
618 & 271.08 & 6.20 & 0.73 & 42.86 & 3.15 & 0.21 & $<$hard$>$ & \\
718 & 77.75 & 7.18 & 1.23 & 21.47 & 0.58 & 0.07 & Gal & B052 [CHM09], RBC3.5 \\
1\,130 & 168.31 & 10.80 & 0.97 & 44.23 & 3.27 & 0.31 & $<$hard$>$ & \\
1\,543 & 70.49 & 11.87 & 1.37 & 28.63 & 1.51 & 0.19 & $<$GCl$>$ & [MLA93] 1076 PN (SIM,NED) \\
1\,795 & 11\,416.36 & 18.79 & 0.29 & 4\,169.74 & 98.87 & 1.43 & GCl & GLG253 (Gal), [B90] 473, z=0.3 [KTV2006] \\
1\,859 & 107.09 & 13.73 & 1.40 & 43.89 & 1.23 & 0.19 & $<$hard$>$ & \\
1\,912 & 332.06 & 23.03 & 1.23 & 213.90 & 5.43 & 0.37 & $<$GCl$>$ & cluster of galaxies candidate \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:ExtSrcs}
\end{center}
Notes:\\
$^{ +~}$: Extent and error of extent in units of 1\hbox{$^{\prime\prime}$}; 1\hbox{$^{\prime\prime}$}\ corresponds to 3.8\,pc at the assumed distance of \object{M~31} \\
$^{ *~}$: XID Flux and flux error in units of 1\ergcm{-14} \\
$^{ \dagger~}$: Taken from Table~5
\normalsize
\end{table*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/extent.ps}}
\caption{Distribution of the extent parameter.}
\label{Fig:extdist}
\end{figure}
The extent parameter found for the sources ranges from 6\farcs20
to 23\farcs03 (see Fig.\,\ref{Fig:extdist}). The brightest source (\hbox{N$^{\underline{o}}$}\ 1\,795), which has the highest likelihood of extension and the second largest extent, was identified from its X-ray properties as a galaxy cluster located behind \object{M~31}\ \citep{2006ApJ...641..756K}.\@ The iron emission lines in the X-ray spectrum yield a cluster redshift of $z\!=\!0.29$. For further discussion see Sect.\,\ref{SubSec:Gal_GCl_AGN}.
\section{Variability between \textit{XMM-Newton} observations}
\label{Sec:var}
To examine the long-term time variability of each source, we determined the XID flux at the source position in each observation or at least an upper limit for the XID flux. The XID fluxes were used to derive the variability factor and the significance of variability (\textit{cf.}\ Sect.\,\ref{Sec:DefVar}).
The sources are taken from the XMM\ LP-total\ catalogue (Table~5). Table~8 contains all information necessary to examine time variability. Sources are only included in the table if they are observed at least twice. Column 1 gives the source number. Columns 2 and 3 contain the flux and the corresponding error in the (0.2--4.5) keV XID band. The hardness ratios and errors are given in columns 4 to 11. Column 12 gives the type of the source. All this information was taken from Table~5.
The subsequent 140 columns provide information related to the individual observations in which the position of the source was observed. Column 13 gives the name of one of these observations, which we will call observation 1. The EPIC instruments contributing to the source detection in observation 1 are indicated by three characters in the ``obs1\_val" parameter (Col.~14; first character for PN, second for MOS\,1, third for MOS\,2), each being ``T" if the source is inside the FoV or ``F" if it lies outside the FoV.\@ Then the count rate and error (15, 16) and flux and error (17, 18) in the (0.2--4.5) keV XID band, and hardness ratios and errors (19--26) of observation 1 are given. Corresponding information is given for the remaining observations which cover the position of the source: obs.~2 (Cols.~27--40), obs.~3 (41--54), obs.~4 (55--68), obs.~5 (69--82), obs.~6 (83--96), obs.~7 (97--110), obs.~8 (111--124), obs.~9 (125--138), obs.~10 (139--152). Whether the columns corresponding to obs.~3 to obs.~10 are filled depends on the number of observations in which the source was covered by the combined EPIC FoV.\@ This number is given in Col.~153. The maximum significance of variation and the maximum flux ratio (fvar\_max) are given in Cols.~154 and 155.\@ As described in Sect.\,\ref{Sec:DefVar}, only detections with a significance greater than 3$\sigma$ were used; otherwise the 3$\sigma$ upper limit was adopted. Column~156 indicates the number of observations that provide only an upper limit. The maximum flux (fmax) and its error are given in Cols.~157 and 158.
In a few cases a maximum flux value could not be derived, because each individual observation yielded only an upper limit. There are two possible reasons: first, faint sources detected in the merged observations may fall below the 3$\sigma$ detection limit in the individual observations; second, when the significance of a detection was only slightly above the 3$\sigma$ limit, it can drop below that limit once the source position is fixed to the adopted final mean position from all observations.
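The derivation of the variability quantities can be sketched as follows. This is an illustrative Python sketch: the exact definitions are those of Sect.\,\ref{Sec:DefVar}, and the ratio and significance forms used here are our assumptions.

```python
import math

def long_term_variability(detections, upper_limits=()):
    """Sketch of the long-term variability quantities described in the
    text. `detections` holds (flux, error) pairs for observations with
    >3-sigma detections; less significant observations contribute their
    3-sigma upper limits instead. The maximum-ratio and significance
    forms below are assumptions, not the paper's exact definitions."""
    if not detections:
        return None, None, None              # only upper limits: no fmax
    f_max, e_max = max(detections)           # brightest detection
    f_min, e_min = min(detections)           # faintest detection
    # an upper limit below the faintest detection sets the minimum instead
    if upper_limits and min(upper_limits) < f_min:
        f_min, e_min = min(upper_limits), 0.0
    f_var = f_max / f_min                    # maximum flux ratio
    s_var = (f_max - f_min) / math.sqrt(e_max ** 2 + e_min ** 2)
    return f_var, s_var, f_max
```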
\addtocounter{table}{+2}
\begin{table*}
\scriptsize
\begin{center}
\caption{Sources with maximum flux larger than 8\ergcm{-13}, a statistical significance of variability larger than 10 and a flux variability smaller than 5, ordered by flux.}
\begin{tabular}{llrrrcl}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{Source} &
\multicolumn{1}{c}{fvar} &
\multicolumn{1}{c}{svar} &
\multicolumn{1}{c}{fmax$^{\ddagger}$} &
\multicolumn{1}{c}{efmax$^{\ddagger}$} &
\multicolumn{1}{c}{class$^{+}$} &
\multicolumn{1}{c}{Comment$^{\dagger}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
966 & 1.63 & 49.01 & 46.73 & 0.59 & XRB & 1(sv,z), 2, 10(v), 12(v), 13, 14, 20, 22(v), 25(LMXB), 27, 28(1.56)\\
877 & 3.13 & 49.13 & 16.06 & 0.20 & $<$hard$>$ & 1(sv), 2, 10(v), 12(v), 13, 14, 20(v), 22(v), 27, 28(3.05)\\
745 & 2.43 & 26.89 & 12.65 & 0.18 & AGN & 13, 14\\
1\,157 & 1.32 & 11.10 & 9.87 & 0.25 & GlC & 1(sv), 2, 5, 10, 12, 13, 14, 20, 21, 22(v), 27, 28(1.37)\\
1\,060 & 2.13 & 30.00 & 9.04 & 0.14 & $<$XRB$>$ & 1(sv), 2, 10, 12, 13, 14, 20(v, NS-LMXB), 22(v), 27\\
1\,171 & 4.14 & 18.86 & 9.02 & 0.41 & GlC & 1(d,sv), 2(t, 53.4), 5, 10, 12, 13, 14, 16, 20, 22, 27, 28(2.47)\\
1\,116 & 3.76 & 51.98 & 8.16 & 0.10 & GlC & 1(sv), 2(t, 58.6), 3(t, 33), 5, 10, 12, 13, 14, 16, 20, 21, 22(v,t), 27\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:varlist_bright}
\end{center}
Notes:\\
$^{ {\ddagger}~}$: maximum XID flux and error in units of 1\ergcm{-13} or maximum absorbed 0.2--4.5\,keV luminosity and error in units of 7.3\ergs{36}\\
$^{ {+}~}$: class according to Table~\ref{Tab:class}\\
$^{{\dagger}~}$: for comment column see Table~\ref{Tab:varlist}
\normalsize
\end{table*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90,bb=52 90 355 460]{pics/var_fmax_3sn_V5_lab.ps}}
\caption{
Variability factor of sources from the XMM\ LP-total\ catalogue in the 0.2--4.5 keV band derived from average fluxes of the {XMM-Newton}\ EPIC observations plotted versus maximum detected flux (\hbox{erg cm$^{-2}$ s$^{-1}$}). Source classification is indicated: Foreground stars and candidates are marked as big and small stars, AGN and candidates as big and small crosses, background galaxies and galaxy clusters as big ``X" and their candidates as small ``X", SSS candidates as triangles, SNRs and candidates as big and small octagons, GlCs and XRBs as big squares and their candidates as small squares. Sources with a statistical significance for the variability below 3 are marked in green.
}
\label{Fig:var_fmax}
\end{figure}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90,bb=50 85 360 490]{pics/var_HR1_3sn_V5.ps}\hskip0.2cm\includegraphics[clip,angle=-90,bb=50 85 360 490]{pics/var_HR2_3sn_V5.ps}}
\caption{
Variability factor of sources from the XMM\ LP-total\ catalogue in the 0.2--4.5 keV band (derived from the average fluxes of the {XMM-Newton}\ EPIC observations) plotted versus HR1 in the left panel and HR2 in the right panel. For source classification see Fig.\,\ref{Fig:var_fmax}. Sources with a statistical significance of the variability below 3 are marked in green.
}
\label{Fig:var_hr}
\end{figure*}
\vspace{5mm}
Figure~\ref{Fig:var_fmax} shows the variability factor plotted versus the maximum detected XID flux. Apart from XRBs, XRBs in GlCs, and candidates of these source classes, which were selected based on their variability, a few SSS candidates also show pronounced temporal variability.
The sources classified or identified as AGN, background galaxies or galaxy clusters all show $F_{\mathrm{var}}\!<\!4$, as do most of the foreground stars.
Out of the 1\,407 examined sources, we found 317 sources with a variability significance $>\!3.0$, \ie\ 182 more than reported in SPH2008.
For bright sources it is much easier to detect variability than for faint sources, because the difference between the maximum observed flux and the detection limit is larger. Therefore the significance of the variability declines with decreasing flux. This is illustrated by the distribution of the sources marked in green in Fig.\,\ref{Fig:var_fmax}.
Table~\ref{Tab:varlist} lists all sources with a variability factor larger than five. There are 69 such sources (34 in addition to SPH2008). The sources are sorted in descending order with respect to their variability factors. Table~\ref{Tab:varlist} gives the source number (Col.~1), maxima of flux variability (2) and maxima of the significance parameter (3).
The next columns (4, 5) give the maximum observed flux and its error. Column~6 contains the class of the source. Sources with $F_{\mathrm{var}}\!\ge\!10$ that were not already classified as SSSs or foreground stars were classified as XRBs.
Time variability can also help to verify an SNR candidate classification. If there is significant variability, the SNR classification must be rejected, and if an optical counterpart is detected, the source has to be re-classified as a foreground star candidate. Column 7 contains references to the individual sources in the literature. In some cases the reference provides information on the temporal behaviour and a more precise classification (see brackets). The numbers given in connection with \citet{2007A&A...468...49V} and \citet{2006ApJ...643..356W} are the variability factors obtained in these papers from {\it Chandra}\ data. Of the 69 sources of Table~\ref{Tab:varlist}, ten show a flux variability larger than 100.\@ With a flux variability factor $>\!690$, source \hbox{N$^{\underline{o}}$}\ 523 is the most variable source in our sample. Source \hbox{N$^{\underline{o}}$}\ 57 has the largest significance of variability, with a value of $\approx 97$. The variability significance is below 10 for just 33 sources, 15 of which show values below 5.
Thirty-five of the variable sources are classified as XRBs or XRB candidates, and eight of them are located in globular clusters. Nine of the variable sources are SSS candidates, while six variable sources are classified as foreground stars and foreground star candidates.
Table~\ref{Tab:varlist_bright} lists all ``bright" sources with a maximum flux larger than 8\ergcm{-13} and a flux variability smaller than five (the description of the columns is the same as in Table~\ref{Tab:varlist}).
All seven sources listed in Table~\ref{Tab:varlist_bright} (three in addition to SPH2008) have a significance of variability $>\!10$.\@ Apart from two sources, they are XRBs (three in globular clusters) or XRB candidates. The most luminous source in our sample is source \hbox{N$^{\underline{o}}$}\ 966 with an absorbed 0.2--4.5\,keV luminosity of $\approx 3.3$\ergs{38} at maximum.
Figure~\ref{Fig:var_hr} shows the relationship between the variability factor and the hardness ratios HR1 and HR2, respectively. The hardness ratios are taken from Table~5. The HR1 plot shows that the sample of highly variable sources includes SSS and XRB candidates, which occupy two distinct regions in this plot \citep[see also ][ for the LMC]{1999A&A...344..521H}. The SSSs, marked by triangles, appear on the left-hand side, while the XRBs or XRB candidates have much harder spectra and appear on the right. Foreground stars, SSSs and XRBs can also be separated on the HR2 diagram, although there is some overlap between foreground stars and XRBs.
Individual sources are discussed in Sects.\,\ref{Sec:fgback} and \ref{Sec:Srcsm31}.
\section{Cross-correlations with other \object{M~31}\ X-ray catalogues}
\label{Sec:CrossX-ray}
Cross-correlations were determined by applying Eq.\,\ref{Eq:Cor} to the sources of the XMM\ LP-total\ catalogue and to sources reported in earlier X-ray catalogues. The list of X-ray catalogues used is given in Table~\ref{Tab:XrayRefCat}.
\subsection{Previous \textit{XMM-Newton} catalogues}
\label{SubSec:prevXMM}
Previous source lists based on archival {XMM-Newton}\ observations were presented in \citet{2001A&A...378..800O}, PFH2005, \citet{2006ApJ...643..844O},
SPH2008, and SBK2009. Of these five studies, PFH2005 covers the largest area of \object{M~31}. Table \ref{Tab:CompXMM} lists all sources from previous {XMM-Newton}\ studies that are not detected in the present investigation.
\begin{table*}
\scriptsize
\begin{center}
\caption{Sources from previous {XMM-Newton}\ studies that are not listed in the XMM\ LP-total\ catalogue.}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{2}{l}{PFH2005 856 sources}\\
\multicolumn{2}{l}{103 not detected}\\
6 not detected, LH$>$100: & 327 ($<$SNR$>$,LH$=$2140.0), 384 (XRB,667.0), 332 ($<$SNR$>$,654.0), \\
& 316 ($<$SNR$>$,259.0), 312 ($<$SNR$>$,241.0), 281($<$hard$>$,160.0)\\
10 not detected, 20$\le$LH$<$50: & 75 ($<$SSS$>$), 423 ($<$fg Star$>$), 120 ($<$hard$>$), 505 ($<$hard$>$), \\
& 220 ($<$SNR$>$), 304 ($<$fg Star$>$), 819 ($<$hard$>$), 799 ($<$SSS$>$), 413 ($<$SNR$>$), 830 ($<$hard$>$)\\
14 not detected, 15$\le$LH$<$20: & 427($<$hard$>$), 734 ($<$hard$>$), 424 ($<$hard$>$), 518 ($<$SSS$>$), \\
& 232 ($<$hard$>$), 339 ($<$hard$>$), 446 ($<$SSS$>$), 219 ($<$fg Star$>$), 567 ($<$hard$>$), 256 ($<$fg Star$>$), \\
& 356 ($<$hard$>$), 248 ($<$hard$>$), 160 ($<$hard$>$), 399 ()\\
21 not detected, 10$\le$LH$<$15: & 375 ($<$hard$>$), 17 ($<$hard$>$), 195 ($<$hard$>$), 417 ($<$SNR$>$), \\
& 783 ($<$hard$>$), 803 ($<$hard$>$), 829 ($<$hard$>$), 135 ($<$hard$>$), 151 ($<$hard$>$), 131 ($<$hard$>$), \\
& 426 ($<$hard$>$), 593 ($<$fg Star$>$), 526 ($<$hard$>$), 250 ($<$hard$>$), 62 ($<$hard$>$), 67 ($<$hard$>$), \\
& 188 ($<$hard$>$), 186 ($<$AGN$>$), 510 ($<$hard$>$), 529 ($<$hard$>$), 754 ($<$hard$>$)\\
52 not detected, LH$<$10: & 599 ($<$hard$>$), 439 ($<$hard$>$), 809 ($<$hard$>$), 14 ($<$SNR$>$), 743 ($<$hard$>$),\\
& 433 ($<$hard$>$), 5 (), 210 ($<$hard$>$), 97 ($<$hard$>$), 708 ($<$hard$>$), 476 (), 534 ($<$hard$>$), 501 (),\\
& 170 ($<$hard$>$), 146 (SNR), 769 (), 838 ($<$hard$>$), 571 ($<$hard$>$), 816 ($<$hard$>$), 554 (), 627 ($<$hard$>$),\\
& 464 ($<$fg Star$>$), 811 ($<$hard$>$), 655 ($<$hard$>$), 184 ($<$hard$>$), 447 ($<$hard$>$), 380 ($<$hard$>$),\\
& 566 ($<$hard$>$), 137 ($<$fg Star$>$), 63 (), 48 (), 152 ($<$fg Star$>$), 291 ($<$hard$>$), 559 ($<$hard$>$),\\
& 102 ($<$hard$>$), 740 ($<$hard$>$), 540 ($<$fg Star$>$), 240 ($<$hard$>$), 485 (), 668 ($<$hard$>$), 44 (),\\
& 560 ($<$hard$>$), 836 ($<$hard$>$), 436 ($<$hard$>$), 484 ($<$fg Star$>$), 216 ($<$hard$>$), 362 ($<$hard$>$), 527 (), 179 ($<$hard$>$),\\
& 834 ($<$hard$>$), 86 ($<$hard$>$), 455 ()\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{SPH2008 39 sources}\\
\multicolumn{2}{l}{15 not detected}\\
3 not detected, 50$\le$LH$<$100: & 874 ($<$SNR$>$,LH$=$85.5), 895 ($<$hard$>$,75.9), 882 ( ,56.4)\\
6 not detected, 10$\le$LH$<$50: & 869 (), 885 ($<$SNR$>$), 863 ($<$hard$>$), 875 ($<$SSS$>$), 893 ($<$hard$>$), 866 ($<$hard$>$)\\
6 not detected, LH$<$10: & 870 ($<$SNR$>$), 891 ($<$hard$>$), 889 ($<$hard$>$), 872 ($<$SNR$>$), 867 ($<$hard$>$), 862 ($<$SNR$>$)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{SBK2009 335 sources}\\
\multicolumn{2}{l}{31 not detected}\\
& 4 ($<$hard$>$), 18 ($<$hard$>$), 29 ($<$hard$>$), 32 ($<$hard$>$), 34 ($<$hard$>$), 45 ($<$SSS$>$), 67 ($<$hard$>$),\\
& 102 ($<$hard$>$), 106 ($<$hard$>$), 117 ($<$hard$>$), 149 ($<$hard$>$), 152 ($<$hard$>$), 179 ($<$hard$>$),\\
& 183 ($<$hard$>$), 184 ($<$hard$>$), 188 ($<$hard$>$), 191 ($<$hard$>$), 192 ($<$AGN$>$), 202 ($<$hard$>$),\\
& 204 ($<$fg Star$>$), 217 ($<$hard$>$), 249 ($<$hard$>$), 250 ($<$hard$>$), 260 ($<$hard$>$), 274 ($<$hard$>$),\\
& 279 ($<$hard$>$), 285 ($<$hard$>$), 295 ($<$hard$>$), 306 ($<$hard$>$), 325 ($<$hard$>$), 333 ($<$hard$>$)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:CompXMM}
\end{center}
\normalsize
\end{table*}
In the ten observations covering the major axis, and a field in the halo of \object{M~31}, PFH2005 detected 856 X-ray sources with a detection likelihood threshold of 7 (\textit{cf.}\ Sect.\,\ref{Sec:SrcDet}). Of these 856 sources, only 753 sources are also present in the XMM\ LP-total\ catalogue, \ie\ 103 sources of PFH2005 were not detected. This can be due to:
the search strategy; the parameter settings used in the {\tt emldetect} run; the determination of the extent of a source for the XMM\ LP-total\ catalogue; the more severe screening for GTIs for the XMM\ LP-total\ catalogue, which led to shorter final exposure times; the use of the {\tt epreject} task; and, last but not least, the {\tt SAS} versions and calibration files applied. The search strategy of PFH2005 was optimised to detect sources located close to each other in crowded fields. This point especially explains the non-detection of the bright PFH2005 sources [PFH2005] 281, 312, 316, 327, 332, 384 ($\mathcal{L}>50$) in the present study, as four of them ([PFH2005] 312, 316, 327, 332) are located in the innermost central region of \object{M~31}, where source detection is complicated by the bright diffuse X-ray emission, while [PFH2005] 281 and 384 lie in the immediate vicinity of two bright sources ([PFH2005] 280 and 381 at distances of 7.7\hbox{$^{\prime\prime}$} and 5.5\hbox{$^{\prime\prime}$}, respectively). The changes in the {\tt SAS} versions and the GTIs, in particular, affect sources with small detection likelihoods ($\mathcal{L}<10$).
The improvements in the {\tt SAS} detection tools and calibration files should reduce the number of spurious detections, which increases with decreasing detection likelihood. However, this does not necessarily imply that \emph{all} undetected sources with $\mathcal{L}<10$ of PFH2005 are spurious detections. The changes in the {\tt SAS} versions, calibration files and GTIs not only affect the source detection tasks, but can also cause changes in the background images. These changes may increase the assumed background value at the position of a source, which would result in a lower detection likelihood. Going from {\tt mlmin\,=\,7} to {\tt mlmin\,=\,6}, but leaving everything else unchanged, we detected an additional nine sources of PFH2005. One of the previously undetected sources ([PFH2005] 75) was classified as $<$SSS$>$, but correlates with blocks of pixels with enhanced low-energy signal in the PN offset map, which were corrected by {\tt epreject}. Another source classified as $<$SSS$>$ ([PFH2005] 799) is only detected in the MOS\,1 camera, but not in MOS\,2. From an examination by eye, it seems that source [PFH2005] 799 is the detection of some noisy pixels at the rim of the MOS\,1 CCD\,6 and not a real X-ray source.\@
SPH2008 extended the source catalogue of PFH2005 by re-analysing the data of the central region of \object{M~31}\ and also including data of monitoring observations of LMXB RX J0042.6+4115.\@ Of the 39 new sources presented in SPH2008, 24 are also listed in the XMM\ LP-total\ catalogue, \ie\ 15 sources of SPH2008 were not detected. Differences between the two studies include the detection likelihood thresholds used for {\tt eboxdetect} (SPH2008: {\tt likemin}\,=\,5) and {\tt emldetect} (SPH2008: {\tt mlmin}\,=\,6), the lower limit for the extent likelihood (SPH2008: {\tt dmlextmin}\,=\,4; XMM\ LP-total: 15), the screening for GTIs, the use of the {\tt epreject} task, and the {\tt SAS} versions and calibration files used. Concerning the GTIs, images, background images and exposure maps, SPH2008 followed the same procedures as PFH2005. The arguments given above are therefore also valid here. Of the undetected sources, three were detected in SPH2008 with detection likelihoods below seven.\@
One source ([SPH2008] 882) was added by hand to the final source list, as SPH2008 could not find any reason why {\tt emldetect} did not automatically find it. The two extended sources ([SPH2008] 863, 869), detected with extent likelihoods between 4.7 and 5.1 in SPH2008, are detected neither as extended nor as pointlike sources in the present study, where the extent likelihood has to be larger than 15.\@
SBK2009 re-analysed the {XMM-Newton}\ observations located along the major axis of \object{M~31}, ignoring all observations pointing to the centre of the galaxy. They used a detection likelihood threshold of ten. Of the 335 sources detected by SBK2009, 304 sources are also contained in the XMM\ LP-total\ catalogue, \ie\ 31 sources are not detected. Of the 304 re-detected sources, two ([SBK2009] 298, 233) are found with a detection likelihood below ten. Of the 31 undetected sources, 27 were also not detected in PFH2005. The remaining four sources correlate with PFH2005 sources that were not detected in the present study. SBK2009 state that they find 34 sources not present in the source catalogue of PFH2005. A possible reason for this may be that SBK2009 used different energy bands for source detection. They also used five bands, but combined bands 2 and 3 of PFH2005 into a single band covering 0.5--2\,keV, and split band 5 of PFH2005 into two bands covering 4.5--7\,keV and 7--12\,keV, respectively. This might also explain why most of the additionally found sources were classified as $<$hard$>$.
\citet{2006ApJ...643..844O} addressed the population of SSSs and QSSs based on the same archival observations as PFH2005. \citet{2006ApJ...643..844O} detected 15 SSSs, 18 QSSs and 10 SNRs of which one ([O2006]~Table\,4, Src.\,3) is also listed as an SSS ([O2006]~Table\,2, Src.\,13). Of these sources two SSSs, four QSSs and two SNRs (among them is the source [O2006]~Table\,4, Src.\,3) are not contained in the XMM\ LP-total\ catalogue.
These seven sources are also not present in the PFH2005 catalogue.
The nine bright variable sources from \citet{2001A&A...378..800O} were all detected.
\subsection{\textit{Chandra} catalogues}
\label{SubSec:Chcat}
The {\it Chandra}\ catalogues used for cross-correlations were presented in Sect.\,\ref{Sec:Intro} (see also Table~\ref{Tab:XrayRefCat}).
Details of the comparison between the XMM\ LP-total\ catalogue and the different {\it Chandra}\ catalogues can be found in Table~\ref{Tab:CompChan}.\@ Here, we only give a few general remarks. A non-negligible number of {\it Chandra}\ sources not reported in the XMM\ LP-total\ catalogue have already been classified as transient or variable sources. Thus, it is not surprising that these sources were not detected in the {XMM-Newton}\ observations \citep[parts of:][DKG2004]{2007A&A...468...49V,2006ApJ...643..356W}. One {\it Chandra}\ source (n1-66) lies outside the field of \object{M~31}\ covered by the {XMM-Newton}\ observations. For the innermost central region of \object{M~31}, the point spread function of {XMM-Newton}\ causes source confusion and therefore only {\it Chandra}\ observations are able to resolve the individual sources, especially if they are faint compared to the diffuse emission or nearby bright sources \citep{2002ApJ...577..738K,2002ApJ...578..114K,2004ApJ...609..735W,2004ApJ...610..247D,2006ApJ...643..356W,2007A&A...468...49V}. This explains why a certain number of these sources are not detected in {XMM-Newton}\ observations.
\begin{table*}
\scriptsize
\begin{center}
\caption{Sources detected in previous {\it Chandra}\ studies that are not present in the XMM\ LP-total\ catalogue.}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2002ApJ...577..738K} 204 sources}\\
\multicolumn{2}{l}{58 not detected}\\
5 transient: & r3-46,r3-43,r2-28,r1-23,r1-19 \\
20 variable: & r3-53,r3-77,r3-106,r3-76,r2-52,r2-31,r2-23,r1-31,r2-20,r1-24,r1-28,r1-27,r1-33,r1-21,r1-20,r1-7,r2-15,r1-17,r1-16,r2-47\\
33 unclassified: & r3-102,r3-92,r3-51,r3-75,r3-91,r3-89,r3-101,r3-88,r2-44,r2-55,r2-54,r3-32,r2-53,r1-30,r3-99,r1-22,r1-26,r1-18,r3-26,\\
&r2-41,r2-40,r3-71,r2-50,r2-49,r2-38,r3-97,r2-46,r3-12,r3-66,r3-104,r3-82,r3-5,r3-4\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2002ApJ...578..114K} 142 sources}\\
\multicolumn{2}{l}{26 not detected}\\
3 transient: & J004217.0+411508,J004243.8+411604,J004245.9+411619\\
7 variable: & J004232.7+411311,J004242.0+411532,J004243.1+411640,J004244.3+411605,J004245.2+411611,J004245.5+411608,\\
&J004248.6+411624\\
16 unclassified: & J004207.3+410443,J004229.1+412857,J004239.5+411614,J004239.6+411700,J004242.5+411659,J004242.7+411503,\\
&J004243.1+411604,J004244.2+411614,J004245.0+411523,J004246.1+411543,J004247.4+411507,J004249.1+411742,\\
& J004251.2+411639,J004252.3+411734,J004252.5+411328,J004318.5+410950\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2004ApJ...609..735W} 166 sources}\\
\multicolumn{2}{l}{28 not detected}\\
12 transient: & s1-79,s1-80,s1-82,r3-46,r2-28,r1-23,r1-19,r2-69,r1-28,r1-35,r1-34,n1-85 \\
7 variable: & r2-31,r1-31,r1-24,r1-20,r1-7,r1-17,r1-16\\
9 unclassified: & s1-81,r2-68,s1-85,r1-30,r1-22,r1-26,r1-18,n1-77,n1-84\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2007A&A...468...49V} 261 sources}\\
\multicolumn{2}{l}{104 not detected}\\
11 transient: & 6,12,29,32,41,51,59,84,118,130,146 \\
15 variable: & 3,5,8,9,18,22,24,27,44,63,92,96,99,149,169\\
78 unclassified: & 4,19,21,25,26,30,37,39,40,42,48,49,53,56,57,58,60,62,65,70,73,75,76,77,80,82,84,86,87,89,91,94,97,98,104,109,114,\\
&115,117,119,122,124,129,133,138,141,143,144,145,150,152,158,162,164,167,171,173,182,183,188,189,191,193,194,\\
&197,202,205,206,210,213,217,219,220,225,256,257,263\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2006ApJ...643..356W} 45 sources}\\
\multicolumn{2}{l}{25 not detected}\\
25 transient: & n1-26,n1-85,n1-86,n1-88,n1-89,r1-19,r1-23,r1-28,r1-34,r1-35,r2-28,r2-61,r2-62,r2-66,r2-69,r2-72,r3-43,r3-46,s1-18,\\
& s1-27,s1-69,s1-79,s1-80,s1-82,s2-62 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{DKG2004 43 sources}\\
\multicolumn{2}{l}{15 not detected}\\
9 transient: & s2-62,s1-27,s1-69,s1-18,n1-26,r2-62,r1-35,r2-61,r2-66 \\
5 unclassified: & s2-27,s2-10,n1-29,n1-46,r2-54\\
1 not in FoV: & n1-66\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{\citet{2002ApJ...570..618D} 28 sources}\\
\multicolumn{2}{l}{2 not detected}\\
2 unclassified: & 17 ($\hat{=}$ r2-15), 28 ($\hat{=}$ r3-71) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:CompChan}
\end{center}
Notes:\\
Variability information (transient, variable) is taken from the papers. ``Unclassified'' denotes sources that are not indicated as transient or variable sources in the papers.
\normalsize
\end{table*}
Of the 28 bright X-ray sources located in globular clusters \citep{2002ApJ...570..618D}, two were not found in the {XMM-Newton}\ data (see Table~\ref{Tab:CompChan}). They are also not included in the source catalogues of PFH2005 and SPH2008. Hence, both objects are good candidates for being transient or at least highly variable sources (\textit{cf.}\ Sect.\,\ref{SubSub:comp_GlC}). Another study of the globular cluster population of \object{M~31}\ is presented by \citet{2004ApJ...616..821T}.\@ Their work is based on {XMM-Newton}\ and {\it Chandra}\ data and contains 43 X-ray sources. Of these, three were not found in the present study. One of them ([TP2004] 1) is located well outside the field of \object{M~31}\ covered by the Deep {XMM-Newton}\ Survey\footnote{The source was observed with {XMM-Newton}\ on 11 January 2001. Obs.~id.: 0065770101}. The second source ([TP2004] 21) correlates with r3-71, which is discussed above (see \citet{2002ApJ...570..618D} in Table~\ref{Tab:CompChan}). The transient nature of the third source ([TP2004] 35), and the fact that it was not observed in any {XMM-Newton}\ observation taken before 2004, was already reported by \citet{2004ApJ...616..821T}. The source was first detected with {XMM-Newton}\ in the observation from 31 December 2006.
\subsection{\textit{ROSAT} catalogues}
Of the 86 sources detected with {\it ROSAT}\ HRI in the central $\sim$34\hbox{$^\prime$}\ of \object{M~31} (PFJ93), all but eight sources ([PFJ93] 1,2,31,33,40,48,63,85) are detected in the {XMM-Newton}\ observations. Six of these eight sources ([PFJ93] 1,2,31,33,63,85) have already been discussed in PFH2005 and classified as transients. Sources [PFJ93] 40 and 48 correlate with [PFH2005] 312 and 332, respectively, which are discussed in Sect.\,\ref{SubSec:prevXMM}. In addition to these eight sources, PFH2005 did not detect source [PFJ93] 51. This source was detected in the {XMM-Newton}\ observations centred on RX J0042.6+4115 and was thus classified as a recurrent transient (see SPH2008).
In each of the two {\it ROSAT}\ PSPC surveys of \object{M~31}, 396 individual X-ray sources were detected (SHP97 and SHL2001). From the SHP97 catalogue 130 sources were not detected. Of these sources 48 are located outside the FoV of our {XMM-Newton}\ \object{M~31}\ survey. From the SHL2001 catalogue, 93 sources are not detected, 60 of which lie outside the {XMM-Newton}\ FoV.\@ For information on individual sources see Table~\ref{Tab:CompRos}.
\begin{table*}
\scriptsize
\begin{center}
\caption{Sources from the {\it ROSAT}\ PSPC catalogues that are not present in the XMM\ LP-total\ catalogue.}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{2}{l}{SHP97 396 sources}\\
\multicolumn{2}{l}{130 not detected}\\
48 outside FoV: & 1,2,3,4,5,7,8,14,31,41,72,91,98,104,120,125,159,202,209,271,276,285,286,290,300,312,314,\\
& 320,342,350,363,367,371,374,383,385,386,387,388,389,390,391,392,393,394,395,396 \\
1 transient: & 69 \\
21 not detected, LH$<$12: & 19,24,27,33,46,52,59,63,68,71,133,149,161,264,273,275,307,329,330,358,377\\
16 not detected, 12$\le$LH$<$15: & 12,15,49,82,93,113,114,128,196,230,262,283,334,364,372,376\\
44 not detected, LH$\ge$15: & 16(LH$=$26.6),32(30.2),43(18.2),45(51.2),60(20.1),66(36.2),67(4536.2),78(20.5),80(16.3),81(26.6),\\
& 88(33.7),95(548.0),102(16.4),126(217.3),141(843.3),145(46.9),146(673.7),166(17.4),167(90.0),\\
& 171(54.3),182(454.4),186(39.8),190(113.0),191(54.5),192(54.3),203(103.3),214(400.2),215(251.0),\\
& 232(104.4),245(26.0),260(54.6),263(38.1),265(24.6),268(54.3),270(40.4),277(15.6),309(81.8),\\
& 319(23.4),331(19.5),335(51.2),340(27.5),341(28.1),365(22.4),373(69.5)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{2}{l}{SHL2001 396 sources}\\
\multicolumn{2}{l}{93 not detected}\\
60 outside FoV: & 1,2,3,4,5,6,7,8,9,10,11,12,14,15,16,21,22,32,39,58,67,69,75,77,81,83,85,90,93,125,141,146,\\
& 160,164,192,202,243,260,282,296,298,302,325,326,328,355,371,372,378,379,383,388,389,390,\\
& 391,392,393,394,395,396 \\
4 not detected, LH$<$12: & 62,96,238,269\\
2 not detected, 12$\le$LH$<$15: & 231,361\\
27 not detected, LH$\ge$15: & 51(LH$=$28.4),104(901.2),121(94.1),126(46.2),143(34.7),168(131.9),171(43.0),173(317.8),190(215.8),\\
& 207(98.0),208(298.8),226(73.1),230(75.6),232(1165.6),240(218.4),246(39.9),248(219.6),256(60.0),\\
& 267(22.2),271(52.8), 322(2703.3),324(147.7),344(40.7),356(15.3),365(19.0),380(17.4),384(15.8)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:CompRos}
\end{center}
\normalsize
\end{table*}
Forty-four (out of 302) sources from SHP97 and 27 (out of 293) sources from SHL2001 have {\it ROSAT}\ detection likelihoods larger than 15, but are not listed in the XMM\ LP-total\ catalogue. These sources have to be regarded as transient or at least highly variable.
\subsection{\textit{Einstein} catalogue}
The list of {\it Einstein}\ X-ray sources in the field of \object{M~31}\ reported by TF91 contains 108 sources, with 81 sources taken from the {\it Einstein}\ HRI data with an assumed positional error of 3\hbox{$^{\prime\prime}$}\ \citep[reported by][]{1984ApJ...284..663C}, and 27 sources based on {\it Einstein}\ IPC data with a 45\hbox{$^{\prime\prime}$}\ positional error. Applying the above-mentioned correlation procedure to the {\it Einstein}\ HRI sources, 64 of these sources are also detected in this work and listed in the XMM\ LP-total\ catalogue, \ie\ 17 sources are not detected ([TF91] 29, 31, 35, 39, 40, 43, 46, 50, 53, 54, 65, 66, 72, 75, 78, 93, 96). For the {\it Einstein}\ IPC sources only the 1~$\sigma$ positional error was used to search for counterparts among the {XMM-Newton}\ sources. Of the 27 {\it Einstein}\ IPC sources, six remain without a counterpart in our catalogue ([TF91] 15, 99, 100, 106, 107, 108), of which [TF91] 15 and 108 are located outside the field of \object{M~31}\ covered by the XMM\ LP-total\ catalogue. Sources [TF91] 50 and 54 correlate with [PFH2005] 312 and 316, respectively. Both sources were already discussed in Sect.\,\ref{SubSec:prevXMM}. Apart from [TF91] 106, which is suggested as a possible faint transient by SHL2001, the remaining 18 sources were also not detected by PFH2005, who classified them as transients.
\section{Cross-correlations with catalogues at other wavelengths}
\label{SEC:CCow}
The XMM\ LP-total\ catalogue was correlated with the catalogues and public data bases given in Sect.\,\ref{Sec:CrossCorr_Tech}. Two sources (one from the XMM\ LP-total\ and one from the reference catalogues) were considered as correlating if their positions matched within the uncertainty (see Eq.~\ref{Eq:Cor}).
However, the correlation of an X-ray source with a source from the reference catalogue does not necessarily imply that the two sources are counterparts. To confirm this, additional information is needed, like corresponding temporal variability of both sources or corresponding spectral properties. We should also take into account the possibility that the counterpart of the examined X-ray source is not even listed in the reference catalogue used (due to faintness for example).
The correlation process becomes even more challenging if an X-ray source correlates with more than one source from the reference catalogue. In this case we need a method to decide which of the correlating sources most likely corresponds to the X-ray source in question. The method used should therefore indicate how likely the correlation is with each of the sources from the reference catalogue. Based on these likelihoods, one can define criteria to accept a source from the reference catalogue as the most likely counterpart of the X-ray source.
The simplest method uses the spatial distance between the X-ray source and the reference sources to derive the likelihoods. In other words, the source from the reference catalogue that is located closest to the X-ray source is regarded as the most likely source corresponding to the X-ray source.
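This nearest-neighbour criterion can be sketched as follows. This is a minimal illustration only: it assumes a matching radius given by the combined (here 3$\sigma$) positional uncertainties of the two sources, whereas the exact form of Eq.~\ref{Eq:Cor} may differ, and the field names (`ra`, `dec`, `err`) are hypothetical.

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation in arcseconds between two positions in degrees."""
    d_ra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    d_dec = dec1 - dec2
    return math.hypot(d_ra, d_dec) * 3600.0

def nearest_counterpart(x_src, refs, n_sigma=3.0):
    """Return (reference source, separation in arcsec) for the closest
    reference source whose separation is within n_sigma times the combined
    positional error; (None, None) if no reference source qualifies.
    Sources are dicts with 'ra', 'dec' in degrees and 'err' (1 sigma, arcsec)."""
    best, best_sep = None, None
    for ref in refs:
        sep = ang_sep_arcsec(x_src['ra'], x_src['dec'], ref['ra'], ref['dec'])
        if sep <= n_sigma * math.hypot(x_src['err'], ref['err']):
            if best is None or sep < best_sep:
                best, best_sep = ref, sep
    return best, best_sep
```

A reference source that lies closest but outside the combined error circle is rejected, so "nearest" and "within the uncertainty" are both enforced.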
An improved method is a ``likelihood ratio'' technique, where an additional source property (\eg\ an optical magnitude in deep field studies) is used to strengthen the correlation selection process. This technique was applied successfully to deep fields to find optical counterparts of X-ray sources \citep[\eg\ ][]{2007ApJS..172..353B}. A drawback of this method is that one has to know a priori the expected probability distribution of the optical magnitudes of the sources belonging to the studied object. In our case, this means that we have to know the distribution function for all optical sources of \object{M~31}\ that can have X-ray counterparts, \emph{without} including foreground and background sources. Apart from the fact that such distribution functions are unknown, an additional challenge would be the time dependence of the magnitude of the optical sources (\eg\ of novae) and of the connection between optical and X-ray sources (\eg\ optical novae and SSSs). It is therefore not possible to apply this ``likelihood ratio'' technique to the sources in the XMM\ LP-total\ survey. The whole correlation selection process becomes even more challenging if more than one reference catalogue is used.
To be able to take all available information into account, we decided not to automate the selection process, but to select the class and the most likely correlations for each source by hand (as was done \eg\ in PFH2005). The source classification, and thus the correlation selection process, is therefore based on the cross-correlations between the different reference catalogues, on the X-ray properties (hardness ratios, extent and time variability), and on the criteria given in Table~\ref{Tab:class}. For reasons of completeness we give for each X-ray source the number of correlations found in the USNO-B1, 2MASS and LGGS catalogues in Table~5.\@ The caveat of this method is that it cannot quantify the probability of the individual correlations.
\section{Foreground stars and background objects}
\label{Sec:fgback}
\subsection{Foreground stars}
\label{Sec:fgStar}
X-ray emission has been detected from many late-type stars (spectral types F, G, K, and M), as well as from hot OB stars \citep[see review by][]{2000RvMA...13..115S}. Hence, X-ray observations of nearby galaxies also reveal a significant fraction of Galactic stars.
With typical absorption-corrected luminosities of $L_{\mathrm{0.2-10\,keV}}\!<$\oergs{31}, single stars in other galaxies are too faint to be detected with present instruments. However, concentrations of stars can be detected, but not resolved.
Foreground stars (fg Stars) are a class of X-ray sources which are homogeneously distributed over the field of \object{M~31} (Fig.\,\ref{Fig:fgS_spdist}). The good positional accuracy of {XMM-Newton}\ and the available catalogues USNO-B1, 2MASS and LGGS allow us to efficiently select this type of source. The selection criteria are given in Table~\ref{Tab:class}. The optical follow-up observations of \citet{2006A&A...451..835H} and \citet{2009A&A...507..705B} have confirmed the foreground star nature of bright foreground star candidates selected in PFH2005, based on the same selection criteria as used in this paper.
Somewhat different criteria were applied for very red foreground stars, with an LGGS colour $\mathrm{V}-\mathrm{R}\!>\!1$ or USNO-B1 colour $\mathrm{B2}-\mathrm{R2}\!>\!1$. These are classified as foreground star candidates if $f_{\mathrm{x}}/f_{\mathrm{opt}}\!<\!-0.65$ and $f_{\mathrm{x}}/f_{\mathrm{opt,R}}\!<\!-1.0$.\@ A misclassification of symbiotic systems in \object{M~31}\ as foreground objects by this criterion can be excluded, as symbiotic systems typically have X-ray luminosities below \oergs{33}, more than a factor of 100 below the detection limit of our survey.
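For orientation, the X-ray-to-optical flux ratio entering such criteria is usually evaluated in logarithmic form. The sketch below assumes the widely used Maccacaro et al. (1988) convention; the exact zero point and bands adopted for the criteria of Table~\ref{Tab:class} may differ.

```python
import math

def log_fx_fopt(fx_cgs, mag):
    """log10(f_x/f_opt) in the Maccacaro et al. (1988) convention:
    log(f_x/f_V) = log10(f_x) + m_V/2.5 + 5.37,
    with f_x in erg cm^-2 s^-1 and m_V the optical magnitude.
    NOTE: the zero point 5.37 is specific to the V band; the paper's
    criteria may use a different band and zero point."""
    return math.log10(fx_cgs) + mag / 2.5 + 5.37

# A bright Galactic star, f_x = 1e-13 erg cm^-2 s^-1 at V = 10 mag,
# yields log(f_x/f_opt) = -13 + 4 + 5.37 = -3.63, far below the -0.65
# threshold quoted above, i.e. well within the foreground star regime.
ratio = log_fx_fopt(1e-13, 10.0)
```

The logarithmic form makes the thresholds ($-0.65$, $-1.0$) directly comparable across many decades of flux.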
If a foreground star candidate lies within the field covered by the LGGS, we checked its presence in the LGGS images (as the LGGS catalogue itself does not list bright stars, because of saturation problems). Otherwise DSS2 images were used. Correlations with bright optical sources from the USNO-B1 catalogue that have an $f_{\mathrm{x}}/f_{\mathrm{opt}}$ in the range expected for foreground stars, but are not visible in the optical images, were rejected as spurious. We found 223 foreground star candidates. Forty sources were identified as foreground stars, either because they are listed in the globular cluster catalogues as spectroscopically confirmed foreground stars or because they have a spectral type assigned to them in the literature \citep[][SIMBAD]{2009A&A...507..705B,2006A&A...451..835H}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/spdist_fgStar_new4.ps}}
\caption{Spatial distribution of the foreground stars and candidates classified in the XMM\ LP-total\ catalogue (green dots). The image shows the homogeneous distribution of the sources over the covered field.}
\label{Fig:fgS_spdist}
\end{figure}
Two of the foreground star candidates close to the centre of \object{M~31}\ (\hbox{N$^{\underline{o}}$}\ 826, \hbox{N$^{\underline{o}}$}\ 1\,110) have no entry in the USNO-B1 and LGGS catalogues, and one has no entry in the USNO-B1 R2 and B2 columns (\hbox{N$^{\underline{o}}$}\ 976). However, they are clearly visible on LGGS images, they are 2MASS sources and they fulfil the X-ray hardness ratio selection criteria. Therefore, we also classify them as foreground stars.
The following 19 sources were selected as very red foreground star candidates: \hbox{N$^{\underline{o}}$}\ 54, \hbox{N$^{\underline{o}}$}\ 118, \hbox{N$^{\underline{o}}$}\ 384, \hbox{N$^{\underline{o}}$}\ 391, \hbox{N$^{\underline{o}}$}\ 393, \hbox{N$^{\underline{o}}$}\ 585, \hbox{N$^{\underline{o}}$}\ 646, \hbox{N$^{\underline{o}}$}\ 651, \hbox{N$^{\underline{o}}$}\ 711, \hbox{N$^{\underline{o}}$}\ 1\,038, \hbox{N$^{\underline{o}}$}\ 1\,119, \hbox{N$^{\underline{o}}$}\ 1\,330, \hbox{N$^{\underline{o}}$}\ 1\,396, \hbox{N$^{\underline{o}}$}\ 1\,429, \hbox{N$^{\underline{o}}$}\ 1\,506, \hbox{N$^{\underline{o}}$}\ 1\,605, \hbox{N$^{\underline{o}}$}\ 1\,695, \hbox{N$^{\underline{o}}$}\ 1\,713 and \hbox{N$^{\underline{o}}$}\ 1\,747.
A further 10 sources (\hbox{N$^{\underline{o}}$}\ 210, \hbox{N$^{\underline{o}}$}\ 269, \hbox{N$^{\underline{o}}$}\ 278, \hbox{N$^{\underline{o}}$}\ 310, \hbox{N$^{\underline{o}}$}\ 484, \hbox{N$^{\underline{o}}$}\ 714, \hbox{N$^{\underline{o}}$}\ 978, \hbox{N$^{\underline{o}}$}\ 1\,591, \hbox{N$^{\underline{o}}$}\ 1\,908 and \hbox{N$^{\underline{o}}$}\ 1\,930) fulfil the hardness ratio criteria, but violate the $f_{\mathrm{x}}/f_{\mathrm{opt}}$ criteria and are therefore marked as ``foreground star candidates" in the comment column of Table~5.\@
\begin{figure*}
\subfigure[\hbox{N$^{\underline{o}}$}\ 473]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src23.ps}\label{SubFig:fgS_flare_1}}
\subfigure[\hbox{N$^{\underline{o}}$}\ 780]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src15.ps}\label{SubFig:fgS_flare_2}}\\
\subfigure[\hbox{N$^{\underline{o}}$}\ 1\,551]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src1551.ps}\label{SubFig:fgS_flare_3}}
\subfigure[\hbox{N$^{\underline{o}}$}\ 1\,585]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src1585.ps}\label{SubFig:fgS_flare_4}}\\
\subfigure[\hbox{N$^{\underline{o}}$}\ 1\,676]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src1676.ps}\label{SubFig:fgS_flare_5}}
\subfigure[\hbox{N$^{\underline{o}}$}\ 1\,742]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src16.ps}\label{SubFig:fgS_flare_6}}\\
\subfigure[\hbox{N$^{\underline{o}}$}\ 714]{\includegraphics[scale=0.3, angle=-90]{pics/lc_src5.ps}\label{SubFig:fgS_flare_8}}\\
\caption{X-ray light curves of foreground stars and candidates that, with a binning of 1000\,s, show flares.}
\label{Fig:fgS_flare}
\end{figure*}
Six sources (\hbox{N$^{\underline{o}}$}\ 473, \hbox{N$^{\underline{o}}$}\ 780, \hbox{N$^{\underline{o}}$}\ 1\,551, \hbox{N$^{\underline{o}}$}\ 1\,585, \hbox{N$^{\underline{o}}$}\ 1\,676, \hbox{N$^{\underline{o}}$}\ 1\,742), classified as foreground star candidates, have X-ray light curves that, binned to 1\,000\,s, show flares (see Fig.\,\ref{Fig:fgS_flare}). These observations strengthen the foreground star classification. A seventh source (\hbox{N$^{\underline{o}}$}\ 714) is classified as a foreground star candidate, since its hardness ratios and its $f_{\mathrm{x}}/f_{\mathrm{opt}}$ ratio in the quiescent state fulfil the selection criteria for foreground star candidates. In addition, the source shows a flare throughout observation ss3.\@ As a consequence, the $f_{\mathrm{x}}/f_{\mathrm{opt}}$ ratio for this observation, in which the source is brightest, is too high to be consistent with the range of values expected for foreground stars.
\begin{table}
\begin{center}
\caption{Infrared colours and spectral types of foreground stars that show flares.}
\begin{tabular}{rrrrrr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{\hbox{N$^{\underline{o}}$}} & \multicolumn{1}{c}{J mag} & \multicolumn{1}{l}{H mag} & \multicolumn{1}{c}{K mag} & \multicolumn{1}{c}{SpT$^{*}$}& \multicolumn{1}{c}{err$^{+}$}\\
\hline\noalign{\smallskip}
473 & 12.984 & 12.681 & 12.558 & K0 & 0.4 \\
714 & 14.310 & 13.618 & 13.458 & M0 & 0.2 \\
780 & 14.251 & 13.595 & 13.351 & M3 & 0.1 \\
1551 & 12.666 & 12.009 & 11.806 & M2 & 0.1 \\
1585 & 13.488 & 12.899 & 12.650 & M2 & 0.1 \\
1676 & 10.460 & 9.878 & 9.798 & K1 & 0.2 \\
1742 & 13.722 & 13.138 & 12.896 & M1 & 0.2 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:fgStar_flare}
\end{center}
Notes:\\
$^{ *~}$: spectral type\\
$^{ +~}$: error (in subtypes)
\normalsize
\end{table}
Table~\ref{Tab:fgStar_flare} gives the J, H and K magnitudes taken from the 2MASS catalogue for each of the seven flaring foreground stars. Using the standard calibration of spectral types for dwarf stars based on their near-infrared colours (from the fourth edition of Allen's Astrophysical Quantities, ed.\ A.~N.~Cox, p.\,151), we derived the spectral classification for the objects, using both H$-$K and J$-$K. The spectral types (and ``errors'') given in Table~\ref{Tab:fgStar_flare} are derived by averaging the two classes obtained from the two colours. The spectral types are entirely consistent with those expected for flare stars (usually K and M types).
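The averaging scheme can be illustrated as follows. The calibration nodes below are rough placeholder values for late-type dwarfs, \emph{not} the actual table from Allen's Astrophysical Quantities; only the mechanics (each colour mapped to a numeric subtype, then the two estimates averaged, with the half difference as the quoted error) reflect the procedure described above.

```python
# Numeric subtype scale: K0=0 ... K7=7, M0=8, M1=9, ...
# Calibration nodes (colour, subtype) are illustrative placeholders only.
CAL_JK = [(0.50, 0.0), (0.70, 2.0), (0.85, 7.0), (0.90, 8.0), (1.00, 13.0)]
CAL_HK = [(0.10, 0.0), (0.15, 2.0), (0.20, 7.0), (0.25, 8.0), (0.35, 13.0)]

def interp(colour, table):
    """Linearly interpolate the numeric subtype for a given colour,
    clipping at the ends of the calibration table."""
    if colour <= table[0][0]:
        return table[0][1]
    if colour >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= colour <= x1:
            return y0 + (y1 - y0) * (colour - x0) / (x1 - x0)

def name(subtype):
    """Convert a numeric subtype back to a spectral-type string."""
    s = int(round(subtype))
    return 'K%d' % s if s < 8 else 'M%d' % (s - 8)

def spectral_type(j, h, k):
    """Average the subtypes implied by the J-K and H-K colours; the half
    difference of the two estimates serves as a rough error in subtypes."""
    s_jk = interp(j - k, CAL_JK)
    s_hk = interp(h - k, CAL_HK)
    return 0.5 * (s_jk + s_hk), 0.5 * abs(s_jk - s_hk)
```

With the real calibration table, feeding in the 2MASS magnitudes of Table~\ref{Tab:fgStar_flare} would reproduce the listed types and subtype errors.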
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/fgStar_fluxdist_all_loglog_V5n.ps}}
\caption{Distribution of the source fluxes in the 0.2--4.5\,keV (XID) band. The diagram shows a histogram of the number of foreground stars and candidates per flux bin, in logarithmic scales.}
\label{Fig:fgS_fldist}
\end{figure}
Figure~\ref{Fig:fgS_fldist} shows the XID flux distribution for foreground stars and foreground star candidates, which ranges from 6.9\ergcm{-16} to 2.0\ergcm{-13}. Most of the foreground stars and candidates (257 sources) have fluxes below 5\ergcm{-14}.
\subsubsection{Comparing \textit{XMM-Newton}, \textit{Chandra} and \textit{ROSAT} catalogues}
In the combined {\it ROSAT}\ PSPC survey (SHP97, SHL2001), 55 sources were classified as foreground stars. Of these, 14 sources remain without counterparts in the present {XMM-Newton}\ survey. Five of these 14 sources are located outside the field observed with {XMM-Newton}. Forty-one {\it ROSAT}\ foreground star candidates have counterparts in the XMM\ LP-total\ catalogue. Of these counterparts, 16 were classified as foreground star candidates and four were identified as foreground stars \citep[spectral type from][or SIMBAD]{2009A&A...507..705B,2006A&A...451..835H}.\@ In addition, 12 sources were listed as $<$hard$>$, two as AGN candidates and one as a globular cluster candidate in the XMM\ LP-total\ catalogue. The counterparts of three {\it ROSAT}\ sources remain without classification in the XMM\ LP-total\ catalogue.
Another three {\it ROSAT}\ sources have more than one counterpart in the {XMM-Newton}\ data. Source [SHP97]~109 correlates with sources \hbox{N$^{\underline{o}}$}\ 597, \hbox{N$^{\underline{o}}$}\ 604, \hbox{N$^{\underline{o}}$}\ 606, and \hbox{N$^{\underline{o}}$}\ 645. The former three are classified as $<$hard$>$, while source \hbox{N$^{\underline{o}}$}\ 645 is classified as a foreground star candidate. However, of the four {XMM-Newton}\ counterparts, source \hbox{N$^{\underline{o}}$}\ 645 lies farthest from the position of [SHP97]~109. Furthermore, this source had a flux below the {\it ROSAT}\ detection threshold (by about a factor of 2.6) in the {XMM-Newton}\ observations and is about a factor of 3--34 fainter than the three other possible {XMM-Newton}\ counterparts. Thus it is very unlikely that [SHP97]~109 represents the X-ray emission of a foreground star.
Source [SHL2001]~156 has two {XMM-Newton}\ counterparts and is discussed in Sect.\,\ref{Sec:SSS_comp}.\@ The third source ([SHL2001]~374) correlates with sources \hbox{N$^{\underline{o}}$}\ 1\,922 and \hbox{N$^{\underline{o}}$}\ 1\,924. The two {XMM-Newton}\ sources are classified as $<$hard$>$ and as a foreground star candidate, respectively. In the source catalogue of SHL2001, source [SHP97]~369 is listed as the counterpart of [SHL2001]~374. The source in the first {\it ROSAT}\ survey has a smaller positional error and correlates only with source \hbox{N$^{\underline{o}}$}\ 1\,924. Although this seems to indicate that source \hbox{N$^{\underline{o}}$}\ 1\,924 is the counterpart of [SHL2001]~374, we cannot exclude the possibility that [SHL2001]~374 is a blend of both {XMM-Newton}\ sources, as these two sources have similar luminosities in the {XMM-Newton}\ observations.
\citet{2002ApJ...577..738K} classified four sources as foreground stars. For two sources (\hbox{N$^{\underline{o}}$}\ 960$\hat{=}$r2-42 and \hbox{N$^{\underline{o}}$}\ 976$\hat{=}$r3-33) the classification is confirmed by our study. The third source (\hbox{N$^{\underline{o}}$}\ 1000$\hat{=}$r2-19) remained without classification in the XMM\ LP-total\ catalogue, as it is too soft to be classified as $<$hard$>$ and the optical counterpart found in the LGGS catalogue does not fulfil the $f_{\mathrm{x}}/f_{\mathrm{opt}}$ criteria. The fourth source (r2-46) was not detected in the {XMM-Newton}\ observations.
The foreground star classification of three sources (s1-74, s1-45, n1-82) in \citet{2004ApJ...609..735W} is confirmed by the XMM\ LP-total\ study (\hbox{N$^{\underline{o}}$}\ 289, \hbox{N$^{\underline{o}}$}\ 603, \hbox{N$^{\underline{o}}$}\ 1\,449). For source \hbox{N$^{\underline{o}}$}\ 289 the spectral type F0 was determined \citep{2006A&A...451..835H}.
The source list of DKG2004 contains six sources (s2-46, s2-29, s2-37, s1-45, s1-20, r3-122) that are classified as foreground stars. All six sources are confirmed as foreground star candidates by our {XMM-Newton}\ study (\textit{cf.}\ Table~5). For source \hbox{N$^{\underline{o}}$}\ 696 ($\hat{=}$s1-20) \citet{2006A&A...451..835H} obtained the spectral type G0.
Of the four sources listed as foreground stars in \citet{2007A&A...468...49V} only one source (\hbox{N$^{\underline{o}}$}\ 936$\hat{=}$ [VG2007]~168) was confirmed as a foreground star, based on the entry in the RBC\,V3.5 and \citet{2009AJ....137...94C}. The second source (\hbox{N$^{\underline{o}}$}\ 1\,118$\hat{=}$[VG2007]~180) is listed in the RBC\,V3.5 and \citet{2009AJ....137...94C} as a globular cluster. The third source (\hbox{N$^{\underline{o}}$}\ 829$\hat{=}$[VG2007]~181) does not have a counterpart in the USNO-B1, 2MASS or LGGS catalogues, nor does it fulfil the hardness ratio criteria for foreground stars. Hence, the source is classified as $<$hard$>$. The fourth source ([VG2007]~81) is not spatially resolved from its neighbouring source [VG2007]~79 in our {XMM-Newton}\ observations (source \hbox{N$^{\underline{o}}$}\ 1\,078). Hence source \hbox{N$^{\underline{o}}$}\ 1\,078 is classified as $<$hard$>$.
\subsection{Galaxies, galaxy clusters and AGN}
\label{SubSec:Gal_GCl_AGN}
The majority of background sources belong to the class of active galactic nuclei (AGN), as shown by the deepest available surveys of the X-ray background \citep[][]{2000Natur.404..459M,2001A&A...365L..45H,2005ARA&A..43..827B}. The class of AGN is divided into many sub-sets. The common factor in all the sub-sets is that their emission emanates from a small, spatially unresolved galactic core. The small size of the emitting region is implied by the X-ray flux variability observed in many AGN, on time scales ranging from several minutes to years. The observed X-ray luminosities range from \oexpo{39} to \oergs{46}, sometimes even exceeding \oergs{46}. Although AGN show many different properties, such as the amount of radio emission or the emission line strengths and widths, they are believed to be different facets of one underlying basic phenomenon \citep[\textit{cf.}\ ][]{1995PASP..107..803U}: the accretion of galactic matter onto a supermassive black hole ($\sim\!10^{6}\!-\!10^{9}$\,M\mbox{$_{\odot}$}) in the centre of the galaxy.
It is difficult and, to some extent, arbitrary to distinguish between active and normal galaxies, since most galaxies are believed to host a black hole at their kinetic centre \citep{2005ApJ...631..280B}. In normal galaxies the accretion rate onto the central supermassive black hole is so low that only weak activity -- if any -- can be detected. The overall thermal emission of the nuclear region is due to bremsstrahlung from hot gas. The total X-ray luminosity of a normal galaxy reaches at most some \oergs{41}. It consists of diffuse emission and emission from unresolved individual sources.
Galaxy clusters (GCls) are by far the largest and most massive virialised objects in the Universe. Their masses lie in the range of $10^{14}$--\,$10^{15}$\,M\mbox{$_{\odot}$}\ and they have sizes of a few megaparsecs (Mpc). A mass-to-light ratio of $M/L\!\simeq\!200$\,M\mbox{$_{\odot}$}/L\mbox{$_{\odot}$}\ indicates that galaxy clusters are clearly dominated by their dark matter content. Furthermore, galaxy clusters allow us to study the baryonic matter component, as they define the only large volumes in the Universe from which the majority of baryons emit detectable radiation. This baryonic gas, the {\em hot intracluster medium} (ICM), is extremely thin, with electron densities of $n_{\mathrm{e}}\!\simeq\!10^2$--10$^5$\,m$^{-3}$, and fills the entire cluster volume. Owing to plasma temperatures of $k_{\mathrm{B}}\,T\!\simeq\!2$--10\,keV, the thermal ICM emission gives rise to X-ray luminosities of $L_{\mathrm{X}}\!\simeq\!10^{43}$--\,$3\!\times\!10^{45}$\,erg\,s$^{-1}$, making galaxy clusters the most X-ray luminous objects in the Universe after AGN.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/spdist_bgsrcs_newc.ps}}
\caption{The spatial distribution of background sources and candidates, classified in the XMM\ LP-total\ catalogue. AGN are marked with blue dots, ``normal" galaxies with red dots and galaxy clusters with green dots.}
\label{Fig:BG_spdist}
\end{figure}
We identified four sources as background galaxies and 11 as AGN, and classified 19 galaxy and 49 AGN candidates. The classification is based on SIMBAD and NED correlations and on correlations with sources listed as background objects in the globular cluster catalogues \citep[RBC\,V3.5 and ][]{2009AJ....137...94C}. A source is classified as an AGN candidate if it has a radio counterpart \citep[NVSS;][]{1990ApJS...72..761B,2004ApJS..155...89G}, is neither a SNR nor a SNR candidate according to its X-ray hardness ratios, and is not listed as a ``normal" background galaxy in \citet{2004ApJS..155...89G}. Most AGN will be classified as $<$hard$>$ ((HR2$-$EHR2)$>-$0.2, see Table~\ref{Tab:class}) because of their intrinsic power law component. Additional absorption in the line of sight by the interstellar medium of \object{M~31}\ will lead to an even higher HR2. Only the few AGN whose measured flux is dominated by a component below 1\,keV may end up classified as $<$SNR$>$ or $<$fg Star$>$ in our adapted scheme.
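The hardness ratio cut quoted above can be sketched in a few lines (an illustrative snippet, not the pipeline actually used for the catalogue; the example values are invented):

```python
# Illustrative sketch of the <hard> criterion quoted above: a source is
# flagged <hard> when HR2 - EHR2 > -0.2, i.e. its second hardness ratio
# stays hard even at the soft end of its error interval.
# Not the authors' pipeline code; example values are invented.

def is_hard(hr2, ehr2, threshold=-0.2):
    """True if the source passes the <hard> hardness ratio criterion."""
    return (hr2 - ehr2) > threshold

# An absorbed power-law (AGN-like) source:
print(is_hard(0.35, 0.10))    # True
# A soft thermal (SNR- or foreground-star-like) source:
print(is_hard(-0.60, 0.15))   # False
```

Since absorption along the line of sight only raises HR2, background AGN seen through the disc of \object{M~31}\ pass this cut even more easily.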
One (\hbox{N$^{\underline{o}}$}\ 995) of the four identified galaxies is M~32. An overview of previous X-ray observations of this galaxy is given in PFH2005, where it is also discussed that {\it Chandra}\ resolved the X-ray emission of M~32 into several distinct point sources (maximum separation of the three central {\it Chandra}\ sources: 8\farcs3). Although M~32 is located closer to the centre of the FoV in the observations of field SS1 than it was in the s1 observation used in PFH2005, {XMM-Newton}\ still detects only one source. The remaining three sources (\hbox{N$^{\underline{o}}$}\ 88, \hbox{N$^{\underline{o}}$}\ 403, \hbox{N$^{\underline{o}}$}\ 718) are identified as galaxies, because they are listed as background galaxies in both the RBC\,V3.5 and \citet{2009AJ....137...94C}. For source \hbox{N$^{\underline{o}}$}\ 403 (B\,007) NED gives a redshift of $0.139692\pm0.000230$ \citep{2007AJ....134..706K}.
Eleven X-ray sources are identified as AGN. The first one (\hbox{N$^{\underline{o}}$}\ 363) correlates with a BL Lac object located behind \object{M~31}\ (NED, see also PFH2005). The second source (\hbox{N$^{\underline{o}}$}\ 745) correlates with a Seyfert 1 galaxy (5C~3.100), which has a redshift of $\approx 0.07$ (SIMBAD). The third source (\hbox{N$^{\underline{o}}$}\ 1\,559) correlates with a quasar (Sharov~21) that showed a single strong optical flare, during which its UV flux increased by a factor of $\sim$20 \citep{2010A&A...512A...1M}. The remaining sources were spectroscopically confirmed to be AGN in our optical follow-up observations (D.~Hatzidimitriou, private communication; and Hatzidimitriou et al.~(2010), in prep.).
\begin{table}
\begin{center}
\caption{Spectral fit parameters for extended sources}
\begin{tabular}{rcllc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{Src ID} & \multicolumn{1}{c}{$N_{\rm H}$/10$^{21}$\,cm$^{-2}$} & \multicolumn{1}{l}{$k_{\mathrm{B}}T$/keV} & \multicolumn{1}{c}{redshift} & \multicolumn{1}{c}{$\chi^2$/dof}\\
\hline\noalign{\smallskip}
141 & $1.19^{+1.63}_{-0.88}$ & $2.17^{+2.30}_{-0.68}$ & $0.24^{+1.24}_{-0.11}$ & 78.5/53\\
\noalign{\smallskip}
252 & $0.61^{+1.16}_{-0.43}$ & $1.95^{+0.64}_{-0.29}$ & $0.22^{+0.15}_{-0.07}$ & 56.4/151\\
\noalign{\smallskip}
304 & $2.68^{+2.64}_{-1.85}$ & $0.95^{+3.32}_{-1.95}$ & $0.12^{+0.07}_{-0.05}$ & 50.9/57\\
\noalign{\smallskip}
1543 & $2.74^{+6.91}_{-1.76}$ & $2.08^{+2.31}_{-1.11}$ & $0.61^{+1.11}_{-0.26}$ & 32.9/34\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:spfit_ext}
\end{center}
\normalsize
\end{table}
In Sect.\,\ref{Sec:ExtSrcs} the 12 extended sources in the XMM\ LP-total\ catalogue were presented. \citet{2006ApJ...641..756K} showed that the brightest of these sources (\hbox{N$^{\underline{o}}$}\ 1\,795) is a galaxy cluster located at a redshift of $z\!=\!0.29$.
For the remaining 11 sources, X-ray spectra were created and fitted with the {\tt MEKAL} model in {\tt XSPEC}. Unfortunately, for most of the examined sources the spectral parameters (foreground absorption, temperature and redshift) are not well constrained. Nevertheless, four sources (\hbox{N$^{\underline{o}}$}\ 141, \hbox{N$^{\underline{o}}$}\ 252, \hbox{N$^{\underline{o}}$}\ 304, \hbox{N$^{\underline{o}}$}\ 1\,543) with temperatures in the range of $\sim\!1$--2\,keV and proposed redshifts between 0.1 and 0.6 were found (Table~\ref{Tab:spfit_ext}). Inspection of optical images (DSS\,2 and, where available, LGGS images) revealed an agglomeration of optical sources at the positions of these four extended X-ray sources. Thus they are classified as galaxy cluster candidates.
Although B242 (the optical counterpart of source \hbox{N$^{\underline{o}}$}\ 304) is listed as a globular cluster candidate in the RBC\,V3.5 catalogue, \citet{2009AJ....137...94C} classified this source as a background object. Our X-ray findings favour the background object classification. Hence a globular cluster classification for this source seems to be excluded.
Source \hbox{N$^{\underline{o}}$}\ 1\,912 was already classified as a galaxy cluster candidate in PFH2005. The spectrum confirms this classification. The best fit parameters are \hbox{$N_{\rm H}$}$=\!1.29^{+0.53}_{-0.41}$\hcm{21}, $k_{\mathrm{B}}T\!=\!2.8^{+0.8}_{-0.5}$\,keV and a redshift of $0.06^{+0.03}_{-0.04}$.
A plot of the spatial distribution of the classified\,/\,identified background sources is given in Fig.\,\ref{Fig:BG_spdist}, which shows that these sources are rather homogeneously distributed over the observed field. However, in the fields located along the major axis of \object{M~31}\ we mainly see AGN, which are bright enough to be visible through \object{M~31}, while most of the galaxies and galaxy clusters are detected in the outer fields.
\subsubsection{Comparing \textit{XMM-Newton}, \textit{Chandra} and \textit{ROSAT} catalogues}
Of the ten {\it ROSAT}\ PSPC survey sources classified as background galaxies one is located outside the field of the Deep {XMM-Newton}\ Survey. The remaining objects are confirmed to be background sources and are classified or identified as galaxies or AGN. The only case worth discussing in more detail is the source pair [SHP97]~246 and [SHL2001]~252. From the {XMM-Newton}\ observations it is evident that this source pair is not one source, as indicated in the combined {\it ROSAT}\ PSPC source catalogue (SHL2001), but consists of three individual sources (\hbox{N$^{\underline{o}}$}\ 1\,269, \hbox{N$^{\underline{o}}$}\ 1\,279 and \hbox{N$^{\underline{o}}$}\ 1\,280). [SHL2001]~252 correlates spatially with all three {XMM-Newton}\ sources, while [SHP97]~246 correlates only with source \hbox{N$^{\underline{o}}$}\ 1\,269, which is identified as a foreground star of type K2 (SIMBAD). The two other {XMM-Newton}\ counterparts of [SHL2001]~252 are classified as a galaxy candidate and an AGN candidate, respectively. In summary, [SHL2001]~252 is most likely a blend of both background sources and maybe even a blend of all three {XMM-Newton}\ sources, while [SHP97]~246 seems to be the X-ray counterpart of the foreground star mentioned above.
\citet{2002ApJ...577..738K} classified source r3-83 (\hbox{N$^{\underline{o}}$}\ 1\,132) as an extragalactic object, as it is listed in SIMBAD and NED as an emission line object. Following PFH2005, we classified source \hbox{N$^{\underline{o}}$}\ 1\,132 as $<$hard$>$. The BL Lac object (\hbox{N$^{\underline{o}}$}\ 363) was also detected in {\it Chandra}\ observations \citep{2004ApJ...609..735W}.
\section{M~31 sources}
\label{Sec:Srcsm31}
\subsection{Supersoft sources}
Supersoft source (SSS) classification is assigned to sources showing extremely soft spectra with equivalent blackbody temperatures of $\sim$15--80\,eV. The associated bolometric luminosities are in the range of \oexpo{36}--\oergs{38} \citep[][]{1997ARA&A..35...69K}.
Because of the phenomenological definition, this class is likely to include objects of several types. The favoured model for these sources is that they are close binary systems with a white dwarf (WD) primary, burning hydrogen on the surface \citep[\textit{cf.}\ ][]{1997ARA&A..35...69K}. Close binary SSSs include post-outburst, recurrent, and classical novae, the hottest symbiotic stars, and other LMXBs containing a WD (cataclysmic variables, CVs).\@ Symbiotic systems, which contain a WD in a wide binary system, may also be observed as SSSs \citep[][]{1997ARA&A..35...69K}. Because the burned matter can be retained by the WD, steadily increasing its mass, some SSS binaries may be progenitors of type-Ia supernovae \citep[\textit{cf.}\ ][]{1992A&A...262...97V}.
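A quick consistency check of the hydrogen-burning WD picture (a back-of-the-envelope sketch with assumed round numbers, not catalogue values): a blackbody with $k_{\mathrm{B}}T\sim50$\,eV radiating $\sim$\oergs{38} must have a radius of order $10^{9}$\,cm, i.e. the size of a white dwarf.

```python
import math

# Back-of-the-envelope check of the WD interpretation of SSSs.
# Assumed round numbers (not catalogue values): kT = 50 eV, L_bol = 1e38 erg/s.
# R = sqrt(L / (4 pi sigma_SB T^4)) should come out white-dwarf sized (~1e9 cm).

SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
K_B_EV = 8.617e-5     # Boltzmann constant [eV / K]

def blackbody_radius(kT_eV, L_bol):
    """Radius [cm] of a spherical blackbody with temperature kT [eV]
    and bolometric luminosity L_bol [erg/s]."""
    T = kT_eV / K_B_EV   # temperature in Kelvin (~5.8e5 K for 50 eV)
    return math.sqrt(L_bol / (4.0 * math.pi * SIGMA_SB * T**4))

R = blackbody_radius(50.0, 1e38)
print(f"R = {R:.2e} cm")   # ~1.1e9 cm, comparable to a white dwarf radius
```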
The XMM\ LP-total\ catalogue contains 30 SSS candidates that were selected on the basis of their hardness ratios (see Fig.\,\ref{Fig:HR_diagrams} and Table~\ref{Tab:class}).
\subsubsection{Spatial and flux distribution}
Figure \ref{Fig:SSS_spdist} shows the spatial distribution of the SSSs. Clearly visible is a concentration of sources in the central field. There are two explanations for this central enhancement. The first is that the central region was observed more often than the remaining fields, so there is a higher chance of catching a transient SSS in outburst. The second is that the major class of SSSs in the centre of \object{M~31}\ is optical novae (PFF2005, PHS2007). Optical novae belong to the old stellar population, which is much denser in the centre of \object{M~31}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/spdist_SSS_new4.ps}}
\caption{The spatial distribution of SSSs classified in the XMM\ LP-total\ catalogue. The positions of the SSSs are marked with red and green dots. Sources that correlate with optical novae are given in green. An enhancement of sources in the central field is clearly visible.}
\label{Fig:SSS_spdist}
\end{figure}
Figure \ref{Fig:SSS_fldist} gives the distribution of 0.2--1.0\,keV source fluxes for all SSSs (black) and for those correlating with optical novae (blue). The unabsorbed fluxes were determined assuming a 50\,eV blackbody model (PFF2005). The two brightest SSSs ($F_{\mathrm{X}}>$\oergcm{-12}) are a persistent source with 217\,s pulsations \citep[\hbox{N$^{\underline{o}}$}\ 1\,061;][]{2008ApJ...676.1218T} and the nova M31N~2001-11a \citep[\hbox{N$^{\underline{o}}$}\ 1\,416;][]{2006IBVS.5737....1S}. A large fraction of the SSSs are rather faint, with fluxes below 5\ergcm{-14}. Four sources have absorption-corrected luminosities below \oergs{36} (0.2--1.0\,keV), the lower end of the luminosity range quoted above for SSSs. This does not necessarily imply that these sources are not SSSs, since the chosen blackbody fit may not represent their properties well. A higher absorption or a lower temperature would lead to increased unabsorbed luminosities. We also have to take into account that we might have observed the sources during a phase of rising or decaying luminosity, \ie\ not at maximum luminosity.
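For reference, the flux-to-luminosity conversion behind these numbers is a simple inverse-square scaling; the sketch below assumes the commonly adopted \object{M~31}\ distance of 780\,kpc (an assumption of this example, not a value quoted in this section):

```python
import math

# Sketch of the flux-to-luminosity conversion used for the SSS numbers above,
# assuming an M 31 distance of 780 kpc (an assumption of this example).

KPC_CM = 3.0857e21            # 1 kpc in cm
D_M31 = 780.0 * KPC_CM        # assumed distance to M 31 [cm]

def luminosity(flux):
    """Isotropic luminosity [erg/s] for an unabsorbed flux [erg cm^-2 s^-1]."""
    return 4.0 * math.pi * D_M31**2 * flux

# The faint-SSS flux regime quoted above:
print(f"{luminosity(5e-14):.1e} erg/s")   # ~3.6e36 erg/s
```

A flux of 5\ergcm{-14} thus corresponds to a few times $10^{36}$\,erg\,s$^{-1}$, close to the lower end of the SSS luminosity range.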
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/SSS_fluxdist_all_log_V5n.ps}}
\caption{Distribution of the source fluxes in the 0.2--1.0\,keV band. The diagram shows the number of SSSs per flux bin plotted versus the flux in logarithmic scale.
The blue histogram gives the distribution of SSSs correlating with optical novae.}
\label{Fig:SSS_fldist}
\end{figure}
\subsubsection{Correlations with optical novae}
\label{SubSec:opt_novae}
By cross-correlating with the nova catalogue\footnote{\url{http://www.mpe.mpg.de/~m31novae/opt/m31/M31_table.html}} indicated in Sect.\,\ref{Sec:CrossCorr_Tech}, 14 of the 30 SSSs can be classified as X-ray counterparts of optical novae. Of these 14 novae, eight (\hbox{N$^{\underline{o}}$}\ 748, \hbox{N$^{\underline{o}}$}\ 993, \hbox{N$^{\underline{o}}$}\ 1\,006, \hbox{N$^{\underline{o}}$}\ 1\,046, \hbox{N$^{\underline{o}}$}\ 1\,051, \hbox{N$^{\underline{o}}$}\ 1\,076, \hbox{N$^{\underline{o}}$}\ 1\,100, and \hbox{N$^{\underline{o}}$}\ 1\,236) are already discussed in PFF2005 and PHS2007.\@ Nova M31N~2001-11a was first detected as a supersoft X-ray source. Motivated by that SSS detection, \citet{2006IBVS.5737....1S} found an optical nova at the position of the SSS in archival optical plates which had been overlooked in previous nova searches. Nova M31N~2007-06b has been discussed in \citet[][]{2009A&A...500..769H}. The remaining four novae are discussed individually in more detail below.
As was shown in the {XMM-Newton}/{\it Chandra}\ \object{M~31}\ nova monitoring project\footnote{\url{http://www.mpe.mpg.de/~m31novae/xray/index.php}}, a homogeneous and dense sample of deep optical and X-ray observations is absolutely necessary to study optical novae and their connection to supersoft X-ray sources. In the optical, the outer regions of \object{M~31}\ are regularly observed down to a limiting magnitude of $\sim$17\,mag (Texas Supernova Search (TSS); \citealt{Quimby2006}), while in X-rays only ``snapshots" are available. Hence, the correlations of optical novae with detected SSSs have to be regarded as chance coincidences. That also means that the identified nova counterparts are detected at a random stage of their SSS evolution, which does not allow us to constrain the exact start or end point of the SSS phase, nor the maximum luminosity of the SSS. We also cannot exclude the possibility that some of the SSSs observed in the outer parts of \object{M~31}\ correspond to the supersoft phase of optical novae whose optical outburst was missed. In the outer regions of \object{M~31}, the samples of optical novae and X-ray SSSs are certainly incomplete, owing to the rather high luminosity limit of the optical monitoring and the lack of complete monitoring in X-rays, respectively. One should therefore be cautious in deriving properties of the disc nova population of \object{M~31}\ from the available data.
\paragraph{Nova M31N~1997-10c} was detected on 2 October 1997 at a B-band magnitude of $16.6$ \citep[ShA~58;][]{1998AstL...24..641S}.\@ An upper limit of 19\,mag on 29 September 1997 was reported by the same authors. They classified this source as a very fast nova. In the {XMM-Newton}\ observation c1 (25 June 2000), an SSS (\hbox{N$^{\underline{o}}$}\ 871), located within $\sim$1\farcs9 of the optical nova, was detected. The source was fitted with an absorbed blackbody model. The formal best fit parameters of the {XMM-Newton}\ EPIC PN spectrum are: absorption $N_{\mathrm{H}}\approx3.45$\hcm{21} and $k_{\mathrm{B}}T\approx41$\,eV.\@ The unabsorbed luminosity in the 0.2--1\,keV band is $\approx5.9$\ergs{37}.\@ Confidence contours for absorption column density and blackbody temperature are shown in Fig.\,\ref{Fig:M31N1997-10c_ccont}. In the subsequent {XMM-Newton}\ observation of that region, taken about half a year later (c2; 27 December 2000), the source is not detected.
Although the source position is covered in observations c3 (29 June 2001), c4 (6/7 January 2002) and b (16--19 July 2004) the source was not re-detected. Using the count rates derived for the variability study (see Sect.\,\ref{Sec:var}) and assuming the same spectrum for the source as in observation c1, upper limits of the source luminosity can be derived, which are given in Table~\ref{Tab:M31N1997-10c_uplim}.
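The upper-limit derivation can be sketched as follows: the $3\sigma$ count-rate limit is multiplied by an energy conversion factor (ECF) appropriate for the c1 spectral model to obtain a flux limit, which is then scaled to a luminosity at an assumed \object{M~31}\ distance of 780\,kpc. The rate and ECF below are invented placeholder values for illustration, not the ones used in the paper:

```python
import math

# Sketch of the count-rate -> luminosity upper-limit conversion described
# above. The ECF and rate limit are invented placeholders; the M 31 distance
# of 780 kpc is an assumption of this example.

KPC_CM = 3.0857e21
D_M31 = 780.0 * KPC_CM               # assumed distance to M 31 [cm]

def luminosity_limit(rate_limit, ecf):
    """rate_limit: 3-sigma count-rate limit [cts/s];
    ecf: unabsorbed flux per unit count rate [erg cm^-2 per count];
    returns the luminosity upper limit [erg/s]."""
    flux_limit = rate_limit * ecf
    return 4.0 * math.pi * D_M31**2 * flux_limit

print(f"{luminosity_limit(2e-3, 7e-11):.1e} erg/s")   # ~1e37 erg/s
```

For very soft, strongly absorbed blackbody spectra the ECF is large, because only a small fraction of the emitted flux produces detectable counts.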
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=90]{pics/grid_c1_930_bb.ps}}
\caption{Column density-temperature confidence contours inferred from the fit to the {XMM-Newton}\ EPIC PN spectrum of M31N1997-10c. The formal best fit parameters are indicated by the star. Also drawn are lines of constant bolometric luminosity (in erg s$^{-1}$).
The vertical dashed line indicates the Galactic foreground absorption in the direction of \object{M~31}.}
\label{Fig:M31N1997-10c_ccont}
\end{figure}
\begin{table}[t]
\begin{center}
\caption{3$\sigma$ upper limits for the absorption-corrected luminosities for Nova M31N~1997-10c}
\begin{tabular}{cc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{l}{observation} & \multicolumn{1}{c}{$L_{\rm X}$/10$^{37}$\,erg\,s$^{-1}$ (0.2--1.0\,keV)}\\
\hline\noalign{\smallskip}
c2 & 10.8$^{+}$\\
c3 & 1.9\\
c4 & 1.0\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:M31N1997-10c_uplim}
\end{center}
Notes:\\
$^{+}$: The count rate detected in observation c2 corresponds to a luminosity of 2.4$\pm$2.8\ergs{37}, which results in the upper limit given in the table. The fact that this upper limit is higher than the luminosity detected in observation c1 is, at least in part, due to the very short effective observing time of less than 6\,000\,s.
\normalsize
\end{table}
\paragraph{Nova M31N~2005-01b} was discovered on 19 January 2005 at a white light magnitude of 16.3 by R.~Quimby.\footnote{\url{http://www.supernovae.net/sn2005/novae.html}} An SSS (\hbox{N$^{\underline{o}}$}\ 764) that correlates with the optical nova (distance: 4\farcs3; 3$\sigma$ error: 5\farcs5) was found in observation ss2, taken on 8 July 2006, 535 days after the discovery of the optical nova. Owing to the severe background screening applied to observation ss2, the statistics are insufficient to obtain a spectrum of the X-ray source. To estimate the spectral properties of this source we created a spectrum in the 0.2--0.8\,keV range from the \emph{unscreened} data. Although the spectrum was background corrected, we cannot fully exclude a contribution from background flares.
The spectrum is best fitted by an absorbed blackbody model with an absorption of $N_{\mathrm{H}}\approx1.03$\hcm{21} and a blackbody temperature of $k_{\mathrm{B}}T\approx45$\,eV.\@ The unabsorbed 0.2--1\,keV luminosity is $L_{\mathrm{X}}\sim$1.0\ergs{37}. In another {XMM-Newton}\ observation, taken 1\,073 days after the optical outburst (ss21; 28 December 2007), the X-ray source is no longer visible. The 3$\sigma$ upper limit on the unabsorbed source luminosity is $\sim3.3$\ergs{35} in the 0.2--4.5\,keV band, assuming the spectral model used for source detection.
\paragraph{Nova M31N~2005-01c} was discovered on 29 January 2005 at a white light magnitude of 16.1 by R.~Quimby.\footnote{\url{http://www.supernovae.net/sn2005/novae.html}} In the {XMM-Newton}\ observation from 2 January 2007 (ns2, 703 days after the optical outburst) an SSS (\hbox{N$^{\underline{o}}$}\ 1\,675) was detected at a position consistent with that of the optical nova (distance: 0\farcs9).
The X-ray spectrum (Fig.\,\ref{Fig:M31N2005-01c_spec}) can be well fitted by an absorbed blackbody model with the following best fit parameters: absorption $N_{\mathrm{H}}=1.58^{+0.65}_{-0.45}$\hcm{21} and $k_{\mathrm{B}}T=40\pm6$\,eV.\@ The unabsorbed 0.2--1\,keV luminosity is $L_{\mathrm{X}}\sim$1.2\ergs{38}. Confidence contours for absorption column density and blackbody temperature are shown in Fig.\,\ref{Fig:M31N2005-01c_ccont}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-0]{pics/ns2_5_20_all.ps}}
\caption{{XMM-Newton}\ EPIC spectrum of nova M31N~2005-01c. The absorbed blackbody fit to the data is shown in the upper panel.}
\label{Fig:M31N2005-01c_spec}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/grid_ns2_5_20_all_bb.ps}}
\caption{Column density (\hbox{$N_{\rm H}$}) - temperature ($k_{\mathrm{B}}T$) confidence contours inferred from the blackbody fit to the {XMM-Newton}\ EPIC spectrum of M31N~2005-01c (see Fig.\,\ref{Fig:M31N2005-01c_spec}). The formal best fit parameters are indicated by the star. Also drawn are lines of constant bolometric luminosity and the vertical dashed line indicates the Galactic foreground absorption (see Fig~\ref{Fig:M31N1997-10c_ccont}).}
\label{Fig:M31N2005-01c_ccont}
\end{figure}
\paragraph{Nova M31N~2005-09b} was discovered in optical images taken on 1 and 2 September 2005 at white light magnitudes of $\sim$18.0 and $\sim$16.5, respectively. For 31 August 2005, an upper limit of $\sim$18.7\,mag was reported \citep{2005ATel..600....1Q}.\@ The nova was spectroscopically confirmed \citep{2006ATel..850....1P} and classified as a possible Fe\,{\small II} or hybrid nova\footnote{\url{http://cfa-www.harvard.edu/iau/CBAT_M31.html}}. An X-ray counterpart (\hbox{N$^{\underline{o}}$}\ 92) was detected in the {XMM-Newton}\ observation s3 (299 days after the optical outburst). Its position is consistent with that of the optical nova (distance: 0\farcs57). As observation s3 was heavily affected by background flares, we could only estimate the spectral parameters from the \emph{unscreened} data (see also the paragraph on Nova M31N~2005-01b). A blackbody fit of the 0.2--0.8\,keV spectrum gives $N_{\mathrm{H}}\approx2.7$\hcm{21}, $k_{\mathrm{B}}T\approx35$\,eV, and an unabsorbed 0.2--1\,keV luminosity of $L_{\mathrm{X}}\sim$5.4\ergs{38}. The X-ray source was no longer visible in observation s31, taken 391 days after observation s3.
\subsubsection{Comparing \textit{XMM-Newton}, \textit{Chandra} and \textit{ROSAT} catalogues}
\label{Sec:SSS_comp}
The results and a detailed discussion of a study of the long-term variability of the SSS population of \object{M~31}\ are presented in \citet{2010AN....331..212S}. In summary our comparative study of SSS candidates in \object{M~31}\ detected with {\it ROSAT}, {\it Chandra}\ and {XMM-Newton}\ demonstrated that strict selection criteria have to be applied to securely select SSSs. It also underlined the high variability of the sources in this class and the connection between SSSs and optical novae.
\subsection{Supernova remnants}
\label{Sec:SNR_Diss}
After a supernova explosion, the interaction between the ejected material and the ISM forms a supernova remnant (SNR).\@ SNR X-ray luminosities typically range between $10^{35}$ and \oergs{37} (0.2--10\,keV).\@
SNRs can be divided into two categories: (i) sources where thermal components dominate the X-ray spectrum below 2\,keV, and (ii) the so-called ``plerions" or Crab-like SNRs with power law spectra. The former are located in areas of the X-ray colour/colour diagrams that overlap only with foreground star loci. If we assume that we have identified all foreground star candidates from the optical correlation and inspection of the optical images, the remaining sources can be classified as SNR candidates using the criteria given in Table~\ref{Tab:class}. Similar criteria were used to select supernova remnant candidates in {XMM-Newton}\ observations of M~33 \citep{2004A&A...426...11P, 2006A&A...448.1247M}. \citet{2005AJ....130..539G} and \citet{2010ApJS..187..495L} confirmed the supernova remnant nature of many of these candidates based on optical and radio follow-up observations. They also used a hardness ratio criterion to select supernova remnant candidates from {\it Chandra}\ data.
An X-ray source is classified as a SNR candidate if it either fulfils the hardness ratio criterion given in Table~\ref{Tab:class} (25 such sources), or if it correlates with a known optical or radio SNR candidate (six sources). The sources classified as SNR candidates on the basis of the latter criterion alone are marked in the comment column of Table~5 with the flag `\emph{only correlation}'. As these six SNR candidates would be classified as $<$hard$>$ on the basis of their hardness ratios, they are good candidates for being ``plerions". SNRs are taken as identified when they coincide with SNR candidates from the optical or radio and fulfil the hardness ratio criterion. For a discussion of the detection of SNRs in different wavelength bands see \citet{2010ApJS..187..495L}. Altogether, we identified 25 SNRs and 31 SNR candidates in the XMM\ LP-total\ catalogue.
This number is in the range expected from an extrapolation of the X-ray detected SNRs in the Milky Way, as shown below. Assuming that our own Galaxy contains about 1\,440 X-ray sources brighter than $\sim$1\ergs{35} \citep{2006A&A...452..169R}, of which $\sim$110 are SNRs detected in X-rays \citep{2009BASI...37...45G}, we would expect to detect $\sim$50 SNRs in the XMM\ LP-total\ catalogue ($\left(110/1\,440\right)\times0.4\times\left(1\,897~\mathrm{sources} - 263~\mathrm{fg~stars}\right)$). This number is in good agreement with the number of identified and classified SNRs.
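The estimate above can be reproduced in one line: scale the Galactic fraction of X-ray detected SNRs (110 of $\sim$1\,440 sources) to the non-foreground part of the catalogue, keeping the factor 0.4 used in the text:

```python
# Reproducing the back-of-the-envelope SNR estimate above: scale the
# Galactic X-ray-detected SNR fraction (110 of ~1440 sources) to the
# non-foreground-star part of the XMM LP-total catalogue, keeping the
# factor 0.4 used in the text.

galactic_snr_fraction = 110 / 1440          # ~7.6% of Galactic X-ray sources
m31_sources = 1897 - 263                    # catalogue sources minus fg stars
expected_snrs = 0.4 * m31_sources * galactic_snr_fraction
print(round(expected_snrs))                 # 50
```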
The XID fluxes for SNRs range between 5.9\ergcm{-14} for source \hbox{N$^{\underline{o}}$}\ 1\,234 and 1.5\ergcm{-15} for source \hbox{N$^{\underline{o}}$}\ 419. These fluxes correspond to luminosities of 4.3\ergs{36} to 1.1\ergs{35}. A diagram of the flux distribution of the detected SNRs and candidates is shown in Fig.\,\ref{Fig:SNR_fldist}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/SNR_fluxdist_all_log_V5m.ps}}
\caption{Distribution of SNR fluxes in the 0.2--4.5\,keV (XID) band. The diagram shows the number of identified and classified SNRs per flux bin, plotted versus the flux.}
\label{Fig:SNR_fldist}
\end{figure}
Among the 25 identified SNRs, there are 20 SNRs from the PFH2005 catalogue. Source [PFH2005]~146, which correlates with the radio source [B90]~11 and the SNR candidate BA146, was not found in the present study.
Source [SPH2008]~858 (\hbox{N$^{\underline{o}}$}\ 1\,050) was re-detected. It coincides with a source reported as a ring-like extended object in {\it Chandra}\ observations, which was also detected in the optical and radio wavelength regimes and identified as a SNR \citep{2003ApJ...590L..21K}. Of the 31 SNR candidates, ten have been reported by PFH2005.
In the following, we first discuss in more detail the four remaining identified SNRs that appear in the new catalogue but were not included in PFH2005:
\paragraph{XMMM31~J003923.5+404419} (\hbox{N$^{\underline{o}}$}\ 182) was classified as a SNR candidate from its [S\,{\small II}]:H$\alpha$ ratio. It appears as an \textsl{`irregular ring with southerly projection'} \citep[][and Fig.\,\ref{Fig:src182_opt}]{1980A&AS...40...67D} and correlates with a radio source \citep{1969MNRAS.144..101P}. X-ray radiation of that source was first detected in the present study.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90]{pics/LGSimage_182_new2.ps}}
\caption{H$\alpha$, R, S\,{\small II} and O\,{\small III} images, taken from the LGG Survey. Over-plotted is a circle at the position of source XMMM31~J003923.5+404419 with a radius of 5\farcs5 (3$\sigma$ positional error of the X-ray source).\@ The ring-like SNR is clearly visible in the H$\alpha$ and S\,{\small II} bands.}
\label{Fig:src182_opt}
\end{figure}
\paragraph{XMMM31~J004413.5+411954} (\hbox{N$^{\underline{o}}$}\ 1\,410) was classified as a SNR candidate from its [S\,{\small II}]:H$\alpha$ ratio \citep{1993A&AS...98..327B,1995A&AS..114..215M}. From Fig.\,\ref{Fig:src1410_opt} we can see that the source \textsl{`appears as a bright knot'}, as was already reported by \citet{1993A&AS...98..327B}. The source has counterparts in the radio \citep{1990ApJS...72..761B} and X-ray (SHP97) range. It was reported as a SNR by SHP97.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90]{pics/LGSimage_1410_new2.ps}}
\caption{H$\alpha$, R, S\,{\small II} and O\,{\small III} images, taken from the LGG Survey. Over-plotted is a circle at the position of source XMMM31~J004413.5+411954 with a radius of 3\farcs6 (3$\sigma$ positional error of the X-ray source).\@ The SNR `appears as a bright knot'.}
\label{Fig:src1410_opt}
\end{figure}
\paragraph{XMMM31~J004510.5+413251 and XMMM31~J004512.3+420029}
(\hbox{N$^{\underline{o}}$}\ 1\,587 and \hbox{N$^{\underline{o}}$}\ 1\,593, respectively) are new X-ray detections and correlate with the radio sources \#354 and \#365 in the list of \citet{1990ApJS...72..761B}. Source \hbox{N$^{\underline{o}}$}\ 1\,587 also correlates with source 37W209 from the catalogue of \citet{1985A&AS...61..451W}. No optical counterparts were reported in the literature.
\vspace{5mm}
In the following, we discuss two SNR candidates in more detail:
\paragraph{XMMM31~J004434.8+412512} (\hbox{N$^{\underline{o}}$}\ 1\,481) lies in the periphery of a super-shell with [S\,{\small II}]:H$\alpha\!>$0.5 \citep[][src 490]{1993A&AS...98..327B}. Located next to this source is a SNR candidate reported in \citet[][src 3-086]{1995A&AS..114..215M}, which has a radio counterpart from the NVSS catalogue. \hbox{N$^{\underline{o}}$}\ 1\,481 also correlates with {\it ROSAT}\ source [SPH97]~284, which was identified as a SNR in SPH97 due to its spatial correlation with source 3-086.\@ Figure~\ref{Fig:src1481_opt} shows the {XMM-Newton}\ error circle over-plotted on LGGS images. From the {XMM-Newton}\ source position it looks more likely that the X-rays are emitted from the \hbox{H\,{\sc ii}}\ region rather than from the SNR candidate visible in the optical and radio wavelengths. Nevertheless, the detected {XMM-Newton}\ source is point-like and its hardness ratios lie in the range expected for SNRs. If the X-ray emission originated from the \hbox{H\,{\sc ii}}-region, it should have been detected as spatially extended emission. Thus, \hbox{N$^{\underline{o}}$}\ 1\,481 is classified as a SNR candidate. A puzzling fact, however, is the pronounced variability between {\it ROSAT}\ and {XMM-Newton}\ observations of $F_{\mathrm{var}}\!=\!9.82$ with a significance of $S_{\mathrm{var}}\!\approx\!4$ (see Table~\ref{Tab:VarSNRs1}), which is not consistent with the long-term behaviour of SNRs. There is still the possibility that the detected X-ray emission does not belong to either the \hbox{H\,{\sc ii}}-region or a SNR at all.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90]{pics/LGSimage_1481_new2.ps}}
\caption{H$\alpha$, R, S\,{\small II} and O\,{\small III} images, taken from the LGG Survey. Over-plotted is a blue circle at the position of source XMMM31~J004434.8+412512 with a radius of 5\farcs9 (3$\sigma$ positional error of the X-ray source).\@ Source [SPH97]~284 is indicated by a black circle with a radius of 21\hbox{$^{\prime\prime}$}\ (3$\sigma$ positional error), source 3-086 by the magenta circle with a radius of 10\hbox{$^{\prime\prime}$}; the position of the radio counterpart is marked by the yellow circle.}
\label{Fig:src1481_opt}
\end{figure}
\paragraph{XMMM31~J004239.8+404318} (\hbox{N$^{\underline{o}}$}\ 969) was already observed with {\it ROSAT}\ (SHP97, SHL2001) and {\it Chandra}\ \citep[][s1-84]{2004ApJ...609..735W}. No optical counterpart is visible on the LGGS images. The X-ray spectrum, which is shown in Fig.\,\ref{Fig:src969_sp}, is well fitted by an absorbed non-equilibrium ionisation model with the following best fit values: an absorption of $N_{\mathrm{H}}=1.76^{+0.46}_{-0.60}$\hcm{21}, a temperature of $k_{\mathrm{B}}T=219^{+32}_{-19}$\,eV, and an ionisation timescale of $\tau=1.75^{+0.82}_{-1.75}\times10^8$\,s\,cm$^{-3}$. The unabsorbed 0.2--5\,keV luminosity is $L_{\mathrm{X}}\sim6.5$\ergs{37}.\@ The soft spectrum with a temperature of $\sim$200\,eV is in good agreement with spectra of old SNRs \eg\ in the SMC \citep{2008A&A...485...63F}. Although the unabsorbed luminosity is rather high for an old SNR, it is still in the range found for other SNRs \citep[\textit{cf.}\ ][]{2002ApJ...580L.125K,2007ApJ...663..234G}. Hence, XMMM31~J004239.8+404318 is classified as a SNR candidate.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90]{pics/spectrum_971n.ps}}
\caption{0.2--3.0\,keV EPIC spectrum of source \hbox{N$^{\underline{o}}$}\ 969. The best fit absorbed non-equilibrium ionisation model is indicated by the solid lines.}
\label{Fig:src969_sp}
\end{figure}
\subsubsection{Comparing SNRs and candidates in \textit{XMM-Newton}, \textit{Chandra} and \textit{ROSAT} catalogues}
The second {\it ROSAT}\ PSPC catalogue (SHL2001) contains 16 sources classified as SNRs. The counterparts of 12 of these sources are also classified as SNRs or SNR candidates in the XMM\ LP-total\ catalogue.
\begin{table*}
\scriptsize
\begin{center}
\caption{Flux comparison of SNRs and SNR candidates from the XMM\ LP-total\ catalogue with counterparts classified as SNRs in {\it ROSAT}\ and {\it Chandra}\ catalogues}
\begin{tabular}{rrccrcrrl}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\hbox{N$^{\underline{o}}$}\ & \multicolumn{1}{c}{XLPt} & \multicolumn{1}{c}{SHP97} & \multicolumn{1}{c}{SHL2001} & \multicolumn{1}{c}{KGP2002$^{+}$} & \multicolumn{1}{c}{WGK2004$^{+}$} & fvar & svar & reason why the indicated variability is not reliable\\
& \multicolumn{5}{c}{XID Flux with error in \oergcm{-15}} & & &\\
\noalign{\smallskip}
\hline\noalign{\smallskip}
474 & 5.27 $\pm$ 0.56 & 21.18 $\pm$ 4.46 & & & & 4.01 & 3.54 \\
668 & 7.94 $\pm$ 1.36 & 26.30 $\pm$ 6.69 & & & & 3.31 & 2.69 \\
883 & 2.83 $\pm$ 0.33 & & & 3.33 $\pm$ 0.83 & & 1.18 & 0.56 \\
1\,040 & 7.12 $\pm$ 0.47 & & & 12.49 $\pm$ 1.67 & & 1.75 & 3.11 \\
1\,050 & 8.25 $\pm$ 0.70 & & & 2.50 $\pm$ 0.83 & & 3.30 & 5.28 \\
1\,066 & 28.35 $\pm$ 1.16 & & 256.16$\pm$ 16.19 & 39.13 $\pm$ 3.33 & 25.29 $\pm$ 5.32 & 10.13 & 14.06 & {\it ROSAT}\ source is a blend of two {XMM-Newton}\ sources\\
1\,234 & 59.12 $\pm$ 1.10 & 152.91 $\pm$13.82 & 268.98 $\pm$17.09 & 54.11 $\pm$ 3.33 & 109.13 $\pm$ 11.31 & 4.97 & 12.34 & embedded in diffuse emission in central area of \object{M~31}\\
1\,275 & 23.88 $\pm$ 1.08 & 53.50 $\pm$ 8.47 & 79.39 $\pm$ 9.90 & & & 3.32 & 5.58 \\
1\,328 & 9.25 $\pm$ 0.74 & & 26.99 & & & 1.00 & 0.00 \\
1\,351 & 4.96 $\pm$ 0.68 & 24.96 $\pm$ 8.92 & 17.77 & & & 1.00 & 0.00 \\
1\,372 & 2.12 $\pm$ 0.84 & & 29.91 & & & 1.00 & 0.00 \\
1\,410 & 7.40 $\pm$ 0.94 & 29.87 $\pm$ 7.13 & & & & 4.04 & 3.12 \\
1\,481 & 3.43 $\pm$ 0.97 & 33.66 $\pm$ 7.36 & & & & 9.82 & 4.07 & see Sect.\,\ref{Sec:SNR_Diss}\\
1\,535 & 14.73 $\pm$ 1.31 & 53.94 $\pm$ 9.14 & 34.41 $\pm$ 7.20 & & & 3.66 & 4.25 \\
1\,599 & 16.08 $\pm$ 0.92 & 54.39 $\pm$10.03 & 33.51 $\pm$ 6.97 & & & 3.38 & 3.80 \\
1\,637 & 12.72 $\pm$ 1.33 & & 27.21 & & & 1.00 & 0.00 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:VarSNRs1}
\end{center}
Notes:\\
$^{ +~}$: KGP2002: \citet{2002ApJ...577..738K}, WGK2004: \citet{2004ApJ...609..735W}\\
{\it ROSAT}\ and {\it Chandra}\ count rates are converted to 0.2--4.5\,keV fluxes, using WebPIMMS and assuming a foreground absorption of \hbox{$N_{\rm H}$}\,$=\!6.6$\hcm{20} and a photon index of $\Gamma\!=\!1.7$: ECF$_{\mathrm{SHP97}}\!=\!2.229\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$, ECF$_{\mathrm{SHL2001}}\!=\!2.249\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$, and ECF$_{\mathrm{KGP2002}}\!=\!8.325\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$. For WGK2004 the luminosity given in Table~2 of WGK2004 was converted to XID flux using $F_{\mathrm{XID}}$[erg\,cm$^{-2}$\,s$^{-1}$]\,$=\!6.654\times$10$^{-15}\!\times\!L_{\mathrm{WGK2004}}$[10$^{36}$erg\,s$^{-1}$].
\normalsize
\end{table*}
Table~\ref{Tab:VarSNRs1} lists the {XMM-Newton}, {\it ROSAT}, and {\it Chandra}\ fluxes of all SNRs and SNR candidates from the XMM\ LP-total\ catalogue that have counterparts classified as SNRs in {\it ROSAT}\ or {\it Chandra}\ source lists. In addition, the maximum flux variability and the maximum significance of the variability (following the variability calculation of Sect.\,\ref{Sec:DefVar}) are given. Three SNRs that have {\it ROSAT}\ counterparts show variability, changing in flux by more than a factor of five. The most variable source (\hbox{N$^{\underline{o}}$}\ 1\,066) is discussed below, the second source was discussed in Sect.\,\ref{Sec:SNR_Diss} (XMMM31~J004434.8+412512, \hbox{N$^{\underline{o}}$}\ 1\,481), and the third source (\hbox{N$^{\underline{o}}$}\ 1\,234) is embedded in the diffuse emission of the central area of \object{M~31}. In this environment the larger PSF of {\it ROSAT}\ results in an overestimate of the source flux, since the contribution of the diffuse emission could not be totally separated from the emission of the point source.
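The tabulated variability factor and its significance can be computed as in the sketch below. We assume the standard definitions $F_{\mathrm{var}}=F_{\mathrm{max}}/F_{\mathrm{min}}$ and $S_{\mathrm{var}}=(F_{\mathrm{max}}-F_{\mathrm{min}})/\sqrt{\sigma_{\mathrm{max}}^2+\sigma_{\mathrm{min}}^2}$; that these are exactly the definitions of Sect.\,\ref{Sec:DefVar} is our assumption, although they reproduce the tabulated values for source \hbox{N$^{\underline{o}}$}\ 474:

```python
import math

def variability(fluxes_with_errors):
    """Flux variability factor and its significance between observations,
    using the common definitions F_var = F_max / F_min and
    S_var = (F_max - F_min) / sqrt(sig_max^2 + sig_min^2)."""
    (f_min, s_min) = min(fluxes_with_errors)
    (f_max, s_max) = max(fluxes_with_errors)
    f_var = f_max / f_min
    s_var = (f_max - f_min) / math.hypot(s_max, s_min)
    return f_var, s_var

# Source No 474 from the table (fluxes in 1e-15 erg cm^-2 s^-1):
f_var, s_var = variability([(5.27, 0.56), (21.18, 4.46)])
print(f"F_var = {f_var:.2f}, S_var = {s_var:.2f}")  # close to the tabulated 4.01 / 3.54
```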
The remaining four {\it ROSAT}\ sources classified as SNRs and their {XMM-Newton}\ counterparts are discussed in the following paragraph.
\begin{table*}
\scriptsize
\begin{center}
\caption{Flux comparison of SNRs and SNR candidates from the XMM\ LP-total\ catalogue which have counterparts in {\it ROSAT}\ and/or {\it Chandra}\ catalogues that are not classified as SNRs}
\begin{tabular}{lrccrcrrr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\hbox{N$^{\underline{o}}$}\ & \multicolumn{1}{c}{XLPt} & \multicolumn{1}{c}{Chandra} & \multicolumn{1}{c}{PFJ93} & \multicolumn{1}{c}{SHP97} & \multicolumn{1}{c}{SHL2001} & fvar & svar & remark$^{\ddagger}$\\
& \multicolumn{5}{c}{XID Flux with error in \oergcm{-15}} & & & \\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} \\
\hline\noalign{\smallskip}
294 & $18.50 \pm 0.85$ & & &$53.27 \pm 6.69$ & $46.78 \pm 7.87$ & 2.88 & 5.16 & \\
472 & $ 3.15 \pm 0.69$ & & & & $26.09 \pm 6.07$ & 8.28 & 3.76 & 468 brt \\
969 & $53.51 \pm 1.35$ & $84.51\pm 15.97^{+}$ & & $34.55 \pm 6.91$ & $89.06 \pm 11.92$ & 2.58 & 3.96 & \\
1\,079 & $ 4.19 \pm 0.59$ & & & $20.06 \pm 6.24 $& & 4.79 & 2.53 & brt \\
1\,291 & $14.55 \pm 0.75$ & $16.04^{*} $ & $>$24.0 & $35.22 \pm 8.47$ & $40.93 \pm 7.87$ & 2.81 & 3.33 & \\
1\,741 & $ 4.12 \pm 0.65$ & $4.17^{\dagger} $ & & & & 1.01 & --- & brt \\
1\,793 & $ 3.70 \pm 0.52$ & & & $26.08 \pm 6.46$ & & 7.06 & 3.46 & 1\,799 brt \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\label{Tab:VarSNRs}
\end{center}
Notes:\\
$^{ \ddagger~}$: Source number (from the XMM\ LP-total\ catalogue) of another (brighter) XMM\ LP-total\ source which correlates with the same {\it ROSAT}\ source as the XMM\ LP-total\ source given in Col. 1; brt: XMM\ LP-total\ flux is below the {\it ROSAT}\ detection threshold (5.3\ergcm{-15}).\\
{\it ROSAT}\ and {\it Chandra}\ count rates are converted to 0.2--4.5\,keV fluxes, using WebPIMMS and assuming a foreground absorption of \hbox{$N_{\rm H}$}\,$=\!6.6$\hcm{20} and a photon index of $\Gamma\!=\!1.7$: ECF$_{\mathrm{SHP97}}\!=\!2.229\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$, ECF$_{\mathrm{SHL2001}}\!=\!2.249\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$, ECF$_{\mathrm{HRI}}\!=\!6.001\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$, $^{ \dagger~}$: ECF$_{\mathrm{DKG2004}}\!=\!5.56\times$10$^{-12}$\,erg\,cm$^{-2}$\,cts$^{-1}$.
$^{ +~}$: For WGK2004 the luminosity given in Table~2 of WGK2004 was converted to XID flux using $F_{\mathrm{XID}}$[erg\,cm$^{-2}$\,s$^{-1}$]\,$=\!6.654\times$10$^{-15}\!\times\!L_{\mathrm{WGK2004}}$[10$^{36}$erg\,s$^{-1}$].
$^{ *~}$: For VG2007 the luminosity given in Table~2 of VG2007 was converted to XID flux using $F_{\mathrm{XID}}$[erg\,cm$^{-2}$\,s$^{-1}$]\,$=\!9.433\times$10$^{-15}\!\times\!L_{\mathrm{VG2007}}$[10$^{36}$erg\,s$^{-1}$].
\normalsize
\end{table*}
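The count-rate-to-flux conversions described in the table notes are simple multiplications by the quoted energy conversion factors (ECFs); a minimal sketch, using the ECF values exactly as given in the notes:

```python
# Energy conversion factors (erg cm^-2 cts^-1) quoted in the table notes,
# derived with WebPIMMS for N_H = 6.6e20 cm^-2 and a photon index of 1.7.
ECF = {
    "SHP97":   2.229e-14,
    "SHL2001": 2.249e-14,
    "HRI":     6.001e-14,
}

def rate_to_xid_flux(count_rate, catalogue):
    """Convert a ROSAT count rate (cts/s) to a 0.2-4.5 keV (XID) flux."""
    return count_rate * ECF[catalogue]

# e.g. a PSPC count rate of 1e-3 cts/s in the SHP97 catalogue:
print(f"{rate_to_xid_flux(1e-3, 'SHP97'):.2e}")  # 2.23e-17
```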
SHP97 report that [SHP97]~203 and [SHP97]~211 ($\hat{=}$[SHL2001]~206) correlate with the same SNR ([DDB80]~1.13), have the same spectral properties and have luminosities within the range of SNRs. A correlation with the {\it ROSAT}\ HRI catalogue (PFJ93) reveals that the true X-ray counterpart of [DDB80]~1.13 is located between the two {\it ROSAT}\ PSPC sources. Furthermore, PFJ93 report that this SNR is located `\textsl{within 19\,\hbox{$^{\prime\prime}$} of a brighter X-ray source}' which matches positionally with [SHP97]~211. These findings are confirmed by {XMM-Newton}\ and {\it Chandra}\ observations. The X-ray counterpart of [DDB80]~1.13 is source \hbox{N$^{\underline{o}}$}\ 1\,066 in the XMM\ LP-total\ catalogue (or [PFH2005]~354 or r3-69 in \citealp{2002ApJ...577..738K}).\@ The second source, which correlates with [SHP97]~211, is the {XMM-Newton}\ source \hbox{N$^{\underline{o}}$}\ 1\,077, which has a ``hard'' spectrum and is $\sim\!6.7$ times brighter than \hbox{N$^{\underline{o}}$}\ 1\,066. Hence, [SHP97]~211 is a blend of the two {XMM-Newton}\ sources \hbox{N$^{\underline{o}}$}\ 1\,066 and \hbox{N$^{\underline{o}}$}\ 1\,077. This also explains the pronounced variability between [SHL2001]~206 and \hbox{N$^{\underline{o}}$}\ 1\,066 given in Table~\ref{Tab:VarSNRs1}. Comparing the {\it Chandra}\ detections of the SNR counterpart with the {XMM-Newton}\ flux gives a variability factor of $F_{\mathrm{var}}\!\approx\!1.12$.
The distance between [SPH97]~203 and [DDB80]~1.13 is $\ga\!20$\hbox{$^{\prime\prime}$}. [SPH97]~203 was reported only in the first {\it ROSAT}\ PSPC catalogue. It was not detected in the observations of the second {\it ROSAT}\ PSPC catalogue or in any {XMM-Newton}\ or {\it Chandra}\ observation of that region. Thus it seems very likely that [SPH97]~203 was either a transient source or a false detection. In both cases [SPH97]~203 cannot be a SNR.
As the field of [DDB80]~1.13 was observed many times with {\it Chandra}, and as {\it Chandra}\ has detected weak SNRs in the central part of \object{M~31}\ \citep[][and \eg\ \protect{[DDB80]~1.13}]{2003ApJ...590L..21K}, {\it Chandra}\ should have detected X-ray emission corresponding to the {\it ROSAT}\ source [SPH97]~203, if it really belonged to a SNR.
The remaining two {\it ROSAT}\ SNRs correlate with {XMM-Newton}\ sources, which were not classified as SNRs or SNR candidates. Source [SHP97]~258 correlates with source \hbox{N$^{\underline{o}}$}\ 1\,337 and has a 3$\sigma$ positional error of 30\hbox{$^{\prime\prime}$}. With the improved spatial resolution of {XMM-Newton}, the total positional error is reduced to 2\,\farcs3. Hence, we can see that the X-ray source belongs to a foreground star candidate (\textit{cf.}\ Table~5) and not to the very nearby SNR. Source [SHL2001]~129 correlates with sources \hbox{N$^{\underline{o}}$}\ 743 and \hbox{N$^{\underline{o}}$}\ 761, which are classified as a GlC and a GlC candidate, respectively. The SNR candidate listed as the counterpart of [SHL2001]~129 is located between these two {XMM-Newton}\ sources. In addition, PFH2005 give a third source which lies within the error circle of [SHL2001]~129 and which is classified as an AGN candidate. Thus it is very likely that [SHL2001]~129 is a blend of these three {XMM-Newton}\ sources and that the correlation with the SNR candidate has to be considered as a chance coincidence.
Of the sources listed as SNRs in the different {\it Chandra}\ studies, many are re-detected. Nevertheless, two SNRs from {\it Chandra}\ were not detected in the {XMM-Newton}\ observations. Source n1-85 has been reported as spatially correlated with an optical SNR by \citet{2004ApJ...609..735W}, but has also been classified as a repeating transient source in the same paper. An {XMM-Newton}\ counterpart to n1-85 was detected neither in the study of PFH2005 nor in the XMM\ LP-total\ catalogue. The transient nature of this source is at odds with the SNR classification.
Source CXOM31~J004247.8+411556 \citep{2003ApJ...590L..21K}, which correlates with the radio source [B90]~95, is located in the vicinity of two bright sources and close to the centre of \object{M~31}. Due to {XMM-Newton}'s larger point spread function this source cannot be resolved by {XMM-Newton}\ in this environment. The larger PSF of {XMM-Newton}\ is also the reason why source \hbox{N$^{\underline{o}}$}\ 1\,050 has a significant variability in Table~\ref{Tab:VarSNRs1}, since this source is located within the central diffuse emission of \object{M~31}.
Finally, we wanted to determine whether any of the XMM\ LP-total\ SNRs and SNR candidates were previously observed, but not classified as SNRs. In total there are seven such sources.
One of them (\hbox{N$^{\underline{o}}$}\ 1\,741) is classified as a SNR candidate based on its {XMM-Newton}\ hardness ratios, and correlates with the {\it Chandra}\ source n1-48 (DKG2004). The fluxes obtained with {XMM-Newton}\ and {\it Chandra}\ are in good agreement (see Table~\ref{Tab:VarSNRs}), but below the {\it ROSAT}\ detection threshold (5.3\ergcm{-15}).
A further four sources were previously detected only with {\it ROSAT}.
One of them (\hbox{N$^{\underline{o}}$}\ 1\,793$\hat{=}$[SHP97]~347) also correlates with a radio source \citep[source 472 of][]{1990ApJS...72..761B} and is therefore identified as a SNR. The rather high flux variability between the {\it ROSAT}\ and {XMM-Newton}\ observations (see Table~\ref{Tab:VarSNRs}) can be attributed to source \hbox{N$^{\underline{o}}$}\ 1\,799, which is located within 19\,\farcs9 of \hbox{N$^{\underline{o}}$}\ 1\,793. This suggests that [SHP97]~347 is a combination of both {XMM-Newton}\ sources, but as [SHP97]~347 was not detected in SHL2001, we cannot exclude a transient source or false detection as an explanation for the {\it ROSAT}\ source. Source \hbox{N$^{\underline{o}}$}\ 472 ($\hat{=}$[SHL2001]~84), source \hbox{N$^{\underline{o}}$}\ 294 ($\hat{=}$[SHP97]~53$\hat{=}$[SHL2001]~56), and source \hbox{N$^{\underline{o}}$}\ 1\,079 ($\hat{=}$[SHP97]~212) are SNR candidates based on their hardness ratios. The pronounced flux variability of source \hbox{N$^{\underline{o}}$}\ 472 is due to source \hbox{N$^{\underline{o}}$}\ 468, which is located within 18\farcs5 of \hbox{N$^{\underline{o}}$}\ 472 and is $\sim$8.6 times brighter than \hbox{N$^{\underline{o}}$}\ 472.\@ The observed flux for source \hbox{N$^{\underline{o}}$}\ 1\,079 was below the {\it ROSAT}\ detection threshold. Furthermore, the {\it ROSAT}\ source [SHP97]~212 was classified as a SNR, but did not appear in the SHL2001 catalogue. Hence {\it ROSAT}\ may have detected an unrelated transient instead.
Sources corresponding to the remaining two {XMM-Newton}\ sources were detected with {\it ROSAT}\ and {\it Chandra}. Source \hbox{N$^{\underline{o}}$}\ 969 was detected in both {\it ROSAT}\ PSPC surveys ([SHP97]~185$\hat{=}$[SHL2001]~186) and correlates with {\it Chandra}\ source s1-84 \citep{2006ApJ...643..356W}. We classify it as a SNR candidate due to its hardness ratios and X-ray spectrum (see XMMM31~J004239.8+404318). Counterparts for source \hbox{N$^{\underline{o}}$}\ 1\,291 were reported in the literature as [PFJ93]~84, [SHP97]~251, [SHL2001]~255, [VG2007]~261 and source 4 in Table~5 of \citet{2006ApJ...643..844O}. Based on the {XMM-Newton}\ hardness ratios and the correlation with radio source [B90]~166 \citep{1990ApJS...72..761B}, we identified the source as a SNR.
For sources \hbox{N$^{\underline{o}}$}\ 294, \hbox{N$^{\underline{o}}$}\ 969, and \hbox{N$^{\underline{o}}$}\ 1\,291 the variability between different observations may not be real because of systematic cross-calibration uncertainties. Therefore, we keep the $<$SNR$>$ and SNR classifications for these sources.
\subsubsection{The spatial distribution}
To examine the spatial distribution of the {XMM-Newton}\ SNRs and SNR candidates, we determined projected distances from the centre of M~31. The distribution of SNRs and SNR candidates (normalised per deg$^{2}$) is shown in Fig.\,\ref{Fig:SNRdepro_dist}. It shows an enhancement of sources around $\sim$3\,kpc, which corresponds to the SNR population in the `inner spiral arms' of M~31. In addition, a second enhancement of sources around $\sim$10\,kpc is detected; this corresponds to the well-known dust ring or star formation ring in the disc of \object{M~31}\ \citep{2006Natur.443..832B}. Only a few sources are located beyond this ring. Figure~\ref{Fig:SNR_dudist} shows the spatial distribution of the SNRs and SNR candidates from the XMM\ LP-total\ catalogue plotted over the IRAS 60\,$\mu$m image \citep{1994STIN...9522539W}. We see that most of the SNRs and SNR candidates are located on features that are visible in the IRAS image. This again demonstrates that SNRs and SNR candidates are coincident with the dust ring at $\sim\!10$\,kpc. In addition, the locations of star forming regions obtained from \textit{GALEX} data \citep[][and private communication]{2009ApJ...703..614K} are indicated in Fig.\,\ref{Fig:SNR_dudist}.\@ We see that many of the SNRs and SNR candidates are located within or next to star forming regions in M~31.
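Projected distances from the centre of M~31 can be computed from the sky positions as in the minimal sketch below. The centre coordinates and the 780\,kpc distance are commonly adopted values and are our assumptions, not numbers quoted in the text:

```python
import math

D_M31_KPC = 780.0            # assumed M31 distance (commonly adopted value)
RA0, DEC0 = 10.685, 41.269   # assumed M31 centre (deg, J2000)

def projected_distance_kpc(ra, dec):
    """Projected distance from the M31 centre of a source at (ra, dec),
    both in degrees, using a small-angle (flat-sky) approximation that is
    adequate for offsets of a few degrees."""
    dx = (ra - RA0) * math.cos(math.radians(DEC0))  # east-west offset (deg)
    dy = dec - DEC0                                 # north-south offset (deg)
    return math.radians(math.hypot(dx, dy)) * D_M31_KPC

# A source 1 degree from the centre projects to ~13.6 kpc at 780 kpc:
print(f"{projected_distance_kpc(RA0, DEC0 + 1.0):.1f}")  # 13.6
```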
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-90]{pics/SNR_spdist_deg2_V5.ps}}
\caption[Projected radial distribution of SNRs and SNR candidates from the XMM\ LP-total\ catalogue.]{Projected radial distribution of SNRs and SNR candidates from the XMM\ LP-total\ catalogue. An enhancement in the source distribution corresponding to the 10\,kpc dust ring of \object{M~31}\ is visible.}
\label{Fig:SNRdepro_dist}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/spdist_SNR_new4.ps}}
\caption[An IRAS 60$\mu$m image, which clearly shows the dust ring located at $\sim\!10$\,kpc, over-plotted with the location of SNRs and candidates (red dots) from the XMM\ LP-total\ catalogue.]{An IRAS 60$\mu$m image \citep{1994STIN...9522539W}, which clearly shows the dust ring located at $\sim\!10$\,kpc, over-plotted with the locations of SNRs and candidates (red dots) from the XMM\ LP-total\ catalogue. The coincidence between the SNRs and candidates and the structures of the image is visible. In addition, the locations of star forming regions, which were obtained from GALEX data \citep{2009ApJ...703..614K}, are indicated by blue dots. Furthermore, the two ellipses (green) at 3 and 10\,kpc from the centre correspond to the enhancements of sources seen in Fig.\,\ref{Fig:SNRdepro_dist}.}
\label{Fig:SNR_dudist}
\end{figure}
\subsection{X-ray binaries}
\label{SubSec:XRB}
X-ray binaries consist of a compact object plus a companion star. The compact object can either be a white dwarf (these systems are a subclass of CVs), a neutron star (NS), or a black hole (BH).\@ A common feature of all these systems is that a large fraction of the emitted X-rays is produced by the conversion of gravitational energy into radiation, as matter is transferred from the companion star and accreted onto the compact object.
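As a purely illustrative order-of-magnitude example of this accretion power (textbook numbers, not values from the text), the mass-accretion rate needed to sustain a luminosity of \oexpo{38}\,erg\,s$^{-1}$ via $L=GM\dot{M}/R$ for a canonical neutron star is about \oexpo{-8}\,M\mbox{$_{\odot}$}\,yr$^{-1}$:

```python
# Illustrative accretion-rate estimate (textbook values, not from the text):
# the rate needed to power L = 1e38 erg/s via L = G*M*Mdot/R for a
# canonical neutron star (M = 1.4 Msun, R = 10 km).
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33     # solar mass, g
YEAR = 3.156e7      # seconds per year

M = 1.4 * MSUN      # neutron star mass
R = 1.0e6           # neutron star radius, cm
L = 1.0e38          # target luminosity, erg/s

mdot = L * R / (G * M)                      # accretion rate in g/s
print(f"{mdot:.1e} g/s")                    # 5.4e+17 g/s
print(f"{mdot * YEAR / MSUN:.1e} Msun/yr")  # 8.5e-09 Msun/yr
```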
X-ray binaries containing an NS or a BH are divided into two main classes, depending on the mass of the companion star:
\begin{itemize}
\item Low mass X-ray binaries (LMXBs) contain companion stars of low mass ($M\la$ 1\,M\mbox{$_{\odot}$}) and late type (type A or later), and have a typical lifetime of $\sim$\oexpo{8-9}~yr \citep{2006ARA&A..44..323F}. LMXBs can be located in globular clusters. Mass transfer from the companion star into an accretion disc around the compact object occurs via Roche-lobe overflow.
\item High mass X-ray binaries (HMXBs) contain a massive O or B star companion \citep[$M_{\mathrm{star}}\ga10$\,M\mbox{$_{\odot}$},][]{Verbunt1994} and are short-lived with lifetimes of $\sim$\oexpo{6-7}\,yr \citep{2006ARA&A..44..323F}. One has to distinguish between two main groups of HMXBs: supergiant and Be/X-ray binaries. In these systems wind-driven accretion onto the compact object powers the X-ray emission. Mass accretion via Roche-lobe overflow is less frequent in HMXBs, but is still known to occur in several bright systems (\eg\ LMC\,X-4, SMC\,X-1, Cen\,X-3). HMXBs are expected to be located in areas of relatively recent star formation, 25--60\,Myr ago \citep{2010ApJ...716L.140A}.
\end{itemize}
We would expect about 45 LMXBs in \object{M~31}, following an estimate similar to the one presented in Sect.\,\ref{Sec:SNR_Diss}, with the number of LMXBs in the Galaxy taken from \citet{2002A&A...391..923G}. In the XMM\ LP-total\ catalogue 88 sources are identified or classified as XRBs. This is not surprising, as we may expect \object{M~31}\ to have a higher fraction of XRBs than the Galaxy, since it is an earlier-type galaxy composed of a higher fraction of old stars.
XRBs are the main contribution to the population of ``hard'' X-ray sources in \object{M~31}. Although several candidates of varying reliability have been proposed, not a single definitely detected HMXB is known in \object{M~31}. The results of a new search for HMXB candidates are presented in Sect.\,\ref{SubSec:XRB_HMXB}. The LMXBs can be separated into two sub-classes: the field LMXBs (discussed in this section) and those located in globular clusters. Sources belonging to the latter sub-class are discussed in Sect.\,\ref{SubSec:GlC}.\@
The sources presented here are classified as XRBs, because they have HRs indicating a $<$hard$>$ source and are either transient or show a variability factor larger than ten (see Sect.\,\ref{Sec:var}).
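The classification rule just described can be written down schematically. In the sketch below, the boolean \texttt{is\_hard} stands in for the actual $<$hard$>$ hardness-ratio criterion of Table~\ref{Tab:class}, which is not reproduced here:

```python
def is_xrb_candidate(is_hard, is_transient, f_var):
    """XRB classification rule as described in the text: the source must
    have hardness ratios of a <hard> source AND be either a transient or
    variable by more than a factor of ten.  'is_hard' is a placeholder for
    the actual hardness-ratio criterion, which is defined elsewhere."""
    return is_hard and (is_transient or f_var > 10.0)

print(is_xrb_candidate(True, False, 12.3))   # True: hard and strongly variable
print(is_xrb_candidate(True, False, 4.0))    # False: variability too low
print(is_xrb_candidate(False, True, 100.0))  # False: not a <hard> source
```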
In total 10 sources are identified and 26 are classified as XRBs by us, according to the classification criteria given in Table~\ref{Tab:class}. Apart from source \hbox{N$^{\underline{o}}$}\ 57 (XMMM31 J003833.2+402133, see below), the identified XRBs had been reported as X-ray binaries in the literature (see comment column of Table~5). Figure~\ref{Fig:XRB_fldist} (red histogram) shows the flux distribution of XRBs. We see that this class contains only rather bright sources. This is not surprising as the classification criterion for XRBs is based on their variability, which is more easily detected for brighter sources (\textit{cf.}\ Sect.\,\ref{Sec:var}).
The XID fluxes range from 1.4\ergcm{-14} (\hbox{N$^{\underline{o}}$}\ 378) to 3.75\ergcm{-12} (\hbox{N$^{\underline{o}}$}\ 966), which correspond to luminosities from 1.0\ergs{36} to 2.7\ergs{38}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/XRB_GlC_fluxdist_log_new.ps}}
\caption{Distribution of the source fluxes of XRBs and GlC sources in the 0.2--4.5\,keV (XID) band. The diagram shows the number of identified and classified XRBs and GlCs in each flux bin, plotted versus the flux. In addition, the individual distributions of (field) XRBs (in red) and of GlCs (in green) are given.
}
\label{Fig:XRB_fldist}
\end{figure}
It is clear from Fig.\,\ref{Fig:XRB_spdist}, which shows the spatial distribution of the XRBs, that nearly all sources classified or identified as XRBs (yellow dots) are located in fields that were observed more than once (centre and southern part of the disc). This is partly a selection effect, caused by the fact that these particular fields were observed several times, thus allowing the determination of source variability.
For sources located outside these fields, especially in the northern part of the disc, the transient nature must have been reported in the literature for them to be marked as XRBs.
The source density of LMXBs, which follows the overall stellar density, is higher in the centre than in the disc of \object{M~31}. One would not expect HMXBs in the central region, which is dominated by the bulge (old stellar population).
From Fig.\,\ref{Fig:XRB_spdist_IRAS}, which shows the spatial distribution of the XRBs over-plotted on an IRAS 60\,$\mu$m image \citep{1994STIN...9522539W}, we see that only a few sources, classified or identified as XRBs, are located in the vicinity of star forming regions.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/spdist_XRB_new3_wROS_wHMXBc.ps}}
\caption{The spatial distribution of XRBs and candidates from the XMM\ LP-total\ catalogue. The positions of the XRBs and candidates are marked with yellow dots; the two XRB candidates classified from their variability compared with {\it ROSAT}\ observations are marked with blue dots. An increase in the number density of sources in the central field is clearly visible. In addition, the two new HMXB candidates presented in Sect.\,\ref{SubSec:XRB_HMXB} (red dots), and the three HMXB candidates of SBK2009 that satisfy our U-B/B-V selection criterion (green dots, see Sect.\,\ref{SubSec:XRB_HMXB}) are shown. XRBs which correlate with globular clusters are shown in Fig.\,\ref{Fig:GlC_spdist}.}
\label{Fig:XRB_spdist}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/spdist_XRB_IRAS60_wROS_wHMXBc.ps}}
\caption{The spatial distribution of XRBs and candidates from the XMM\ LP-total\ catalogue. Shown are the same sources as in Fig\,\ref{Fig:XRB_spdist}, but over-plotted on an IRAS 60\,$\mu$m image \citep{1994STIN...9522539W}, which shows the dusty star forming region in \object{M~31}. In addition the locations of star forming regions, which were obtained from GALEX data \citep{2009ApJ...703..614K}, are indicated by cyan dots.}
\label{Fig:XRB_spdist_IRAS}
\end{figure}
References for the sources, selected from their temporal variability, are given in Table~\ref{Tab:varlist}.\@ TPC06 report on four bright X-ray transients, which they detected in the observations of July 2004 and suggested to be XRB candidates. We also found these sources and classified source \hbox{N$^{\underline{o}}$}\ 705 and identified sources \hbox{N$^{\underline{o}}$}\ 985, \hbox{N$^{\underline{o}}$}\ 1\,153, and \hbox{N$^{\underline{o}}$}\ 1\,177 as XRBs. One of the identified XRBs (\hbox{N$^{\underline{o}}$}\ 1\,177) shows a very soft spectrum. \citet{2005ApJ...632.1086W} observed source \hbox{N$^{\underline{o}}$}\ 1\,153 with {\it Chandra}\ and \textsl{HST}. From the location and X-ray spectrum they suggest it to be an LMXB. They propose that the optical counterpart of the X-ray source is a star within the X-ray error box, which shows an optical brightness change (in B) of $\simeq$1\,mag.
Source \hbox{N$^{\underline{o}}$}\ 985 was first detected in January 1979 by TF91 with the {\it Einstein}\ observatory. WGM06 rediscovered it in {\it Chandra}\ observations from 2004. Their coordinated \textsl{HST} ACS imaging does not reveal any variable optical counterpart. From the X-ray spectrum and the lack of a bright star, WGM06 suggest that this source is an LMXB with a black hole primary.
In the following, we discuss three transient XRBs in more detail:
\paragraph{XMMM31~J003833.2+402133} (\hbox{N$^{\underline{o}}$}\ 57) was first detected in the {XMM-Newton}\ observation from 02 January 2008 (s32) at an unabsorbed 0.2\,--\,10\,keV luminosity of $\sim\!2$\ergs{38}.
From two observations, taken about 0.5\,yr (s31) and 1.5\,yr (s3) earlier, we derived upper limits for the fluxes, which were more than a factor of 100 below the values obtained in January 2008.
The combined EPIC spectrum from observation s32 (Fig.\,\ref{SubFig:spec_1}) is best fitted with an absorbed disc blackbody plus power-law model, with $N_{\mathrm{H}}\!=\!1.68^{+0.42}_{-0.48}$\hcm{21}, temperature at the inner edge of the disc $k_{\mathrm{B}}T_{\mathrm{in}}\!=\!0.462\pm0.013$\,keV and power-law index of $2.55^{+0.33}_{-1.05}$.\@ The contribution of the disc blackbody luminosity to the total luminosity is $\sim 59\,\%$. Formally acceptable fits are also obtained from an absorbed disc blackbody and an absorbed bremsstrahlung model (see Table~\ref{Tab:specprop}).
We did not find any significant feature in a fast Fourier transformation (FFT) periodicity search. The combined EPIC light curve during observation s32 was consistent with a constant value.
To identify possible optical counterparts we examined the LGGS images and the images taken with the {XMM-Newton}\ optical monitor during the X-ray observation (UVW1 and UVW2 filters).
The absence of optical/UV counterparts and of variability on short timescales, as well as the spectral properties, suggests that this source is a black hole LMXB in the steep power-law state \citep{2006csxs.book..157M}.
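The luminosities quoted in this and the following paragraphs follow from the fitted model fluxes via $L_{\rm X} = 4\pi d^2 F_{\rm X}$. A minimal sketch of this conversion, assuming a distance to \object{M~31}\ of 780\,kpc (an assumed value; the adopted distance is not restated in this section):

```python
import math

KPC_CM = 3.0857e21           # 1 kpc in cm
D_M31_CM = 780.0 * KPC_CM    # assumed M 31 distance of 780 kpc

def flux_to_luminosity(flux_cgs):
    """Convert an unabsorbed flux (erg cm^-2 s^-1) to a luminosity (erg s^-1)."""
    return 4.0 * math.pi * D_M31_CM**2 * flux_cgs

def luminosity_to_flux(lum_cgs):
    """Inverse conversion: luminosity (erg s^-1) to flux (erg cm^-2 s^-1)."""
    return lum_cgs / (4.0 * math.pi * D_M31_CM**2)

# A 0.2--10 keV luminosity of 2e38 erg/s corresponds to a flux of a few 1e-12
flux = luminosity_to_flux(2.0e38)
```

The same conversion underlies all luminosities quoted for the globular cluster sources below.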
\paragraph{CXOM31~J004059.2+411551:}
\citet{2007ATel.1147....1G} reported on the detection of a previously unseen X-ray source in a 5\,ks {\it Chandra}\ ACIS-S observation from 05 July 2007. In an {XMM-Newton}\ ToO observation \citep[sn11,][]{2007ATel.1191....1S} taken about 20 days after the {\it Chandra}\ detection, the source (\hbox{N$^{\underline{o}}$}\ 523) was still bright.
The position agrees with that found by {\it Chandra}. We detected the source at an unabsorbed 0.2\,--\,10\,keV luminosity of $\sim\!1.1$\ergs{38}.
The combined EPIC spectrum (Fig.\,\ref{SubFig:spec_2}) can be well fitted with an absorbed disc blackbody model with $N_{\mathrm{H}}\!=\!\left(2.00\pm{0.16}\right)$\hcm{21} and with a temperature at the inner edge of the disc of $k_{\mathrm{B}}T_{\mathrm{in}}\!=\!0.538\pm0.017$\,keV (Table~\ref{Tab:specprop}). The spectral parameters and luminosity did not change significantly compared to the {\it Chandra}\ values of \citet{2007ATel.1147....1G}.
We did not find any significant feature in an FFT periodicity search. The combined EPIC light curve was consistent with a constant value.
The examination of LGGS images and of images taken with the {XMM-Newton}\ optical monitor (UVW1 and UVW2 filters) during the X-ray observation did not reveal any possible optical/UV counterparts.
The lack of bright optical counterparts and the X-ray parameters (X-ray spectrum, lack of periodicity, transient nature, luminosity) are consistent with this source being a black hole X-ray transient, as already mentioned in \citet{2007ATel.1147....1G}.
\paragraph{XMMU~J004144.7+411110} (\hbox{N$^{\underline{o}}$}\ 705) was detected by \citet{2006ApJ...645..277T} in {XMM-Newton}\ observations b1--b4 (July 2004) at an unabsorbed luminosity of 3.1--4.4\ergs{37} in the 0.3--7\,keV band, using a {\tt DISKBB} model.
We detected the source in observation sn11 (25 July 2007) at an unabsorbed 0.2--10\,keV luminosity of $\sim$1.8\ergs{37}, also using a {\tt DISKBB} model.
In observation sn11, the source was bright enough to allow spectral analysis.
The spectra can be well fitted with an absorbed power-law, disc blackbody or bremsstrahlung model (Table~\ref{Tab:specprop}).
The obtained spectral shapes (absorption and temperature as well as photon index) are in agreement with the values of \citet{2006ApJ...645..277T}.
An FFT periodicity search did not reveal any significant periodicities in the 0.3\,s to 2\,000\,s range.
No optical counterparts were evident in the images taken with the {XMM-Newton}\ optical monitor (UVW1 and UVW2 filters) during the sn11 observation, nor in the LGGS images.
The lack of a bright optical counterpart and the X-ray parameters support that this source is a black hole X-ray transient, as classified by \citet{2006ApJ...645..277T}.
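The FFT periodicity searches quoted for all three transients can be sketched as follows; the light-curve binning, detection threshold, and return convention below are illustrative choices, not the values of the actual analysis:

```python
import numpy as np

def fft_period_search(counts, dt, p_min=0.3, p_max=2000.0, threshold=5.0):
    """Search an evenly binned light curve (bin width dt in s) for a periodic
    signal with a period between p_min and p_max seconds.  Returns the
    candidate period in s, or None if no peak in the power spectrum exceeds
    `threshold` times the mean power in the searched range."""
    counts = np.asarray(counts, dtype=float)
    rate = counts - counts.mean()              # remove the DC component
    power = np.abs(np.fft.rfft(rate))**2
    freq = np.fft.rfftfreq(counts.size, d=dt)
    sel = (freq > 1.0 / p_max) & (freq < 1.0 / p_min)
    if not sel.any():
        return None
    psel = power[sel]
    mean_power = psel.mean()
    if mean_power == 0.0:                      # strictly constant light curve
        return None
    k = int(np.argmax(psel))
    if psel[k] < threshold * mean_power:
        return None
    return 1.0 / freq[sel][k]

# For a constant light curve no candidate period is returned
period = fft_period_search(np.full(4096, 10.0), dt=1.0)
```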
\begin{figure}
\subfigure[XMMM31~J003833.2+402133]{\includegraphics[scale=0.3, angle=-90]{pics/spectrum_s32_1.ps}\label{SubFig:spec_1}}
\subfigure[CXOM31~J004059.2+411551]{\includegraphics[scale=0.3, angle=-90]{pics/spectrum_sn1T1_1.ps}\label{SubFig:spec_2}}\\
\subfigure[XMMU~J004144.7+411110]{\includegraphics[scale=0.3, angle=-90]{pics/spectrum_sn1T1_2.ps}\label{SubFig:spec_3}}
\caption{EPIC spectra of the transient sources \subref{SubFig:spec_1} XMMM31~J003833.2+402133, \subref{SubFig:spec_2} CXOM31~J004059.2+411551 and \subref{SubFig:spec_3} XMMU~J004144.7+411110. The histograms show the best-fit model: PL+DISCBB in \subref{SubFig:spec_1}, DISCBB in \subref{SubFig:spec_2} and \subref{SubFig:spec_3}.}
\label{Fig:spec}
\end{figure}
\begin{table*}
\scriptsize
\begin{center}
\caption{Spectral parameters of the transient sources.}
\begin{tabular}{ccccccccc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{M 31 field} & \multicolumn{1}{c}{Model} &\multicolumn{1}{c}{$N_{\mathrm{H}}$} &
\multicolumn{1}{c}{$k_{\mathrm{B}}T$} & \multicolumn{1}{c}{$R_{in}\sqrt{\cos{i}}^*$} & \multicolumn{1}{c}{Photon} & \multicolumn{1}{c}{$\chi^2$} & \multicolumn{1}{c}{$L_X^{\dagger}$} & \multicolumn{1}{c}{Instrument} \\
\noalign{\smallskip}
& & \multicolumn{1}{c}{(\hcm{21})} & \multicolumn{1}{c}{(keV)}
& \multicolumn{1}{c}{(km)} & \multicolumn{1}{c}{Index} & \multicolumn{1}{c}{(d.o.f)}
& & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
& & & \multicolumn{3}{c}{XMMM31~J003833.2+402133 (\hbox{N$^{\underline{o}}$}\ 57)} & &\\
\noalign{\smallskip}\hline\noalign{\smallskip}
s32 &PL+DISCBB&$1.68^{+0.42}_{-0.48}$&$0.462\pm0.013$&$106^{+9}_{-10}$ &$2.55^{+0.33}_{-1.05}$ &173.89(145)&2.04&PN+M1+M2\\{\smallskip}
s32 &DISCBB&$1.06\pm0.06$&$0.511\pm0.009$&$95\pm4$ & &270.01(147)&1.46&PN+M1+M2\\
s32 &BREMSS&$1.91\pm0.07$&$1.082^{+0.029}_{-0.030}$& & &208.65(147)&2.12&PN+M1+M2\\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
& & & \multicolumn{3}{c}{CXOM31~J004059.2+411551 (\hbox{N$^{\underline{o}}$}\ 523)} & &\\
\noalign{\smallskip}\hline\noalign{\smallskip}
sn11 &DISCBB&$2.00\pm0.16$&$0.538\pm0.017$&$75\pm6$ & &97.70(79)&1.12&PN+M1+M2\\
sn11 &BREMSS&$3.13\pm0.19$&$1.097^{+0.060}_{-0.056}$& & &93.17(79)&1.72&PN+M1+M2\\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
& & & \multicolumn{3}{c}{XMMU~J004144.7+411110 (\hbox{N$^{\underline{o}}$}\ 705)} & &\\
\noalign{\smallskip}\hline\noalign{\smallskip}
sn11 &DISCBB&$2.32^{+1.03}_{-0.87}$&$0.586^{+0.100}_{-0.087}$&$26^{+13}_{-8}$ & &29.74(23)&0.18&PN+M1+M2\\
sn11 &BREMSS&$3.72^{+1.14}_{-1.00}$&$1.216^{+0.373}_{-0.269}$& & &29.48(23)&0.29&PN+M1+M2\\
sn11 &PL&$6.17^{+1.72}_{-1.47}$&& &$3.23^{+0.46}_{-0.40}$& 31.57(23)&1.12&PN+M1+M2\\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\label{Tab:specprop}
\end{center}
Notes:\\
$^{ *~}$: effective inner disc radius, where $i$ is the inclination angle of the disc\\
$^{ {\dagger}~}$: unabsorbed luminosity in the $0.2$\,--\,$10.0$\,keV energy range in units of \oergs{38}\\
\normalsize
\end{table*}
\subsubsection{Sources from the XMM-LP total catalogue that were not detected by \textit{ROSAT}}
To search for additional XRB candidates, we selected all sources from the XMM\ LP-total\ catalogue that were classified as $<$hard$>$ and do not correlate with a source listed in the {\it ROSAT}\ catalogues (PFJ93, SHP97 and SHL2001). The flux distribution of the selected sources is shown in Fig.\,\ref{Fig:noROSdist}, and Table~\ref{Tab:noROSdist} gives the number of sources brighter than the indicated flux limits.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=0]{pics/hard_notROSAT_Fluxdist_V5n.ps}}
\caption{Distribution of the 0.2\,--\,4.5\,keV (XID) band fluxes of the sources from the XMM\ LP-total\ catalogue that were classified as $<$hard$>$ and do not correlate with a source listed in the {\it ROSAT}\ catalogues. The number of sources per flux bin is plotted versus the flux, using logarithmic scales.}
\label{Fig:noROSdist}
\end{figure}
\begin{table}
\begin{center}
\caption{The cumulative number of sources from the XMM\ LP-total\ catalogue that were classified as $<$hard$>$, and in addition do not correlate with a source listed in the {\it ROSAT}\ catalogues. Four different limiting fluxes are indicated.}
\begin{tabular}{rr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{XID flux limit} & \multicolumn{1}{c}{\# of sources}\\
\multicolumn{1}{c}{erg\,cm$^{-2}$\,s$^{-1}$} & \\
\hline\noalign{\smallskip}
5.5\,E-15 & 541 \\
1\,E-14 & 242 \\
5\,E-14 & 7 \\
1\,E-13 & 1 \\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\label{Tab:noROSdist}
\end{center}
\normalsize
\end{table}
Possible new XRB candidates are sources with an XID flux at least a factor of ten above the {\it ROSAT}\ detection threshold (5.3\ergcm{-15}). These sources fulfil the variability criterion used to classify XRBs (\textit{cf.}\ Sect.\,\ref{Sec:var}). The XMM\ LP-total\ catalogue lists five sources without {\it ROSAT}\ counterparts that have XID fluxes above 5.3\ergcm{-14}: \hbox{N$^{\underline{o}}$}\ 239, \hbox{N$^{\underline{o}}$}\ 365, \hbox{N$^{\underline{o}}$}\ 910, \hbox{N$^{\underline{o}}$}\ 1\,164, and \hbox{N$^{\underline{o}}$}\ 1\,553. More than ten years elapsed between the {\it ROSAT}\ and {XMM-Newton}\ observations, and on this time scale AGN can also show strong variability. To estimate the number of AGN among the five sources listed above, we investigated how many of the identified and classified background objects from the XMM\ LP-total\ catalogue with an XID flux larger than 5.3\ergcm{-14} were not detected by {\it ROSAT}. It turned out that {\it ROSAT}\ detected all background sources with an XID flux larger than 5.3\ergcm{-14} that are listed in the XMM\ LP-total\ catalogue. Thus, the probability that any of the five sources listed above is a background object is very small, in particular if the source is located within the D$_{25}$ ellipse of \object{M~31}. Therefore, the two sources located within the D$_{25}$ ellipse are listed in the XMM\ LP-total\ catalogue as XRB candidates, while the remaining three sources, which are located outside the D$_{25}$ ellipse, are classified as $<$hard$>$.\@ All five sources are marked in the comment column of Table~5 with `XRB cand.\ from {\it ROSAT}\ corr.'.
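The candidate selection described above reduces to a simple flux cut on $<$hard$>$ sources without {\it ROSAT}\ counterparts. A minimal sketch (the catalogue entries and field names are hypothetical stand-ins for the actual catalogue):

```python
ROSAT_THRESHOLD = 5.3e-15   # ROSAT detection threshold (erg/cm^2/s, XID band)

def select_xrb_candidates(sources, factor=10.0):
    """Keep <hard> sources without a ROSAT counterpart whose XID flux lies at
    least `factor` times above the ROSAT detection threshold."""
    return [s for s in sources
            if s["cls"] == "<hard>"
            and not s["rosat_match"]
            and s["xid_flux"] >= factor * ROSAT_THRESHOLD]

# Hypothetical catalogue entries for illustration only
catalogue = [
    {"id": 239,  "cls": "<hard>", "rosat_match": False, "xid_flux": 8.0e-14},
    {"id": 1000, "cls": "<hard>", "rosat_match": True,  "xid_flux": 9.0e-14},
    {"id": 1001, "cls": "<hard>", "rosat_match": False, "xid_flux": 1.0e-14},
]
picked = select_xrb_candidates(catalogue)
```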
\subsubsection{Detection of high mass X-ray binaries}
\label{SubSec:XRB_HMXB}
As already mentioned, not a single secure HMXB has been confirmed in \object{M~31}\ so far. The reason is that the detection of HMXBs in \object{M~31}\ is difficult. \citet{2004ApJ...602..231C} showed that the hardness ratio method is very inefficient in selecting HMXBs in spiral galaxies. The selection process is complicated by the fact that the spectral properties of BH HMXBs, which have power-law spectra with indices of $\sim$1\,--\,$\sim$2, are similar to those of LMXBs and AGN. Therefore, the region of the HR diagrams in which BH HMXBs are located is contaminated by other hard sources (LMXBs, AGN, and Crab-like SNRs). For the NS HMXBs, which have power-law indices of $\sim$1 and thus should be easier to select, the uncertainties in the hardness ratios lead at best to an overlap -- in the worst case to a fusion -- with the area occupied by other hard sources \citep{2004ApJ...602..231C}.
Based on the spectral analysis of individual sources in \object{M~31}, SBK2009 identified 18 HMXB candidates with power-law indices between 0.8 and 1.2. One of these sources ([SBK2009]~123) correlates with a globular cluster, and hence is more likely an LMXB in a very hard state than an HMXB
\citep[\textit{cf.}\ ][]{2004ApJ...616..821T}. Four of their sources ([SBK2009]~34, 106, 149, and 295) do not have counterparts in the XMM\ LP-total\ catalogue.
\citet{Peter} developed a selection algorithm for HMXBs in the SMC, which also uses properties of the optical companion. X-ray sources were selected as HMXB candidates if they had HR2$+$EHR2$>$0.1 as well as an optical counterpart within 2\farcs5 of the X-ray source, with $-0.5\!<$B$-$V$<\!0.5$\,mag, $-1.5\!<$U$-$B$<\!-0.2$\,mag and V$<$17\,mag.
We tried to transfer this SMC selection algorithm to \object{M~31}. In doing so, we encountered two problems. First, the selected region of the U$-$B/B$-$V diagram is also populated by globular clusters (LMXB candidates) in \object{M~31}. Second, owing to the much larger distance of \object{M~31}, the range of detected V magnitudes of HMXBs in the SMC of $\sim$13$<$V$<$17\,mag translates into a $\sim$19$<$V$<$23\,mag criterion for \object{M~31}. Thus, the V magnitudes of optical counterparts of possible HMXB candidates lie in the same range as those of the optical counterparts of AGN. The V-magnitude criterion, which provided most of the discriminatory power in the case of the SMC, therefore fails entirely for \object{M~31}.
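For illustration, the transferred selection can be written as a set of cuts, with the V window shifted to the \object{M~31}\ distance as described above (the field names are hypothetical; as noted, the shifted V window no longer discriminates against AGN):

```python
def is_hmxb_candidate(src, v_lo=19.0, v_hi=23.0):
    """Apply the SMC-derived HMXB cuts to one X-ray source.
    `src` is a dict with hypothetical keys; the V window is the one
    shifted to the M 31 distance (~19 < V < ~23 mag)."""
    return (src["hr2"] + src["ehr2"] > 0.1      # hard X-ray colour cut
            and src["sep_arcsec"] < 2.5         # optical match distance
            and -0.5 < src["b_v"] < 0.5         # B-V colour window
            and -1.5 < src["u_b"] < -0.2        # U-B colour window
            and v_lo < src["v_mag"] < v_hi)     # shifted V-magnitude window

# Hypothetical source passing all cuts
candidate = {"hr2": 0.3, "ehr2": 0.05, "sep_arcsec": 1.2,
             "b_v": 0.1, "u_b": -0.8, "v_mag": 20.5}
```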
A few of the sources selected from the optical colour-colour diagram and the HR diagrams are bright enough to allow the creation of X-ray spectra. In this way, two additional HMXB candidates (\ie\ not given in SBK2009) were found.
In addition, we determined the reddening free Q parameter:
\begin{equation}
Q = (\rm{U}-\rm{B})-0.72(\rm{B}-\rm{V})
\end{equation}
\citep[for definition see \eg\ ][]{cox2001allen}
which allowed us to keep only the intrinsically bluest stars, using Q $\le\!-0.4$ \citep[O-type stars typically have Q$<\!-0.9$, while -0.4 corresponds to a B5 dwarf or giant or an A0 supergiant, ][]{2007AJ....134.2474M}. U$-$B and B$-$V were taken from the LGGS catalogue.
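A sketch of this colour cut, with the coefficient and threshold taken from the text:

```python
def q_parameter(u_b, b_v):
    """Reddening-free Q = (U-B) - 0.72 (B-V)."""
    return u_b - 0.72 * b_v

def is_intrinsically_blue(u_b, b_v, q_cut=-0.4):
    """Keep only stars at least as blue as a B5 dwarf/giant or A0
    supergiant, i.e. Q <= -0.4 (O stars typically have Q < -0.9)."""
    return q_parameter(u_b, b_v) <= q_cut
```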
\paragraph{XMMM31~J004557.0+414830} (\hbox{N$^{\underline{o}}$}\ 1\,716) has a USNO-B1 (R2$=$18.72\,mag), a 2MASS and an LGGS (V$=$20.02\,mag; Q$=\!-0.44$) counterpart. The EPIC spectrum is best fitted ($\chi^2_{red}\!=\!0.93$) by an absorbed power-law with \hbox{$N_{\rm H}$}$=\!7.4^{+6.0}_{-3.9}$\hcm{21} and photon index $\Gamma\!=\!1.2\pm0.4$. The absorption-corrected X-ray luminosity in the 0.2--10\,keV band is $\sim$7.1\ergs{36}.
\paragraph{XMMM31~J004506.4+420615} (\hbox{N$^{\underline{o}}$}\ 1\,579) has a USNO-B1 (B2$=$20.87\,mag), a 2MASS and an LGGS (V$=$20.77\,mag; Q$=\!-1.04$) counterpart. The EPIC PN spectrum is best fitted ($\chi^2_{red}\!=\!1.6$) by an absorbed power-law with \hbox{$N_{\rm H}$}$=\!0.48^{+2.4}_{-1.0}$\hcm{21} and photon index $\Gamma\!=\!1.0^{+0.7}_{-0.5}$. The absorption-corrected X-ray luminosity in the 0.2--10\,keV band is $\sim$8.6\ergs{36}.\\
To strengthen these classifications, spectroscopic optical follow-up observations of the counterparts are needed. An FFT periodicity search did not reveal any significant periodicities for either of the two sources, and the light curves do not show eclipses.
Of the sources reported as HMXB candidates in SBK2009, three ([SBK2009]~21, 236, and 256) are located in the region of the U$-$B/B$-$V diagram that we used, while another three ([SBK2009]~123, 172, and 226) are located outside that region. The remaining sources of SBK2009 either have no counterpart with a U$-$B colour entry in the LGGS catalogue ([SBK2009]~99, 234, 294, and 302) or have no LGGS counterpart at all ([SBK2009]~9, 160, 197, and 305). The reddening-free Q parameters for the SBK2009 sources that have counterparts in the LGGS catalogue are given in Table~\ref{Tab:SBK_Q}.
\begin{table}
\begin{center}
\caption{Reddening free Q parameter for HMXB candidates of SBK2009.}
\begin{tabular}{rrlr}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{\hbox{N$^{\underline{o}}$}} & \multicolumn{1}{c}{[SBK2009]} & \multicolumn{1}{c}{LGGS counterpart} & \multicolumn{1}{c}{$Q$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
312 & 21 & J004001.50+403248.0 & -0.34\\
1668 & 236 & J004538.23+421236.0 & -0.77 \\
1724 & 256 & J004558.98+420426.5 & -0.81 \\
1109 & 123 & J004301.51+413017.5 & +1.77 \\
1436 & 172 & J004420.98+413546.7 & -0.65 \\
& & J004421.01+413544.3$^{*}$ & -0.29\\
1630 & 226 & J004526.68+415631.5 & -0.92\\
& & J004526.58+415633.1$^{*}$ & -0.72\\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\label{Tab:SBK_Q}
\end{center}
Notes: \\
$^{ *~}$: counterparts listed in SBK2009
\normalsize
\end{table}
\subsection{Globular cluster sources}
\label{SubSec:GlC}
A significant number of the luminous X-ray sources in the Galaxy and in \object{M~31}\ are found in globular clusters. X-ray sources corresponding to globular clusters are identified by cross-correlation with globular cluster catalogues (see Sect.\,\ref{Sec:CrossCorr_Tech}). Changes between the XMM\ LP-total\ catalogue and the catalogue of PFH2005 in the classification of sources related to globular clusters therefore reflect the availability of, and modifications in, recent globular cluster catalogues.
In total, 52 sources of the XMM\ LP-total\ catalogue correlate with (possible) globular clusters. Of these, 36 are identified as GlCs, because their optical counterparts are listed as globular clusters in the catalogues given in Sect.\,\ref{Sec:CrossCorr_Tech}, while the remaining 16 correlate only with globular cluster candidates.
Source XID fluxes range from 3.1\ergcm{-15} (\hbox{N$^{\underline{o}}$}\ 924) to 2.7\ergcm{-12} (\hbox{N$^{\underline{o}}$}\ 1\,057), corresponding to luminosities of 2.3\ergs{35} to 2.0\ergs{38} (Fig.\,\ref{Fig:XRB_fldist}; green histogram). Compared to the fluxes found for the XRBs discussed in Sect.\,\ref{SubSec:XRB}, 14 sources that correlate with GlCs have fluxes below the lowest flux found for field XRBs. The reason is that field XRBs are classified through their variable or transient nature, which can only be detected for brighter sources (\textit{cf.}\ Sect.\,\ref{Sec:var}), whereas GlC sources are identified by positional coincidence, which is also possible for faint sources.
Figure \ref{Fig:GlC_spdist} shows the spatial distribution of the GlC sources. X-ray sources correlating with GlCs follow the distribution of the optical GlCs, which are also concentrated towards the central region of \object{M~31}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip]{pics/spdist_GlC_newc_new.ps}}
\caption{The spatial distribution of X-ray sources correlating with GlCs and candidates from the XMM\ LP-total\ catalogue (yellow dots). An enhancement of sources towards the central region of \object{M~31}\ is clearly visible.}
\label{Fig:GlC_spdist}
\end{figure}
The three brightest globular cluster sources, which are located in the northern disc of \object{M~31}, are \hbox{N$^{\underline{o}}$}\ 1\,057 (XMMM31~J004252.0+413109), \hbox{N$^{\underline{o}}$}\ 694 (XMMM31~J004143.1+413420), and \hbox{N$^{\underline{o}}$}\ 1\,692 (XMMM31~J004545.8+413941). They are all brighter than 8.4\ergs{37}. Source \hbox{N$^{\underline{o}}$}\ 694 was classified as a black hole candidate, owing to the variability observed at such high luminosities. A detailed discussion of the three sources is given in \citet{2008ApJ...689.1215B}.
XMMM31~J004303.2+412121 (\hbox{N$^{\underline{o}}$}\ 1\,118) was identified as a foreground star in PFH2005, based on the classification in the ``Revised Bologna Catalogue" \citep{2004A&A...416..917G}. \citet{2004A&A...416..917G} took the classification from \citet{1997A&A...321..379D}, which is based on the velocity dispersion of that source. Recent `\textsl{HST images unambiguously reveal that this} [B147] \textsl{is a well resolved star cluster, as recently pointed out also by \citet{2007AJ....133.2764B}}' \citep{2007A&A...471..127G}. Source \hbox{N$^{\underline{o}}$}\ 1\,118 is therefore now identified as an XRB located in the globular cluster B147.
\subsubsection{Integrated optical properties of the globular clusters in which the X-ray sources are located}
\begin{table}
\scriptsize
\begin{center}
\caption{Integrated V$-$I colours, dereddened (V$-$I)$_{\rm{o}}$ colours, and age estimates of the globular clusters in which the X-ray sources are located.}
\begin{tabular}{lcclllc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{Name} & \multicolumn{1}{c}{Class$^{*}$} & \multicolumn{1}{c}{Age$^{+}$} & \multicolumn{1}{c}{V mag} & \multicolumn{1}{c}{V$-$I} & \multicolumn{1}{c}{(V$-$I)$_{\rm{o}}$} & \multicolumn{1}{c}{Age$^{\dagger}$} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
B005 & confirmed & old & 15.69 & 1.29 & 1.15 & old \\
SK055B & candidate & - & 18.991 & 0.388 & 0.248 & -- \\
B024 & confirmed & old & 16.8 & 1.15 & 1.01 & old \\
SK100C & candidate & na & 18.218 & 1.181 & 1.041 & old \\
B045 & confirmed & old & 15.78 & 1.27 & 1.13 & old \\
B050 & confirmed & old & 16.84 & 1.18 & 1.04 & old \\
B055 & confirmed & old & 16.67 & 1.68 & 1.54 & old \\
B058 & confirmed & old:: & 14.97 & 1.1 & 0.96 & old-inter \\
MITA140 & confirmed & old & 17 & -- & - & -- \\
B078 & confirmed & old & 17.42 & 1.62 & 1.48 & old \\
B082 & confirmed & old & 15.54 & 1.91 & 1.77 & old \\
B086 & confirmed & old & 15.18 & 1.26 & 1.12 & old \\
SK050A & confirmed & - & 18.04 & 1.079 & 0.939 & old-inter \\
B094 & confirmed & old & 15.55 & 1.26 & 1.12 & old \\
B096 & confirmed & old & 16.61 & 1.48 & 1.34 & old \\
B098 & confirmed & old & 16.21 & 1.13 & 0.99 & old-inter \\
B107 & confirmed & old & 15.94 & 1.28 & 1.14 & old \\
B110 & confirmed & old & 15.28 & 1.28 & 1.14 & old \\
B117 & confirmed & old:: & 16.34 & 1 & 0.86 & inter \\
B116 & confirmed & old & 16.79 & 1.86 & 1.72 & old \\
B123 & confirmed & old & 17.416 & 1.29 & 1.15 & old \\
B124 & confirmed & old & 14.777 & 1.147 & 1.007 & old \\
B128 & confirmed & old:: & 16.88 & 1.12 & 0.98 & old-inter \\
B135 & confirmed & old & 16.04 & 1.22 & 1.08 & old \\
B143 & confirmed & old & 16 & 1.22 & 1.08 & old \\
B144 & confirmed & old:: & 15.88 & 0.59 & 0.45 & young \\
B091D & confirmed & old & 15.44 & -- & - & -- \\
B146 & confirmed & old:: & 16.95 & 1.09 & 0.95 & inter \\
B147 & confirmed & old & 15.8 & 1.27 & 1.13 & old \\
B148 & confirmed & old & 16.05 & 1.17 & 1.03 & old \\
B150 & confirmed & old & 16.8 & 1.28 & 1.14 & old \\
B153 & confirmed & old & 16.24 & 1.3 & 1.16 & old \\
B158 & confirmed & old & 14.7 & 1.15 & 1.01 & old \\
B159 & confirmed & old & 17.2 & 1.41 & 1.27 & old \\
B161 & confirmed & old & 16.33 & 1.1 & 0.96 & old-inter \\
B182 & confirmed & old & 15.43 & 1.29 & 1.15 & old \\
B185 & confirmed & old & 15.54 & 1.18 & 1.04 & old \\
B193 & confirmed & old & 15.33 & 1.28 & 1.14 & old \\
SK132C & candidate & - & 18.342 & 1.84 & 1.7 & old \\
B204 & confirmed & old & 15.75 & 1.17 & 1.03 & old \\
B225 & confirmed & old & 14.15 & 1.39 & 1.25 & old \\
B375 & confirmed & old & 17.61:: & 1.02 & 0.88 & inter \\
B386 & confirmed & old & 15.547 & 1.154 & 1.014 & old \\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\label{Tab:GlC_optprop}
\end{center}
Notes: \\
$^{ *~}$: classification as confirmed or otherwise comes from the revised Bologna catalogue (December 2009, Version 4) \url{http://www.bo.astro.it/M31/RBC_Phot07_V4.tab}\\
$^{ +~}$: age comes from \citet{2009AJ....137...94C}\\
V and V$-$I are the integrated magnitude and colour from the revised Bologna catalogue (December 2009, Version 4)
\url{http://www.bo.astro.it/M31/RBC_Phot07_V4.tab}\\
(V$-$I)$_{\rm{o}}$ is the dereddened V$-$I integrated colour, assuming E(B$-$V)$\,=\,0.10\pm0.03$, the average of the reddenings of all \object{M~31}\ clusters in \citet{2005AJ....129.2670R} (this E(B$-$V) corresponds to E(V$-$I)$=\!0.14$)\\
$^{ \dagger~}$: This dereddened colour (V$-$I)$_{\rm{o}}$ is used to estimate the age on the basis of the plots (V$-$I)$_{\rm{o}}$ versus log\,Age from \citet{2007AJ....133..290S}.
\normalsize
\end{table}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-0]{pics/GlobClusterVI.eps}}
\caption{The distribution of (V$-$I)$_{\rm{o}}$ for globular clusters (and candidates) hosting XMM\ LP-total\ X-ray sources, with the approximate age ranges marked.}
\label{Fig:GlC_agedist}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[clip,angle=-0]{pics/optGlobCluster_dist.ps}}
\caption{The distribution of (V$-$I)$_{\rm{o}}$ for globular clusters (and candidates) from the RBC~V4, located in the XMM\ LP-total\ field.}
\label{Fig:optGlC_agedist}
\end{figure}
For each X-ray source that correlates with a globular cluster or globular cluster candidate in the optical, we investigated the integrated V$-$I colour of the cluster and derived an age estimate. Table~\ref{Tab:GlC_optprop} lists the name of the optical counterpart, its classification according to RBC V.4 \citep{2009A&A...508.1285G}, the age classification of \citet{2009AJ....137...94C}, the V magnitude and V$-$I colour given in RBC V.4, the dereddened V$-$I colour, and our own age estimate.
The integrated V$-$I colours of the clusters can be found in RBC V.4 and, in conjunction with reddening values, can be used to estimate the ages of the clusters. We adopted a reddening of E(B$-$V)$=\!0.10\pm0.03$, the average of the reddenings of all \object{M~31}\ clusters in \citet{2005AJ....129.2670R}, and derived (V$-$I)$_{\rm{o}}$ for our clusters. In most cases (V$-$I)$_{\rm{o}}\!>\!1.0$, suggesting clusters older than $\simeq$2\,Gyr according to \citet{2007AJ....133..290S}.
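The dereddening and the coarse age assignment can be sketched as follows, assuming E(V$-$I)$\,=\,1.4\,$E(B$-$V) (an assumed ratio, which reproduces the E(V$-$I)$\,=\,0.14$ quoted for E(B$-$V)$\,=\,0.10$); the finer age bins of Table~\ref{Tab:GlC_optprop} are not modelled here:

```python
E_BV = 0.10            # adopted mean reddening E(B-V) of the M 31 clusters
E_VI = 1.4 * E_BV      # corresponding E(V-I) ~ 0.14 (assumed ratio)

def deredden_vi(v_i):
    """Correct the integrated V-I colour for the adopted foreground reddening."""
    return v_i - E_VI

def older_than_2gyr(v_i0):
    """(V-I)_0 > 1.0 indicates a cluster older than ~2 Gyr (coarse cut only)."""
    return v_i0 > 1.0

# Example: an integrated colour of V-I = 1.29 dereddens to 1.15, i.e. old
is_old = older_than_2gyr(deredden_vi(1.29))
```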
The histogram in Fig.\,\ref{Fig:GlC_agedist} shows the distribution of (V$-$I)$_{\rm{o}}$ for our clusters, with the approximate age-ranges marked.
In general there is good agreement between the \citet{2009AJ....137...94C} and our age estimates. This result indicates that the great majority of the objects are indeed old globular clusters.
Figure~\ref{Fig:optGlC_agedist} shows the distribution of (V$-$I)$_{\rm{o}}$ for all confirmed and candidate globular clusters listed in the RBC~V.4 that are located in the XMM\ LP-total\ field and have both V and I magnitudes given. A comparison with Fig.\,\ref{Fig:GlC_agedist} again reveals that mainly counterparts of old globular clusters (age $\ga$2\,Gyr) are detected in X-rays.
\subsubsection{Comparing GlC and candidates in \textit{XMM-Newton}, \textit{Chandra} and \textit{ROSAT} catalogues}
\label{SubSub:comp_GlC}
The combined {\it ROSAT}\ PSPC catalogue (SHP97 and SHL2001) contains 33 sources classified as globular cluster counterparts. Of these sources, one is located outside the field observed with {XMM-Newton}.\@ Another two sources do not have counterparts in the XMM\ LP-total\ catalogue. The first one is [SHL2001]~232, which is not visible in any {XMM-Newton}\ observation taken before December 2006, as already reported in \citet{2004ApJ...616..821T}. The second source ([SHL2001]~231) correlates with B\,164, which is identified as a globular cluster in RBC~V3.5.\@ In addition, [SHL2001]~231 is listed in PFH2005 as the counterpart of the source [PFH2005]~423. Owing to the improved positional accuracy of the X-ray source in the {XMM-Newton}\ observations, PFH2005 rejected the correlation with B\,164 and instead classified [PFH2005]~423 as a foreground star candidate.
Three {\it ROSAT}\ GlC candidates have more than one counterpart in the XMM\ LP-total\ catalogue. [SHL2001]~249 correlates with sources \hbox{N$^{\underline{o}}$}\ 1\,262 and \hbox{N$^{\underline{o}}$}\ 1\,267, where the latter is the X-ray counterpart of the globular cluster B\,185. [SHL2001]~254 correlates with sources \hbox{N$^{\underline{o}}$}\ 1\,289 and \hbox{N$^{\underline{o}}$}\ 1\,293, where the former is the X-ray counterpart of the globular cluster candidate mita\,311 \citep{1993PhDT........41M}. [SHL2001]~258 has a 1$\sigma$ positional error of 48\hbox{$^{\prime\prime}$}\ and thus correlates with sources \hbox{N$^{\underline{o}}$}\ 1\,297, \hbox{N$^{\underline{o}}$}\ 1\,305 and \hbox{N$^{\underline{o}}$}\ 1\,357.\footnote{In addition, [SHL2001]~258 correlates with \hbox{N$^{\underline{o}}$}\ 1\,275, \hbox{N$^{\underline{o}}$}\ 1\,289 and \hbox{N$^{\underline{o}}$}\ 1\,293. However, each of these sources has an additional {\it ROSAT}\ counterpart.} The brightest of these three sources (\hbox{N$^{\underline{o}}$}\ 1\,305), which is actually located closest to the {\it ROSAT}\ position, correlates with the globular cluster candidate SK\,132C (RBC~V3.5).
Table \ref{Tab:ROSAT_GlC_tvar} gives the variability factors (Cols.~6, 8) and the significances of variability (Cols.~7, 9) for sources classified as GlC candidates in the {\it ROSAT}\ PSPC surveys. For most sources only low variability is detected. The two sources with the highest variability factors (\hbox{N$^{\underline{o}}$}\ 1\,262, \hbox{N$^{\underline{o}}$}\ 1\,293) belong to {\it ROSAT}\ sources with more than one {XMM-Newton}\ counterpart; in these cases, the {XMM-Newton}\ source that correlates with both the same {\it ROSAT}\ source and the optical globular cluster shows much weaker variability. Interestingly, a few sources show low, but very significant variability. Among them are the Z-source identified in \citet[][\hbox{N$^{\underline{o}}$}\ 966]{2003A&A...411..553B} and two of the sources discussed in \citet[][\hbox{N$^{\underline{o}}$}\ 1\,057, \hbox{N$^{\underline{o}}$}\ 1\,692]{2008ApJ...689.1215B}.
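The tabulated quantities can be sketched assuming the usual definitions of the variability factor and its significance, $f_{\rm var}=F_{\rm max}/F_{\rm min}$ and $s_{\rm var}=|F_1-F_2|/\sqrt{\sigma_1^2+\sigma_2^2}$ (assumed here; the exact definitions are not restated in this section):

```python
import math

def variability(f1, e1, f2, e2):
    """Return (variability factor, significance) for two flux measurements
    f1 +- e1 and f2 +- e2 (erg cm^-2 s^-1), assuming the standard
    definitions fv = F_max/F_min and sv = |F1 - F2| / sqrt(e1^2 + e2^2)."""
    fv = max(f1, f2) / min(f1, f2)
    sv = abs(f1 - f2) / math.sqrt(e1**2 + e2**2)
    return fv, sv
```

A source whose flux doubled between two epochs, with ten-percent errors on both measurements, would thus get fv = 2 at a significance of about 7.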
\begin{figure*}
\sidecaption
\includegraphics[width=12cm]{pics/transs_image_centre.ps}
\caption[Image of the central field of \object{M~31}\ over-plotted with the positions of six possible transient sources (red) and the sources of the XMM\ LP-total\ catalogue.]{Image of the central field of \object{M~31}\ over-plotted with the positions of six possible transient sources (red) and the sources of the XMM\ LP-total\ catalogue. Sources r2-15 and r3-71 are listed as sources \#17 and \#28, respectively, in \citet{2002ApJ...570..618D}. The three ``red'' sources that are only marked with a number (\#58, \#65, \#82) are taken from \citet{2007A&A...468...49V}.}\label{Fig:posGlCtrans_pos}
\end{figure*}
\begin{table*}
\scriptsize
\begin{center}
\caption{Variability between XMM\ LP-total\ and {\it ROSAT}\ observations for sources classified as GlC candidates in the {\it ROSAT}\ PSPC surveys.}
\begin{tabular}{rrrrrrrrrccc}
\hline\noalign{\smallskip}
\hline\noalign{\smallskip}
\multicolumn{1}{c}{SRC} & \multicolumn{1}{c}{SI$^{*}$} & \multicolumn{1}{c}{SII$^{*}$} & \multicolumn{1}{c}{XFLUX$^{+}$} & \multicolumn{1}{c}{EXFLUX$^{+}$} & \multicolumn{1}{c}{fv\_SI$^{\dagger}$} & \multicolumn{1}{c}{sv\_SI$^{\dagger}$} & \multicolumn{1}{c}{fv\_SII$^{\dagger}$} & \multicolumn{1}{c}{sv\_SII$^{\dagger}$} & \multicolumn{1}{c}{type} & \multicolumn{1}{c}{SIf$^{\ddagger}$}& \multicolumn{1}{c}{SIIf$^{\ddagger}$}\\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)}& \multicolumn{1}{c}{(3)}& \multicolumn{1}{c}{(4)}& \multicolumn{1}{c}{(5)}& \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)}& \multicolumn{1}{c}{(9)}& \multicolumn{1}{c}{(10)}& \multicolumn{1}{c}{(11)}& \multicolumn{1}{c}{(12)}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
383 & 73 & 68 & 1.45E-12 & 1.05E-14 & 1.26 & 11.58 & 1.23 & 9.04 & GlC & * & * \\
403 & 79 & 74 & 2.55E-14 & 2.36E-15 & 2.61 & 2.09 & 7.57 & 4.92 & Gal & & \\
422 & & 76 & 2.01E-14 & 1.43E-15 & & & & & $<$hard$>$ & & \\
694 & 122 & 113 & 1.52E-12 & 1.04E-14 & 1.38 & 10.56 & 1.26 & 7.46 & GlC & * & * \\
793 & 138 & 136 & 4.76E-14 & 2.33E-15 & 1.10 & 0.55 & 1.02 & 0.14 & $<$Gal$>$ & * & \\
841 & 150 & 147 & 1.40E-12 & 1.61E-14 & 1.77 & 19.81 & 2.11 & 26.37 & GlC & * & --\\
855 & 158 & 154 & 4.21E-13 & 3.70E-15 & 1.01 & 0.14 & 3.54 & 48.75 & GlC & * & \\
885 & 168 & 163 & 1.56E-14 & 1.60E-15 & 1.70 & 1.54 & & & GlC & & \\
923 & 175 & 175 & 1.47E-13 & 2.07E-15 & 1.87 & 7.68 & 1.30 & 3.44 & GlC & & \\
933 & 178 & & 3.67E-14 & 1.64E-15 & 3.03 & 7.03 & & & GlC & & \\
947 & 180 & 179 & 3.24E-13 & 7.46E-15 & 2.43 & 12.78 & 2.29 & 12.16 & GlC & * & --\\
966 & 184 & 184 & 3.51E-12 & 9.21E-15 & 1.00 & 0.20 & 2.31 & 151.58 & XRB & & \\
1\,057 & 205 & 199 & 2.67E-12 & 2.05E-14 & 1.72 & 26.26 & 1.79 & 28.48 & GlC & * & *\\
1\,102 & 217 & 211 & 3.23E-13 & 2.93E-15 & 1.06 & 1.04 & 5.51 & 60.80 & GlC & * & \\
1\,109 & 218 & 212 & 3.25E-13 & 9.08E-15 & 1.91 & 9.70 & 2.10 & 10.72 & GlC & * & *\\
1\,118 & 222 & 216 & 1.16E-13 & 2.03E-15 & 1.46 & 3.63 & 1.89 & 7.46 & GlC & & \\
1\,122 & 223 & 217 & 2.48E-13 & 2.72E-15 & 2.08 & 12.02 & 7.03 & 73.27 & GlC & & \\
1\,157 & 228 & 223 & 7.59E-13 & 4.48E-15 & 1.06 & 1.68 & 1.08 & 2.75 & GlC & * & * \\
1\,171 & 229 & 227 & 4.68E-13 & 4.93E-15 & 1.82 & 12.92 & 1.75 & 12.19 & GlC & * & *\\
1\,262 & 247 & 249 & 2.94E-14 & 3.10E-15 & 14.11 & 18.40 & 14.64 & 20.24 & & & \\
1\,267 & 247 & 249 & 4.80E-13 & 4.57E-15 & 1.16 & 3.04 & 1.11 & 2.44 & GlC & * & * \\
1\,289 & 250 & 254 & 2.88E-14 & 1.91E-15 & 1.16 & 0.57 & 1.98 & 3.15 & $<$GlC$>$ & * & \\
1\,293 & 250 & 254 & 6.70E-15 & 9.42E-16 & 3.73 & 2.80 & 8.53 & 5.72 & $<$AGN$>$ & & \\
1\,296 & 253 & 257 & 3.89E-14 & 1.58E-15 & 4.03 & 9.38 & 1.25 & 1.17 & GlC & & \\
1\,297 & 252 & 258 & 5.59E-15 & 9.61E-16 & 4.46 & 2.69 & & & $<$hard$>$& & \\
1\,305 & & 258 & 1.69E-14 & 9.87E-16 & & & & & $<$GlC$>$ & & \\
1\,340 & 261 & 266 & 6.07E-14 & 3.01E-15 & 1.77 & 4.20 & 1.30 & 1.64 & GlC & & * \\
1\,357 & & 258 & 7.04E-15 & 1.18E-15 & & & & & $<$hard$>$& & \\
1\,449 & 281 & 289 & 2.34E-14 & 1.00E-15 & 3.10 & 5.36 & 1.79 & 2.39 & fg Star & & \\
1\,463 & 282 & 290 & 7.51E-13 & 8.38E-15 & 1.13 & 3.33 & 1.33 & 7.64 & GlC & * & * \\
1\,634 & 302 & 316 & 7.70E-14 & 2.91E-15 & 3.14 & 5.60 & 1.89 & 4.33 & $<$hard$>$& * & *\\
1\,692 & 318 & 336 & 1.15E-12 & 2.00E-14 & 2.86 & 45.59 & 2.62 & 38.63 & GlC & * & \\
1\,803 & 349 & 354 & 8.72E-13 & 9.17E-15 & 1.32 & 7.87 & 1.03 & 0.93 & GlC & * & * \\
\noalign{\smallskip}\hline\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\label{Tab:ROSAT_GlC_tvar}
\end{center}
Notes:\\
$^{ *~}$: SI: SHP97, SII: SHL2001\\
$^{ +~}$: XID Flux and error in erg\,cm$^{-2}$\,s$^{-1}$\\
$^{ {\dagger}~}$: Variability factor and significance of variability, respectively, for comparisons of {XMM-Newton}\ XID fluxes to {\it ROSAT}\ fluxes listed in SPH97 and SHL2001, respectively.\\
$^{ {\ddagger}~}$: An asterisk indicates that the XID flux is larger than the corresponding {\it ROSAT}\ flux. {\it ROSAT}\ count rates are converted to 0.2--4.5\,keV fluxes, using WebPIMMS and assuming a foreground absorption of \hbox{$N_{\rm H}$}\,$=\!6.6$\hcm{20} and a photon index of $\Gamma\!=\!1.7$: ECF$_{\mathrm{SHP97}}\!=\!2.229\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$ and ECF$_{\mathrm{SHL2001}}\!=\!2.249\times$10$^{-14}$\,erg\,cm$^{-2}$\,cts$^{-1}$
\normalsize
\end{table*}
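As an illustration, the count-rate-to-flux conversion described in the table notes amounts to a simple multiplication by the quoted energy conversion factor (a minimal sketch; the example count rate is hypothetical):

```python
# Energy conversion factors (ECFs) quoted in the table notes, valid for
# a foreground absorption N_H = 6.6e20 cm^-2 and photon index Gamma = 1.7.
ECF_SHP97 = 2.229e-14    # erg cm^-2 cts^-1
ECF_SHL2001 = 2.249e-14  # erg cm^-2 cts^-1

def rosat_flux(count_rate, ecf):
    """0.2--4.5 keV flux (erg cm^-2 s^-1) from a ROSAT count rate (cts s^-1)."""
    return count_rate * ecf

# A hypothetical source with 0.01 cts/s in the SHP97 catalogue:
flux = rosat_flux(0.01, ECF_SHP97)
```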
The 18 X-ray sources correlating with globular clusters which were found in the {\it ROSAT}\ HRI observations (PFJ93) were all re-detected in the {XMM-Newton}\ data.
From the numerous studies of X-ray globular cluster counterparts in \object{M~31}\ based on {\it Chandra}\ observations \citep{2002ApJ...577..738K,2002ApJ...570..618D,2004ApJ...609..735W,2004ApJ...616..821T,2007A&A...468...49V}, only eight sources are undetected in the present study. One of them ([TP2004] 1) is located far outside the field of \object{M~31}\ covered by the Deep {XMM-Newton}\ Survey.
The transient nature of [TP2004] 35, and the fact that it is not observed in any {XMM-Newton}\ observation taken before December 2006 was mentioned in Sect.\,\ref{SubSec:Chcat}.
The six remaining sources (r2-15, r3-51, r3-71, [VG2007]~58, [VG2007]~65, [VG2007]~82) are located in the central area of \object{M~31}\ and are also not reported in PFH2005. Figure~\ref{Fig:posGlCtrans_pos} shows the position of these six sources (in red) and the sources of the XMM\ LP-total\ catalogue (in yellow). If the brightness of the six sources had not changed between the {\it Chandra}\ and {XMM-Newton}\ observations, they would be in principle bright enough to be detected by {XMM-Newton}\ in the merged observations of the central field, which have in total an exposure $\ga$ 100\,ks. Two sources (r2-15 and [VG2007]~65) are located next to sources detected by {XMM-Newton}. Source r2-15 is located within 13\hbox{$^{\prime\prime}$}\ of \hbox{N$^{\underline{o}}$}\ 1\,012 and within 17\hbox{$^{\prime\prime}$}\ of \hbox{N$^{\underline{o}}$}\ 1\,017 and has -- in the {\it Chandra}\ observation -- a similar luminosity to both {XMM-Newton}\ sources. The distance between \hbox{N$^{\underline{o}}$}\ 1\,012 and \hbox{N$^{\underline{o}}$}\ 1\,017 is 17\hbox{$^{\prime\prime}$}\, and within 20\hbox{$^{\prime\prime}$}\ of \hbox{N$^{\underline{o}}$}\ 1\,012, {XMM-Newton}\ detected source \hbox{N$^{\underline{o}}$}\ 1\,006, which is about a factor 4.6 fainter than \hbox{N$^{\underline{o}}$}\ 1\,012.\@ Therefore, when in a bright state, source r2-15 should be detectable with {XMM-Newton}. Source [VG2007]~65 is located within 17\hbox{$^{\prime\prime}$} of \hbox{N$^{\underline{o}}$}\ 1\,100, which is at least 3.5 times brighter than [VG2007]~65. This may complicate the detection of [VG2007]~65 with {XMM-Newton}. The variability of [VG2007]~58, [VG2007]~65, and [VG2007]~82 is supported by the fact that these three sources were not detected in any {\it Chandra}\ study, prior to \citet{2007A&A...468...49V}. Hence, these six sources are likely to be at least highly variable or even transient.
Several sources identified with globular clusters in previous studies have counterparts in the XMM\ LP-total\ catalogue but are not classified as GlC sources by us. Source \hbox{N$^{\underline{o}}$}\ 403 ([SHL2001]~74) correlates with B\,007, which is now identified as a background galaxy \citep[][RBC~V3.5]{2009AJ....137...94C,2007AJ....134..706K}. Sources \hbox{N$^{\underline{o}}$}\ 793 ([SHL2001]~136, s1-12) and \hbox{N$^{\underline{o}}$}\ 796 (s1-11) are the X-ray counterparts of B\,042D and B\,044D, respectively, which are also suggested as background objects by \citet{2009AJ....137...94C}. Source \hbox{N$^{\underline{o}}$}\ 948 (s1-83) correlates with B\,063D, which is listed as a globular cluster candidate in RBC~V3.5, but might be a foreground star \citep{2009AJ....137...94C}. Due to this ambiguity in classification we classified the source as $<$hard$>$. Source \hbox{N$^{\underline{o}}$}\ 966 correlates with [SHL2001]~184, which was classified as the counterpart of the globular cluster NB\,21 (RBC~V3.5) in the {\it ROSAT}\ PSPC survey (SHL2001). In addition, source \hbox{N$^{\underline{o}}$}\ 966 also correlates with the {\it Chandra}\ source r2-26 \citep{2002ApJ...577..738K}. Due to the much better spatial resolution of {\it Chandra}\ compared to {\it ROSAT}, \citet{2002ApJ...577..738K} showed that source r2-26 does not correlate with the globular cluster NB\,21. \citet{2003A&A...411..553B} identified this source as the first Z-source in \object{M~31}. The nature of source \hbox{N$^{\underline{o}}$}\ 1\,078 is unclear as RBC~V3.5 reported that source to be a foreground star, while \citet{2009AJ....137...94C} classified it as an old globular cluster. Due to this ambiguity in the classification and due to the fact that source \hbox{N$^{\underline{o}}$}\ 1\,078 is resolved into two {\it Chandra}\ sources (r2-9, r2-10), we decided to classify the source as $<$hard$>$. 
Due to the transient nature \citep{2002ApJ...577..738K,2006ApJ...643..356W} and the ambiguous classifications reported by RBC~V3.5 (GlC) and \citet[][H{\small II} region]{2009AJ....137...94C}, we adopt the classification of PFH2005 ($<$XRB$>$) for source \hbox{N$^{\underline{o}}$}\ 1\,152.\@ SBK2009 classified the source correlating with source \hbox{N$^{\underline{o}}$}\ 1\,293 as a globular cluster candidate. We are not able to confirm this classification, as none of the globular cluster catalogues used contains an entry at the position of source \hbox{N$^{\underline{o}}$}\ 1\,293. Instead, we found a radio counterpart in the catalogues of \citet{2005ApJS..159..242G}, \citet{1990ApJS...72..761B} and NVSS.\@ We therefore classified the source as an AGN candidate, as was also done in PFH2005.
For source \hbox{N$^{\underline{o}}$}\ 1\,449 ([SHL2001]~289) the situation is more complicated. SHL2001 report [MA94a]~380 as the globular cluster correlating with this X-ray source. Based on the same reference, \citet{2005PASP..117.1236F} included the optical source in their statistical study of globular cluster candidates. However, the paper with the acronym [MA94a] is not available. An intensive literature search of the papers by Magnier did not reveal any work relating to globular clusters in \object{M~31}, apart from \citet{1993PhDT........41M} which is cited in \citet{2005PASP..117.1236F} as ``MIT".\@ In addition the source is not included in any other globular cluster catalogues listed in Sect.\,\ref{Sec:CrossCorr_Tech}. In the X-ray studies of \citet{2004ApJ...609..735W} and PFH2005 and in \citet{1992A&AS...96..379M} the source is classified as a foreground star (candidate). Hence, we also classified source \hbox{N$^{\underline{o}}$}\ 1\,449 as a foreground star candidate, but suggest optical follow-up observations of the source to clarify its true nature.
A similar case is source \hbox{N$^{\underline{o}}$}\ 422 ([SHL2001]~76), which is classified as a globular cluster by SHL2001, based on a correlation with [MA94a]~16. Here again the source is not listed in any of the globular cluster catalogues used. We found one correlation of source \hbox{N$^{\underline{o}}$}\ 422 with an object in the USNO-B1 catalogue, which has no B2 and R2 magnitude. Two faint sources (V$>\!22.5$\,mag) of the LGGS catalogue are located within the X-ray positional error circle. Thus source \hbox{N$^{\underline{o}}$}\ 422 is classified as $<$hard$>$. While RBC~V3.5 classified the optical counterpart of source \hbox{N$^{\underline{o}}$}\ 1\,634 ([SHL2001]~316) as a globular cluster candidate, \citet{2009AJ....137...94C} regard SK\,182C as being a source of unknown nature. Therefore we decided to classify source \hbox{N$^{\underline{o}}$}\ 1634 as $<$hard$>$.
\section{Conclusions}
\label{Sec:Concl}
This paper presents the analysis of a large and deep {XMM-Newton}\ survey of the bright Local Group SA(s)b galaxy \object{M~31}. The survey observations were taken between June 2006 and February 2008. Together with re-analysed archival observations, they provide for the first time full coverage of the M31 $\mathrm{D}_{25}$ ellipse down to a 0.2\,--\,4.5\,keV luminosity of $\sim$\oergs{35}.
The analysis of combined and individual observations allowed the study of faint persistent sources as well as brighter variable sources.
The source catalogue of the Large {XMM-Newton}\ Survey of \object{M~31}\ contains 1\,897 sources in total, of which 914 sources were detected for the first time in X-rays. The XID source luminosities range from $\sim$4.4\ergs{34} to 2.7\ergs{38}. The previously found differences in the spatial distribution of bright ($\ga$\oergs{37}) sources between the northern and southern disc could not be confirmed.
The identification and classification of the sources was based on properties in the X-ray wavelength regime: hardness ratios, extent and temporal variability. In addition, information obtained from cross correlations with \object{M~31}\ catalogues in the radio, infra-red, optical and X-ray wavelength regimes were used.
The source catalogue contains 12 sources with spatial extent between 6\,\farcs2 and 23\,\farcs0. From spectral investigation and comparison with optical images, five sources were classified as galaxy cluster candidates.
317 out of 1\,407 examined sources showed long-term variability
with a significance $>$3$\sigma$ between the {XMM-Newton}\ observations. These include 173 sources in the disc that were not covered in the study of the central field (SPH2008). Three sources located in the outskirts of the central field could not have been detected as variable in the study presented in SPH2008, as they only showed variability with a significance $>$3$\sigma$ between the archival and the ``Large Project" observations. For 69 sources the flux varied by more than a factor of five between XMM-Newton observations; ten of these varied by a factor $>$100.
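The variability factor and its significance (the fv/sv columns of the table above) can be sketched schematically as follows; the exact definitions are those of SPH2008, so the form assumed here, ratio of brighter to fainter flux and flux difference in units of the combined error, is only illustrative:

```python
import math

def variability(f1, e1, f2, e2):
    """Variability factor and significance between two flux measurements.
    f1, f2 : fluxes; e1, e2 : their 1-sigma errors.
    Assumed form: factor = brighter/fainter flux,
    significance = flux difference over the quadrature-summed errors."""
    fmax, emax = (f1, e1) if f1 >= f2 else (f2, e2)
    fmin, emin = (f2, e2) if f1 >= f2 else (f1, e1)
    factor = fmax / fmin
    signif = (fmax - fmin) / math.sqrt(emax ** 2 + emin ** 2)
    return factor, signif

# A hypothetical source varying from 1e-13 to 5e-13 erg cm^-2 s^-1:
fac, sig = variability(1e-13, 1e-14, 5e-13, 2e-14)
```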
Discrepancies in source detection between the Large {XMM-Newton}\ Survey catalogue and previous {XMM-Newton}\ catalogues could be explained by different search strategies, and differences in the processing of the data, in the parameter settings of the detection runs and in the software versions used. Correlations with previous {\it Chandra}\ studies showed that those sources not detected in this study are strongly time variable, transient, or unresolved. This is particularly true for sources located close to the centre of \object{M~31},
where {\it Chandra}'s higher spatial resolution resolves more sources. Some of the undetected sources from previous {\it ROSAT}\ studies were located outside the field covered with {XMM-Newton}. However, there were several sources detected by {\it ROSAT}\ that had a {\it ROSAT}\ detection likelihood larger than 15. If these sources were still in a bright state they should have been detected with {XMM-Newton}.\@ Thus, the fact that these sources are not detected with {XMM-Newton}\ implies that they are transient or at least highly variable sources. On the other hand 242 $<$hard$>$ {XMM-Newton}\ sources were found with XID fluxes larger than \oergcm{-14}, which were not detected with {\it ROSAT}.
To study the properties of the different source populations of \object{M~31}, it was necessary to separate foreground stars (40 plus 223 candidates) and background sources (11 AGN and 49 candidates, 4 galaxies and 19 candidates, 1 galaxy cluster and 5 candidates) from the sources of \object{M~31}. 1\,247 sources could only be classified as $<$hard$>$, while 123 sources remained without identification or classification. The majority (about two-thirds, see Stiele et al. 2011 in preparation) of sources classified as $<$hard$>$ are expected to be background objects, especially AGN.
The catalogue of the Large {XMM-Newton}\ survey of \object{M~31}\ contains 30 SSS candidates, with unabsorbed 0.2--1.0\,keV luminosities between 2.4\ergs{35} and 2.8\ergs{37}. SSSs are concentrated to the centre of \object{M~31}, which can be explained
by their correlation with optical novae, and by the overall spatial distribution of \object{M~31}\ late type stars (\ie\ enhanced density towards the centre). Of the 14 identifications made of optical novae, four were presented in more detail.
The 25 identified and 31 classified SNRs had XID luminosities between 1.1\ergs{35} and 4.3$\times$10$^{36}$ erg\,s$^{-1}$. Three of the 25 identified SNRs were detected for the first time in X-rays. For one SNR the {\it ROSAT}\ classification can be confirmed. Six of the SNR candidates were selected from correlations with sources in SNR catalogues from the literature. As these six sources had rather ``hard" hardness ratios they are good candidates for ``plerions". An investigation of the spatial distribution showed that most SNRs and candidates are located in regions of enhanced star formation, especially along the 10\,kpc dust ring in \object{M~31}.
This connection between SNRs and star forming regions, implies that most of the remnants are from type II supernovae. Most of the SNR classifications from previous studies have been confirmed. However, in five cases these classifications are doubtful.
The population of ``hard" \object{M~31}\ sources mainly consists of XRBs. These rather bright sources (XID luminosity range: 1.0\ergs{36} to 2.7\ergs{38}) were selected from their transient nature or strong long term variability (variability factor $>$10; 10 identified, 26 classified sources). The spectral properties of three transient sources were presented in more detail.
A sub-class of LMXBs is located in globular clusters. They were selected from correlations with optical sources included in globular cluster catalogues (36 identified, 16 classified sources). The XID luminosity of GlCs ranges from 2.3\ergs{35} to 1.0\ergs{38}. The spatial distribution of this source class also showed an enhanced concentration to the centre of \object{M~31}.
From optical and X-ray colour-colour diagrams possible HMXB candidates were selected. If the sources were bright enough, an absorbed power-law model was fitted to the source spectra. Two of the candidates had a photon index consistent with the photon index range of NS HMXBs. Hence these two sources were suggested as new HMXB candidates.
Follow-up studies in the optical as well as in radio are in progress or are planned. They will allow us to increase the number of identified sources and help us to classify or identify sources which can up to now only be classified as $<$hard$>$ or are without any classification.
This work focused on the overall properties of the source population of individual classes and gave us deeper insights into the long-term variability, spatial and flux distribution of the sources in the field of \object{M~31}\ and thus helped us to improve our understanding of the X-ray source population of \object{M~31}.
\begin{acknowledgements}
This publication makes use of the USNOFS Image and Catalogue Archive
operated by the United States Naval Observatory, Flagstaff Station
(http://www.nofs.navy.mil/data/fchpix/),
of data products from the Two Micron All Sky Survey,
which is a joint project of the University of Massachusetts and the Infrared
Processing and Analysis Center/California Institute of Technology, funded by
the National Aeronautics and Space Administration and the National Science
Foundation, of the SIMBAD database,
operated at CDS, Strasbourg, France,
and of the NASA/IPAC Extragalactic Database (NED)
which is operated by the Jet Propulsion Laboratory, California
Institute of Technology, under contract
with the National Aeronautics and Space Administration.
The XMM-Newton project is supported by the
Bundesministerium f\"ur Wirtschaft und Technologie/Deutsches Zentrum
f\"ur Luft- und Raumfahrt (BMWI/DLR, FKZ 50 OX 0001) and the Max-Planck
Society. HS acknowledges support by the
Bundesministerium f\"ur Wirtschaft und Technologie/Deutsches Zentrum
f\"ur Luft- und Raumfahrt (BMWI/DLR, FKZ 50 OR 0405).
\end{acknowledgements}
\bibliographystyle{aa}
The night sky brightness, the number of clear nights, the seeing,
transparency and photometric stability are some of the most important
parameters that qualify a site for front-line ground-based
astronomy. There is limited control over all these parameters, and
only in the case of the sky brightness is it possible to keep it at
its natural level by preventing light pollution from the immediate
vicinity of the observatory. Previous to the installation of any
observatory, extensive tests of these parameters are carried out in
order to find the best locations, maximizing then the efficiency of
these expensive infrastructures. However, most of these parameters
are not constant over time. An example of this can be seen in the
seeing evolution of the Paranal observatory, which is worse now than
when the decision to build it at its current location was
taken\footnote{http://www.eso.org/gen-fac/pubs/astclim/paranal/seeing/singstory.html}. This
is not an atypical situation.
We have started a program to determine the actual values of the main
observing conditions for the Calar Alto observatory. The Calar Alto
observatory is located at 2168\,m above sea level, in the
Sierra de los Filabres (Almeria, Spain), at $\sim$45 km from the
Mediterranean sea. It is the second largest European astronomical site
in the northern hemisphere, just behind the Observatorio del Roque de los
Muchachos (La Palma), and the most important in continental
Europe. Currently there are six telescopes located in the complex,
three of them operated by the Centro Astronomico Hispano Aleman
A.I.E. (CSIC-MPG), including the 3.5m, the largest telescope in
continental Europe.
Along its 26 years of operations there have been different attempts to
characterize some of the main properties described before: (i) Leinert
et al. (1995) determined the sky brightness corresponding to the year
1990; (ii) Hopp \& Fernandez (2002) studied the extinction curve
corresponding to the years 1986-2000; (iii) Ziad et al. (2005)
estimated the median seeing in the observatory from a single campaign
in May 2002. However, there is a need for a consistent study of all
these properties, spanning a similar time period.
In this article we study the main characteristics of the night-sky at
the observatory, including: (i) the night-sky spectrum, identifying the
natural and light-pollution emission lines and their strengths, (ii)
the moonless night-sky brightness in different bands, (iii) the
extinction and its yearly evolution and (iv) the atmospheric seeing and
its yearly evolution. The study is limited to the last four years,
which mostly corresponds to a period of minimum solar
activity\footnote{http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarradio.html}
(which strongly affects several sky properties, such as night-sky
brightness and airglow). The derived main properties have been
compared with similar properties at other observatories.
The structure of this article is as follows: in Section \ref{data} we
describe the dataset collected for the current study, including a
description of data and the data reduction; in Section \ref{ana} we
show the analysis performed over the different types of data and the
results derived for each one; in Section \ref{conc} we summarize
the main results and present the conclusions.
\section{Description of the Data}
\label{data}
In order to understand the properties of the night-sky emission at the
Calar Alto Observatory we collected different observational data,
including both imaging and spectroscopic data. Since none of the data
were obtained directly for this study, we thoroughly scanned the
archived data to acquire a data set with a sufficient degree of
homogeneity.
\subsection{Spectroscopic data}
\label{spec_data}
Spectroscopic data were obtained to determine the mean properties of
the night-sky spectrum of moonless nights at the observatory. At Calar
Alto, spectrographs are normally mounted on bright and gray nights,
while dark nights are more frequently allocated for deep imaging
programs. Thus, it is somewhat difficult to find spectroscopic data
taken during dark nights. The most frequently mounted spectrographs at
the Calar Alto observatory are CAFOS at the 2.2m ($\sim$50\% of the
allocated time) and PMAS \citep{roth05} at the 3.5m telescope
($\sim$30\% of the allocated time). PMAS is an integral field unit
with two different setups, a lensarray with a reduced field-of-view
(16$\arcsec$$\times$16$\arcsec$ in its largest configuration), and a
wide fiber-bundle that covers a field-of-view of
72$\arcsec$$\times$64$\arcsec$ with 331 individual fibers of
2.7$\arcsec$ diameter each (PPAK, Kelz et al. 2006). This latter
configuration is particularly interesting to study the properties of
the sky emission, since in a single shot centred on a calibration star
(or a science target) a substantial fraction of the field-of-view
samples the sky. Its large aperture, and the possibility of performing
self-calibration, allows one to obtain spectrophotometrically calibrated,
high signal-to-noise spectra of the night-sky emission even with
reduced exposure times. Moreover, this instrument is frequently
mounted with a low-resolution grating (V300), which covers a
wavelength range of $\sim$3500\AA\ with a spectral resolution of
$\sim$10\AA\ (FWHM). This is also very convenient to obtain spectra of
the sky emission in all the optical wavelength range. Restricting
ourselves to the same instrument and configuration ensures the
homogeneity of the data.
There were 14 clear moonless nights when the instrument was mounted using this
configuration in the period between January 2005 and December 2006. A sample
of 23 observations, including night-sky emission spectra, was selected from
the data taken on those nights. The sample was selected by including only
observations of calibration stars and/or small-size and faint targets (eg.,
High-z Ly-$\alpha$ emitters, S\'anchez et al. 2007c), with most of the PPAK
field-of-view sampling sky emission. In addition, only observations near the
zenith, with an airmass lower than 1.5, were included in the sample. In all
cases the observations were taken far away from the ecliptic and the galactic
plane. The data were then reduced using R3D (S\'anchez et al. 2006), following
the standard steps for fiber-based IFS data. Once reduced, the sky spectra
were extracted from the frames using E3D \citep{sanc04}, by selecting areas
clean of objects within the field-of-view. Although the signal-to-noise level
of each individual sky spectrum is somewhat different, depending on the
exposure time and the number of fibers selected to extract the spectra, in all
cases it is good enough for the purposes of this study.
Not all the observations during a moonless night are equally dark. The
darkness of an observation depends strongly on the time distance of the
corresponding night from the full moon, the presence of cirrus, dust and local
contamination when pointing towards highly populated areas (like Almeria,
towards the south of the observatory), the zenith distance, and even more the
time distance from the twilight. Thus, from the original dataset we did not
consider those spectra whose intensity at 5200\AA\ was larger than 0.3$\times$10$^{-16}$ erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$
(V$\sim$20.2 mags). We were then left with a final sample of 10 individual
spectra obtained in 7 different nights, representative of the typical dark
observation in a moonless dark night at Calar Alto. This sample still
comprises a wide range of brightness, and only one (02/06/2005) can be
considered completely dark, corresponding to a new moon night. Table
\ref{tab_data} shows the final list of nights, together with the wavelength
range covered by the spectra each night.
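The equivalent magnitude of the darkness cut applied above can be reproduced with a standard flux-to-magnitude conversion (a sketch; the Vega V-band flux zero point is an assumed value, the text only quotes the resulting magnitude):

```python
import math

# Assumed Vega flux zero point at V: F_lambda(V=0) ~ 3.63e-9 erg s^-1 cm^-2 A^-1.
F0_V = 3.63e-9

def flux_to_vmag(f_lambda):
    """Apparent V magnitude for a flux density in erg s^-1 cm^-2 A^-1."""
    return -2.5 * math.log10(f_lambda / F0_V)

# The rejection threshold at 5200 A used for the spectra:
vmag_cut = flux_to_vmag(0.3e-16)
```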
\subsection{Imaging data}
\label{img_data}
Multiband imaging data are collected to characterize the
sky-brightness of moonless nights at the observatory. The search was
restricted to CAFOS covering a similar period of time of the selected
spectroscopic dataset, to preserve the homogeneity of the data. Only
exposures on calibration fields were selected to perform a self
flux-calibration, avoiding the possible errors due to changes in the
atmospheric condition between calibration and measurement. Although
these exposures tend to have short exposure times, from 20s to 200s,
the large field-of-view of CAFOS CCD
($\sim$15$\arcmin$$\times$15$\arcmin$) and pixel size
(0.53$\arcsec$$\times$0.53$\arcsec$), allow a good estimation of the
sky background. The use of calibration fields, with more than 4
calibration stars observed per field, guarantees a good estimation of
the magnitude zero-point per filter and exposure (photometric
calibration error $<$0.1 mags). We focused our search on nights
dedicated to a single project, aimed at studying the time evolution of
the multiband photometry of supernovae (PI: Dr. W. Hillebrandt;
Pastorello et al. 2007), observed in service mode. The reason for this
was that this program uses the same instrument setup, filters and
calibration fields, and covers a large period of time. The number of
nights totally or partially dedicated to this project were $\sim$21
along the considered time period. Finally, we selected the darkest
possible nights, i.e., moonless nights at least 13 days away
from the full moon. We were then left with 5 nights. However, only in
two of them were the calibration fields observed at least one hour
after the beginning of the astronomical night. In the remaining 3
nights there was still substantial contamination from the twilight,
and they are therefore excluded from any further analysis. Table
\ref{tab_data} shows the final list of selected nights, including the
broad-band filters observed each night.
The Landolt calibration fields PG1633 and PG0918 (Landolt 1992) were observed
each night, respectively, in the listed filters. The images were reduced
following the standard steps, using IRAF routines. First a master bias frame
was created for each night by averaging all the bias frames obtained along the
night and smoothing it. All images were then bias corrected by subtracting
the corresponding master bias. For each band a master flat-field frame was
obtained by averaging the bias corrected domeflat images, and normalizing to
the median intensity value. Images were then corrected for pixel-to-pixel
response variations dividing by their corresponding flat-field frames. No sky
subtraction was performed since the aim of this study is to determine its
intensity.
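As an illustration of how such a background level is turned into a sky surface brightness, the sketch below combines the measured counts with the photometric zero point, the exposure time and the CAFOS pixel scale (the function and the sample values are illustrative, not measured quantities):

```python
import math

def sky_brightness(counts_per_pixel, zeropoint, exptime, pixscale):
    """Sky surface brightness in mag/arcsec^2.
    counts_per_pixel : median background level of the image (counts)
    zeropoint        : magnitude corresponding to 1 count/s
    exptime          : exposure time (s)
    pixscale         : pixel size (arcsec/pixel)
    """
    rate_per_arcsec2 = counts_per_pixel / exptime / pixscale ** 2
    return zeropoint - 2.5 * math.log10(rate_per_arcsec2)

# CAFOS-like pixel scale of 0.53"/pixel; illustrative numbers:
mu = sky_brightness(counts_per_pixel=100.0, zeropoint=24.0,
                    exptime=60.0, pixscale=0.53)
```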
\subsection{Extinction data}
\label{ext_data}
The Calar Alto Extinction monitor (CAVEX) is an in-house developed instrument
(PI: U.Thiele), which estimates the monochromatic extinction in the V-band
continuously along each night. The system is fully automatic, opening half an
hour after the beginning of the astronomical night and closing half an hour
before its end. It continuously points towards the north (the polar star),
taking images of $\sim$20s exposure time every $\sim$76s, covering a
field-of-view of $\sim$55 degrees with a resolution of 2.3$\arcmin$/pixel. By
tracking the location of 15-20 stars in the field, it estimates the extinction
by measuring their apparent magnitudes across a range of 1.1-2.4 airmasses. It
shares the same humidity and wind speed limits as the telescopes at the
observatory, producing a measurement of the extinction every $\sim$2 minutes if
the night is clear. When the night is cloudy or the extinction has strong
fluctuations, the instrument does not produce reliable data and flags
them. Therefore, the fraction of time without precise extinction measurements
from the CAVEX is a good estimation of the amount of time lost due to
non-astronomical weather conditions at the observatory. We have collected all the
available data from the extinction monitor during the last 4 years, from May
2003 to May 2007. The data comprises 214193 individual measurements,
corresponding to 1044 nights from a total of 1478 nights included in this
period.
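Estimating an extinction coefficient from apparent magnitudes at different airmasses follows the standard Bouguer-law approach; the sketch below assumes a simple least-squares fit of $m(X) = m_0 + kX$, which may differ in detail from the actual CAVEX pipeline:

```python
def extinction_coefficient(airmasses, magnitudes):
    """Least-squares Bouguer fit m(X) = m0 + k*X.
    Returns (m0, k), where k is the extinction in mag/airmass."""
    n = len(airmasses)
    mx = sum(airmasses) / n
    my = sum(magnitudes) / n
    sxx = sum((x - mx) ** 2 for x in airmasses)
    sxy = sum((x - mx) * (y - my) for x, y in zip(airmasses, magnitudes))
    k = sxy / sxx
    m0 = my - k * mx
    return m0, k

# Synthetic star sampled over the 1.1-2.4 airmass range covered by
# CAVEX, with a true extinction of 0.2 mag/airmass (illustrative):
X = [1.1, 1.4, 1.8, 2.1, 2.4]
m = [10.0 + 0.2 * x for x in X]
m0, k = extinction_coefficient(X, m)
```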
CAVEX estimates the total extinction in a single band. To characterize
the extinction curve it is necessary to measure the extinction at
different wavelengths. In the Summer of 2006 (27/07/06-15/08/06) and
the Winter of 2006-2007 (17/12/06-13/01/07), an instrument built with
the sole purpose of estimating this extinction curve was installed,
named the Extinction Camera and Luminance Background Register
(EXCALIBUR, PI: J.Aceituno). EXCALIBUR is a robotic extinction
monitor able to perform quasi-simultaneous photometric observations in 8
narrow bands covering the wavelength range between 3400\AA\ and
10200\AA, characterizing the extinction curve in this wavelength range
(although only 6 of the bands were operative when installed at Calar
Alto). The instrument was built to estimate the aerosol abundance
based on the shape of the extinction curve (P\'erez-R\'amirez
2007a,b). It estimates $\sim$16 extinction coefficients for each of
the sampled bands per hour, and an average of $\sim$160 extinction
coefficients per band for each night. It was operative for a total of
6 nights in the Summer season and 14 nights in the Winter season. We
collected all these data for the current study.
\subsection{Seeing data}
\label{seeing_data}
The night-sky seeing is measured by a Differential Image Motion
Monitor (DIMM, Aceituno 2004) at the Calar Alto observatory since
August 2004, and it is nowadays a fully automatic instrument. It measures the
seeing at the wavelengths corresponding to the Johnson $V$-band. This
DIMM was calibrated with a previous one installed in the observatory
\cite{vern95}, which, at the same time, was also calibrated with a
Generalized Seeing Monitor in May 2002 \cite{ziad05}, when they both
estimated a median seeing of 0.90$\arcsec$ for that time period. To
study the possible evolution of the seeing at the observatory and its
possible dependencies on other night-sky parameters, we collected
all the available seeing measurements on the time period between
January 2005 and December 2006. Contrary to the CAVEX, the DIMM has
more severe limitations to be operative, and it closes automatically
when the humidity is larger than 80\% or the wind speed exceeds 12 m
s$^{-1}$, which comprises $\sim$50\% of the time. The DIMM produces
an estimation of the transversal and horizontal seeing in each
measurement. Whenever there is an occasional large difference between
both estimations (not due to a dominant wind component), it is most
probably not due to atmospheric effects, but rather to mechanical
oscillations. Due to the inherent stability of a DIMM system against this kind
of oscillations, these cases comprise a relatively small number
($\sim$5\%). They have been excluded from the final dataset. We
finally collected a total of 213622 seeing measurements
distributed over 335 nights during the considered time period.
\section{Analysis and Results}
\label{ana}
We describe here the analysis performed on each of the different collected
datasets.
\subsection{Night Sky Spectrum}
\label{ana_spec}
The data included in the final sample of night-sky spectra for
moonless dark nights described in Section \ref{spec_data} were
combined to create a typical moonless night-sky spectrum at the
observatory. To do so, each spectrum was normalized to the mean
intensity (of the sample) at 5200\AA\ (0.13$\times$10$^{-16}$ erg~s$^{-1}$~cm$^{-2}$~\AA$^{-1}$). Then the final
spectrum was produced by averaging the flux of these normalized
spectra at each sampled wavelength. Figure \ref{spec} shows the
resulting spectrum, covering a wavelength range between 3700 and
7933\AA. This is the first time that a typical night-sky spectrum is
published for the Calar Alto observatory \footnote{The final reduced
FITS file can be downloaded from the webpage:
http://www.caha.es/sanchez/sky/}. The emission lines clearly
identified in the spectrum have been labeled with their corresponding
atomic name and wavelength. Several of the distinctive features of
the night-sky spectrum are due to airglow, although a substantial
fraction is due to light-pollution.
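The combination procedure described above (normalization at 5200\AA\ followed by averaging) can be sketched in a few lines of Python. This is a minimal illustration with synthetic flat spectra; the function name and the toy flux levels are our own, only the reference wavelength and flux come from the text:

```python
import numpy as np

def combine_sky_spectra(spectra, wave, ref_wave=5200.0, ref_flux=0.13e-16):
    """Normalize each spectrum to a common intensity at ref_wave
    (5200 A here, 0.13e-16 erg/s/cm^2/A) and average the normalized
    fluxes at each sampled wavelength."""
    idx = np.argmin(np.abs(wave - ref_wave))      # pixel closest to 5200 A
    normalized = [s * ref_flux / s[idx] for s in spectra]
    return np.mean(normalized, axis=0)

# toy example: three flat spectra with different overall levels
wave = np.linspace(3700.0, 7933.0, 1000)
spectra = [np.full_like(wave, level) for level in (1e-17, 2e-17, 4e-17)]
combined = combine_sky_spectra(spectra, wave)
```

For flat input spectra every normalized spectrum equals the reference flux at all wavelengths, so the average does too; with real data the averaging suppresses night-to-night variations of individual features.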
The airglow is emitted by atoms and molecules in the upper atmosphere which
are excited by solar UV radiation during the day and twilight (Ingham 1972).
Airglow is the most important component of the light of the night sky. It
produces the 5577\AA\ and 6300\AA\ lines from OI (which are stronger near the
twilight). It contributes to the ubiquitous 5890-6\AA\ NaD doublet, although
this feature is heavily contaminated by light-pollution from low- and
high-pressure street-lamps. Airglow is also responsible for the OH
rotation/vibration bands in the red and IR, known as the Meinel bands (Meinel
1950), visible at the redder wavelengths of the spectrum (see Fig.
\ref{spec}). In addition it produces a pseudo-continuum in the blue from
overlapping O$_2$ bands (2600-3800\AA) and in the green from NO$_2$ bands
(5000-6000\AA). A more detailed description of the effects of the airglow on
the night-sky emission can be found in Benn \& Ellison (1998a). An atlas of the
airglow from 3100\AA\ to 10000\AA\ was presented by Ingham (1962) and
Broadfoot \& Kendell (1968).
Other contributions to the night-sky spectrum are the zodiacal light,
the starlight, and the extragalactic light, which increases the
background continuum emission (see Benn \& Ellison 1998a and
references therein for a more detailed explanation on their
effects). All of them, including the airglow, comprise the natural
processes that produce the night-sky spectrum at any astronomical
site. In addition to them, the night-sky spectrum can be affected by
the light-pollution due mostly to the street-lights of populated areas
near the observatories. Light pollution arises principally from
tropospheric scattering of light emitted by sodium and mercury-vapour
and incandescent street lamps (McNally 1994; Holmes 1997).
Typical spectra of the three more used types of street-lamps were presented by
Osterbrock et al. (1976): the sodium low-pressure lamps, the sodium
high-pressure lamps and the mercury lamps. The first of the three is the one
with least impact on astronomical observations, since it produces most of its
light concentrated in the 5890-6\AA\ NaD and 8183-95\AA\ NaD emission
lines. Therefore only a small number of astronomical observations are
strongly affected by them. On the other hand, the high-pressure sodium lamps
emit most of their light in a broad NaD line centred at $\sim$5890\AA, with a
FWHM of $\sim$400\AA, that shows a central reversal. They also show a strong
8183-95\AA\ emission line, and fainter emission lines at 4494-8\AA, 4665-9\AA,
4758-52\AA, 4979-83\AA, 5149-53\AA, 5683-88\AA, 6154-61\AA\ and
7665,7699\AA. The high-pressure sodium lamps may strongly affect the quality
of any astronomical observation. Finally, the mercury lamps produce narrow
lines at 3651/63\AA, 4047\AA, 4358\AA, 5461\AA, 5770\AA\ and 5791\AA, together
with broad features at 6200\AA\ and 7200\AA\ of FWHM$\sim$100\AA, from the
phosphor used to convert UV to visible light. They also produce a weak
continuum emission over the whole visible range. Some mercury pollution lines
can strongly affect certain astronomical studies: (i) the 4358\AA\ Hg line
strongly affects any attempt to measure the emission of the 4364\AA\ [OIII]
line in any object in the Galaxy. This line is fundamental for the estimation
of the electron temperature of galactic nebulae. (ii) the 5460\AA\ Hg line
lies in the centre of the $y$-band of the $uvby$ Str\"omgren photometric
system, which may affect the programs devoted to the study of stellar
populations that use this filter set.
Other sources of light pollution are the incandescent lamps and the
high-pressure metal halide lamps. The spectrum of the former consists of
continuum emission only, and it is difficult to identify in a night-spectrum.
The latter are nowadays frequently used in the illumination of sport stadiums
and architectural monuments (whose illumination can be considerable, since
the lamps are normally oriented towards the sky). These high-pressure metal halide
lamps exhibit some Scandium, Titanium and Lithium emission lines, that are
characterized by a blue edge due to molecular bands (General Electric 1975; Lane
\& Garrison 1978; Osterbrock et al. 1976).
The night-sky spectrum shown in Fig.\ref{spec} shows clear evidence of
strong light pollution from all the street-lamps described before. It
shows strong Mercury lines all over the spectrum, the typical emission
lines and features of high-pressure sodium lamps, with a well detected
broad emission at $\sim$5900\AA, and a strong NaI emission line at
5893\AA\ indicative of low-pressure sodium lamps. Furthermore, it also
presents the typical emission lines corresponding to high-pressure
metal halide lamps. All this pollution comes from the populated areas
near the observatory, in particular from Almeria ($\sim$250000
inhabitants), 40 km towards the south, and smaller towns at the north
(like Baza, $\sim$21000 inhabitants, and Macael, $\sim$6000
inhabitants). There are well-established estimations of the contribution
of city lighting to dark-sky brightness (e.g., Treanor 1973, Walker
1973, Yocke et al. 1986, Garstang 1991, and references
therein). However, its contribution to the spectrum is more complex,
since it depends on the particular kind of lamps used for street
illumination. Most major observatories near populated areas have
some kind of night-sky protection laws with the aim of reducing the
effects of light pollution by controlling the fraction of light that
escapes towards the sky and the kind of lamps used (normally they
promote the use of low-pressure sodium lamps). The Calar Alto
Observatory does not benefit yet from any local sky-protection law
which regulates the street illumination, with the corresponding
effects that can be appreciated in the night-sky spectrum.
To estimate the contribution of light pollution to the night-sky spectrum we
derived the flux intensity corresponding to each of the detected emission
lines. Each line was fitted with a single gaussian function using FIT3D
\cite{sanc06b}, following the same procedure described in \cite{sanc07b}. Table
\ref{tab_line} lists the integrated flux for each of the detected emission
lines shown in Fig.\ref{spec}, together with the identification of the line
and the nominal wavelength. In addition to the emission lines, the flux
intensity of the sodium broad-band emission at $\sim$5900\AA\ was also
estimated by fitting the feature with a single broad gaussian function. The
result is also listed in Table \ref{tab_line}. All the fluxes were converted
to Rayleighs following the conversion formulae by Benn \& Ellison (1998a).
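A single-Gaussian fit of the kind applied by FIT3D can be sketched as follows. This is only an illustrative substitute, not the FIT3D implementation: for noiseless data a Gaussian line can be fitted exactly by a parabola in log-flux, and the integrated flux recovered as amp$\times\sigma\sqrt{2\pi}$. The synthetic Hg 4358\AA\ line parameters are hypothetical:

```python
import numpy as np

def fit_gaussian_line(wave, flux):
    """Fit a single Gaussian to an isolated emission line by fitting a
    parabola to log(flux) over the line core (exact for noiseless
    Gaussian data); returns the integrated flux amp*sigma*sqrt(2*pi),
    the centre and the width."""
    mask = flux > flux.max() * 1e-3          # keep the line core only
    w0 = wave[np.argmax(flux)]               # pivot for numerical stability
    a, b, c = np.polyfit(wave[mask] - w0, np.log(flux[mask]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * a))        # a = -1/(2 sigma^2)
    cen = w0 - b / (2.0 * a)
    amp = np.exp(c - b * b / (4.0 * a))
    return amp * sigma * np.sqrt(2.0 * np.pi), cen, sigma

# synthetic Hg 4358 A pollution line: amplitude 1.0, sigma 2.0 A
wave = np.linspace(4340.0, 4376.0, 400)
flux = 1.0 * np.exp(-0.5 * ((wave - 4358.0) / 2.0) ** 2)
f_int, cen, sigma = fit_gaussian_line(wave, flux)
```

With real, noisy data a nonlinear least-squares fit (as FIT3D performs) is preferable; the log-parabola trick above is most useful as a fast initial guess.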
These values can be compared with those obtained at other
astronomical sites. E.g., Pedani (2005) presented a study of the
night-sky spectrum at the Observatory of the Roque de los Muchachos,
La Palma. The intensities of the lines produced by the Mercury and
low-pressure Sodium lamps found in our spectrum are only comparable to
those at La Palma when pointing in the directions of the
most polluting towns on that island. On the other hand, the
contribution of the lines produced by the high-pressure Sodium lamps
is much lower, being comparable to that of the less polluted areas of
the sky at La Palma. This may indicate that most of the street lamps
used in Almeria are Mercury and low-pressure Sodium lamps, rather than
high-pressure Sodium ones, and therefore they only affect specific
wavelength ranges (and science programs). Finally, we clearly see lines
produced by high-pressure metal halide lamps, only marginally (and
recently) detected in the sky spectrum at La Palma (Pedani 2005).
Once the contribution of each emission line to the
night-sky spectrum is determined, it is possible to decontaminate it from the
pollution lines (i.e., all but the OI ones), creating a {\it clean}
night-sky spectrum. This can be done directly for the Mercury,
Scandium, Titanium and Lithium lines, since all their emission is
produced by light pollution. However, in the case of the Sodium lines
there is a natural contribution due to the airglow. We lack a direct
measurement of this natural contribution at Calar Alto, which could only
be achieved nowadays with a general blackout. However, its natural
contribution should not be significantly larger than at other
astronomical sites. Benn \& Ellison (1998a) estimated the natural
contribution of the broad Sodium emission band at $\sim$0.03 mag in V
and $\sim$0.02 mag in R for the La Palma observatory. Similar
contributions are expected from the NaI 5893,6 emission
lines. Therefore, the natural contribution of the Sodium due to the
airglow is expected to be of the order of $\sim$0.04 mag in both
bands. Thus, to first order we can assume that all the detected
Sodium emission in our sky spectrum is due to pollution,
and once its contribution is determined we can correct it for the expected
contribution of natural emission, if needed.
Table \ref{tab_cont} lists the estimated contribution of the light pollution
to the sky background in different bands. It includes the $B$, $V$ and
$R$-Johnson filters, and a set of medium band filters selected from those
used by the ALHAMBRA survey (Moles et al. 2007), a major imaging survey
currently ongoing at the Calar Alto observatory. They are included to
illustrate the effects of the pollution lines on the sky background when using
medium/narrow-band filters affected by these lines. The contamination from
light pollution is clearly stronger in the $B$-band than at other astronomical
sites: e.g., $\sim$0.02 mag at La Palma (Benn \& Ellison 1998a), 0.02-0.04 mag
at Kitt Peak (Massey et al. 1990), although there are astronomical sites with
similar or stronger contamination: e.g. Mount Hopkins (see Fig. 2, from
Massey \& Foltz 2000). This contamination can be reduced by a proper light
pollution law that limits the use of the Mercury street lamps. Strong benefits
from such laws have been experienced at different sites (e.g., Benn \& Ellison
1998a; Massey \& Foltz 2000). The contamination in the $V$ and $R$-bands is
slightly stronger than in some other places, like La Palma: 0.05-0.10 mag in the
$V$-band and 0.07-0.12 mag in the $R$-band (Pedani et al. 2005), but it is
similar to or smaller than at other astronomical sites, like Kitt Peak: 0.19-0.33
mag in the $V$-band (Massey et al. 1990) or Mount Hopkins: 0.17 mag in the
$V$-band (Massey \& Foltz 2000). One of the two recommendations of the IAU for
a dark place is that the contribution of the pollution to the Sodium emission
should not exceed in intensity the natural airglow one (Smith 1979). If we
consider that the airglow emission at Calar Alto is similar to that at La Palma
(or at least of the same order), it is clear that the contribution from the
light pollution is much stronger. In this regard Calar Alto does not
fulfill the IAU recommendations for a dark place. A proper light-pollution law,
promoting the use of low-pressure sodium lamps rather than high-pressure
ones, would not reduce the net effect of the light pollution on the
sky-background in these bands, but it would concentrate it in a narrower
wavelength range, affecting fewer observing programs.
Most of the lamps that cause the light pollution also produce a certain level
of continuum emission, in particular the Mercury and high-pressure metal halide
ones. However, this contribution is difficult to estimate. Although we cannot
quantify its effect, the reduction of the use of Mercury, high-pressure Sodium
lamps and high-pressure metal halide lamps in the vicinity of the observatory
would certainly also reduce the sky-background continuum.
The lack of previous similar studies of the night-sky spectrum at the Calar
Alto observatory (to our knowledge) does not allow us to analyze the evolution
of the light pollution over time.
\subsection{Night Sky Brightness}
\label{ana_mag}
The night sky brightness was determined by using both the imaging and
spectroscopic data described before. In the case of the imaging data, the
calibration fields contain at least 5 calibration stars per frame. The
magnitude zeropoint for each image was determined by measuring the count
intensity of each of these stars within a fixed aperture of 8$\arcsec$ radius,
using the IMEXAM task of the IRAF package, and applying the formula:
$$mag_{zero} = mag_{app} + ext + 2.5 {\rm log}_{10} ( counts/t_{exp}) $$
where $mag_{zero}$ is the magnitude zeropoint, $mag_{app}$ is the apparent
magnitude of the calibration star in the corresponding band, $ext$ is the
extinction, derived from the corresponding value measured by the CAVEX that
night and the airmass of the image, $counts$ are the measured counts within
the indicated aperture, and $t_{exp}$ is the exposure time. Since the measured
seeing of the images (FWHM of the field stars) ranges between 0.9$\arcsec$
and 1.3$\arcsec$, this aperture ensures that most of the flux is contained
within it, and no aperture correction was applied. The average of the values
derived for each calibration star in the field is considered the zeropoint of
the image, and the standard deviation from this mean value is considered as
the photometric error. This standard deviation ranges between 0.02 mag and
0.06 mag for each band and each night.
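The zeropoint formula above can be sketched as follows. The star magnitudes, counts and exposure time are hypothetical, and the assumption that $ext = \kappa_V \times$ airmass (with $\kappa_V$ the nightly CAVEX extinction coefficient) is our reading of the text:

```python
import numpy as np

def zeropoint(mag_app, counts, t_exp, kappa, airmass):
    """Zeropoint from one calibration star, following the formula above:
    mag_zero = mag_app + ext + 2.5 log10(counts / t_exp).
    We assume ext = kappa * airmass, with kappa the nightly CAVEX
    V-band extinction coefficient (an assumption, not stated
    explicitly in the text)."""
    return mag_app + kappa * airmass + 2.5 * np.log10(counts / t_exp)

# hypothetical field with three calibration stars on one 60 s V-band image
mags = np.array([14.20, 15.05, 15.80])          # apparent magnitudes
counts = np.array([120000.0, 55000.0, 27000.0]) # counts in 8" apertures
zps = zeropoint(mags, counts, t_exp=60.0, kappa=0.18, airmass=1.2)
zp, zp_err = zps.mean(), zps.std()  # image zeropoint and photometric error
```

The mean and standard deviation over the calibration stars reproduce the image zeropoint and the 0.02-0.06 mag photometric error quoted above.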
The sky brightness was then determined in each image by measuring the mean
counts level in several ($>10$) square apertures of
$\sim$50$\arcsec$$\times$50$\arcsec$, in areas free of targets, using the IMEXAM
task of the IRAF package. The mean value of these measurements is considered as
the count level of the sky brightness, and the standard deviation with respect
to this mean value the error in the count level estimation. Finally, the sky
surface brightness was determined using the formula:
$$SB_{sky} = mag_{zero} - 2.5 {\rm log}_{10} ( counts_{sky}/t_{exp}/scale^2) $$
where $SB_{sky}$ is the surface brightness in magnitudes per square arcsec,
$mag_{zero}$ is the zeropoint described before, $counts_{sky}$ is the sky
counts level estimated and $scale$ is the pixel scale in arcseconds (i.e.,
0.53$\arcsec$ for CAFOS). Note that the magnitude zeropoint was
corrected for the extinction, but the sky brightness was not, following the
convention adopted in most of the recent studies of sky brightness (e.g.,
Walker 1988b; Krisciunas 1990; Lockwood et al 1990; Leinert et al. 1995;
Mattila et al. 1996; Benn \& Ellison 1998a,b). Correcting the sky brightness
for extinction would be appropriate only if the extinguishing layer were known
to be below all sources of sky brightness, which is not the case (Benn \&
Ellison 1998a,b).
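The surface-brightness formula above can be sketched in code; the numerical inputs are hypothetical, only the CAFOS pixel scale of 0.53$\arcsec$ comes from the text:

```python
import numpy as np

def sky_surface_brightness(mag_zero, counts_sky, t_exp, scale):
    """Sky surface brightness in mag/arcsec^2 following the formula
    above: SB = mag_zero - 2.5 log10(counts_sky / t_exp / scale^2),
    with counts_sky in counts per pixel and scale in arcsec per pixel."""
    return mag_zero - 2.5 * np.log10(counts_sky / t_exp / scale ** 2)

# hypothetical numbers for a CAFOS-like setup (0.53 arcsec/pixel)
sb = sky_surface_brightness(mag_zero=22.7, counts_sky=85.0,
                            t_exp=60.0, scale=0.53)
# roughly 20.9 mag arcsec^-2 for these illustrative inputs
```

The division by $scale^2$ converts counts per pixel into counts per square arcsecond before the magnitude conversion.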
In addition to the estimations of the sky surface brightness obtained using
direct imaging, we also derived the sky surface brightness by using the PPAK
spectroscopic information for the only fully moonless night of our sample
(02/06/2005). The two night-sky spectra obtained that night cover the
wavelength range of the $B$, $V$ and $R$-band filters. We derived the sky flux
intensity for each of these filters by convolving each spectrum with the
corresponding filter transmission curves listed in the Asiago Database on
Photometric Systems (Moro \& Munari 2000). Then the fluxes were transformed to
magnitudes by using the zeropoints listed in Fukugita et al. (1995). The
mean value of the two derived magnitudes in each filter is adopted as the sky
surface brightness of that night, and the absolute difference between both
magnitudes as the error.
Table \ref{tab_mag} lists the sky-surface brightness obtained from both the
imaging and spectroscopic data at the different bands for each moonless
night. In addition, it lists previous results on the sky-brightness at
different astronomical sites, including the results presented by Leinert et
al. (1995) for Calar Alto, derived from broad-band photometry obtained along
three nights in 1990.
\subsubsection{Dependency on the zenith distance}
There is a wide range of sky surface brightness values for the different
bands, considering that all the measurements were obtained in full moonless
nights, without twilight contamination. The data from the first night, derived
using PPAK data, are very similar to those from the third night, derived
using CAFOS data, the sky in the former being slightly brighter in the
$B$-band. The sky brightness of the second night is brighter in all the
bands. Looking back at the data, we realize that the airmass of the images of
the 1st night ($\sim$1.7) corresponds to an elevation much lower than that of
the 2nd and 3rd night data ($\sim$1.3 and $\sim$1.2, respectively). The sky
brightness increases with the airmass for two different reasons. One is a
natural effect of the airglow, which is brighter at low elevations because the
line of sight intercepts a larger number of atoms in the airglow layer
(Garstang 1989; Benn \& Ellison 1998a, and references therein). A second
effect is the increase of light pollution when pointing towards highly populated
areas at low airmass. Walker (1971,1991) and Garstang (1989) estimate
the increase in the brightness at an airmass of $\sim$1.4, in the direction
of a populated area of P inhabitants at a distance of D km, to be $\sim \frac{P
D^{-2.5}}{70} $ mag. That would correspond to $\sim$0.3 mag when pointing
directly towards the south, where the largest nearby city
(Almeria) is located. However, since our observed fields mostly point towards the
east, its contribution is more difficult to estimate. It is not possible to
know the actual contribution of the light pollution to the continuum
brightness at the zenith. Therefore we do not know whether Calar Alto fulfills (or
not) the other IAU recommendation for a dark site, namely that this
contribution must be lower than $\sim$0.1 mag (Smith 1979).
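The Walker/Garstang scaling quoted above is a one-line computation; the function name is our own, the population and distance of Almeria come from the text:

```python
def pollution_brightening(population, distance_km):
    """Walker/Garstang scaling: sky brightening (mag) at airmass ~1.4
    towards a town of the given population at a distance of D km,
    approximately P * D**-2.5 / 70."""
    return population * distance_km ** -2.5 / 70.0

# Almeria: ~250000 inhabitants at ~40 km towards the south,
# giving the ~0.3 mag brightening quoted above
dm_almeria = pollution_brightening(250000, 40.0)
```

The steep $D^{-2.5}$ dependence is why the much smaller but closer towns to the north can still matter.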
It is possible to derive an approximate expression for the sky brightness
dependency on the zenith distance due to natural effects based on the results
of Garstang (1989), as already pointed out by Krisciunas \& Schaefer
(1991). This expression can be used to correct the measured values and derive
a more appropriate estimation of the zenithal sky brightness (Benn \& Ellison
1998a; Patat 2003). Patat (2003) derives the following formula (Appendix C of
that article) for this correction:
$$\Delta m = -2.5 {\rm log}_{10}[(1-f)+f X]+\kappa (X-1)$$
where $\Delta m$ is the increase in sky brightness at a certain band and
airmass ($X$), $f$ is the fraction of the total sky brightness generated by
airglow, (1-$f$) being the fraction produced outside the atmosphere (hence
including zodiacal light, faint stars and galaxies), and $\kappa$ is the
extinction coefficient at the corresponding wavelength.
We applied this correction to our data, using the typical extinction curve at
Calar Alto (following sections) normalized to the corresponding $\kappa_V$
extinction coefficient of each night. A typical value of $f=$0.6 was used for
this correction (Patat 2003). Once applied, there is a significant reduction
of the dispersion between the values obtained for each night. This indicates
that the correction works rather well, despite the fact that it does not take
into account the effects of light pollution. The mean values of the sky
brightness at the zenith after correction for each band are also listed in
Table \ref{tab_mag}.
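The Patat (2003) correction described above can be sketched as follows; the measured surface brightness and the $\kappa=0.18$ extinction coefficient are illustrative, while $f=0.6$ is the value adopted in the text:

```python
import numpy as np

def delta_m(X, kappa, f=0.6):
    """Patat (2003) formula quoted above: change of the sky brightness
    (in mag) at airmass X relative to the zenith,
    dm = -2.5 log10[(1-f) + f*X] + kappa*(X-1),
    with f the airglow fraction of the total sky brightness."""
    return -2.5 * np.log10((1.0 - f) + f * X) + kappa * (X - 1.0)

# correcting a measurement taken at airmass 1.7 back to the zenith;
# both numbers below are hypothetical
sb_measured = 21.5                 # mag/arcsec^2 at X = 1.7
dm = delta_m(X=1.7, kappa=0.18)    # negative: sky is brighter at X = 1.7
sb_zenith = sb_measured - dm       # fainter (larger magnitude) at the zenith
```

At the zenith ($X=1$) both terms vanish and the correction is zero, as it must be; at higher airmass the airglow term brightens the sky while the extinction term partially offsets it.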
\subsubsection{Variation of the sky-brightness over time}
The night sky-brightness at Calar Alto shows no significant change over the last
15 years, when compared with the results of Leinert et al. (1995). They did
not apply any correction for the dependency on the zenith distance to their
data (Table 6 of that article). Therefore we must compare with the mean values
without correction. The only band where an increase of the
sky-brightness is seen is the $U$-band ($\sim$0.4 mag brighter). However, if we take
into account that we only have data for this band at low elevation, we cannot
consider these results conclusive. For the remaining bands the sky seems to be
$\sim$0.2 mag brighter in the $B$-band, $\sim$0.2 mag fainter in the $R$-band
and unchanged in the $V$- and $I$-bands, when compared with the mean
values derived for the three nights of our sample. However, none of these
differences seems to be significant, lying within the errors of our
measurements. When comparing with the two darkest nights with data obtained at
high elevation the differences (if any) disappear for the $B$ and $V$-bands,
and the sky seems to be even darker in the $R$ and $I$-bands. A possible caveat
to this comparison is that the results listed in Leinert et al. (1995) were
obtained not exactly at the solar minimum, which may produce an increase of
the sky-brightness. However, although their broad-band data were obtained in
1990, they also obtained intermediate-band data in 1993, and their
synthesized broad-band sky-surface brightnesses are similar to those obtained in
the 1990 nights.
\subsubsection{Comparison with other astronomical sites}
The sky surface brightnesses at the Calar Alto observatory in
different bands, listed in Table \ref{tab_mag}, are remarkably similar
to those of many other astronomical sites. In the
optical wavelength range ($U$, $B$, $V$ and $R$-bands), Calar Alto seems
to be a particularly dark site, comparable to Mauna Kea. The fact that
both places seem darker than Paranal may be an artifact, since the data
presented by Patat et al. (2003) were taken during the maximum of
solar activity. This result is in any case remarkable, considering the
strong light pollution present in the Calar Alto spectra, whose effect
is particularly strong in the $B$, $V$ and $R$-bands
(Tab. \ref{tab_cont}). Most of the listed observatories have little
light pollution or benefit from specific protection laws against
it. Such laws have been demonstrated to be a tremendously useful tool to preserve
or increase the quality of the night-sky (e.g., Benn \& Ellison
1998a,b; Massey \& Foltz 2000; Walker \& Schwarz 2007). If the effects
of light pollution could be reduced in the vicinity of Calar Alto it
would become a particularly dark site for optical observations.
On the other hand, the sky is clearly brighter in the $I$-band than at any
other astronomical site listed in this table. Despite the fact that the
observatory is located in the most arid place in Europe, in the vicinity of a
desert (the Tabernas desert), the water vapor Meinel bands are particularly
strong. The humidity at Calar Alto is higher than at other astronomical sites,
like Paranal or Mauna Kea, although there are frequent epochs of low humidity
($<$20\%) in the Summer. The height of the observatory, $\sim$2200 m above
sea level, normally places it under the inversion layer, which has a particularly
strong impact on the strength of the water vapor emission lines. Both combined
effects can explain the rise of the sky-brightness in the $I$-band. It is
important to note here that this effect has a relatively reduced impact on
near-infrared observations in the $J$, $H$ and $K$-bands.
\subsection{Extinction coefficients}
\label{ana_dust}
The median $V$-band extinction at the Calar Alto observatory for the time
period covered by the CAVEX data (Section \ref{ext_data}) was $\sim$0.18 mag,
with a mean value of $\sim$0.21$\pm$0.08 mag. This value is slightly smaller
than that previously reported by Hopp \& Fernandez (2002), which was based on a
much smaller sample of data (comprising 74 nights spanning 1986 to
2000). They found that there was an increase of the extinction in the Summer
season, most probably associated with an increase of the aerosols
(i.e., dust) in this period of the year. A similar seasonal pattern has been
observed at other major observatories: e.g., the La Palma observatory is
strongly affected by dust extinction in the Summer when dust from the Sahara
desert ($\sim$400 km away) blows over the Canary Islands (Benn \& Ellison
1998a). Although the Calar Alto observatory is nearer to the Sahara desert
($\sim$250 km away) than La Palma, it is normally out of its main wind
streams, being shielded by the Atlas mountains. On the other hand, it is
located in an arid region nearby a much smaller desert, the Tabernas desert
($\sim$15 km away).
Figure \ref{cavex} shows the evolution of the average $V$-band extinction for
each night along the period of time sampled by the dataset. As already
suspected by Hopp \& Fernandez (2002), there is a clear seasonal pattern. The
typical extinction in the Winter nights is $\kappa_V\sim$0.15 mag, being
mostly restricted to values below 0.2 mag. In Summer time
there is a wider range of extinctions, although in most of the cases the
extinction stays below 0.4 mag. As indicated before, this increase
of the extinction is most probably associated with a rise of the aerosols
(dust) in the atmosphere. We will explore that possibility later.
This seasonal pattern is somewhat similar to the one seen at La Palma. Indeed
the fraction of nights with $\kappa_V>$0.25 mag is similar at both
observatories, $\sim$20\% of the nights. However, there is a major difference:
the fraction of nights with high extinction, $\kappa_V>$0.4 mag, at Calar Alto
is very small, $\sim$3\%, while at La Palma this fraction is $\sim$10\% of
the nights, with frequent peaks of extinction over $\kappa_V>$0.6 mag (Benn
\& Ellison 1998a, Figure 3).
Based on the fraction of nights in which the CAVEX was operative and derived
reliable measurements of the $V$-band extinction ($\kappa_V$), it is estimated
that $\sim$70\% of the nights were astronomically useful in the period covered
by these data (4 complete years). This fraction is remarkably similar to that
of many other astronomical sites (e.g., La Palma, Benn \& Ellison 1998a).
The fraction of fully photometric nights, defined as nights where the $V$-band
extinction varies by less than 20\% throughout the night, was $\sim$30\%.
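The photometric-night criterion above can be sketched as a simple classifier. The function name and the toy extinction sequences are our own, and the way the 20\% variation is measured (peak-to-peak relative to the minimum) is our assumption:

```python
import numpy as np

def classify_night(kappa_v):
    """Classify one night from its sequence of V-band extinction
    measurements: 'photometric' if kappa_V varies by less than 20%
    throughout the night. The exact definition of the variation
    (peak-to-peak over the minimum) is an assumption."""
    kappa_v = np.asarray(kappa_v, dtype=float)
    variation = (kappa_v.max() - kappa_v.min()) / kappa_v.min()
    return "photometric" if variation < 0.20 else "non-photometric"

# a stable Winter night versus a Summer night with rising dust extinction
stable = classify_night([0.14, 0.15, 0.15, 0.16])   # photometric
dusty = classify_night([0.18, 0.25, 0.33, 0.40])    # non-photometric
```

Applied to the nightly CAVEX sequences, a criterion of this form would reproduce the $\sim$30\% fraction of fully photometric nights quoted above.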
\subsubsection{Contributions to the extinction}
The EXCALIBUR data described in Section \ref{ext_data} were used to determine
the typical extinction curve at Calar Alto. This curve was previously studied
by Hopp \& Fernandez (2002), using an inhomogeneous dataset. The extinction
coefficients were analyzed separately for the Summer and Winter seasons due to
the observed seasonal pattern. First, the mean extinction coefficients per
night were derived by averaging all the measured extinction coefficients per
band obtained along each night ($\sim$160 values). Then, the mean extinction
coefficients per season were determined by averaging all the measured
extinction coefficients per band obtained along the nights of each
season. Table \ref{tab_ext} lists the average extinction coefficients
obtained for each season for each of the 6 bands sampled by the instrument,
including their central wavelengths and the standard deviation with respect to
these mean values. As expected the standard deviation in the extinction
coefficients is larger for the Summer data than for the Winter ones, in
agreement with the results shown in the previous section. The EXCALIBUR
results are consistent with the CAVEX ones for the wavelength covered by both
instruments, i.e., $\sim$500 nm. This indicates that the extinction
coefficients listed in Table \ref{tab_ext} are a good representation of the
typical values for each season.
The total extinction is mostly due to three contributions: Rayleigh
scattering by atmospheric atoms and molecules, extinction due to
aerosol particles (mostly dust), and extinction due to Ozone (Walker
1987b). The Rayleigh scattering can be described by
$$\kappa_{RC} = B(p,t,n') \lambda^{-4}$$
where $\lambda$ is the wavelength and $B$ is a constant that mostly depends on
the pressure, the temperature and the normalized refractive index of the air
($n'$, slightly wavelength dependent). In general $B$ can be replaced by its
average value for the mean conditions in a certain astronomical site (e.g.,
Rufener 1986). The aerosol particles produce a similar absorption, which can be
described by
$$\kappa_p = b(h_{obs}) \lambda^{-\alpha}$$
where $b$ is a parameter that depends mostly on the height of the observatory
($h_{obs}$), and $\alpha$ is a power law index that depends on the size of the
aerosol grains. Although an $\alpha\sim$1.3, derived by Siedentopf (1948), is
widely used, we adopted a value of $\alpha=$0.8 for consistency with the
previous study of the extinction curve at Calar Alto (Hopp \& Fernandez
2002). The Ozone extinction is a selective absorption by molecular bands. It
can be approximately described by a broad gaussian function centred at
$\lambda\sim$6000 \AA\ (matching the shape shown in Rufener 1986 and Hopp \&
Fernandez 2002):
$$\kappa_{\rm O3} = C \ {\rm exp} \left[-\left(\frac{\lambda-6000}{1200}\right)^2\right]$$
Each single contribution to the extinction depends on the wavelength and a
particular constant that, in the case of the first two, depends on the height
and the average atmospheric conditions at the observatory. We adopted the
values listed for $\lambda =$5400\AA\ in Rufener (1986), consistent with those
derived by T\"ug (1977) for the La Silla observatory. A similar approach
was followed by Hopp \& Fernandez (2002). The height and average weather
conditions at that observatory are very similar to those of Calar Alto, which
justifies the use of these constants.
Therefore the total extinction curve is a linear combination of these three
contributions:
$$\kappa_\lambda = f_1\ \kappa_{RC} + f_2\ \kappa_p + f_3\ \kappa_{O3}$$
The extinction coefficients for the two seasons were fitted to this linear
combination, deriving the relative contribution of each one ($f_i$) to the
total extinction. Table \ref{tab_ecurve} lists the results from this fitting
analysis, including the relative contribution ($f_i$) derived for each
component for each season. For comparison it also includes the same relative
contributions derived for Calar Alto by Hopp \& Fernandez (2002), and their
compilation of similar results for different astronomical sites.
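The linear decomposition described above can be sketched as an ordinary least-squares fit. The basis normalizations below (pivot at 5400\AA) are illustrative choices, since the fitted factors absorb all constants; only $\alpha=0.8$ and the 6000\AA/1200\AA\ Ozone Gaussian come from the text, and the band wavelengths and input factors are synthetic:

```python
import numpy as np

def extinction_basis(wave, alpha=0.8):
    """Component shapes of the three contributions: Rayleigh
    (lambda^-4), aerosol (lambda^-alpha) and a broad Ozone Gaussian
    centred at 6000 A with a 1200 A width. Normalizations are
    illustrative; the fitted factors absorb any constants."""
    return np.column_stack([
        (wave / 5400.0) ** -4.0,
        (wave / 5400.0) ** -alpha,
        np.exp(-(((wave - 6000.0) / 1200.0) ** 2)),
    ])

def fit_extinction_curve(wave, kappa, alpha=0.8):
    """Linear least-squares fit kappa = f1*R + f2*A + f3*O3."""
    f, *_ = np.linalg.lstsq(extinction_basis(wave, alpha), kappa,
                            rcond=None)
    return f

# synthetic check with six EXCALIBUR-like bands: recover known factors
wave = np.array([3400.0, 4200.0, 5000.0, 5800.0, 7500.0, 10200.0])
f_true = np.array([0.15, 0.05, 0.01])
kappa = extinction_basis(wave) @ f_true      # noiseless synthetic curve
f_fit = fit_extinction_curve(wave, kappa)
```

Because the model is linear in the three factors $f_i$ once $\alpha$ is fixed, no nonlinear minimization is needed; with only six bands, however, the Ozone term is weakly constrained, consistent with the caveat discussed above.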
The contribution of the Rayleigh scattering seems to be rather constant for
the two considered seasons, as is the Ozone absorption. On the other hand, the
Aerosol contribution rises considerably in the Summer time, being responsible
for the increase of the extinction in this season, as anticipated. The
estimated contributions of the Rayleigh scattering and the Aerosol extinction
are very similar to the values reported by Hopp \& Fernandez (2002) for the
Winter season, which may indicate that both contributions have not changed
considerably with time (their data correspond to 1986-2000). Both
contributions are also similar to the ones derived for other major
astronomical sites. On the other hand, the contribution of the Ozone absorption
seems to be stronger than in previous measurements and at other astronomical
sites. Unfortunately, the coverage of EXCALIBUR when mounted at Calar Alto did
not allow an accurate sampling of the wavelength range most strongly affected
by the Ozone absorption, which prevents us from being conclusive
in this respect.
Figure \ref{ext_curve} shows the distribution of the extinction coefficients
along the wavelength for the two season datasets. It also includes the best
fitted linear combination of the three components to the extinction and each
of these components scaled to its relative contribution to the total
extinction. Despite the apparent increase of the Ozone absorption, its actual
contribution to the total extinction at any wavelength is very limited, being
negligible in comparison with the other two contributions. Indeed, the fitting
process yields equally good results when this contribution is removed. As
already indicated, the Rayleigh scattering contribution is very similar for
both datasets, being the dominant contribution in the Winter season, i.e., in
conditions of low extinction. In Winter it is responsible for $\sim$85\%
of the extinction in the $V$-band, with only $\sim$11\% due to Aerosol
extinction and $\sim$4\% to Ozone absorption. On the other hand, in Summer its
contribution drops to $\sim$63\%, with $\sim$35\% due to Aerosol extinction
and $\sim$2\% due to Ozone absorption. Curiously, the contribution of Aerosols
to the extinction in conditions of minimum extinction is much smaller
than that found at other astronomical sites, like La Silla (Burki et
al. 1995).
\subsubsection{The Extinction Curve}
As shown in the previous section, the extinction curve depends strongly on
the relative contribution of each of the three major components to the
extinction. Each of them has a different dependency on the wavelength, and
two of them depend also on the particular atmospheric conditions at the
observatory. Therefore, it is difficult to derive a precise extinction curve
valid for every night that depends only on a reduced number of parameters.
However, it is still possible to look for an approximate expression for the
extinction curve that provides a useful estimation of the extinction at any
wavelength and that depends on a single parameter: the $V$-band
extinction, measured each night by the CAVEX.
Based on the results of the previous section, it is known that the Ozone
absorption has a marginal effect on the total extinction at any
wavelength. Therefore we have not considered it for our approximate
expression. It is also known that the contribution of the Rayleigh scattering
is almost constant along the year, and therefore the variations in the
extinction are controlled by the amount of Aerosol particles (i.e.,
dust). Based on this assumption, an approximate expression for the extinction
curve can be derived by considering that the extinction in the $V$-band is due
to a fixed contribution of Rayleigh scattering (the average for both seasons)
and a variable contribution due to Aerosol extinction. The derived expression
is:
$$\kappa_\lambda \sim 0.0935\ \left(\frac{\lambda}{5450}\right)^{-4} + (0.8\,\kappa_{\rm V}-0.0935)\ \left(\frac{\lambda}{5450}\right)^{-0.8} $$
The typical differences found in the extinction coefficients derived using
this formula and the more precise decomposition presented in the previous
section are of the order of $\sim$10\%.
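The one-parameter expression can be evaluated directly. A minimal sketch (wavelength in \AA; the $V$-band extinction $\kappa_V$ is the nightly CAVEX measurement):

```python
def kappa(lam, k_v):
    """Approximate extinction coefficient at wavelength lam (Angstrom),
    parametrized only by the measured V-band extinction k_v."""
    rayleigh = 0.0935 * (lam / 5450.0) ** -4                  # fixed contribution
    aerosol = (0.8 * k_v - 0.0935) * (lam / 5450.0) ** -0.8   # variable dust term
    return rayleigh + aerosol

# Typical Winter value k_v ~ 0.15 mag: extinction rises steeply to the blue.
print(kappa(4000.0, 0.15), kappa(5450.0, 0.15), kappa(7000.0, 0.15))
```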
\subsection{Atmospheric Seeing}
\label{ana_seeing}
The seeing data described in section \ref{seeing_data} were used to determine
an average seeing for each night comprised in the dataset (spanning
$\sim$2 years). The median atmospheric seeing for the whole time period was
$\sim$0.90$\arcsec$, with $\sim$70\% of the nights under subarcsecond seeing
($<$1$\arcsec$). Figure \ref{seeing1} shows the nightly averaged seeing
distribution over time. The epochs without data in April 2005 and March
2006 were due to repairs to the hut of the DIMM. There is a mild seasonal
pattern in the seeing distribution, with the best seeing concentrated in the
Summer season. To further investigate this possibility we created the seeing
histogram for all the data comprised in the dataset and for two different
subsets of data corresponding to the Summer season (May-September) and the
Winter season (the rest of the months). Figure \ref{seeing2} shows the three
histograms. The differences in the seeing distribution for the Summer (median
seeing $\sim$0.87$\arcsec$) and the Winter seasons (median seeing
$\sim$0.96$\arcsec$) are clearly apparent. Not only is the median seeing
better in the Summer season, but the chances of having better seeing on a
Summer night are also larger.
Table \ref{tab_seeing} lists the median seeing estimated for both the total
sample and the two season subsamples. For comparison purposes it also lists
the atmospheric seeing measured at different astronomical sites world-wide,
ordered by increasing seeing. Although the median seeing at Calar Alto is
larger than that of some major astronomical observatories (Mauna Kea, La
Palma), it is actually better than at many other astronomical sites (e.g.,
MtGraham, Paranal).
\section{Conclusions}
\label{conc}
We have characterized the main properties of the night-sky at the
Calar Alto observatory, comparing them, when possible, with similar
properties of other different astronomical sites. The main results of
this article can be summarized in the following points:
\begin{itemize}
\item An average night-sky spectrum for the moonless dark-time at the
observatory has been presented for the first time. This spectrum, which covers
the optical wavelength range (3700-7933\AA), is distributed freely to the
community. Airglow and light-pollution emission lines are detected in this
spectrum. The strength of the light-pollution lines has been measured,
estimating their contribution to the emission in different bands. In
comparison with other sites the Mercury lines are particularly strong. The
contribution of the light pollution to the Sodium emission is far stronger
than its natural emission. In this regard Calar Alto, like other major
astronomical sites (e.g., La Palma, Pedani 2005), does not fulfill the IAU
recommendations for a dark astronomical site (Smith 1979).
\item The moonless night-sky brightness at the zenith has been determined for
the $U$, $B$, $V$, $R$ and $I$-bands. There was no appreciable change in the
sky-brightness over the last 15 years. In comparison with other astronomical
sites, Calar Alto shows a particularly dark sky in the optical bands,
similar to that of MtGraham or Mauna Kea. The sky could be even
darker if it were possible to reduce the light pollution in the optical
bands, which would place Calar Alto among the darkest astronomical sites. On the
other hand, the sky is brighter in the $I$-band, mostly due to the strength
of the water-vapor Meinel bands.
\item The extinction, measured along the last 4 years, shows a
seasonal dependency with a typical value of $k_V\sim$0.15 mag in
Winter time and a wide range of values in Summer, most of them
restricted to $k_V<$0.4 mag. This seasonal pattern, caused by
Saharan dust, is similar to the one found in La Palma, but with a
smaller range of values in Summer time. The analysis of the typical
extinction coefficients at different wavelengths for each season
indicates that the rise of the extinction in Summer is due to an
increase of Aerosols (dust) in this period of the year. Due to the
reduced contribution of the Ozone absorption to the extinction, and
the stability of the contribution of the Rayleigh scattering along
the year, it was possible to derive an approximate expression for the
extinction curve parametrized only by the $V$-band extinction.
\item The fraction of astronomically useful nights, when the weather was good
enough to allow an accurate measurement of the extinction, was $\sim$70\% of
the nights in the last 4 years. This fraction is similar to the one found in
La Palma (Benn \& Ellison 1998). The fraction of these nights that were
photometric was $\sim$30\%.
\item The median seeing along the last 2 years was 0.90$\arcsec$, being
slightly better in Summer (0.87$\arcsec$) than in Winter
(0.96$\arcsec$). The seeing was better than 1$\arcsec$ on $\sim$70\% of
the nights. Although this seeing is slightly worse than at some astronomical
sites (e.g. Mauna Kea, La Palma), it is better than the current seeing at
Paranal or MtGraham, two astronomical sites where 10m-class telescopes are
currently in operation or under construction.
\end{itemize}
We conclude that Calar Alto remains a good astronomical site, similar
in many aspects to places where 10m-class telescopes are in operation
or under construction. It would strongly benefit from a sky
protection law that reduced the light pollution, particularly that due
to Mercury and high-pressure Sodium street lamps. Such a law has been
under discussion by the local Andalusian government during the last few
years and we hope it will soon come into force.
The fact that Calar Alto is placed in continental Europe is a major
advantage in comparison with other European observatories away from
the continent, since both the operational and development costs are
significantly smaller.
For both reasons we consider that this observatory is a good candidate
for the location of future large aperture optical telescopes.
\section{Acknowledgments}
SFS thanks the Spanish Plan Nacional de Astronom\'\i a program
AYA2005-09413-C02-02 of the Spanish Ministry of Education and Science, and
the Plan Andaluz de Investigaci\'on of the Junta de Andaluc\'{\i}a as research
group FQM322.
\section{Introduction}
Four-dimensional quantum field theories play a crucial role in the mathematical
description of the fundamental forces of nature. The path integral formalism developed
by R.P. Feynman has been established for decades as a means to quantize classical
field theories and thus to formulate quantum field theories.\\
A quantum field theory (QFT) describes the behavior of certain particles and their
interactions. In the absence of an interaction the path integral of a QFT is usually
trivial. It is mostly the interaction term that makes the path integral challenging
to evaluate. In cases where the coefficient of the interaction term, the coupling
constant, is small enough, a perturbative treatment of the path integral is possible
and often sufficient.\\
However, many interesting quantities, like e.g. the hadron spectrum, decay constants,
certain matrix elements and form factors, have to be calculated in a regime of
strong coupling, where a perturbative approach must fail. In such
situations it is necessary to treat the path integral non-perturbatively. Because of
the lack of closed-form solutions the path integral can only be evaluated numerically.
In order to do so, it is necessary to give the path integral a mathematically well
defined meaning. A straightforward method is to discretize space and time by
introducing a Euclidean space-time lattice with a fixed spacing between two
neighboring lattice points, the lattice spacing.
Restricting the system to a finite lattice extension (and applying certain boundary
conditions), the
infinite-dimensional path integral is converted to a finite-dimensional integral.
(In principle, one still has to perform the transition to zero lattice spacing and
therefore to infinitely many space-time points at the end.)\\
Such lattice path integrals can easily have dimensions of $O(10^9)$
(e.g. simulations of lattice-discretized quantum chromodynamics (QCD), the theory of
strong interactions of elementary particles).
The high dimensionality of the problem restricts the spectrum of applicable algorithms
to Monte Carlo-based methods. In particular, Markov chain Monte Carlo (MC-MC) methods
have been successfully applied since the beginning of the study of lattice field theories.
With the algorithms employed, observables calculated from a Monte Carlo chain of $N$
steps will have a statistical error proportional to $1/\sqrt{N}$. \\
Recent developments in the field of Quasi-Monte Carlo (QMC) methods, discussed in
more detail in sections \ref{sec:Plain:Int} to \ref{sec:RQMC}, show that under certain
conditions it is possible to construct sets of integration points leading
to much faster convergence of an observable and a much better asymptotic
error behavior of up to $1/N$.\\
Such an improved error behavior would decrease the number of samples necessary to
achieve a given error bound, resulting in a drastic reduction of runtime. Note that
present computations in field theory employ state-of-the-art supercomputers.
It is unclear whether QMC methods can be used for lattice field theory simulations.
As a first step towards investigating this possibility, we focus in this
work on much simpler models, namely the quantum mechanical harmonic
and anharmonic oscillator.
\section{Quantum Mechanical Harmonic and Anharmonic Oscillator}
In this section we will discuss the basic steps for the quantization of the theory
in the path integral approach and the discretization on a time lattice.
The first step is the construction of the Lagrangian (resp. the action) of the
corresponding classical mechanical system for a given path $x(t)$ of a particle
with mass $M_0$. For a numerically stable evaluation of the path integral it is
essential to pass to Euclidean time. In this case the Lagrangian $L$ and the
action $S$ are given by:
\begin{align}
\label{eq:Lagrangian}
L(x,t) &= \frac{M_0}{2}
\left(\frac{d x}{dt}\right)^2 + V(x) \\
S(x) &= \int_0^T \, L(x,t) \; dt .
\end{align}
Depending on the scenario (harmonic or anharmonic oscillator) the potential $V(x)$
consists of two parts
\begin{equation}
V(x) = \underbrace{\frac{\mu^2}{2} x^2}_\text{harmonic part} +
\underbrace{\lambda \, x^4}_\text{anharmonic part} \;,
\end{equation}
such that the parameter $\lambda$ controls the anharmonic part of the theory.
It should also be mentioned that in the anharmonic case the parameter
$\mu^2$ can take on negative values, leading then to a double well potential.
The next step is to discretize time into equidistant time slices with a
spacing of $a$. The path is then only defined on the time slices:
\begin{align}
t & \rightarrow t_i = (i-1) \cdot a \quad i = 1 \ldots d \\
x(t) & \rightarrow x_i = x(t_i) \; .
\end{align}
On the lattice the derivative with respect to time appearing in
\eqref{eq:Lagrangian} (first term) will be replaced by the forward finite difference
$\nabla x_i = \frac{1}{a} ( x_{i+1} - x_i )$. The choice of the lattice derivative
is not unique and requires special care, particularly if one considers more
complicated models like lattice QCD. But in \cite{Creutz_and_Freedman} it was
shown that the lattice derivative chosen here permits a well defined continuum
limit. Putting all the ingredients together, we can write down the lattice action
for the (an)harmonic oscillator
\begin{equation}
S^\text{latt}(x) = a \sum_{i=1}^{d} \left[ \frac{M_0}{2} \left( \nabla x_i \right)^2 + V(x_i) \right] \; .
\end{equation}
For the path a cyclic boundary condition $x_{d+1} = x_1$ can be assumed.
In the following the superscript ``latt'' will be dropped, as we will only
refer to the lattice action from now on.
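As a sketch, the lattice action above can be evaluated directly for a discretized path with cyclic boundary conditions (the parameter values below are purely illustrative):

```python
import numpy as np

def lattice_action(x, a=0.5, M0=1.0, mu2=1.0, lam=0.0):
    """Euclidean lattice action of the (an)harmonic oscillator for a path x
    sampled on d time slices, with cyclic boundary condition x_{d+1} = x_1."""
    dx = (np.roll(x, -1) - x) / a           # forward finite difference
    V = 0.5 * mu2 * x**2 + lam * x**4       # harmonic + anharmonic potential
    return a * np.sum(0.5 * M0 * dx**2 + V)

print(lattice_action(np.zeros(10)))  # the classical vacuum x = 0 has S = 0.0
```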
The expectation value of an observable $O$ of the quantized theory expressed
in terms of the path integral reads as follows:
\begin{equation}\label{sec:Plain:OBS}
\left\langle O(x) \right\rangle \, = \, \frac{\int_{\mathbb{R}^d} O(x)
e^{-S(x)} d x_1...d x_d }{\int_{\mathbb{R}^d} e^{-S(x)} d x_1...d x_d }\;.
\end{equation}
This expression is suitable for a numerical evaluation of certain quantities of
the underlying theory. Up to now only Monte Carlo methods are known to give
reliable results for dimensions $d \gg 10$. One type of such methods, often
used in physics, is the Markov chain Monte Carlo approach, mostly applying
the weight $\propto e^{-S(x)}$ for sampling paths $\{x_i\}$ (so-called
``importance sampling''). In particular the Metropolis algorithm\cite{Metropolis}
is a suitable and straightforward way to evaluate \eqref{sec:Plain:OBS}
(also described in \cite{Creutz_and_Freedman}), and it serves as a reference
method for the QMC approach, which is much less intuitive.
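A minimal sketch of such a Metropolis update for the discretized oscillator follows; the step size, lattice parameters and number of sweeps are illustrative. Since the action is a sum of local terms, the change $\Delta S$ from updating $x_i$ only involves the two neighbouring sites:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_sweep(x, a=0.5, M0=1.0, mu2=1.0, lam=0.0, step=0.5):
    """One Metropolis sweep: propose a local shift of each site x_i and
    accept with probability min(1, exp(-Delta S))."""
    d = len(x)
    for i in range(d):
        xp, xm = x[(i + 1) % d], x[(i - 1) % d]      # cyclic neighbours
        def S_local(xi):                              # terms of S containing x_i
            kin = 0.5 * M0 * ((xp - xi)**2 + (xi - xm)**2) / a
            return kin + a * (0.5 * mu2 * xi**2 + lam * xi**4)
        x_new = x[i] + rng.uniform(-step, step)
        dS = S_local(x_new) - S_local(x[i])
        if dS < 0 or rng.random() < np.exp(-dS):
            x[i] = x_new
    return x

x = np.zeros(20)
for _ in range(200):                                  # thermalization sweeps
    metropolis_sweep(x)
print(np.mean(x**2))                                  # crude estimator for <x^2>
```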
The theory of QMC methods is a purely mathematical topic.
In the discussion of the key aspects of QMC in the following
sections we will therefore adopt a rather mathematical language,
which is more adequate for the subject.
\section{Direct Monte Carlo and quasi--Monte Carlo methods}\label{sec:Plain:Int}
In many practical applications one is interested in calculating quotients of the form
\eqref{sec:Plain:OBS} where the action $S(.)$ and the observables $O(.)$ are
usually smooth functions in high dimensions. In some special situations
where one would like to deal with integrands of moderately high dimensions, one may consider an estimator for the integral
$I_1$ in the numerator and $I_2$ in the denominator of \eqref{sec:Plain:OBS} separately,
and then take $I_1/I_2$ as an estimate of $\left\langle O(x) \right\rangle$.
Another possibility is to take a joint estimator for the total quantity
$\left\langle O(x) \right\rangle$ using a single direct sampling method.
A well known approach based on direct sampling is the so called weighted
uniform sampling (WUS) estimator, analyzed in \cite{PowellSwann66}.
We will show some characteristics of the WUS estimator in section \ref{sec:WUS}, and we will
refer from now on to these methods as \textit{plain} or \textit{direct} sampling methods
for estimating \eqref{sec:Plain:OBS}.
In many interesting examples we encounter the case where the action $S(.)$
and the observable $O(.)$ lead to integrals $I_1$, $I_2$ of Gaussian type.
Then the integrals $I_1$, $I_2$ can be written in the form
\[
I_i\:=\: \frac{1}{(2\pi)^{d/2} \sqrt{\det(C)}}
\int_{\mathbb{R}^d}g_i(\mathbf{x})e^{-\frac{1}{2} \mathbf{x}^\top C^{-1} \mathbf{x}} d\mathbf{x},
\quad
\mathbf{x}=(x_1,\dots,x_d), \; i=1,2 \quad ,
\]
where $C$ denotes the covariance matrix of the Gaussian density function.
A transformation to the unit cube in $\mathbb{R}^d$ can be applied such
that the corresponding integrals take the form
\begin{equation}\label{gen:expected_v}
I
\:=\: \int_{[0,1]^d}g(A \vektor{\Phi}^{-1}(\vektor{z}))d\vektor{z}
\:=\: \int_{[0,1]^d}f(\vektor{z})d\vektor{z}
\:=\: I_{[0,1]^d}(f)
,\quad
\vektor{z}=(z_1,\dots,z_d)\,.
\end{equation}
Here $AA^\top=C$ is some symmetric factorization of the covariance matrix,
and $\vektor{\Phi}^{-1}(\vektor{z}):=(\Phi^{-1}(z_1),\dots,\Phi^{-1}(z_d))^\top$,
where $\Phi^{-1}({\cdot})$ represents
the inverse of the normal cumulative distribution function $\Phi({\cdot})$.\\
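A minimal sketch of this transformation from the unit cube to a correlated Gaussian sample (the covariance matrix and the choice of the Cholesky factor for $A$ are illustrative):

```python
import numpy as np
from statistics import NormalDist

def to_gaussian(z, A):
    """Map a point z in (0,1)^d to a sample of N(0, C) with C = A A^T,
    applying the inverse normal CDF componentwise."""
    phi_inv = np.array([NormalDist().inv_cdf(zi) for zi in z])
    return A @ phi_inv

C = np.array([[2.0, 1.0], [1.0, 2.0]])   # illustrative covariance matrix
A = np.linalg.cholesky(C)                # one factorization with A A^T = C
x = to_gaussian([0.5, 0.5], A)
print(x)                                 # the cube centre maps to the mean: [0. 0.]
```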
In the classical plain or direct Monte--Carlo (MC) approach one tries
to estimate \eqref{gen:expected_v}
by generating samples pseudo-randomly. One starts with a finite sequence of
independent identically distributed (i.i.d.) samples $P_N=\{\vektor{z}_1,\dots,\vektor{z}_N\}$,
where the points $\vektor{z}_j, \; 1\le j \le N$,
have been generated from the uniform distribution in $[0,1]^d$.
Then, the quadrature rule is fixed by taking the average of the function evaluations for $f$
\[
Q_N:= \frac{1}{N} \sum_{j=1}^{N} f(\vektor{z}_j),
\]
as an approximation of the desired integral $\int_{[0,1]^d} f(\vektor{z}) \; d \vektor{z}$.
The resulting estimator $\hat{Q}_N$ is unbiased. The integration error
can be approximated via the central limit theorem, given that $f$ belongs to $L_2([0,1]^d)$.
The variance of the estimator $\hat{Q}_N$ is given by
$$
\frac{\sigma^2}{N}=\frac{1}{N}\left( \int_{[0,1]^d} f^2(\vektor{z}) \; d\vektor{z}
- \left(\int_{[0,1]^d} f(\vektor{z}) \; d\vektor{z} \right)^2 \right).
$$
Measured by its standard deviation, the integration error associated with the
MC approach is then of order $O(N^{-\frac{1}{2}})$.
The quality of the MC samples relies on the selected pseudo--random
number generator for uniform samples; here we use the \textit{Mersenne Twister} generator of Matsumoto and Nishimura (see \cite{Matsumoto98}).
MC is in general a very reliable tool in high--dimensional integration,
but the order of convergence is in fact rather poor.
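A minimal sketch of the plain MC estimator with its standard-error estimate; the integrand is an illustrative product form with exact integral $1$ in any dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, d, N):
    """Plain Monte Carlo: average f over N i.i.d. uniform points in [0,1]^d,
    with the usual standard-error estimate sigma_hat / sqrt(N)."""
    vals = f(rng.random((N, d)))
    return vals.mean(), vals.std(ddof=1) / np.sqrt(N)

f = lambda z: np.prod(2.0 * z, axis=1)   # integral over [0,1]^d is exactly 1
Q, err = mc_estimate(f, d=5, N=100_000)
print(Q, err)                            # Q close to 1 within a few standard errors
```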
In contrast, quasi--Monte Carlo (QMC) methods deterministically generate
point sets that are more regularly distributed than
the pseudo--random points from MC (see \cite{L'Ecuyer01}, \cite{Novak_and_Wozniakowski2},
\cite{DiPi10}, \cite{KSS_Review12}).
Typical examples of QMC are shifted
lattice rules and low--discrepancy sequences.
To explain what we mean by ``regularly distributed'',
we define now the classical notion of discrepancy
of a finite sequence of points $P_N$ in $[0,1)^d$.
Given $P_N=\{\vektor{z}_{1},\dots,
\vektor{z}_{N}\}$ a set of points in $ [0,1)^{d}$,
and a nonempty family $\mathbb{I}$ of Lebesgue-measurable
sets in $[0,1)^{d}$, we define the classical discrepancy function by
\[D(\mathbb{I};P_N) := \sup_{B \in \mathbb{I}}\left|\frac{\sum_{i=1}^{N}\:
c_{B}(\vektor{z}_{i})}{N}-\lambda_{d}(B)\right|,\]
where $c_{B}$ is the characteristic function of $B$.
This allows us to define the so called \textit{star discrepancy}.
\begin{definition}
We define the \textit{star discrepancy} $D^{\star}(P_N)$ of the point set $P_N$
by $D^{\star}(P_N):=D (\mathbb{I};P_N)$, where $\mathbb{I}$ is the family of all
sub-intervals of the form $\prod_{i=1}^{d}[0,u_{i})$,
with $u_{i}\ge 0, \; 1\le i \le d$.
\end{definition}
The \textit{star discrepancy} can be considered as a measure of the worst difference
between the uniform distribution and the sampled distribution in $ [0,1)^{d}$
attributed to the point set $P_N$.
The usual way to analyze QMC as a deterministic method
is by choosing a class of integrand functions $F$, and a
measure of discrepancy $D(P_N)$ for the point sets $P_N$.
Then, the deterministic integration error is usually given in the form
$$
|Q_N -\int_{[0,1]^d} f(\vektor{z}) \; d \vektor{z} | \;\; \le \; D(P_N) V(f),
$$
where $V(f)$ measures a particular variation of the
function $f \in F$. A classical particular error bound in this form is
the famous Koksma--Hlawka inequality,
where $D(P_N)$ is taken to be the \textit{star discrepancy}
of the point set $P_N$, and $V(f)$ is the variation
in the sense of Hardy and Krause of $f$.
In the context of QMC, a sequence of points in $ [0,1)^{d}$
is called a low--discrepancy sequence
if $D^{\star}(P_N)=O(N^{-1}(\log(N))^{d})$
for all truncations of the sequence to its first $N$ terms. \\
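In dimension $d=1$ the star discrepancy of a finite point set can be computed exactly from the sorted points, which makes the contrast between pseudo-random and low-discrepancy points easy to check numerically. A sketch using the van der Corput sequence in base 2 (the simplest $(0,1)$-sequence):

```python
import numpy as np

def star_discrepancy_1d(points):
    """Exact star discrepancy of a finite point set in [0,1) for d = 1."""
    z = np.sort(np.asarray(points))
    N = len(z)
    i = np.arange(1, N + 1)
    return max(np.max(i / N - z), np.max(z - (i - 1) / N))

def van_der_corput(N, b=2):
    """First N terms of the van der Corput sequence in base b."""
    out = []
    for n in range(N):
        q, x, bk = n, 0.0, 1.0 / b
        while q > 0:
            q, r = divmod(q, b)
            x += r * bk
            bk /= b
        out.append(x)
    return out

N = 256
d_mc = star_discrepancy_1d(np.random.default_rng(0).random(N))
d_qmc = star_discrepancy_1d(van_der_corput(N))
print(d_mc, d_qmc)  # pseudo-random ~ N^(-1/2); van der Corput gives exactly 1/N here
```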
\subsection{Quasi--Monte Carlo errors and complexity}
There are certain reproducing kernel Hilbert spaces $\mathbb{F}_{d}$ of functions
$f:[0,1]^{d}\to\mathbb{R}$ which are particularly useful for estimating the
quadrature error of QMC methods (see \cite{Hick98}). We denote by
$\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ the inner product and norm in
$\mathbb{F}_{d}$. Consider a kernel
$K:[0,1]^{d}\times[0,1]^{d}\to\mathbb{R}$ satisfying
$K(\cdot,\vektor{y})\in\mathbb{F}_{d}$ and $\langle f,K(\cdot,\vektor{y})\rangle=f(\vektor{y})$ for
each $\vektor{y}\in[0,1]^{d}$ and $f\in\mathbb{F}_{d}$. If the integral
$$
I(f)=\int_{[0,1]^{d}}f(\vektor{z})d\vektor{z}
$$
is a continuous functional on the space $\mathbb{F}_{d}$,
then the worst case quadrature error $e_{N}(\mathbb{F}_{d})$
for point sets $P_N=\{\vektor{z}_{1},\dots,\vektor{z}_{N}\}$ and quasi-Monte Carlo algorithms
for the space $\mathbb{F}_{d}$ can be given by
\[
e_{N}(\mathbb{F}_{d}):=\sup_{f\in\mathbb{F}_{d}\,,\|f\|\le 1}|
I(f)-Q_{N}(f)|=\sup_{\|f\|\le 1}|\langle
f,h_{N}\rangle|=\|h_{N}\|,
\]
due to Riesz' representation theorem for linear bounded functionals. In this case, the {\em
representer} $h_{N}\in\mathbb{F}_{d}$ of the quadrature error is given by
$$
h_{N}(\vektor{z})=\int_{[0,1]^{d}}K(\vektor{z},\vektor{y})d\vektor{y} -
\frac{1}{N}\sum_{i=1}^{N}K(\vektor{z},\vektor{z}_{i})\quad(\forall
\vektor{z}\in[0,1]^{d}).
$$
In QMC error analysis, one usually considers the weighted (anchored)
tensor product Sobolev space introduced in
\cite{SlWo98}
\[
\mathbb{F}_{d}=\mathcal{W}_{2,{\rm mix}}^{(1,\ldots,1)}([0,1]^{d})
=\bigotimes_{i=1}^{d}W_{2}^{1}([0,1]) \; ,
\]
with the weighted norm $\|f\|_{\gamma}^{2}=\langle
f,f\rangle_{\gamma}$ and inner product
\[
\langle f,g\rangle_{\gamma}=\sum_{u\subseteq\{1,\ldots,d\}}
\prod_{j\in u}\gamma_{j}^{-1}
\int_{[0,1]^{|u|}}\frac{\partial^{|u|}}{\partial
\vektor{z}_{u}}f(\vektor{z}_{u},\mathbf{1})\frac{\partial^{|u|}}{\partial
\vektor{z}_{u}}g(\vektor{z}_{u},\mathbf{1})d \vektor{z}_{u},
\]
where for $u \subseteq \{1,\dots,d\}$ we denote by $|u|$ its cardinality,
and $(\vektor{z}_{u},\mathbf{1})$ denotes the vector
containing the coordinates of $\vektor{z}$ with indices in $u$, and the other
coordinates set equal to $1$.
The corresponding reproducing kernel is given by
\[
K_{d,\gamma}(\vektor{z},\vektor{y})=\prod_{j=1}^{d}(1+\gamma_{j}
[1-\max(z_{j},y_{j})])\quad(\vektor{z},\vektor{y} \in[0,1]^{d}).
\]
Several other spaces are considered for error analysis, for example the
weighted Walsh space consisting of Walsh series
(see \cite[Example 2.8]{DiPi10} and \cite{Dick08}).
The weighted tensor product Sobolev space allows for explicit QMC constructions
with error estimates of the form
\begin{equation}\label{rate}
e_{N}(\mathbb{F}_{d})\leq C(\delta)N^{-1+\delta}
\quad(\delta\in(0,\textstyle{\frac{1}{2}}]),
\end{equation}
where the constant $C(\delta)$ is independent of the dimension $d$,
provided that the sequence of weights $(\gamma_{j})$ satisfies (see \cite{Kuo2003})
\[
\sum_{j=1}^{\infty}\gamma_{j}^{\frac{1}{2(1-\delta)}}<\infty\,.
\]
Traditional unweighted function spaces considered for integration
suffer from the curse of dimensionality. Their weighted variants describe a
setting where the variables or groups of variables may vary in importance.
Thus, they give a partial explanation of why some
very high-dimensional problems become tractable for QMC.
Explicit QMC constructions satisfying \eqref{rate} are \textit{shifted lattice rules}
for weighted spaces.
The rate \eqref{rate} can also be obtained for Niederreiter and Sobol' sequences (see \cite{Wang03}).
The idea of ``weighting'' the norm of the spaces to obtain tractability results can in fact be applied to
more general function spaces than smooth spaces of tensor product form, and many integration
examples can be found in \cite{Novak_and_Wozniakowski2}.
In our numerical experiments we have so far used QMC algorithms based on
a particular type of low--discrepancy sequence.
Numerical experiments with shifted lattice rules will be carried out in the near future, following
new techniques for fixing adequate weights introduced in \cite{GLLZ12}.
\section{Low--discrepancy $(t,d)$-sequences}
The best known type of low--discrepancy sequences are the so-called
$(t,d)$-sequences.
To define $(t,m,d)$-nets and $(t,d)$-sequences,
we first consider \textit{elementary intervals} in an integer base $b \ge 2$.
Let $E$ be any sub-interval of $[0,1)^{d}$ of the form
$E=\prod_{i=1}^{d}[a_{i}b^{-c_{i}},(a_{i}+1)b^{-c_{i}})$
with $a_{i},\: c_{i} \in \mathbb{N},\; c_{i} \ge 0, \: 0\leq a_{i} < b^{c_{i}}$
for $1\leq i\leq d$. An interval of this form is
called an elementary interval in base $b$.
\begin{definition}
Let $\:0\leq t \leq m$ be integers. A $(t,m,d)$-net in base $b$ is a point
set $P_N$ of $N=b^{m}$ points in $[0,1)^{d}$
such that every elementary interval
$E$ in base $b$ with $\lambda_{d}(E)=\frac{b^{t}}{b^{m}}$
contains exactly $b^{t}$ points.
\end{definition}
\begin{definition}
Let $t\geq 0$ be an integer. A sequence $\mathbf{x}_{1},\mathbf{x}_{2},...$
of points in $[0,1)^{d}$ is a $(t,d)$-sequence in base $b$ if for all integers
$k\geq0$ and $m>t$, the point set consisting of $N=b^{m}$ points
$\mathbf{x}_{i}$ with $kb^{m}\leq i < (k+1)b^{m}$,
is a $(t,m,d)$-net in base $b$.
\end{definition}
The parameter $t$ is called the \textit{quality parameter}
of the $(t,d)$--sequences.
In \cite{Nied92}, theorem 4.17, it is shown that $(t,d)$-sequences
are in fact low--discrepancy sequences. We reproduce this result in the following
\begin{theorem}
The star-discrepancy $D^{\star}$ of the first $N$ terms $P_N$ of a $(t,d)$-sequence in
base $b$, satisfies
$$
N D^{\star}(P_N) \leq C(d,b)\, b^t (\log N)^d + O\!\left(b^t (\log N)^{d-1}\right),
$$
where the implied constants depend only on $b$ and $d$.
If either $d=2$, or $b=2$ and $d=3,4$, we have
$$
C(d,b)=\frac{1}{d}\left( \frac{b-1}{2 \log b} \right)^d,
$$
and otherwise
$$
C(d,b)=\frac{1}{d!}\, \frac{b-1}{2 \lfloor b/2 \rfloor} \left( \frac{\lfloor b/2 \rfloor}{\log b} \right)^d.
$$
\end{theorem}
Explicit constructions of $(t,d)$-sequences are available. Some of them are
the generalized Faure, Sobol', Niederreiter and Niederreiter--Xing sequences.
All these examples fall into the category of constructions
called \textit{digital sequences}.
We refer to \cite{DiPi10} for further reading on this topic.
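The net property in the definitions above can be checked directly for small examples. The following sketch verifies it for the classical two-dimensional Hammersley point set in base $2$, $\{(n/b^m,\, \phi_b(n))\}$ with $\phi_b$ the radical-inverse (van der Corput) function, which forms a $(0,m,2)$-net:

```python
from collections import Counter

def radical_inverse(n, b=2):
    """Van der Corput radical-inverse function phi_b(n)."""
    q, x, bk = n, 0.0, 1.0 / b
    while q > 0:
        q, r = divmod(q, b)
        x += r * bk
        bk /= b
    return x

b, m = 2, 6
N = b ** m
pts = [(n / N, radical_inverse(n, b)) for n in range(N)]   # Hammersley set

# (0,m,2)-net property: for every shape (c1, c2) with c1 + c2 = m, each
# elementary interval of volume b^-m contains exactly b^t = 1 point.
is_net = True
for c1 in range(m + 1):
    c2 = m - c1
    boxes = Counter((int(x * b**c1), int(y * b**c2)) for x, y in pts)
    if len(boxes) != N or any(v != 1 for v in boxes.values()):
        is_net = False
print(is_net)  # -> True
```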
\section{Randomized QMC}\label{sec:RQMC}
There are some advantages in retaining the probabilistic properties of the sampling,
and there are practical hybrid methods permitting us to combine the good features of MC and
QMC. Randomization is an important tool for QMC if we are interested in a practical
error estimate of our sample quadrature $Q_N$ for the desired integral. One goal is
to randomize the deterministic point set $P_N$ generated by QMC in a way that
the estimator $\hat{Q}_N$ remains unbiased. Another important goal is to preserve
the better equidistribution properties of the deterministic construction.
The simplest form of randomization applied to \textit{digital sequences} is
the technique called \textit{digital $b$--ary shifting}. In this case, we add
a random shift $\Delta \in [0,1)^d$ to each point of the deterministic set
$P_N=\{\vektor{z}_{1},...,\vektor{z}_{N}\}$ using digit-wise
operations over the ring $\mathbb{F}_b$.
The application of this randomization preserves in particular the $t$ value of any projection of
the point set (see \cite{L'Ecuyer01} and references therein). The resulting estimator is
unbiased.\\
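A minimal sketch of the digital shift for base $b=2$; truncating the expansions to a fixed number of digits is an implementation assumption:

```python
import random

def digital_shift(z, delta, b=2, ndigits=32):
    """Digit-wise addition modulo b of a shift delta to a point z in [0,1),
    i.e. the digital b-ary shift (an XOR of binary expansions for b = 2)."""
    x, bk = 0.0, 1.0 / b
    for k in range(1, ndigits + 1):
        dz = int(z * b**k) % b          # k-th b-ary digit of z
        dd = int(delta * b**k) % b      # k-th b-ary digit of delta
        x += ((dz + dd) % b) * bk
        bk /= b
    return x

random.seed(3)
delta = random.random()                 # one random shift for the whole point set
pts = [0.0, 0.5, 0.25, 0.75]            # first points of the base-2 van der Corput sequence
print([digital_shift(z, delta) for z in pts])
```

In base 2 the shift is an involution: applying the same $\Delta$ twice returns the original (truncated) point.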
The second randomization method we present is the one introduced by
Art B. Owen (\cite{OWE95}) in 1995. He considered $(t,m,d)$-nets and $(t,d)$-sequences
in base $b$ and applied a randomization procedure based on random permutations of
the digits of the coordinates of the points in these nets and sequences. This can be
interpreted as a random scrambling of the points of the given sequence in such a way
that the net structure remains unaffected.
We do not discuss Owen's randomization procedure, from now on called
\textit{Owen's scrambling}, in detail here.
The main results of this randomization procedure can be stated in the following
\begin{proposition}(\textbf{Equidistribution})\\
A randomized $(t,m,d)$-net in base $b$ using Owen's scrambling is again a $(t,m,d)$-net
in base $b$ with probability 1. A randomized $(t,d)$-sequence in base $b$ using Owen's
scrambling is again a $(t,d)$-sequence in base $b$ with probability 1.
\end{proposition}
\begin{proposition}(\textbf{Uniformity})\\
Let $\tilde{\vektor{z}}_i$ be the randomized version of a point
$\vektor{z}_i$ originally belonging to a $(t,m,d)$-net
in base $b$ or a $(t,d)$-sequence in base $b$, using Owen's scrambling.
Then $\tilde{\vektor{z}}_i$ has
the uniform distribution in $[0,1)^d$, that is, for any Lebesgue measurable set $G \subseteq
[0,1)^d$ , $P( \tilde{\vektor{z}}_i \in G)= \lambda_d(G)$,
with $\lambda_d$ the $d$-dimensional Lebesgue measure.
\end{proposition}
The last two propositions state that \textit{Owen's scrambling} of \textit{digital sequences}
leaves the low discrepancy properties of the constructions unaffected, and that
after this randomization procedure we obtain random samples uniformly distributed in $[0,1)^d$. \\
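In base $2$, every digit permutation is either the identity or a flip, so Owen's nested scrambling reduces to prefix-dependent XOR flips of the digits. The following Python sketch (ours, truncated to $m$ digits; practical implementations use the improved techniques cited at the end of this section) draws one independent flip bit per digit prefix and per dimension, lazily:

```python
import random

def owen_scramble_base2(points, m, seed=0):
    """Nested uniform scrambling (Owen) in base b = 2, truncated to m digits.
    The j-th binary digit of a coordinate is flipped or not depending on the
    j-1 preceding (original) digits; flip bits are drawn lazily, one per
    digit prefix and per dimension, and stored in a dict."""
    rng = random.Random(seed)
    d = len(points[0])
    flips = [dict() for _ in range(d)]   # per dimension: prefix -> flip bit

    def scramble_coord(x, table):
        k = int(x * 2**m)                # the first m binary digits of x
        out, prefix = 0, ()
        for j in range(m - 1, -1, -1):   # most to least significant digit
            bit = (k >> j) & 1
            if prefix not in table:
                table[prefix] = rng.randrange(2)
            out = (out << 1) | (bit ^ table[prefix])
            prefix += (bit,)
        return out / 2**m

    return [tuple(scramble_coord(x, flips[i]) for i, x in enumerate(p))
            for p in points]

pts = [(i / 16,) for i in range(16)]     # the 1-D net {0, 1/16, ..., 15/16}
scrambled = owen_scramble_base2(pts, 4, seed=7)
```

Because the scrambling map is a bijection on the $m$-digit integers, the equidistribution stated in the first proposition can be observed directly: each dyadic interval of length $1/16$ still receives exactly one point.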
The basic results about the variance of the randomized QMC estimator $\hat{Q}_N$
after applying \textit{Owen's scrambling}
to $(t,m,d)$-nets in base $b$ (or of $(t,d)$-sequences in base $b$ )
can be found in \cite{Owen97}. We summarize these results in the following
\begin{theorem}
Let $\tilde{\vektor{z}}_i$, $1\le i \le N$, be the points of a
scrambled $(t,m,d)$-net in base $b$, and let $f$ be a function
on $[0,1)^d$ with integral $I$ and variance $\sigma^2=\int (f-I)^2 d\vektor{z} < \infty.$
Let $\hat{Q}_N= N^{-1}
\sum_{i=1}^N f(\tilde{\vektor{z}}_i)$, where $N=b^m$.
Then for the variance $V(\hat{Q}_N)$ of the randomized QMC estimator
it holds
\[ V(\hat{Q}_N)=o(1/N), \: \text{ as } N \rightarrow \infty, \quad \text{and} \quad
V(\hat{Q}_N)\leq \frac{b^t}{N}\left( \frac{b+1}{b-1} \right)^d \sigma^2.\]
For $t=0$ we have
\[ V(\hat{Q}_N)\leq \frac{1}{N}\left( \frac{b}{b-1} \right)^{d-1} \sigma^2.\]
\end{theorem}
The above theorem says that the variance of scrambled $(0,m,d)$--nets is never more than
$3$ times the variance of the corresponding MC estimator.
The bound of the theorem above can be improved (see theorem 13.9 in \cite{DiPi10}) to show that the
variance of scrambled $(0,m,d)$--nets is in fact always smaller than the variance of the MC estimator.
If the integrand at hand is smooth enough, using \textit{Owen's scrambling}
it can be shown that one can obtain an improved asymptotic error estimate of order
$O(N^{-\frac{3}{2}-\frac{1}{d}+\delta})$, for any $\delta>0 $, see \cite{Owen08}.
Improved scrambling techniques have been developed in \cite{MAT98},\cite{Tezuka03}.\\
\section{Weighted uniform sampling}\label{sec:WUS}
Weighted uniform sampling (WUS) is a way of estimating a quotient of integrals of the form
\[
R:=\frac{ \int_{[0,1]^d}f_1(\vektor{z})d\vektor{z}}{ \int_{[0,1]^d}f_2(\vektor{z})d\vektor{z}}
\]
by taking the estimator
\begin{equation}
\label{eq:WUS}
\hat{R}_N:=\frac{ \sum_{j=1}^N f_1(\vektor{z}_j)}{\sum_{j=1}^N f_2(\vektor{z}_j)} \; ,
\end{equation}
where the points $\vektor{z}_j, \; 1\le j \le N$,
have been generated from the uniform distribution in $[0,1]^d$.
This estimator was analyzed in \cite{PowellSwann66} and applications have been
investigated for example in \cite{SpaMa} and \cite{Caflisch95}. The bias and the root mean
square error (RMSE) of this estimator satisfy
\begin{align*}
& Bias(\hat{R}_N)=\frac{R \,var(f_2)}{N} -\frac{cov(f_1,f_2)}{N} + O(N^{-\frac{3}{2}}) \\
& RMSE(\hat{R}_N)=\frac{\sqrt{var(f_1) + R^2 var(f_2) - 2R\, cov(f_1,f_2)}}
{\sqrt{N}} + O(N^{-\frac{3}{4}}) \; .
\end{align*}
The bias of the estimator is asymptotically negligible compared with the RMSE.
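The following small Python sketch (ours, not from the paper) illustrates the estimator $\hat{R}_N$ of \eqref{eq:WUS} on a toy one-dimensional example where $R$ is known in closed form:

```python
import random

def wus_estimate(f1, f2, points):
    """Weighted uniform sampling estimate R_N of R = (int f1) / (int f2),
    cf. eq. (WUS): both sums share the same uniform sample."""
    num = sum(f1(z) for z in points)
    den = sum(f2(z) for z in points)
    return num / den

# Toy check in d = 1: R = int_0^1 x dx / int_0^1 (x + 1) dx = 0.5 / 1.5 = 1/3
rng = random.Random(1)
points = [(rng.random(),) for _ in range(100000)]
r_hat = wus_estimate(lambda z: z[0], lambda z: z[0] + 1.0, points)
```

With $N=10^5$ the RMSE formula above predicts an error of roughly $6\cdot 10^{-4}$ for this example, so $\hat{R}_N$ agrees with $1/3$ to about three decimal places.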
One clear disadvantage of WUS compared with Mc-MC or Importance Sampling
for problems with large regions of relatively low integrand values
is that WUS samples the entire unit cube $[0,1]^d$ uniformly,
while Mc-MC and Importance Sampling based techniques try to concentrate
on the more characteristic or important regions of the integrands.
These limitations were observed in our numerical experiments.
\section{Numerical experiments}
We consider for our numerical tests the \textit{quantum mechanical
harmonic and anharmonic oscillator} in the \textit{path integral
approach} as described in section 2.
For definiteness we repeat here the expression for the action of the system:
\begin{equation}
\label{eq:action_detail}
S(x)=\frac{a}{2} \sum_{i=1}^d \left[ \frac{M_0}{a^2} (x_{i+1}-x_i)^2 + \mu^2 x_i^2
+ 2 \lambda x_i^4 \right] \; .
\end{equation}
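As a reference for the reader, here is a direct transcription of \eqref{eq:action_detail} into Python (our sketch; we assume periodic boundary conditions $x_{d+1}=x_1$, consistent with the circulant covariance matrices used below):

```python
def action(x, a, M0, mu2, lam):
    """Discretised action S(x) of eq. (action_detail); the index i + 1 is
    taken modulo d, i.e. with periodic boundary conditions x_{d+1} = x_1."""
    d = len(x)
    S = 0.0
    for i in range(d):
        dx = x[(i + 1) % d] - x[i]
        S += 0.5 * a * ((M0 / a**2) * dx * dx
                        + mu2 * x[i]**2 + 2.0 * lam * x[i]**4)
    return S
```

For a constant path the kinetic term vanishes and the action reduces to $S = \frac{a}{2}\, d\, (\mu^2 c^2 + 2\lambda c^4)$, which gives a quick consistency check.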
We investigate the two observable functions
\[
O_1(x)=\frac{1}{d}\sum_{i=1}^d x_i^2 \, , \;
O_2(x)=\frac{1}{d}\sum_{i=1}^d x_i^4 \; ,
\]
using the notation $\left\langle X^2 \right\rangle$,$\left\langle X^4 \right\rangle$
for $\left\langle O_1(x) \right\rangle$,$\left\langle O_2(x) \right\rangle$ in our tests.
\subsection{Harmonic Oscillator}
\label{ssec:HO}
For the harmonic oscillator we can immediately apply the direct sampling approach described in sections \ref{sec:Plain:Int} and \ref{sec:WUS} to calculate estimates of observables $O$ by setting
\[
f_1 = O( A \Phi^{-1}(\vektor{z}) ) \; , \;\; f_2 = 1
\]
in \eqref{eq:WUS}.
The matrix $A$ is a square root of $C$, the covariance matrix of the variables $x_i$, appearing in the action if it is expressed as a bilinear form: $S=\frac{1}{2}x^T C^{-1} x$.
Different factorizations, namely Cholesky and PCA (principal component analysis), have been tried out. The PCA-based factorization seemed to perform better in our tests, which is why we only show results for this method.
The PCA can be obtained explicitly for circulant Toeplitz matrices, and the matrix--vector products can be computed efficiently by means of the fast Fourier transform.
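A sketch (ours) of this construction for the free ($\lambda=0$) part of the action with periodic boundary conditions: the precision matrix $C^{-1}$ read off from $S=\frac{1}{2}x^T C^{-1} x$ is circulant, its spectrum is the DFT of its first column, and a PCA square root $A$ with $AA^T=C$ orders the components by decreasing variance. For clarity we cross-check the FFT spectrum against a dense eigendecomposition instead of building $A$ via the FFT.

```python
import numpy as np

d, a, M0, mu2 = 8, 0.5, 0.5, 2.0

# First column of the circulant precision matrix C^{-1} obtained from the
# lambda = 0 action written as S = (1/2) x^T C^{-1} x (periodic b.c.):
# diagonal 2 M0/a + a mu^2, nearest neighbours -M0/a.
col = np.zeros(d)
col[0] = 2.0 * M0 / a + a * mu2
col[1] = col[-1] = -M0 / a
Cinv = np.array([np.roll(col, i) for i in range(d)])

# A circulant matrix is diagonalised by the DFT, so its eigenvalues are
# the DFT of its first column (real here, since C^{-1} is symmetric).
eig_fft = np.fft.fft(col).real

# PCA square root: C^{-1} = V diag(w) V^T, hence A = V diag(w^{-1/2})
# satisfies A A^T = C; w is ascending, so 1/w is already descending and
# the columns of A are sorted by decreasing variance.
w, V = np.linalg.eigh(Cinv)
A = V @ np.diag(1.0 / np.sqrt(w))
```

This ordering of the columns of $A$ by decreasing variance is what concentrates the variance in the leading coordinates and reduces the effective dimension, as exploited in the anharmonic tests below.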
In the ordinary Mc-MC approximation, we used the Mersenne Twister\cite{Matsumoto98} pseudo random number generator.
For the QMC tests, we use randomly scrambled Sobol' sequences using the technique proposed by J. Matous\v{e}k\cite{MAT98}.
The error of $\langle X^2 \rangle$ was obtained by scrambling the QMC sequence 10 times and by making 10 runs of the Mc-MC simulation (with different seeds). This procedure is repeated 30 times in both cases to obtain the error of the error.
From the results, shown in figure \ref{fig:x2_harmonic}, we can see a scaling that agrees perfectly with the expected behavior, namely $N^{-0.5}$ for Mc-MC and $N^{-1}$ for QMC, for large $N$.
\begin{figure}[ht]
\centering
\begin{minipage}[b]{0.8\linewidth}
\centering
\includegraphics[width=\textwidth]{x2_error_harm_osc.pdf}
\caption{Error of $\langle X^2 \rangle$ as a function of the number of samples $N$; $\lambda=0$ (harmonic oscillator), $d=51$, $M_0=0.5$ and $\mu^2=2.0$}
\label{fig:x2_harmonic}
\end{minipage}
\end{figure}
Although this example is trivial, it was our first successful application of the QMC approach in a physical model and motivated us to pass on to more complicated models.
\subsection{Anharmonic Oscillator}
The WUS approach was also used for this problem to estimate $\langle X^4 \rangle$ and $\langle X^2 \rangle$.
With the anharmonic term in the action, the distribution function of the variables $x_i$ becomes very complicated, which makes it very hard to generate the samples directly from the PDF of the anharmonic oscillator. Instead, the anharmonic term and a part of the harmonic term are treated as part of the weight functions $f_1$ and $f_2$ in \eqref{eq:WUS},
leaving the sampling procedure for the $x_i$ as it was for the harmonic oscillator, except for a different factor $\mu^2_{sim}$ in front of the harmonic term:
\begin{equation}
\label{eq:weight_fns_anharm}
f_1(\vektor{z}) = O( A \Phi^{-1}(\vektor{z}) ) f_2(\vektor{z}) \; ,\quad
f_2(\vektor{z}) = e^{ - \sum_{i=1}^d \left[ a \frac{\mu^2-\mu_{sim}^2}{2} (A \Phi^{-1}(\vektor{z}))_i^2 + a \lambda (A \Phi^{-1}(\vektor{z}))_i^4 \right] }\; .
\end{equation}
This procedure is necessary because $C=A^T A$ is positive definite only if $\mu^2_{sim} > 0$, which in turn is necessary for the existence of $A^{-1}$ in the sampling procedure.
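A direct transcription of the weight $f_2$ from \eqref{eq:weight_fns_anharm} into Python (our sketch; the argument is the already transformed sample path $x = A\,\Phi^{-1}(\vektor{z})$):

```python
import math

def weight_f2(x, a, mu2, mu2_sim, lam):
    """WUS weight f2 of eq. (weight_fns_anharm), evaluated on a sample path
    x = A Phi^{-1}(z): the harmonic mismatch (mu2 - mu2_sim) and the full
    quartic term are absorbed into the weight."""
    s = sum(a * (mu2 - mu2_sim) / 2.0 * xi**2 + a * lam * xi**4 for xi in x)
    return math.exp(-s)
```

The weight reduces to $1$ when $\mu^2 = \mu^2_{sim}$ and $\lambda = 0$, recovering the plain harmonic sampling of the previous subsection.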
Further, it is important to note that the PCA factorization used in the generation of the Gaussian samples is essential for an efficient reduction of the effective dimension (see \cite{CAF97}) of the problem. For the parameters listed below, we estimated the effective dimension of the functions in \eqref{eq:weight_fns_anharm} to be close to $20$ (for a $99\,\%$ variance concentration). On the other hand, we found that the effective dimension also depends strongly on the parameter $T = d a$, the physical time extent of the system. For small $T$-values, say $T < 0.2$, the effective dimension is reduced sufficiently well, as in the harmonic case, so that the QMC approach leads to a $1/N$ error scaling. The situation changes for $T=1.5$, where the error behaves only like $1/N^{0.75}$, due to the increase of the effective dimension.
The parameters were set to $M_0 = 0.5$, $\lambda=1.0$, $\mu^2 = -16$.
In the two tests the parameters $a$ and $\mu^2_{sim}$ were adjusted such that $T$ was kept fixed. We set $a=0.015$ and $\mu^2_{sim} = 0.015$ for $d=100$, whereas for $d=1000$ we chose $a=0.0015$ and $\mu^2_{sim}=0.0015$.
The error analysis of $\langle X^2 \rangle$ and $\langle X^4 \rangle$ has been adopted from the harmonic oscillator test case described in the last subsection \ref{ssec:HO}. The result is shown in figure \ref{sec:numex:fig:1}.
For reasons mentioned earlier, WUS shows its limitations for large $T$ in our experiments.
If $T\geq5$ and $\mu^2 \leq -4 $, then we observe poor results with the Mc-MC or RQMC direct WUS
sampling method. For $T \in [1,1.5]$ and $\mu^2 \in [-20,10] $ the PCA results for RQMC seem satisfactory. The resulting
estimate of the ground state energy agrees to at least two significant digits with the theoretical
value $E_0 = 3.863...$ calculated in \cite{Blank79}, namely $\hat{E}_0 = 3.856 \pm 0.004$ for
$d=100$ and $\hat{E}_0 = 3.864\pm0.003$ for $d=1000$.
\section{Concluding Remarks}
For the harmonic oscillator we found a large-$N$ error behavior as expected
for QMC ($\sim 1/N $) and Mc-MC ($\sim 1/\sqrt{N}$).
Also for the anharmonic oscillator the estimation procedure leads to a significant
improvement when employing the QMC approach. In this case, the error scaling is
only of $O(N^{-0.75})$ instead of the theoretically best case of $O(N^{-1})$.
Further, we found that the applicability of the WUS approach seems to be limited by the
physical time extent $T=d a$. Stable results could only be found for values $T\leq 1.5$.
On the other hand, the choice of $a$ does not seem to have any effect, and the
accessible range of $T$ values already gives estimates of the
ground state energy that are compatible (within errors) with the theoretical prediction
(valid in the limit $T\rightarrow \infty$ and $a\rightarrow 0$).
Should the improved error scaling and the mild dependence on the lattice spacing $a$ found here also be present in more elaborate models, the QMC approach has the potential to become very valuable in the future.
\section*{Acknowledgement}
The authors wish to express their gratitude to Alan Genz (Washington State University)
and Frances Kuo (University of New South Wales, Sydney) for inspiring comments and
conversations, which helped to develop the work in this report. Frances Kuo
collaborated with us during her visit to the Humboldt-University Berlin in 2011.
A.N., K.J. and M.M.-P. acknowledge financial support by the DFG-funded collaborative
research center SFB/TR9.
\begin{figure}
\centering
\begin{subfigure}[b]{0.8\textwidth}
\includegraphics[width =\textwidth]{Errors_PCA_SOBOL_d100_final.png}
\caption{$a=0.015$, $d=100$ ($T=1.5$)}
\end{subfigure}
\begin{subfigure}[b]{0.8\textwidth}
\includegraphics[width =\textwidth]{Errors_PCA_SOBOL_d1000_final.png}
\caption{$a=0.0015$, $d=1000$ ($T=1.5$)}
\end{subfigure}
\caption{Shown is the $\log_{10}$(relative error) as box plots over 30 repetitions of the experiment, with $\lambda=1.0$, $\mu^2=-16$ and $a$ and $d$ as indicated.
For the sample generation, MC and randomly scrambled Sobol' (RQMC) points were used, with $2^{13}$, $2^{16}$ and $2^{19}$ points.
The approximate convergence rate for RQMC is $O(N^{-0.75})$.
}
\label{sec:numex:fig:1}
\end{figure}
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}\label{section:introduction}
In [Pop17], we proposed a new approach to the Mirror Symmetry Conjecture extended to {\bf possibly non-K\"ahler} compact complex manifolds. One of the main ideas was to substitute the Gauduchon cone for the classical K\"ahler cone that is empty on a non-K\"ahler manifold. The {\bf Iwasawa manifold}, a well-known compact non-K\"ahler manifold of complex dimension $3$ that was proved to have the weaker sGG property in [PU14], was used in [Pop17] to illustrate our theory. The main result of [Pop17] was that the Iwasawa manifold is its own mirror dual. One of the arguments supporting this conclusion was the existence of a correspondence (that is holomorphic in the first argument, anti-holomorphic in the second) between a variation of Hodge structures (VHS) parametrised by what we called the {\it local universal family of essential deformations} of the Iwasawa manifold and a VHS parametrised by a subset of the {\it complexified Gauduchon cone} of this manifold.
\vspace{2ex}
In the present paper, we give yet another criterion of a different nature by which the Iwasawa manifold is self-dual in a sesquilinear way. It states that in the well-known description of this manifold as a locally holomorphically trivial fibration by elliptic curves over a two-dimensional complex torus, both the base and the fibre are self-dual tori. This is the content of Theorem \ref{The:Iwasawa_mirror_self-duality} which is the main result of the paper.
The self-duality criterion is expressed in terms of the Albanese torus and map of the Iwasawa manifold, which are instances of the Albanese torus and map (otherwise known to always be abstractly defined) that we construct explicitly in full generality on any {\bf sGG manifold} by means of Hodge theory duly adapted to the specific context of possibly non-K\"ahler sGG manifolds. This construction occupies section \ref{section:Albanese-sGG}.
Our hope, motivating in part this note, is that the sesquilinear duality between the explicitly constructed Albanese torus and Jacobian torus of an arbitrary sGG manifold will show in the future how to guess the mirror dual of more general sGG manifolds that may not be mirror self-dual.
\vspace{2ex}
Recall that the Iwasawa manifold $X = G/\Gamma$ is defined as the quotient of the Heisenberg group
$$G:=\left\{\begin{pmatrix}1 & z_1 & z_3\\
0 & 1 & z_2\\
0 & 0 & 1\end{pmatrix}\,\, ; \,\, z_1, z_2, z_3\in\mathbb{C}\right\}\subset GL_3(\mathbb{C})$$
\noindent by its discrete subgroup $\Gamma\subset G$ of matrices with entries $z_1, z_2, z_3\in\mathbb{Z}[i]$. The map $(z_1,z_2,z_3)\mapsto (z_1,z_2)$ is easily seen to factor through the action of $\Gamma$ to define a locally holomorphically trivial proper holomorphic submersion
\begin{equation}\label{introd_Alb-fibration}\pi : X\to B\end{equation}
\noindent whose base $B=\mathbb{C}^2/(\mathbb{Z}[i]\oplus \mathbb{Z}[i])=\mathbb{C}/\mathbb{Z}[i]\times\mathbb{C}/\mathbb{Z}[i]$ is a two-dimensional complex torus (even an Abelian variety) and whose fibres are all isomorphic to the Gauss elliptic curve $\mathbb{C}/\mathbb{Z}[i]$. The torus $B$ and the map (\ref{introd_Alb-fibration}) are the Albanese torus, resp. Albanese map of the Iwasawa manifold in the standard sense in which these objects are associated with any compact complex manifold using a universal property (cf. e.g. [Uen75, chapter IV, $\S.9$]).
We give in section \ref{section:Albanese-sGG} a precise description of the Albanese torus and map that is valid on every sGG manifold (hence also on the Iwasawa manifold).
Recall that from the invariance under the action of $\Gamma$ of the $\mathbb{C}^3$-valued holomorphic $1$-form on $G$
$$G\ni M=\begin{pmatrix}1 & z_1 & z_3\\
0 & 1 & z_2\\
0 & 0 & 1\end{pmatrix} \mapsto M^{-1}\, dM = \begin{pmatrix}0 & dz_1 & dz_3-z_1\, dz_2\\
0 & 0 & dz_2\\
0 & 0 & 0\end{pmatrix}$$
\noindent we get three holomorphic $1$-forms $\alpha,\beta, \gamma$ on the Iwasawa manifold induced respectively by the forms $dz_1, dz_2, dz_3-z_1dz_2$ on $\mathbb{C}^3$. They are such that
$$d\alpha = d\beta = 0 \hspace{2ex} \mbox{and} \hspace{2ex} d\gamma = \partial\gamma = -\alpha\wedge\beta \neq 0 \hspace{3ex} \mbox{on}\hspace{1ex} X.$$
\noindent The forms $\alpha,\beta, \gamma$, that we call {\it structural}, and their conjugates are known to determine the whole cohomology of $X$ (cf. e.g. [Sch07]).
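For instance, the last relation can be verified by a direct computation on $\mathbb{C}^3$ (the forms being invariant under $\Gamma$, the identity descends to $X$)\!\!:
$$d\gamma = d(dz_3 - z_1\, dz_2) = -dz_1\wedge dz_2 = -\alpha\wedge\beta,$$
\noindent and since this form is of pure type $(2,\,0)$, we also get $d\gamma = \partial\gamma$.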
Considering the Kuranishi family $(X_t)_{t\in\Delta}$ (that is known to be unobstructed by a result of Nakamura, although we shall not use this fact in the present paper) of the Iwasawa manifold $X=X_0$, it is known (cf. e.g. [Ang14, p. 75-77]) that there exist $C^{\infty}$ families $(\alpha_t)_{t\in\Delta}$, $(\beta_t)_{t\in\Delta}$, $(\gamma_t)_{t\in\Delta}$ of smooth $(1,\,0)$-forms on the fibres $(X_t)_{t\in\Delta}$ such that $\alpha_0=\alpha$, $\beta_0=\beta$ and $\gamma_0=\gamma$ and such that the forms $\alpha_t,\beta_t, \gamma_t$ and their conjugates determine the whole cohomology of $X_t$ (cf. e.g. [Ang14, p. 77-84]).
\vspace{2ex}
We will exploit the fact that the structural forms $\alpha_t, \beta_t, \gamma_t$, their conjugates and appropriate products thereof define {\bf canonical} bases in all the cohomology groups that we are interested in on every $X_t$ with $t$ sufficiently close to $0$. This will allow us to deduce from the general explicit construction in section \ref{section:Albanese-sGG} that the Albanese torus $\mbox{Alb}(X_t)$ of any small deformation $X_t$ of the Iwasawa manifold $X_0$ is {\bf self-dual} (cf. Lemma \ref{Lem:identification_tori}). Theorem \ref{The:Iwasawa_mirror_self-duality} follows easily from this.
\section{The Albanese torus and map of an sGG manifold}\label{section:Albanese-sGG}
Let $X$ be a compact complex manifold with $\mbox{dim}_{\mathbb{C}}X=n$.
\subsection{Elements of Hodge theory of $\partial\bar\partial$-manifolds} Recall that $X$ is said to be a {\bf $\partial\bar\partial$-manifold} if the $\partial\bar\partial$-lemma holds on $X$. This means that for every $p,q=0,1, \dots , n$ and for every $d$-closed smooth $(p,\,q)$-form $u$ on $X$, the following exactness conditions are equivalent\!:
\begin{equation}\label{eqn:ddbar-def}u\in\mbox{Im}\,d \iff u\in\mbox{Im}\,\partial \iff u\in\mbox{Im}\,\bar\partial \iff u\in\mbox{Im}\,\partial\bar\partial.\end{equation}
\noindent It is well known (see e.g. [Pop14] for a rundown on the basic properties of these manifolds) that on any $\partial\bar\partial$-manifold, the Hodge decomposition and the Hodge symmetry hold in the following sense\!\!: there exist {\bf canonical} (i.e. depending only on the complex structure of $X$) isomorphisms
\begin{equation}\label{eqn:Hodge-decomp}H^k_{DR}(X,\,\mathbb{C})\simeq\bigoplus\limits_{p+q=k}H^{p,\,q}_{\bar\partial}(X,\,\mathbb{C}) \hspace{2ex} \mbox{and} \hspace{2ex} H^{p,\,q}_{\bar\partial}(X,\,\mathbb{C})\simeq\overline{H^{q,\,p}_{\bar\partial}(X,\,\mathbb{C})}, \hspace{2ex} k=0,1, \dots , 2n,\end{equation}
\noindent where $H^k_{DR}(X,\,\mathbb{C})$ stands for the De Rham cohomology group of degree $k$, while $H^{p,\,q}_{\bar\partial}(X,\,\mathbb{C})$ stands for the Dolbeault cohomology group of bidegree $(p,\,q)$. The inverse of the former isomorphism and the latter isomorphism are respectively defined by
$$([u^{p,\,q}]_{\bar\partial})_{p+q=k}\mapsto \bigg\{\sum\limits_{p+q=k}u^{p,\,q}\bigg\}_{DR}, \hspace{2ex} [u]_{\bar\partial}\mapsto \overline{[\bar{u}]_{\bar\partial}}.$$
\noindent This is made possible by the fact that the $\partial\bar\partial$-lemma ensures the existence of a $d$-closed representative in {\it every} Dolbeault cohomology class $[u]_{\bar\partial}$ of any bidegree $(p,\,q)$ (see e.g. [Pop13, Lemma 3.1]). It also ensures that the above maps are independent of the choice of $d$-closed representatives in the classes involved. The $\partial\bar\partial$-lemma also defines {\it canonical} isomorphisms between any two of the cohomology groups $H^{p,\,q}_{BC}(X,\,\mathbb{C})$ (Bott-Chern), $H^{p,\,q}_{\bar\partial}(X,\,\mathbb{C})$ (Dolbeault) and $H^{p,\,q}_A(X,\,\mathbb{C})$ (Aeppli), so in particular the Hodge decomposition (\ref{eqn:Hodge-decomp}) holds with any of $H^{p,\,q}_{BC}(X,\,\mathbb{C})$ and $H^{p,\,q}_A(X,\,\mathbb{C})$ in place of $H^{p,\,q}_{\bar\partial}(X,\,\mathbb{C})$.
In other words, $\partial\bar\partial$-manifolds behave cohomologically as compact K\"ahler manifolds do. In particular, the {\bf Jacobian} and {\bf Albanese tori} and {\bf maps} can be defined on $\partial\bar\partial$-manifolds in a way identical to the one they are defined on compact K\"ahler manifolds.
\subsection{Elements of Hodge theory of sGG manifolds}
The first purpose of this paper is to show that the {\bf Jacobian} and {\bf Albanese tori} and {\bf maps} can still be defined using Hodge theory in the larger class of {\bf sGG manifolds} (cf. [PU14]) with only minor modifications of the construction from the $\partial\bar\partial$ case. We will show that this is possible despite the fact that sGG manifolds need not admit a Hodge decomposition with symmetry in the standard sense of (\ref{eqn:Hodge-decomp}), but only a much weaker version thereof (cf. the splittings (\ref{eqn:H1-decomp-sGG}) and (\ref{eqn:H2n-1-decomp-sGG}) below that will play a key role in the sequel and what was called a {\it fake Hodge decomposition} in [PU14] that will not be used in this paper).
The {\bf sGG class} of compact complex manifolds, introduced in [PU14], strictly contains the class of $\partial\bar\partial$-manifolds, the best known example of an sGG manifold that is not a $\partial\bar\partial$-manifold being the {\bf Iwasawa manifold}. Recall the following equivalences (cf. [PU14])\!\!:
\begin{eqnarray}\label{eqn:sGG-characterisations}\nonumber X \hspace{1ex} \mbox{is sGG} & \stackrel{(a)}{\iff} & {\cal SG}_X = {\cal G}_X \stackrel{(b)}{\iff} \mbox{every Gauduchon metric on}\,\,X \,\, \mbox{is strongly Gauduchon}\\
\nonumber & \stackrel{(c)}{\iff} & \forall u\in C^{\infty}_{n,\,n-1}(X,\,\mathbb{C})\cap\ker d, \hspace{1ex} \mbox{the implication holds\!\!:} \hspace{1ex} u\in\mbox{Im}\,\partial \implies u\in\mbox{Im}\,\bar\partial\\
\nonumber & \stackrel{(d)}{\iff} & b_1 = 2\,h^{0,\,1}_{\bar\partial},\end{eqnarray}
\noindent where $(a)$ is the definition (given in [PU14]) of sGG manifolds requiring the sG cone ${\cal SG}_X$ of $X$ to equal the (a priori larger) Gauduchon cone ${\cal G}_X$ (see [Pop15] for the terminology), $(b)$ is easily seen to be equivalent to $(a)$ (see e.g. [Pop14] for a reminder of the terminology), $(c)$ expresses the sGG property as a special case of the $\partial\bar\partial$-lemma (cf. [Pop15, Observation 5.3] --- the reader unfamiliar with the terminology of the other equivalences may wish to take equivalence $(c)$ as the definition of sGG manifolds), while $(d)$ is one of the numerical characterisations proved in [PU14]. Actually $b_1 \leq 2\,h^{0,\,1}_{\bar\partial}$ on every compact complex manifold and the equality characterises the sGG manifolds ([PU14, Theorem 1.5]).
Moreover, by [PU14, Theorem 3.1], on every compact complex manifold $X$, the following canonical linear map\!\!:
\begin{eqnarray}\label{eqn:H1-decomp-sGG} F\,:\,H^1_{DR}(X,\,\mathbb{C}) \longrightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})},\hspace{3ex} F(\{\alpha\}_{DR}) := ([\alpha^{0,\,1}]_{\bar\partial},\,\overline{[\overline{\alpha^{1,\,0}}]_{\bar\partial}}),\end{eqnarray}
\noindent is well defined and injective. Furthermore, $X$ is sGG if and only if $F$ is an isomorphism. Equivalently, the dual linear map
\begin{eqnarray}\label{eqn:H2n-1-decomp-sGG} F^{\star}:\,H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})} \longrightarrow H^{2n-1}_{DR}(X,\,\mathbb{C}),\hspace{3ex} F^{\star}([\beta]_{\bar\partial},\,\overline{[\gamma]}_{\bar\partial}) := \{\beta + \bar\gamma\}_{DR},\end{eqnarray}
\noindent is surjective for any $X$, while $X$ is sGG if and only if $F^{\star}$ is an isomorphism.
Thus, the canonical splittings (\ref{eqn:H1-decomp-sGG}) and (\ref{eqn:H2n-1-decomp-sGG}) of $H^1_{DR}(X,\,\mathbb{C})$ and resp. $H^{2n-1}_{DR}(X,\,\mathbb{C})$ are the weaker substitutes for the Hodge decomposition (\ref{eqn:Hodge-decomp}) in degrees $1$, resp. $2n-1$, afforded to sGG manifolds. Clearly, when $X$ is a $\partial\bar\partial$-manifold, (\ref{eqn:H1-decomp-sGG}) and (\ref{eqn:H2n-1-decomp-sGG}) coincide with the splittings for $k=1$, resp. $k=2n-1$, in (\ref{eqn:Hodge-decomp}).
\begin{Cor}\label{Cor:H01_injection_H1} For every {\bf sGG manifold} $X$, the Dolbeault cohomology group $H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$ {\bf injects canonically} into the De Rham cohomology group $H^1_{DR}(X,\,\mathbb{C})$. The canonical injection $j:H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\hookrightarrow H^1_{DR}(X,\,\mathbb{C})$ is obtained as the composition of the injective linear maps
$$H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\hookrightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})} \stackrel{F^{-1}}{\longrightarrow} H^1_{DR}(X,\,\mathbb{C}).$$
\end{Cor}
\noindent {\it Proof.} The sGG assumption ensures that the canonical linear map $F$ defined in (\ref{eqn:H1-decomp-sGG}) is an isomorphism. Then so is its inverse $F^{-1}$. \hfill $\Box$
\vspace{3ex}
The canonical splittings (\ref{eqn:H1-decomp-sGG}) and (\ref{eqn:H2n-1-decomp-sGG}) enable one to construct canonically and explicitly the {\it Jacobian variety} (cf. Definition \ref{Def:jacobian-variety}) and the {\it Albanese variety} (cf. Definition \ref{Def:albanese-variety}) of any sGG manifold by imitating the classical constructions on compact K\"ahler manifolds with the necessary modifications. The details are spelt out in $\S.$\ref{subsection:jacobian} and $\S.$\ref{subsection:albanese}.
\subsection{The Jacobian variety of an sGG manifold}\label{subsection:jacobian}
Let $X$ be an sGG manifold with $\mbox{dim}_{\mathbb{C}}X=n$. The inclusions $\mathbb{Z}\subset\mathbb{R}\subset\mathbb{C}\subset{\cal O}$ induce morphisms
$$H^1(X,\,\mathbb{Z})\longrightarrow H^1(X,\,\mathbb{R})\longrightarrow H^1(X,\,\mathbb{C})\longrightarrow H^1(X,\,{\cal O})$$
\noindent where the image of $H^1(X,\,\mathbb{Z})$ is a lattice in $H^1(X,\,\mathbb{R})$. On the other hand, the map $H^1(X,\,\mathbb{R})\rightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$ obtained by composing the maps $H^1(X,\,\mathbb{R})\rightarrow H^1(X,\,\mathbb{C})\rightarrow H^1(X,\,{\cal O})\simeq H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$ identifies canonically with the composite map
$$H^1_{DR}(X,\,\mathbb{R})\stackrel{j_1}{\hookrightarrow} H^1_{DR}(X,\,\mathbb{C})\stackrel{p_1\circ F}\longrightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C}),$$
\noindent where $j_1$ is the natural injection and $p_1\,\,:\,\,H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}\longrightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$ is the projection onto the first factor. Since $F$ is an {\bf isomorphism} (thanks to $X$ being {\bf sGG}), we get that
$$p_1\circ F\circ j_1\,\,:\,\,H^1_{DR}(X,\,\mathbb{R})\longrightarrow H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$$
\noindent is an isomorphism. Hence $\mbox{Im}\,H^1(X,\,\mathbb{Z})$ is a lattice in $H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$. As a result, we can put
\begin{Def}\label{Def:jacobian-variety} The {\bf Jacobian variety} of an $n$-dimensional sGG manifold $X$ is defined exactly as in the K\"ahler case as the $q$-dimensional complex torus
\begin{equation}\label{eqn:Jac-def}\mbox{Jac}(X):=H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})/\mbox{Im}\,H^1(X,\,\mathbb{Z}),\end{equation}
\noindent where $q:=h^{0,\,1}_{\bar\partial}(X)$ stands for the irregularity of $X$.
\end{Def}
\subsection{The Albanese variety of an sGG manifold} \label{subsection:albanese}
Let once again $X$ be an sGG manifold with $\mbox{dim}_{\mathbb{C}}X=n$. In a way similar to the above discussion, we have morphisms
$$H^{2n-1}(X,\,\mathbb{Z})\longrightarrow H^{2n-1}(X,\,\mathbb{R})\stackrel{j_{2n-1}}{\longrightarrow} H^{2n-1}(X,\,\mathbb{C})\stackrel{(F^{\star})^{-1}}{\longrightarrow} H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})},$$
\noindent where $\mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z})$ is a lattice in $H^{2n-1}(X,\,\mathbb{R})$ (a general feature of any compact complex manifold $X$) and $(F^{\star})^{-1}$ is an {\bf isomorphism} (thanks to $X$ being {\bf sGG}). If we denote by $p_2\,\,:\,\,H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}\longrightarrow \overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$ the projection onto the second factor, then
$$p_2\circ (F^{\star})^{-1}\circ j_{2n-1}\,\,:\,\,H^{2n-1}_{DR}(X,\,\mathbb{R})\longrightarrow \overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$$
\noindent is an isomorphism and therefore $\mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z})$ is a lattice in $\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}\simeq (\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})})^{\star}$.
\begin{Def}\label{Def:albanese-variety} The {\bf Albanese variety} of an $n$-dimensional sGG manifold $X$ is the complex torus
\begin{equation}\label{eqn:Alb-def}\mbox{Alb}(X):= \overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}/\mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z}) = \bigg(\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}\bigg)^{\star}/\mbox{Im}\,H^1(X,\,\mathbb{Z})^\star.\end{equation}
\end{Def}
The spaces $H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})$ and $H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})$ are dual under the Serre duality, while $H^{2n-1}(X,\,\mathbb{Z})$ and $H^1(X,\,\mathbb{Z})$ are Poincar\'e dual.
Recall that in the standard case when $X$ is K\"ahler, the Albanese torus of $X$ is defined as the quotient
$$H^{n-1,\,n}(X,\,\mathbb{C})/\mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z}).$$
\noindent Since, by Hodge symmetry, the conjugation defines an isomorphism $H^{n-1,\,n}_{\bar\partial}(X,\,\mathbb{C})\simeq \overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$ when $X$ is K\"ahler, our Definition \ref{Def:albanese-variety} of the Albanese torus coincides with the standard definition in the K\"ahler case.
\vspace{3ex}
\begin{Conc}\label{Conc:dual_Jacobi-Albanese} We can now conclude from Definitions \ref{Def:jacobian-variety} and \ref{Def:albanese-variety} that the {\bf Jacobian torus} and the {\bf Albanese torus} of any sGG manifold $X$ are {\bf dual tori} in the sense of the following {\bf sesquilinear duality} obtained by composing the bilinear Serre duality with the conjugation in the second factor\!\!:
\begin{equation}\label{eqn:sesquilinear_Serre}H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\times\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}\longrightarrow\mathbb{C}, \hspace{3ex} ([\alpha]_{\bar\partial},\,\overline{[\beta]}_{\bar\partial})\mapsto\int\limits_X\alpha\wedge\beta.\end{equation}
\end{Conc}
\subsection{The Albanese map of an sGG manifold}\label{subsection:Albanese-map_sGG}
We can now easily adapt to the general context of sGG manifolds $X$ the construction of the Albanese map $\alpha:X\longrightarrow\mbox{Alb}(X)$ from the familiar K\"ahler case. We shall follow the presentation and use the notation of [Dem97, $\S.9.2$].
Let $X$ be an sGG manifold with $\mbox{dim}_{\mathbb{C}}X=n$. The standard isomorphism
$$H_1(X,\,\mathbb{Z})\longrightarrow H^{2n-1}(X,\,\mathbb{Z})$$
\noindent given by the Poincar\'e duality is induced by the map $[\xi]\mapsto\{I_\xi\}_{DR}\in H^{2n-1}_{DR}(X,\,\mathbb{R})$ associating with the homology class $[\xi]$ of every loop $\xi$ in $X$ the De Rham cohomology class of the current of integration $I_\xi$ over $\xi$. Using this isomorphism, the expression (\ref{eqn:Alb-def}) of the Albanese torus of $X$ transforms to
\begin{equation}\label{eqn:Alb_1}\mbox{Alb}(X)= \bigg(\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}\bigg)^{\star}/\mbox{Im}\,H_1(X,\,\mathbb{Z}),\end{equation}
\noindent where the map $H_1(X,\,\mathbb{Z})\longrightarrow \overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}^{\star}$ is defined by
\begin{equation}\label{eqn:I-tilde_xi}[\xi]\mapsto \widetilde{I}_\xi:=\bigg(\overline{[v]}\mapsto\int\limits_\xi \overline{\{v\}}\bigg), \hspace{3ex} \mbox{where} \hspace{1ex} \{v\}:=j([v])\in H^1_{DR}(X,\,\mathbb{C}).\end{equation}
\noindent We have used the canonical injection $j:H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\hookrightarrow H^1_{DR}(X,\,\mathbb{C})$ defined in Corollary \ref{Cor:H01_injection_H1} and the fact that the integral $\int_\xi \overline{\{v\}}$ depends only on the homology class $[\xi]$ and on the cohomology class $\overline{\{v\}}$ (so not on the actual representatives of these classes).
\begin{Def}\label{Def:albanese-map_def} Let $X$ be an sGG manifold. Fix a base point $a\in X$. For every point $x\in X$, let $\xi$ be any path from $a$ to $x$ and let $\widetilde{I}_\xi\in\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}^{\star}$ be the linear functional defined in (\ref{eqn:I-tilde_xi}).
The canonical holomorphic map
\begin{equation}\label{eqn:albanese-map_def}\alpha: X\longrightarrow\mbox{Alb}(X)= \bigg(\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}\bigg)^{\star}/\mbox{Im}\,H_1(X,\,\mathbb{Z}), \hspace{3ex} x\mapsto \widetilde{I}_\xi \hspace{2ex} \mbox{mod} \hspace{1ex} \mbox{Im}\,H_1(X,\,\mathbb{Z}),\end{equation}
\noindent will be called the {\bf Albanese map} of the sGG manifold $X$.
\end{Def}
Note that the class of $\widetilde{I}_\xi$ modulo $\mbox{Im}\,H_1(X,\,\mathbb{Z})$ does not depend on the choice of path $\xi$ from $a$ to $x$: for any other such path $\eta$, we have $\widetilde{I}_{\eta^{-1}\,\xi}\in\mbox{Im}\,H_1(X,\,\mathbb{Z})$. Also note that the definition (\ref{eqn:albanese-map_def}) of the Albanese map for sGG manifolds $X$ coincides with the standard definition when $X$ is K\"ahler. Indeed, in the K\"ahler case, $\overline{H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})}$ is canonically isomorphic to $H^{1,\,0}_{\bar\partial}(X,\,\mathbb{C})$ by Hodge symmetry. Moreover, when $X$ is sGG, the canonical injection $j:H^{0,\,1}_{\bar\partial}(X,\,\mathbb{C})\hookrightarrow H^1_{DR}(X,\,\mathbb{C})$ defined in Corollary \ref{Cor:H01_injection_H1} is an apt substitute for the fact that every holomorphic $1$-form (i.e. the unique representative of every element in $H^{1,\,0}_{\bar\partial}(X,\,\mathbb{C})$) is $d$-closed when $X$ is K\"ahler or merely $\partial\bar\partial$.
As in the standard K\"ahler case, we have an alternative description of the Albanese map.
\begin{Obs}\label{Obs:alternative-description_Albanese-map} Let $X$ be an sGG manifold with $\mbox{dim}_\mathbb{C} X=n$. Using the expression (\ref{eqn:Alb-def}) of the Albanese torus of $X$, the Albanese map of $X$ is given by
\begin{equation}\label{eqn:albanese-map_alternative}\nonumber\alpha: X\longrightarrow\mbox{Alb}(X)= \overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}/\mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z}), \hspace{3ex} x\mapsto \overline{\{I_\xi\}^{n,\,n-1}} \hspace{2ex} \mbox{mod} \hspace{1ex} \mbox{Im}\,H^{2n-1}(X,\,\mathbb{Z}),\end{equation}
\noindent where $ \overline{\{I_\xi\}^{n,\,n-1}}\in\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$ is the projection of the De Rham cohomology class $\{I_\xi\}_{DR}\in H^{2n-1}_{DR}(X,\,\mathbb{R})$ onto $\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$ w.r.t. the isomorphism
$$(F^{\star})^{-1}:\,H^{2n-1}_{DR}(X,\,\mathbb{C})\stackrel{\simeq}{\longrightarrow} H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})\oplus\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})} $$
\noindent induced by (\ref{eqn:H2n-1-decomp-sGG}). As usual, $I_\xi$ stands for the current of integration over the path $\xi$ from $a$ to $x$ in $X$.
\end{Obs}
Note that in Observation \ref{Obs:alternative-description_Albanese-map} the only difference in the sGG case compared with the standard K\"ahler (or $\partial\bar\partial$) case is the substitution of $\overline{H^{n,\,n-1}_{\bar\partial}(X,\,\mathbb{C})}$ for $H^{n-1,\,n}_{\bar\partial}(X,\,\mathbb{C})$. These spaces are isomorphic by Hodge symmetry when $X$ is K\"ahler or merely $\partial\bar\partial$.
\section{Application to the mirror self-duality of the sGG Iwasawa manifold}\label{section:application_mirror-Iwasawa}
In this section, we apply the construction of $\S.$\ref{section:Albanese-sGG} to the Iwasawa manifold, which is known not to be a $\partial\bar\partial$-manifold (see e.g. [Pop14]). However, the Iwasawa manifold $X=X_0$ and all the small deformations in its Kuranishi family $(X_t)_{t\in\Delta}$ are sGG compact complex manifolds of dimension $3$ (cf. [PU14]). So, the extension to the sGG context, performed in $\S.$\ref{subsection:jacobian} and $\S.$\ref{subsection:albanese}, of the classical constructions of the Albanese torus and map from the $\partial\bar\partial$ case is key to our purposes here.
For the Iwasawa manifold $X=X_0$ and all its small deformations $(X_t)_{t\in\Delta}$, the Albanese maps
$$\pi_t:X_t\longrightarrow \mbox{Alb}(X_t):=B_t, \hspace{3ex} t\in\Delta,$$
\noindent have simple explicit descriptions and $\pi:=\pi_0:X_0\to B_0$ is a locally holomorphically trivial fibration whose fibre $\pi^{-1}(s)$ is the Gauss elliptic curve $\mathbb{C}/\mathbb{Z}[i]$ and whose base is the $2$-dimensional complex torus $\mathbb{C}/\mathbb{Z}[i]\times\mathbb{C}/\mathbb{Z}[i]$.
\vspace{3ex}
First, we show that the Albanese torus of every small deformation $X_t$ of the Iwasawa manifold $X=X_0$ is {\bf self-dual} in the context of the construction of section \ref{section:Albanese-sGG}.
\begin{Lem}\label{Lem:identification_tori} Let $(X_t)_{t\in\Delta}$ be the Kuranishi family of the Iwasawa manifold $X=X_0$. Thus $n=\mbox{dim}_\mathbb{C} X_t = 3$. For every $t\in\Delta$ sufficiently close to $0$, the dual Jacobian and Albanese tori $\mbox{Jac}(X_t)$ and $\mbox{Alb}(X_t)$ can be identified {\bf canonically} in the following sense.
There exist {\bf canonical} isomorphisms
\begin{equation}\label{eqn:identification_tori}H^{0,\,1}_{\bar\partial}(X_t,\,\mathbb{C})\simeq H^{3,\,2}_{\bar\partial}(X_t,\,\mathbb{C}) \hspace{2ex} \mbox{and} \hspace{2ex} H^1(X_t,\,\mathbb{Z})\simeq H^5(X_t,\,\mathbb{Z}), \hspace{3ex} t\in\Delta.\end{equation}
\end{Lem}
\noindent {\it Proof.} Dual finite-dimensional vector spaces are, of course, isomorphic, so the main feature of the isomorphisms (\ref{eqn:identification_tori}) is their canonical nature. By ``canonical'' we mean ``depending only on the complex or differential structure, independent of any choice of metric''. As can be seen below, the canonical nature of these isomorphisms follows from the existence of canonical bases, defined by the structural differential forms $\alpha_t,\beta_t, \gamma_t$ mentioned in the introduction and their conjugates, in the vector spaces involved.
From [Sch07, p.6] and [Ang14, $\S.2.2.2$, $\S.2.2.3$], we gather that the vector spaces featuring in (\ref{eqn:identification_tori}) are generated by the structural $(1,\,0)$-forms $\alpha_t, \beta_t, \gamma_t$ as follows:
\begin{eqnarray}\label{eqn:generation}\nonumber H^{0,\,1}_{\bar\partial}(X_t,\,\mathbb{C}) & = & \bigg\langle[\bar\alpha_t]_{\bar\partial},\,[\bar\beta_t]_{\bar\partial}\bigg\rangle, \hspace{3ex} H^{3,\,2}_{\bar\partial}(X_t,\,\mathbb{C}) = \bigg\langle[\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\gamma_t]_{\bar\partial},\,[\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\beta_t\wedge\bar\gamma_t]_{\bar\partial}\bigg\rangle,\\
H^1_{DR}(X_t,\,\mathbb{C}) & = & \bigg\langle\{\alpha_t\},\, \{\beta_t\},\, \{\bar\alpha_t\},\, \{\bar\beta_t\}\bigg\rangle, \end{eqnarray}
\begin{eqnarray}\nonumber H^5_{DR}(X_t,\,\mathbb{C}) & = & \bigg\langle\{\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\gamma_t\},\, \{\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\beta_t\wedge\bar\gamma_t\},\, \{\alpha_t\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\beta_t\wedge\bar\gamma_t\},\, \\
\nonumber & & \hspace{58ex} \{\beta_t\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\beta_t\wedge\bar\gamma_t\}\bigg\rangle,\end{eqnarray}
\noindent where $\{\,\,\,\}$ stands for De Rham cohomology classes.
Thus, the isomorphism $H^{0,\,1}_{\bar\partial}(X_t,\,\mathbb{C})\simeq H^{3,\,2}_{\bar\partial}(X_t,\,\mathbb{C})$ of (\ref{eqn:identification_tori}) is canonically defined by $[\bar\xi]_{\bar\partial}\mapsto[\bar\xi\wedge\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\gamma_t]_{\bar\partial}$ for $\xi\in\{\alpha_t,\,\beta_t\}$, while the isomorphism $ H^1_{DR}(X_t,\,\mathbb{C})\simeq H^5_{DR}(X_t,\,\mathbb{C})$ is canonically defined by $\{\zeta\}\mapsto\{\zeta\wedge\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\gamma_t\}$ for $ \zeta\in\{\bar\alpha_t,\,\bar\beta_t\}$ and by $\{\zeta\}\mapsto\{\zeta\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\beta_t\wedge\bar\gamma_t\}$ for $ \zeta\in\{\alpha_t,\,\beta_t\}$. \hfill $\Box$
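As a sanity check, independent of the proof, one can verify directly on these bases that the above isomorphisms are compatible with the sesquilinear duality (\ref{eqn:sesquilinear_Serre}). Indeed, up to nonzero multiplicative constants, we have
\begin{equation*}\int\limits_{X_t}\bar\alpha_t\wedge\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\beta_t\wedge\bar\gamma_t\neq 0, \hspace{3ex} \int\limits_{X_t}\bar\alpha_t\wedge\alpha_t\wedge\beta_t\wedge\gamma_t\wedge\bar\alpha_t\wedge\bar\gamma_t= 0,\end{equation*}
\noindent since the first integrand is a nowhere vanishing smooth $(3,\,3)$-form on $X_t$ (the forms $\alpha_t,\beta_t,\gamma_t$ and their conjugates form a coframe), while $\bar\alpha_t$ occurs twice in the second integrand. The analogous identities hold with $\bar\beta_t$ in place of $\bar\alpha_t$, so the basis $([\bar\alpha_t]_{\bar\partial},\,[\bar\beta_t]_{\bar\partial})$ of $H^{0,\,1}_{\bar\partial}(X_t,\,\mathbb{C})$ pairs non-degenerately with the conjugates of the above basis of $H^{3,\,2}_{\bar\partial}(X_t,\,\mathbb{C})$.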
\vspace{3ex}
Now, we recall two standard facts which together prove that every elliptic curve (in particular, the fibre of the Albanese map $\pi:=\pi_0:X_0\to B_0$) is {\bf self-dual}.
\begin{Prop}\label{Prop:standard_elliptic-curves} (see e.g. [Dem97, $\S.10.2$]) Let $X$ be a compact complex manifold such that $\mbox{dim}_\mathbb{C} X=1$ (i.e. $X$ is a compact {\bf complex curve}).
\vspace{1ex}
$(i)$\, The Jacobian torus $\mbox{Jac}(X)$ of $X$ coincides with its Albanese torus $\mbox{Alb}(X)$. Moreover, for every point $a\in X$, the Jacobi map
$$\Phi_a:X\longrightarrow \mbox{Jac}(X), \hspace{3ex} x\mapsto{\cal O}([x]-[a]),$$
\noindent coincides with the Albanese map
$$\alpha:X\longrightarrow \mbox{Alb}(X)=\mbox{Jac}(X).$$
$(ii)$\, If $X$ is an {\bf elliptic curve} (i.e. $g=1$, where $g:=h^{0,\,1}(X)$ is the genus of the complex curve $X$), then $\Phi_a=\alpha$ is an {\bf isomorphism}, i.e.
$$X\simeq \mbox{Jac}(X) = \mbox{Alb}(X).$$
\noindent In particular, since the dual tori $\mbox{Jac}(X)$ and $\mbox{Alb}(X)$ coincide, $X$ is self-dual.
\end{Prop}
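It may be worth noting that, for the Gauss elliptic curve $\mathbb{C}/\mathbb{Z}[i]$ that will feature below as the fibre of the Albanese map of the Iwasawa manifold, this self-duality can also be seen directly at the lattice level. If the dual torus of $\mathbb{C}/\Lambda$ is identified, as usual, with $\mathbb{C}/\Lambda'$, where $\Lambda':=\{z\in\mathbb{C}\,\,:\,\,\mbox{Im}\,(\bar{z}\,\lambda)\in\mathbb{Z} \hspace{1ex} \mbox{for all} \hspace{1ex} \lambda\in\Lambda\}$ (the normalization of the pairing being immaterial up to a homothety), then for $\Lambda=\mathbb{Z}[i]$ we get
\begin{equation*}\lambda=1 \,\Longrightarrow\, \mbox{Im}\,z\in\mathbb{Z} \hspace{2ex} \mbox{and} \hspace{2ex} \lambda=i \,\Longrightarrow\, \mbox{Re}\,z\in\mathbb{Z},\end{equation*}
\noindent hence $\Lambda'=\mathbb{Z}[i]=\Lambda$ and $\mathbb{C}/\mathbb{Z}[i]$ is isomorphic to its dual torus.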
\vspace{3ex}
We can now state the main result of this paper, showing that the Iwasawa manifold is its own dual in a simple sense pertaining to its Albanese torus and map. This self-duality point of view complements those considered in [Pop17].
\begin{The}\label{The:Iwasawa_mirror_self-duality} The Iwasawa manifold $X=X_0$ is {\bf its own dual} in the sense that in its Albanese map description
$$\pi=\pi_0:X_0\longrightarrow B_0:=\mbox{Alb}(X_0)$$
\noindent as a locally holomorphically trivial fibration by elliptic curves $\mathbb{C}/\mathbb{Z}[i]$ over the $2$-dimensional complex torus $\mathbb{C}/\mathbb{Z}[i]\times\mathbb{C}/\mathbb{Z}[i]$, both the base $\mbox{Alb}(X_0)$ and the fibre $\pi_0^{-1}(s)$ are (sesquilinearly) self-dual tori.
\end{The}
\noindent {\it Proof.} The self-duality of $\mbox{Alb}(X_0)$ was proved in Lemma \ref{Lem:identification_tori}, while the self-duality of $\pi_0^{-1}(s)$ is the standard fact recalled in Proposition \ref{Prop:standard_elliptic-curves}. \hfill $\Box$
\vspace{6ex}
\noindent {\bf References.} \\
\noindent [Ang14]\, D. Angella --- {\it Cohomological Aspects in Complex Non-K\"ahler Geometry} --- LNM 2095, Springer (2014).
\vspace{1ex}
\noindent [Dem97]\, J.-P. Demailly --- {\it Complex Analytic and Algebraic Geometry} --- http://www-fourier.ujf-grenoble.fr/~demailly/books.html
\vspace{1ex}
\noindent [Pop13]\, D. Popovici --- {\it Holomorphic Deformations of Balanced Calabi-Yau $\partial\bar\partial$-Manifolds}--- arXiv e-print AG 1304.0331v1.
\vspace{1ex}
\noindent [Pop14]\, D. Popovici --- {\it Deformation Openness and Closedness of Various Classes of Compact Complex Manifolds; Examples} --- Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), Vol. XIII (2014), 255-305.
\vspace{1ex}
\noindent [Pop15]\, D. Popovici --- {\it Aeppli Cohomology Classes Associated with Gauduchon Metrics on Compact Complex Manifolds} --- Bull. Soc. Math. France {\bf 143} (3), (2015), p. 1-37.
\vspace{1ex}
\noindent [Pop17]\, D. Popovici --- {\it Non-K\"ahler Mirror Symmetry of the Iwasawa Manifold} --- arXiv e-print AG 1706.06449v1.
\vspace{1ex}
\noindent [PU14]\, D. Popovici, L. Ugarte --- {\it The sGG Class of Compact Complex Manifolds} --- arXiv e-print DG 1407.5070v1.
\vspace{1ex}
\noindent [Sch07]\, M. Schweitzer --- {\it Autour de la cohomologie de Bott-Chern} --- arXiv e-print math.AG/0709.3528v1.
\vspace{1ex}
\noindent [Uen75]\, K. Ueno --- {\it Classification Theory of Algebraic Varieties and Compact Complex Spaces} --- LNM {\bf 439} (1975).
\vspace{6ex}
\noindent Institut de Math\'ematiques de Toulouse, Universit\'e Paul Sabatier,
\noindent 118 route de Narbonne, 31062 Toulouse, France
\noindent Email: popovici@math.univ-toulouse.fr
\end{document}
\emph{Photoacoustic Imaging} is a promising technique for visualizing biological material parameters. In
experiments, the medium is exposed to a short pulse of an electromagnetic wave. The medium absorbs a fraction of
the induced energy, heats up, and reacts with thermoelastic expansion. This in turn produces acoustic waves, which
can be recorded and which are used to determine the electromagnetic absorption coefficient
\cite{XuWan06,ElbSchSchu11_report,KirSch11_report,ZanSchHal09b}.
These coupling properties explain why photoacoustic imaging is referred to as \emph{hybrid} or \emph{Coupled Physics Imaging}.
For some recent progress in hybrid imaging we refer to the surveys \cite{Kuc11,ArrSch12}.
In this paper we investigate the method of time reversal in attenuating media as it was introduced in
\cite{AmmBreGarWah11,Wah11,AmmBreGarWah12_report} for the thermo-viscous wave equation.
In these references, the goal is to construct a parameter dependent family of approximate reconstruction functionals,
which allow for approximation of the initial datum of the thermo-viscous wave equation.
For time-reversal technique for imaging in general, see for example \cite{Amm08,Fin97,FouGarPapSol07}.
In this work, we use the asymptotic techniques from \cite{AmmBreGarWah11,Wah11} to develop time reversal algorithms for the
Nachman-Smith-Waag (NSW) \cite{NacSmiWaa90} and the Kowar-Scherzer-Bonnefond (KSB) models \cite{KowSch10,KowSchBon11}.
These two models satisfy a \emph{strong causality property}: their solutions vanish before initialization and exhibit
\emph{a finite wave front propagation speed}. We emphasize that the thermo-viscous wave equation considered in \cite{AmmBreGarWah11,Wah11}
is not strongly causal \cite{IgnOst10}.
While there is a large literature on inversion formulas and time reversal algorithms for the standard wave equation, much less is
known in the case of attenuating media \cite{BurGruHalNusPal07,KowSch10,AmmBreGarWah11,NacSmiWaa90}. This is partially due to the
fact that, so far, no reference attenuation model (that is, no governing wave equation) has been established in this field.
\section{Review on time-reversal for the thermo-viscous equation}
Let $\Omega$ be a bounded domain in $\mathbb{R}^3$. Standard photoacoustic imaging consists in determining the \emph{absorption density} $f$, which is assumed to have compact support $K$ in $\Omega$,
in the acoustic wave equation
\begin{equation}\label{wave-eq}\begin{split}
\frac{\partial^2 p}{\partial t^2}(x,t) - \Delta p(x,t) = \pd{\delta_0}{t}(t) f(x), \qquad (x,t)\in\mathbb{R}^3\times\mathbb{R},\\
p(x,t)=0 \ \text{ and } \ \pd{p}{t}(x,t)=0,\qquad x\in\mathbb{R}^3, \ t < 0,
\end{split}\end{equation}
from measurement data $g(y,t):=p(y,t)$ for all $y\in\partial \Omega$ and $t\in[0,T]$, where $T$ is supposed to be sufficiently large.
In the following, let $a:=a(x)$ be a positive function, describing \emph{attenuation}.
The imaging method considered in \cite{AmmBreGarWah11,Wah11} consists in reconstructing $f$ from the data
\begin{equation*}
g_a(y,t):=p^a(y,t) \text{ for all } y\in\partial \Omega \text{ and } t\in[0,+\infty),
\end{equation*}
where $p^a$ solves the \emph{thermo-viscous wave} equation,
where $p^a$ solves the \emph{thermo-viscous wave} equation,
\begin{equation}
\label{wave-eq-att-tv}
\begin{split}
\frac{\partial^2 p^a}{\partial t^2}(x,t) -\left(\mathcal{I} + a \frac{\partial}{\partial t} \right) \Delta p^a(x,t) = \pd{\delta_0}{t}(t) f(x),\qquad (x,t)\in\mathbb{R}^3\times\mathbb{R},\\
p^a(x,t)=0 \ \text{ and } \ \pd{p^a}{t}(x,t)=0,\qquad x\in\mathbb{R}^3, \ t \ll 0\;.
\end{split}
\end{equation}
To derive the imaging technique we use $\tilde\Gamma_\omega^a(x,\cdot)$, the fundamental solution at $x$
of the Helmholtz equation,
\begin{equation}
\label{eq:helmholtz}
\omega^2 \tilde u (x,y)+(1+{\tt i} a \omega)\Delta_y \tilde u(x,y)= - \delta_x(y), \qquad y\in \mathbb{R}^3\;.
\end{equation}
In \cite{AmmBreGarWah11,Wah11} the following results have been shown:
\begin{itemize}
\item For a fixed cut-off parameter $\rho > 0$, the function
\begin{equation} \label{rel-1}
v_{s,\rho}^a(x,t):=-\frac{1}{2\pi}\int_{-\rho}^{\rho}\int_{\partial \Omega} {\tt i} \omega \tilde{\Gamma}_\omega^a(x,y)
g_a(y,T-s) d\sigma(y) e^{-{\tt i} \omega(t-s)} d\omega\,,
\end{equation}
satisfies the thermo-viscous wave equation
\begin{equation} \label{reg-treq-tv}
\frac{\partial^2 v}{\partial t^2}(x,t) -\left(\mathcal{I} - a \frac{\partial}{\partial t} \right) \Delta v(x,t) = S_\rho\left[\pd{\delta_s}{t}\right]
g_a(x,T-s)\delta_{\partial \Omega}\,,
\end{equation}
where $S_\rho$ is as in \eqref{S_r}.
In this formula the necessity of the regularization becomes evident: in the unregularized
form, the right-hand side consists of a product of two distributions, which is a priori not well-defined.
\item Moreover, for small $a$, it follows from the results in \cite{Wah11} that
\begin{equation} \label{functional-reg}
\mathcal{I}_\rho^a(x)=\int_0^T v_{s,\rho}^a(x,T)ds \longrightarrow f(x), \qquad \text{as } \rho\to+\infty\,.
\end{equation}
In addition, \cite[Remark 2.3.6]{Wah11} explains how to obtain appropriate values for the cut-off parameter
$\rho$. The regularization is required so that the function $v_{s,\rho}^a(x,t)$ in \eqref{rel-1} is well-defined.
\end{itemize}
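To make the role of the cut-off more concrete, the following small numerical sketch may be helpful. It assumes that $S_\rho$ (defined in \eqref{S_r}) is the ideal frequency truncation operator $S_\rho[\phi]=\mathcal{F}^{-1}\big[{\bf 1}_{[-\rho,\rho]}\,\mathcal{F}[\phi]\big]$ and illustrates, for a Gaussian test function (an arbitrary illustrative choice), the convergence $S_\rho[\phi]\to\phi$ as $\rho\to+\infty$ underlying \eqref{functional-reg}:

```python
import numpy as np

def S_rho_gaussian(t, rho, n=4001):
    # Truncated inverse Fourier transform of phi(s) = exp(-s^2/2), whose
    # transform F[phi](w) = int e^{i w s} phi(s) ds = sqrt(2*pi)*exp(-w^2/2)
    # is known in closed form:
    #   S_rho[phi](t) = (1/(2*pi)) * int_{-rho}^{rho} F[phi](w) e^{-i w t} dw.
    w = np.linspace(-rho, rho, n)
    f = np.sqrt(2.0 * np.pi) * np.exp(-w**2 / 2.0) * np.exp(-1j * w * t)
    dw = w[1] - w[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dw   # trapezoidal rule
    return float(np.real(integral)) / (2.0 * np.pi)

phi = lambda t: float(np.exp(-t**2 / 2.0))

# The truncation error decreases rapidly as the cut-off rho grows:
errs = [abs(S_rho_gaussian(0.5, rho) - phi(0.5)) for rho in (1.0, 2.0, 8.0)]
```

The rapid decay of the error with $\rho$ reflects the smoothness of the test function; for the rougher data actually used in the imaging functional, larger cut-offs are needed.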
\section{The KSB-model} \label{section-KSB}
In this section, we derive an imaging functional for the KSB model \cite[Eqs.(28),(88)]{KowSch10} following the time reversal
approach of \cite{AmmBreGarWah11} for the thermo-viscous wave equation, as outlined above.
Let $\alpha_0 > 0$ and $\gamma\in(1,2]$ be fixed. Then the KSB model assumes that the attenuated pressure $p^a$ satisfies the equation
\begin{equation}
\label{wave-KSB}\begin{split}
\left( \alpha_0 \mathcal{I} + L^{1/2}\right)^2\frac{\partial^2 p^a}{\partial t^2}(x,t) -L \ \Delta p^a(x,t)=L\
\pd{\delta_0(t)}{t} f(x),\qquad x\in\mathbb{R}^3, \ t\in \mathbb{R},\\
p^a(x,t)=0 \text{ and } \ \pd{p^a}{t}(x,t)=0,\qquad x\in\mathbb{R}^3, \ t<0,
\end{split} \end{equation}
where $L^{1/2}$ is the convolution operator (in time) with kernel $\displaystyle \frac{1}{\sqrt{2\pi}} \mathcal{F}^{-1}\left[
\left(1+(-{\tt i} \tau_0 \omega)^{\gamma-1}\right)^{1/2}\right]$; we emphasize that $L=L^{1/2} \circ L^{1/2}$.
Below, $D_t^{\gamma-1}$ denotes the fractional time derivative operator of order $\gamma-1$ \cite{KilSriTru06,Pod99}.
The Fourier transforms, $\hat{p}^a:=\mathcal{F}[p^a](\omega)$ and $\hat{p}:=\mathcal{F}[p](\omega)$, of
\eqref{wave-KSB} and \eqref{wave-eq} satisfy the Helmholtz equations:
\begin{equation*}
\kappa^2(\omega) \hat{p}^a(x)+\Delta \hat{p}^a(x)= {\tt i} \omega f(x), \text{ and } \omega^2
\hat{p}(x)+\Delta \hat{p}(x)= {\tt i} \omega f(x)\,,
\end{equation*}
respectively. Here
\begin{equation}
\label{eq:kappa}
\kappa(\omega)=\omega \left(1+\frac{\alpha_0}{\left(1+(-{\tt i} \tau_0 \omega)^{\gamma-1}\right)^{1/2}} \right)\;.
\end{equation}
Using the particular form of the Helmholtz equations it follows that
\begin{equation*}
\hat{p}^a=\frac{\omega}{\kappa(\omega)}\mathcal{F}[p](\kappa(\omega))\,,
\end{equation*}
which after applying the inverse Fourier transform yields
\begin{equation*}
p^a(x,t)=\mathcal{L}_a[p(x,\cdot)](t)
\end{equation*}
with
\begin{equation}
\label{operator1}
\mathcal{L}_a[\phi](t) :=\mathcal{F}^{-1} \left[ \frac{\omega}{\kappa(\omega)} \mathcal{F}[\phi](\kappa(\omega))\right](t)
=\frac{1}{2\pi} \int_\mathbb{R} \frac{\omega}{\kappa(\omega)} e^{-{\tt i} \omega t} \int_\mathbb{R} e^{{\tt i}
\kappa(\omega)s}\phi(s)dsd\omega\;.
\end{equation}
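Although all subsequent derivations are asymptotic in $\alpha_0$, the operator \eqref{operator1} can also be evaluated numerically. The following sketch uses arbitrary illustrative values ($\tau_0=1$, $\gamma=3/2$) and a Gaussian test function, whose transform extends to complex arguments in closed form, so that only the outer $\omega$-integral has to be discretized; for $\alpha_0=0$ one recovers $\mathcal{L}_0=\mathcal{I}$, and for small $\alpha_0>0$ a small perturbation of the identity:

```python
import numpy as np

def kappa(w, alpha0, tau0=1.0, gamma=1.5):
    # KSB complex wave number kappa(omega); principal-branch complex powers.
    return w * (1.0 + alpha0 / np.sqrt(1.0 + (-1j * tau0 * w) ** (gamma - 1.0)))

def L_a_gaussian(t, alpha0, W=12.0, n=4000):
    # L_a[phi](t) for phi(s) = exp(-s^2/2). Its transform
    # int e^{i k s} phi(s) ds = sqrt(2*pi)*exp(-k^2/2) extends to complex k.
    # The even n keeps omega = 0 (a removable 0/0 in omega/kappa) off the grid.
    w = np.linspace(-W, W, n)
    k = kappa(w, alpha0)
    f = (w / k) * np.exp(-1j * w * t) * np.sqrt(2.0 * np.pi) * np.exp(-k**2 / 2.0)
    dw = w[1] - w[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dw   # trapezoidal rule
    return integral / (2.0 * np.pi)

# alpha0 = 0: identity; small alpha0: perturbation of order alpha0.
err0 = abs(L_a_gaussian(0.3, 0.0) - np.exp(-0.3**2 / 2.0))
err1 = abs(L_a_gaussian(0.3, 0.02) - np.exp(-0.3**2 / 2.0))
```

With $\alpha_0=0.02$ the output differs from $\phi(t)$ at first order in $\alpha_0$, consistently with the expansions derived below.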
The goal is to find functions $\tilde{\kappa}$ and $\lambda$ defining an imaging operator of the form
\begin{equation} \label{tilde-La-operator}
\widetilde\Lcal_a[\phi](t)
:= \frac{1}{2\pi}
\int_\mathbb{R} \frac{\omega \lambda(\omega)}{\tilde{\kappa}(\omega)} e^{-{\tt i} \omega t} \int_\mathbb{R} \phi(s)e^{{\tt i}
\tilde{\kappa}(\omega)s} ds d\omega
=
\mathcal{F}^{-1} \left[ \frac{\omega \lambda(\omega) }{\tilde{\kappa}(\omega)}
\mathcal{F}[\phi](\tilde{\kappa}(\omega))\right](t)\,,
\end{equation}
for which the following expansion with respect to $\alpha_0$ (as in \eqref{eq:kappa}) holds:
\begin{equation}
\label{id1}
{\widetilde\Lcal_a}^* \mathcal{L}_a[\phi](t)= \phi(t) + o(\alpha_0), \quad \text{as } \alpha_0 \to 0\;.
\end{equation}
Thereby,
\begin{equation}
\label{operator2}
{\widetilde\Lcal_a}^*[\phi](t) :=
\frac{1}{2\pi} \int_\mathbb{R} \frac{\omega}{\tilde{\kappa}(\omega)}\lambda(\omega) e^{{\tt i} \tilde{\kappa}(\omega) t}
\int_\mathbb{R} e^{-{\tt i} \omega s}\phi(s)dsd\omega
\end{equation}
is the adjoint of $\widetilde\Lcal_a$.
The derivation of imaging operators as in \eqref{tilde-La-operator} follows general principles, which are used later on for the
other attenuation models as well.
The principle consists in constructing auxiliary functions $\lambda_1$, $\nu_1$, and $\nu_2$, which are determined from
$\kappa$ by a general procedure that does not take into account the special structure of this function. The explicit construction
comes at a later stage.
We introduce an auxiliary function $\lambda_1: \mathbb{R} \to \mathbb{C}$ which is related to $\kappa$ in the following way:
\begin{equation}
\label{order}
\kappa(\omega)=\omega(1-\alpha_0 \lambda_1(\omega))+{\mathcal O}(\alpha_0^2).
\end{equation}
Then, it follows by expansion with respect to $\alpha_0$ that
\begin{equation*}
\begin{aligned}
\frac{\omega}{\kappa(\omega)} &= 1 + \alpha_0 \lambda_1(\omega) + {\mathcal O}(\alpha_0^2) \text{ for fixed } \omega \in
\mathbb{R}\,,\\
e^{{\tt i} \kappa(\omega) s} &= e^{{\tt i} \omega s} (1 - {\tt i} \alpha_0 \omega \lambda_1(\omega) s) +
{\mathcal O}(\alpha_0^2) \text{ for fixed } s \in \mathbb{R}\,,
\end{aligned}
\end{equation*}
and consequently for fixed $s,\omega \in \mathbb{R}$
\begin{equation}
\label{prod1}
\frac{\omega}{\kappa(\omega)} e^{{\tt i} \kappa(\omega)s} = e^{{\tt i} \omega s} \left(
1 + \alpha_0 (\lambda_1(\omega) - {\tt i} \omega \lambda_1(\omega) s)\right) + {\mathcal O}(\alpha_0^2)\;.
\end{equation}
This shows that
\begin{equation*}
\begin{aligned}
\mathcal{L}_a[\phi(s)](t) &= \phi(t) + \alpha_0 \mathcal{F}^{-1}\left[ \lambda_1(\omega) \mathcal{F}[\phi(s)](\omega)-{\tt i}
\omega\lambda_1(\omega) \mathcal{F}[s\phi(s)](\omega)\right](t) +o(\alpha_0)\\
&=\mathcal{L}_0[\phi(s)](t) +
\alpha_0 \mathcal{F}^{-1}\left[ \lambda_1(\omega) \mathcal{F}[\phi(s)](\omega)-{\tt i}
\omega\lambda_1(\omega) \mathcal{F}[s\phi(s)](\omega)\right](t)
+ o(\alpha_0)\;.
\end{aligned}
\end{equation*}
Note that for $\alpha_0=0$, $\mathcal{L}_0 = \mathcal{I}$.\footnote{
In \cite{AmmBreGarWah11,Wah11}, $\big(\kappa(\omega)-\omega\big)$ is an imaginary function, therefore the Stationary Phase Method is used for the asymptotic analysis of the previous integral operators.
Here the relevant function, $\big( -\alpha_0 \lambda_1(\omega) \big)$ in its general form, is complex (with non-vanishing real part) and the usage of the method of Steepest Descent would provide the relevant
asymptotic expansion. The Stationary Phase Method can be seen as a special case of the method of Steepest Descent. In this work, the relevant real part is zero for the NSW and the thermo-viscous model and consequently the Stationary Phase Method can be applied. However for the KSB model this part is non-zero and the use of the method of Steepest Descent is needed.}
In order to get an explicit form of $\widetilde\Lcal_a$, we introduce auxiliary functions $\nu_i: \mathbb{R} \to \mathbb{C}$, $i=1,2$, and set
\begin{equation}
\label{tkappa}
\begin{aligned}
\tilde{\kappa}(\omega) &:=\omega(1-\alpha_0 \nu_1(\omega))+{\mathcal O}(\alpha_0^2)\,,\\
\lambda(\omega)&:=1+\alpha_0 \nu_2(\omega)+{\mathcal O}(\alpha_0^2).
\end{aligned}
\end{equation}
Again, by expansion with respect to $\alpha_0$ it follows that
\begin{equation*}
\begin{aligned}
e^{{\tt i} \tilde{\kappa}(\omega) s}&=e^{{\tt i} \omega s} (1+\alpha_0 (-{\tt i} \omega\nu_1(\omega))
s)+{\mathcal O}(\alpha_0^2)\,,\\
\frac{\omega\lambda(\omega)}{\tilde{\kappa}(\omega)}&=1+\alpha_0 (\nu_1(\omega)+\nu_2(\omega))+{\mathcal
O}(\alpha_0^2)\,,
\end{aligned}
\end{equation*}
and consequently,
\begin{equation}
\label{nu}
\frac{\omega\lambda(\omega)}{\tilde{\kappa}(\omega)}e^{{\tt i} \tilde{\kappa}(\omega) s}
=e^{{\tt i} \omega s} (1+\alpha_0(\nu_1(\omega)+\nu_2(\omega)-{\tt i} \omega \nu_1(\omega)s))+{\mathcal
O}(\alpha_0^2)\;.
\end{equation}
Therefore, we have
\begin{equation*}
\begin{aligned}
{\widetilde\Lcal_a}^*[\phi(s)](t) =& \phi(t) \\
&+ \alpha_0 \left\lbrace \mathcal{F}^{-1}\left[ (\nu_1(-\omega)+\nu_2(-\omega)) \mathcal{F}[\phi(s)](\omega)\right] (t)
\right.\\
&\qquad + t
\left. \mathcal{F}^{-1}\left[{\tt i} \omega \nu_1(-\omega) \mathcal{F}[\phi(s)](\omega)\right](t) \right\rbrace \\
& +o(\alpha_0).
\end{aligned}
\end{equation*}
The goal is to determine $\nu_1, \nu_2$ such that the corresponding operators
$\mathcal{L}_a$ and ${\widetilde\Lcal_a}^*$ satisfy \eqref{id1}.
Using the two expansions for $\mathcal{L}_a$ and ${\widetilde\Lcal_a}^*$ it follows that
\begin{equation} \label{aux-1}
\begin{aligned}
{\widetilde\Lcal_a}^*\mathcal{L}_a[\phi(s)](t) = & \phi(t) \\
&+ \alpha_0 \left\lbrace \mathcal{F}^{-1}\left[ \left( \lambda_1(\omega) + \nu_1(-\omega)+\nu_2(-\omega)\right)
\mathcal{F}[\phi(s)](\omega)\right](t) \right. \\
& \qquad + \mathcal{F}^{-1}\left[ -{\tt i} \omega\lambda_1(\omega) \mathcal{F}[s\phi(s)](\omega)\right] (t) \\
& \left. \qquad + t \mathcal{F}^{-1}\left[{\tt i} \omega \nu_1(-\omega) \mathcal{F}[\phi(s)](\omega)\right](t)
\right\rbrace \\
&+o(\alpha_0)
\end{aligned} \end{equation}
To satisfy \eqref{id1}, the first-order term in $\alpha_0$ in \eqref{aux-1} has to vanish.
Taking the Fourier transform of this term, we see that it vanishes if
\begin{equation} \label{aux-2}
\big( \lambda_1(\omega) + \nu_1(-\omega)+\nu_2(-\omega)\big) \mathcal{F}[\phi](\omega) - \omega\lambda_1(\omega) \frac{d
\mathcal{F}[\phi]}{d\omega}(\omega)+
\frac{d}{d\omega} \left( \omega \nu_1(-\omega) \mathcal{F}[\phi](\omega) \right)=0 ,
\end{equation}
where we have used the property \eqref{eq:diff} with $n=1$. Now, it is straightforward to see that any solution of the following system
\begin{equation} \label{cond-0}
\begin{aligned}
\lambda_1(\omega)+\nu_1(-\omega)+\nu_2(-\omega) + \frac{d (\omega \nu_1(-\omega))}{d\omega} &= 0\,,\\
-\omega\lambda_1(\omega)+\omega \nu_1(-\omega)&=0\,,
\end{aligned}
\end{equation}
satisfies equation \eqref{aux-2}, which directly implies \eqref{id1}. Solving this system, we get the conditions
\begin{equation} \label{cond-1}
\nu_1(-\omega) = \lambda_1(\omega) \text{ and } \nu_2(-\omega) = -3 \lambda_1(\omega) - \omega \frac{d
\lambda_1(\omega)}{d\omega}\;.
\end{equation}
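Since the system \eqref{cond-0} and its solution \eqref{cond-1} are purely formal identities in $\lambda_1$, they can be checked symbolically. The following sketch (using the \texttt{sympy} library and keeping $\lambda_1$ as an abstract function) verifies that the choice \eqref{cond-1} indeed annihilates both equations of \eqref{cond-0}:

```python
import sympy as sp

w = sp.symbols('omega', real=True)
lam1 = sp.Function('lambda_1')

# nu_1(-omega) and nu_2(-omega) as prescribed by (cond-1):
nu1_m = lam1(w)
nu2_m = -3 * lam1(w) - w * sp.diff(lam1(w), w)

# The two equations of the system (cond-0):
eq1 = lam1(w) + nu1_m + nu2_m + sp.diff(w * nu1_m, w)
eq2 = -w * lam1(w) + w * nu1_m

eq1_simplified = sp.simplify(eq1)
eq2_simplified = sp.simplify(eq2)
```

Both expressions reduce to zero, confirming the algebra behind \eqref{cond-1}.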
Now, we introduce $\Gamma_\omega(x,y) $ and $\tilde\Gamma_\omega^a(x,y) $ which are the fundamental solutions of the
Helmholtz equations
\begin{equation} \label{fundamental-Helm} \omega^2 \Gamma_\omega(x,y)+\Delta_y \Gamma_\omega(x,y)= - \delta_x(y), \qquad
y\in \mathbb{R}^3 \end{equation}
and
\begin{equation} \label{fundamental-treq} \tilde{\kappa}(\omega)^2 \tilde\Gamma_\omega^a(x,y)+\Delta_y
\tilde\Gamma_\omega^a(x,y)= - \lambda(\omega) \delta_x(y), \qquad y\in \mathbb{R}^3,
\end{equation}
respectively.
We can prove that
\begin{equation}
\label{proposition-1}
\pd{\tilde\Gamma^a}{t}=\widetilde\Lcal_a\left[ \pd{\Gamma}{t}\right],
\end{equation}
where $\widetilde\Lcal_a$ is defined in \eqref{tilde-La-operator},
\begin{equation} \label{Gamma} \Gamma(x,y,t,\tau)=
\mathcal{F}^{-1}\left\{ \Gamma_\omega(x,y) \right\}(t-\tau),\end{equation} and \begin{equation} \label{Gamma^a}
\tilde\Gamma^a(x,y,t,\tau)= \mathcal{F}^{-1}\left\{ \tilde\Gamma_\omega^a(x,y) \right\}(t-\tau),\end{equation} and
$\mathcal{F}^{-1}$ denotes the inverse Fourier transform with respect to $\omega$.
Then, we define the function $v_s^a(x,t)$ by
\begin{equation} \label{v_s^a}
v_s^a(x,t)=-\frac{1}{2\pi}\int_\mathbb{R}\int_{\partial\Omega} {\tt i} \omega \tilde{\Gamma}_\omega^a(x,y) g_a(y,T-s)
d\sigma(y) e^{-{\tt i} \omega(t-s)} d\omega ,
\end{equation}
where we recall that $g_a(y,t):=p^a(y,t)$ for all $y\in\partial \Omega$ and $t\in[0,T]$.
For the KSB model, it follows from \eqref{eq:kappa} and \eqref{order} that
\begin{equation*}
\lambda_1(\omega)=-\left(1+(-{\tt i} \tau_0 \omega)^{\gamma-1}\right)^{-1/2}.
\end{equation*}
Using this in \eqref{cond-1} we get
\begin{equation*}
\nu_1(\omega)=-\left(1+({\tt i} \tau_0 \omega)^{\gamma-1}\right)^{-1/2}
\end{equation*}
and
\begin{equation*}
\nu_2(\omega) =\frac{7-\gamma}{2}\left(1+({\tt i} \tau_0 \omega)^{\gamma-1}\right)^{-1/2} +
\frac{\gamma-1}{2}\left(1+({\tt i} \tau_0 \omega)^{\gamma-1}\right)^{-3/2}\;.
\end{equation*}
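The explicit formulas for $\nu_1$ and $\nu_2$ can be double-checked numerically against the conditions \eqref{cond-1}. In the following sketch, the parameter values $\gamma=3/2$, $\tau_0=1$ and the evaluation point $\omega=2$ are arbitrary illustrative choices, and $\frac{d\lambda_1}{d\omega}$ is approximated by a central finite difference:

```python
import numpy as np

gamma, tau0 = 1.5, 1.0   # illustrative sample values, gamma in (1, 2]

def lam1(w):
    # lambda_1(omega) = -(1 + (-i*tau0*omega)^(gamma-1))^(-1/2)
    return -(1.0 + (-1j * tau0 * w) ** (gamma - 1.0)) ** (-0.5)

def nu1(w):
    return -(1.0 + (1j * tau0 * w) ** (gamma - 1.0)) ** (-0.5)

def nu2(w):
    u = 1.0 + (1j * tau0 * w) ** (gamma - 1.0)
    return 0.5 * (7.0 - gamma) * u ** (-0.5) + 0.5 * (gamma - 1.0) * u ** (-1.5)

w, h = 2.0, 1e-6
dlam1 = (lam1(w + h) - lam1(w - h)) / (2.0 * h)   # central finite difference

err_nu1 = abs(nu1(-w) - lam1(w))                        # nu_1(-w) = lambda_1(w)
err_nu2 = abs(nu2(-w) - (-3.0 * lam1(w) - w * dlam1))   # nu_2(-w) = -3*lam1 - w*lam1'
```

Both discrepancies are negligible (the second is limited only by the finite-difference error), consistent with \eqref{cond-1}.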
Using these expressions for $\nu_1(\omega)$ and $\nu_2(\omega)$ we suggest the following choice for
$\tilde{\kappa}(\omega)$ and $\lambda(\omega)$:
\begin{equation}
\label{NSW-tilde-kappa}
\tilde{\kappa}(\omega)=\omega (1 - \alpha_0 \nu_1(\omega)) \text{ and }
\lambda(\omega)=1 + \alpha_0 \nu_2 (\omega)\,,
\end{equation}
respectively. Note that, in contrast with \eqref{tkappa}, these expressions contain no remainder term.
Inserting the expressions \eqref{NSW-tilde-kappa} into \eqref{fundamental-treq}, the
definition \eqref{v_s^a} of the function $v_s^a(x,t)$ yields the following identity:
\begin{equation*}
\begin{split}
& \left( \tilde{L}^{1/2} \left( \alpha_0 \mathcal{I} + \tilde{L}^{1/2}\right)^2 \frac{\partial^2}{\partial t^2} -\tilde{L}^{1/2} \ \tilde{L}
\ \Delta \right) v_s^a(x,t)\\
& = \bigg( \tilde{L}^{1/2} \ \tilde{L} +\alpha_0 \left( (\gamma-1) \mathcal{I} +(7-\gamma) \tilde{L} \right) \bigg)
\pd{\delta_s}{t}\left(g_a(x,T-s)\delta_{\partial \Omega}\right) \text{ for } x \in \Omega\;.
\end{split}
\end{equation*}
Here
\begin{equation}
\label{eq:tildeL}
\tilde{L}=\mathcal{I} + (-\tau_0)^{\gamma-1}D_t^{\gamma-1}\,,
\end{equation}
and
\begin{equation*}
g_a(y,t):=p^a(y,t) \text{ for all } y\in\partial \Omega\,, t\in[0,T]\,,
\end{equation*}
where $T$ is supposed to be sufficiently large such that $p^a(x,t)=0=\pd{p^a}{t}(x,t)$ for $t\geq T$
and $x\in \Omega$.
In the following subsection we prove that the functional $$ \mathcal{I}^a(x)=\int_0^T v_s^a(x,T)ds $$ is an approximation of
the initial datum $f(x)$.
To do this, we regularize the relevant operators.
\subsection{The regularized time reversal functional}
Convergence issues for some of the infinite integrals defined above suggest regularizing in the same way as in
\cite{AmmBreGarWah11} and \cite{Wah11}. We define the function
\begin{equation} \label{v-sr^a}
v_{s,\rho}^a(x,t)=-\frac{1}{2\pi}\int_{-\rho}^{\rho}\int_{\partial\Omega} {\tt i} \omega
\tilde{\Gamma}_\omega^a(x,y) g_a(y,T-s) d\sigma(y)
e^{-{\tt i} \omega(t-s)}
d\omega\end{equation}
as an approximation of $v_s^a(x,t)$ defined in \eqref{v_s^a}. Moreover, we define the regularized version of the fundamental solution $\tilde\Gamma^a$ from \eqref{Gamma^a},
\begin{equation*}\tilde{\Gamma}_\rho^a(x,y,s,t)= \frac{1}{2\pi}\int_{-\rho}^{\rho} e^{-{\tt i} \omega (t-s)}
\tilde{\Gamma}_\omega^a(x,y)d\omega ,\end{equation*}
the regularized version $\widetilde\Lcal_{a,\rho}$ of the operator $\widetilde\Lcal_a$ defined in \eqref{tilde-La-operator},
\begin{equation*}
\widetilde\Lcal_{a,\rho}[\phi](t)=\frac{1}{2\pi} \int_0^\infty \phi(s) \int_{-\rho}^\rho \frac{\omega
\lambda(\omega)}{\tilde\kappa(\omega)} e^{{\tt i}\tilde\kappa(\omega)s}e^{-{\tt i}\omega t}d\omega ds,
\end{equation*}
and its adjoint
\begin{equation*} \label{operator-adjoint}{\widetilde\Lcal_{a,\rho}}^*[\phi](t)=\frac{1}{2\pi} \int_{-\rho}^\rho
\frac{\omega \lambda(\omega)}{\tilde\kappa(\omega)} e^{{\tt i}\tilde\kappa(\omega) t} \int_0^\infty
e^{-{\tt i}\omega s}\phi(s)dsd\omega .\end{equation*}
Using these definitions we write the approximated version of the equations \eqref{id1} and \eqref{proposition-1},
respectively, that is
\begin{equation}\label{id-rho}
{\widetilde\Lcal_{a,\rho}}^* \mathcal{L}_a[\phi](t)= S_\rho[\phi](t) + o(\alpha_0)\,,
\end{equation}
and
\begin{equation}
\label{proposition-rho} \pd{\tilde\Gamma^a_\rho}{t}=\widetilde\Lcal_{a,\rho}\left[ \pd{\Gamma}{t}\right] ,\end{equation}
where the operator $S_\rho\left[\phi\right]$ and the function $\Gamma$ were defined in \eqref{S_r} and \eqref{Gamma}, respectively.
As in the previous subsection, applying the definition \eqref{v-sr^a} of $v_{s,\rho}^a(x,t)$ in equation
\eqref{fundamental-treq}, together with the expressions in \eqref{NSW-tilde-kappa}, we obtain the
following wave equation
\begin{equation}
\begin{split}
& \left( \tilde{L}^{1/2} \left( \alpha_0 \mathcal{I} + \tilde{L}^{1/2}\right)^2 \frac{\partial^2}{\partial t^2} -\tilde{L}^{1/2} \ \tilde{L}
\ \Delta \right) v_{s,\rho}^a(x,t)\\
& = S_\rho\left[ \bigg( \tilde{L}^{1/2} \ \tilde{L} +\alpha_0 \left( (\gamma-1) \mathcal{I} +(7-\gamma) \tilde{L} \right)
\bigg) \pd{\delta_s}{t}\right] \left(g_a(x,T-s)\delta_{\partial \Omega}\right) \text{ for }x\in \Omega\;.
\end{split}
\end{equation}
Finally, we can obtain the reconstruction functional $\mathcal{I}_\rho^a$. Indeed, since equations \eqref{id-rho} and
\eqref{proposition-rho} hold, Proposition 2.3.5 in \cite{Wah11} shows that
\begin{equation*}
\mathcal{I}_\rho^a(x)=\int_0^T v_{s,\rho}^a(x,T)ds \longrightarrow f(x), \qquad \text{as }
\rho\to+\infty.
\end{equation*}
\begin{remark} The latter proposition suggests that the larger we choose $\rho$, the better the approximation.
However, the computation of $\mathcal{I}_\rho^a(x)$ involves integrating the fundamental solution
$\tilde{\Gamma}_\omega^a(x,y)$, which grows exponentially like $\exp\left\lbrace \Im\{\tilde{\kappa}(\omega)\}|x-y|\right\rbrace $. To ensure stability of $\mathcal{I}_\rho^a(x)$, this term must not exceed one \cite[Remark 2.3.6]{Wah11}. For large $\omega$, the expression $| \Im\{\tilde{\kappa}(\omega)\} |$ behaves like (and is bounded by) $\alpha_0
|\omega|\sin\frac{(\gamma-1)\pi}{4}$, $\gamma \in (1,2]$, so one should not use frequencies larger than
$\frac{1}{\alpha_0 \,\mathrm{diam}(\Omega)}$. Hence, $\rho \simeq\frac{1}{\alpha_0 \,\mathrm{diam}(\Omega)}$ is the threshold for stability of the imaging
functional, where $\mathrm{diam}(\Omega)$ denotes the diameter of the domain $\Omega$. A finer estimate of the
threshold follows from the fact that $| \Im\{\tilde{\kappa}(\omega)\} |$ behaves like $\alpha_0
\tau_0^{\frac{1-\gamma}{2}} |\omega|^{\frac{3-\gamma}{2}}
\sin\frac{(\gamma-1)\pi}{4}$, $\gamma \in (1,2]$, for large values of $\omega$. Consequently, \begin{equation*}\rho
\simeq\dfrac{\tau_0^{\frac{\gamma-1}{3-\gamma}}}{\left( \alpha_0 \,\mathrm{diam}(\Omega) \sin\frac{(\gamma-1)\pi}{4}\right)^{\frac{2}{3-\gamma}}}
,\end{equation*} with $\gamma \in (1,2]$.\end{remark}
\section{The NSW model}
Let $p^a$ satisfy the following problem
\begin{equation} \label{nsw-wave_eq}
\left(\mathcal{I} + \tilde{\tau} \frac{\partial }{\partial t} \right) \frac{\partial^2 p^a}{\partial t^2} -\left(\mathcal{I} + \tau \frac{\partial}{\partial t} \right)
\Delta p^a = \left(\mathcal{I} + \tau \frac{\partial}{\partial t} \right)\pd{\delta_0}{t} f,
\end{equation}
along with the conditions in \eqref{wave-KSB}, where we consider the NSW model for one relaxation process, as defined in
\cite{KowSch10}. Moreover, we assume that $\tau>\tilde\tau>0$ so that the strong causality condition in \cite{KowSch10} is satisfied
and that $\tau$ and $\tilde\tau$ are small and of the same magnitude.
Substituting $\tilde\tau = \alpha_0 \tilde r$ and $\tau = \alpha_0 r$, we find
\begin{equation*}
\kappa(\omega)=\omega \sqrt{\dfrac{1-{\tt i} \omega \tilde\tau}{1-{\tt i} \omega \tau}} = \omega \sqrt{\dfrac{1-{\tt i} \alpha_0 \omega \tilde r}{1-{\tt i} \alpha_0 \omega r}}.
\end{equation*}
By applying the expansion \eqref{order} for $\kappa(\omega)$ in terms of $\alpha_0$ we get $$\lambda_1(\omega)=-{\tt i} \omega \frac{r-\tilde r}{2}.$$
Consequently, the auxiliary functions $\nu_i,\ i=1,2$ are given by the conditions \eqref{cond-1} and read as follows
\begin{equation} \label{nsw-nu_1,2}
\nu_1(\omega)={\tt i} \omega \frac{r-\tilde r}{2} \ \text{ and } \
\nu_2(\omega)=-2{\tt i}\omega (r-\tilde r)\;.
\end{equation}
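The coefficient $\lambda_1(\omega)$ above can be spot-checked numerically: by the expansion \eqref{order}, the quotient $(\kappa(\omega)-\omega)/\alpha_0$ should approach $-\omega\lambda_1(\omega)={\tt i}\omega^2(r-\tilde r)/2$ as $\alpha_0\to0$. A minimal sketch with illustrative parameter values (not taken from the text):

```python
import cmath

# Illustrative values (not from the paper): frequency omega, ratios r > r_tilde.
omega, r, r_tilde = 2.0, 1.3, 0.7
a0 = 1e-6  # small expansion parameter alpha_0

# NSW wave number kappa(omega) with tau = a0*r, tau_tilde = a0*r_tilde.
kappa = omega * cmath.sqrt((1 - 1j * a0 * omega * r_tilde) /
                           (1 - 1j * a0 * omega * r))

# First order: kappa ~ omega*(1 - lambda_1*a0), so
# (kappa - omega)/a0 -> -omega*lambda_1(omega) = i*omega^2*(r - r_tilde)/2.
numeric = (kappa - omega) / a0
predicted = 1j * omega**2 * (r - r_tilde) / 2

print(abs(numeric - predicted))  # residual of order a0
```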
Therefore, we obtain the expansion of $\tilde{\kappa}(\omega)$ and $\lambda(\omega)$ from \eqref{tkappa}, i.e.,
\begin{equation*}
\begin{aligned}
\tilde{\kappa}(\omega) &:=\omega(1-\alpha_0 \nu_1(\omega))+{\mathcal O}(\alpha_0^2)\,,\\
\lambda(\omega)&:=1+\alpha_0 \nu_2(\omega)+{\mathcal O}(\alpha_0^2),
\end{aligned}
\end{equation*}
where now the functions $\nu_i,\ i=1,2$ are given by \eqref{nsw-nu_1,2}.
We make the following choices for the functions $\tilde{\kappa}(\omega)$ and $\lambda(\omega)$
\begin{equation} \label{nsw-tka+la}
\tilde{\kappa}(\omega)=\omega \sqrt{\dfrac{1+{\tt i} \omega \tilde\tau}{1+{\tt i} \omega \tau}} \ \text{ and } \
\lambda(\omega)=\left( \dfrac{1+{\tt i} \omega \tilde\tau}{1+{\tt i} \omega \tau}\right) ^2,
\end{equation}
which satisfy \eqref{tkappa} up to non-vanishing ${\mathcal O}(\alpha_0^2)$ terms.
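These choices can be checked against the first-order expansions: $(\tilde\kappa(\omega)-\omega)/\alpha_0$ should approach $-\omega\nu_1(\omega)$ and $(\lambda(\omega)-1)/\alpha_0$ should approach $\nu_2(\omega)$, with $\nu_{1,2}$ from \eqref{nsw-nu_1,2}. A minimal numerical sketch (illustrative values):

```python
import cmath

# Illustrative values (not from the paper).
omega, r, r_tilde = 2.0, 1.3, 0.7
a0 = 1e-6
tau, tau_tilde = a0 * r, a0 * r_tilde

ratio = (1 + 1j * omega * tau_tilde) / (1 + 1j * omega * tau)
kappa_t = omega * cmath.sqrt(ratio)   # choice for tilde-kappa
lam = ratio**2                        # choice for lambda

nu1 = 1j * omega * (r - r_tilde) / 2
nu2 = -2j * omega * (r - r_tilde)

# tilde-kappa ~ omega*(1 - a0*nu1)  and  lambda ~ 1 + a0*nu2.
err_kappa = abs((kappa_t - omega) / a0 + omega * nu1)
err_lam = abs((lam - 1) / a0 - nu2)
print(err_kappa, err_lam)  # both residuals of order a0
```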
From here, using the Helmholtz equation \eqref{fundamental-treq} and the same arguments as in the previous section,
we derive the following regularized time-reverted attenuated equation (which is, of course, not unique)
\begin{equation}
\label{nsw-reg-treq}
\begin{aligned}
~ & \left( \left(\mathcal{I} - \tau \frac{\partial }{\partial t} \right)\left(\mathcal{I} - \tilde{\tau} \frac{\partial }{\partial t} \right) \frac{\partial^2 }{\partial t^2}
-\left(\mathcal{I} - \tau \frac{\partial}{\partial t} \right)^2 \Delta \right) v_{s,\rho}^a(x,t) \\
= & S_\rho\left[ \left(\mathcal{I} - \tilde\tau
\frac{\partial}{\partial t} \right)^2 \pd{\delta_s}{t}\right] \left(g_a(x,T-s)\delta_{\partial \Omega}\right),
\end{aligned}
\end{equation}
where $S_\rho$ is defined in \eqref{S_r}. Here \begin{equation*}
g_a(y,t):=p^a(y,t) \text{ for all } y\in\partial \Omega\,, t\in[0,T]\,,
\end{equation*}
where $p^a(y,t)$ satisfies equation \eqref{nsw-wave_eq} and $T$ is supposed to be sufficiently large such that $p^a(x,t)=0=\pd{p^a}{t}(x,t)$ for $t\geq T$
and $x\in \Omega$.
The reconstruction imaging functional is given by \eqref{functional-reg}, i.e.
\begin{equation*}
\mathcal{I}_\rho^a(x)=\int_0^T v_{s,\rho}^a(x,T)ds \longrightarrow f(x), \qquad \text{as } \rho\to+\infty\,.
\end{equation*}
\begin{remark}
For the case of $N$ relaxation processes, the procedure is conceptually the same. The attenuated wave equation
now has a more complicated form \cite{NacSmiWaa90}, and we have
\begin{equation*}
\kappa(\omega) = \omega \sqrt{\dfrac{1}{N}\sum_{j=1}^N\dfrac{1-{\tt i} \omega \tilde\tau_j}{1-{\tt i} \omega \tau_j}}.
\end{equation*}
Here, we assume that $\{\tau_j,\tilde{\tau}_j\}_1^N$ are small
and of the same magnitude, i.e. all $\{\tau_j,\tilde{\tau}_j\}_1^N$ are of order ${\mathcal O}(\alpha_0)$.
Then the expansions of the functions $\kappa(\omega)$, $\tilde{\kappa}(\omega)$ and $\lambda(\omega)$ in terms of $\alpha_0$, as they were given in \eqref{order} and \eqref{tkappa},
and the relevant asymptotic analysis allow us to make (as previously) the following choices
\begin{equation} \label{nsw-tka+la-gen}
\tilde{\kappa}(\omega)=\omega \sqrt{\dfrac{1}{N}\sum_{j=1}^N\dfrac{1+{\tt i} \omega \tilde\tau_j}{1+{\tt i} \omega \tau_j}}
\text{ and } \lambda(\omega)=\left( \dfrac{1}{N}\sum_{j=1}^N\dfrac{1+{\tt i} \omega \tilde\tau_j}{1+{\tt i} \omega
\tau_j}\right)^2, \end{equation}
which lead to the corresponding time-reverted attenuated equation. One can find this wave equation by applying the
inverse Fourier transform to the Helmholtz equation \eqref{fundamental-treq} with the latter values of
$\tilde{\kappa}(\omega)$ and $\lambda(\omega)$. This procedure leads to an equation similar to \eqref{nsw-reg-treq},
but of a more complicated form.
Following the analysis of the previous section, we choose the truncation parameter $\rho$ appearing in \eqref{nsw-reg-treq} from
the behaviour of the expression $|\Im (\tilde{\kappa}(\omega))|$, with $\tilde{\kappa}(\omega)$ given in \eqref{nsw-tka+la}. For $|\omega| \tilde\tau < |\omega|\tau<1$ we find that
$|\Im(\tilde{\kappa}(\omega))|$ behaves like $\dfrac{\omega^2}{2}(\tau-\tilde\tau)$, which (following the arguments
from \cite[Remark 2.3.6]{Wah11}) yields the truncation parameter
\begin{equation*}
\rho\simeq\frac{1}{\sqrt{\mathrm{diam}(\Omega)(\tau-\tilde\tau)}}.
\end{equation*}
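The quoted small-$|\omega|\tau$ behaviour is easy to confirm numerically: with illustrative relaxation times, the exact $|\Im(\tilde\kappa(\omega))|$ and the asymptotic value $\frac{\omega^2}{2}(\tau-\tilde\tau)$ agree to about one percent already at $|\omega|\tau=0.1$ (a sketch, values not from the text):

```python
import cmath

# Illustrative relaxation times with tau > tau_tilde (strong causality).
tau, tau_tilde = 1e-3, 5e-4
omega = 100.0  # regime |omega|*tau = 0.1 < 1

kappa_t = omega * cmath.sqrt((1 + 1j * omega * tau_tilde) /
                             (1 + 1j * omega * tau))
exact = abs(kappa_t.imag)
asymptotic = omega**2 * (tau - tau_tilde) / 2

print(exact, asymptotic)  # agree to about 1% in this regime
```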
Moreover, this value of the truncation parameter is sufficient
for the other two cases, i.e. for $1<|\omega|\tilde\tau<|\omega|\tau$ and $|\omega|\tilde\tau<1<|\omega|\tau$, when
$|\Im(\tilde{\kappa}(\omega))|$ behaves like
$\frac{1}{2}\sqrt{\frac{\tilde\tau}{\tau}}\left(\frac{1}{\tilde\tau}
-\frac{1}{\tau}\right) $ and $\sqrt{\dfrac{|\omega|}{2\tau}}\left| -1+\frac{1}{2}\left( |\omega|\tilde\tau +
\frac{1}{|\omega|\tau}\right)\right|$, respectively.
For the case of $N$ relaxation processes, one can use similar arguments to obtain an estimate of the truncation parameter
$\rho$. In the case that $|\omega|\tilde\tau_j<|\omega|\tau_j<1$ for all $j=1,\ldots,N$, we find that
$|\Im(\tilde{\kappa}(\omega))|$, with $\tilde{\kappa}(\omega)$ given in \eqref{nsw-tka+la-gen}, behaves like $\dfrac{\omega^2}{2N}\sum_{j=1}^N(\tau_j-\tilde\tau_j)$, which yields
the truncation parameter
\begin{equation*}
\displaystyle\rho\simeq\sqrt{\dfrac{N}{\mathrm{diam}(\Omega)\sum_{j=1}^N(\tau_j-\tilde\tau_j)}}.
\end{equation*}
For the remaining cases, the arguments of the previous remark, together with the triangle inequality, show that the
above estimate of the truncation parameter is sufficient.
\end{remark}
\begin{remark}
The formal procedure outlined above also applies to the thermo-viscous model, defined by the wave equation (84) in \cite{KowSch10}.
This is the special case of the NSW model with one relaxation process and $\tilde\tau=0$. Note that the thermo-viscous wave equation (84) in \cite{KowSch10} describes
a model that is not strongly causal and has a different right-hand side from the thermo-viscous equation \eqref{wave-eq-att-tv}, which was analysed in \cite{Wah11}.
\end{remark}
\section{Higher order terms}
In this section we describe the procedure for evaluating higher-order terms of the operators $\mathcal{L}_a$ and $\widetilde\Lcal_a^*$, defined in \eqref{operator1} and \eqref{tilde-La-operator}, and
consequently a higher-order approximation of the reconstruction functional $\mathcal{I}_\rho^a$.
In particular, this method allows determining the higher order terms of the asymptotic expansion for the functions
$\tilde{\kappa}(\omega)$ and $\lambda(\omega)$, appearing in \eqref{tilde-La-operator}.
We make the following ansatz
\begin{equation}
\label{ka-gen}\kappa(\omega)=\omega \sum_{j=0}^\infty (-1)^j \lambda_j(\omega) a^j, \qquad \lambda_0(\omega)=1
\end{equation}
and consequently
\begin{equation}
\label{w/ka-gen}\frac{\omega}{\kappa(\omega)}=\sum_{j=0}^\infty \mu_j(\omega) a^j,
\end{equation}
with
\begin{equation}
\label{mu0,1,2-gen} \mu_0(\omega)=1, \qquad \mu_1(\omega)=\lambda_1(\omega), \qquad
\mu_2(\omega)=\lambda_1^2(\omega)-\lambda_2(\omega)
\end{equation}
and in general $$\mu_j(\omega)=\mathcal{P}_j\left( \left\{ \lambda_k(\omega) \right\}_{k=1}^j \right), $$ where $\mathcal{P}_j$ denotes
a polynomial (in several variables) of order $j$. In addition, the expansion \eqref{ka-gen} yields the following
\begin{equation}
\label{exp-ka-gen}e^{{\tt i} \kappa(\omega) s}=e^{{\tt i}\omega s}\sum_{j=0}^\infty \psi_j(\omega,s) a^j,
\end{equation}
with
\begin{equation}
\label{psi0,1,2-gen} \psi_0(\omega,s)=1, \qquad \psi_1(\omega,s)=-{\tt i}\omega\lambda_1(\omega)s, \qquad
\psi_2(\omega,s)={\tt i}\omega\lambda_2(\omega)s+\frac{1}{2}\left( {\tt i}\omega\lambda_1(\omega)s\right)^2
\end{equation}
and in general
\begin{equation}
\label{psij-gen}\psi_j(\omega,s)=\mathcal{Q}_j\left( \left\{ {\tt i}\omega\lambda_k(\omega) s \right\}_{k=1}^j \right),
\end{equation} where $\mathcal{Q}_j$ denotes a polynomial (in several variables) of order $j$.
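The coefficients $\psi_1$ and $\psi_2$ in \eqref{psi0,1,2-gen} can be verified numerically by extracting the odd and even parts of $e^{{\tt i}\kappa(\omega)s-{\tt i}\omega s}$ in the small parameter $a$; a sketch with illustrative (arbitrary) choices of $\lambda_1,\lambda_2$:

```python
import cmath

# Illustrative first two expansion coefficients (arbitrary, not from the text).
omega, s = 2.0, 1.5
lam1, lam2 = -0.6j, 0.2 + 0.1j

def F(a):
    # e^{i*kappa*s} / e^{i*omega*s} with kappa truncated at second order,
    # following the ansatz (ka-gen): kappa(a) = omega*(1 - lam1*a + lam2*a^2).
    kappa = omega * (1 - lam1 * a + lam2 * a**2)
    return cmath.exp(1j * kappa * s) / cmath.exp(1j * omega * s)

a = 1e-3
psi1_num = (F(a) - F(-a)) / (2 * a)          # odd part   -> psi_1
psi2_num = (F(a) + F(-a) - 2) / (2 * a**2)   # even part  -> psi_2

psi1 = -1j * omega * lam1 * s
psi2 = 1j * omega * lam2 * s + 0.5 * (1j * omega * lam1 * s) ** 2

print(abs(psi1_num - psi1), abs(psi2_num - psi2))  # both of order a^2
```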
For $\tilde\kappa(\omega)$ we make the analogous ansatz
\begin{equation}
\label{wka-gen}\tilde\kappa(\omega)=\omega \sum_{j=0}^\infty (-1)^j \lambda_j(-\omega) a^j.
\end{equation}
This expansion comes without loss of generality. In the procedure described in section \ref{section-KSB}, concerning the first-order approximation of $\tilde\kappa(\omega)$,
the relevant terms were unknown and had to be determined. Here, applying the above expansion, we
state a consistency condition for the time-reversal algorithm which, instead of giving a system of equations, yields a
set of identities, as we will see below.
In the same way as for the previous expansions, we get
\begin{equation}
\label{w/wka-gen}\frac{\omega}{\tilde\kappa(\omega)}=\sum_{j=0}^\infty \mu_j(-\omega) a^j
\end{equation}
and
\begin{equation}
\label{exp-wka-gen}e^{{\tt i} \tilde\kappa(\omega) s}=e^{{\tt i}\omega s}\sum_{j=0}^\infty
\widetilde\psi_j(\omega,s) a^j,
\end{equation}
with
\begin{equation}
\label{wpsij-gen} \widetilde\psi_j(\omega,s)=\mathcal{Q}_j\left( \left\{ {\tt i}\omega\lambda_k(-\omega) s \right\}_{k=1}^j \right),
\end{equation}
where the $\mathcal{Q}_j$ denote the same polynomials as in \eqref{psij-gen}.
Finally, we consider the expansions
\begin{equation}
\label{la-gen}\lambda(\omega)=\sum_{j=0}^\infty \beta_j(\omega) a^j, \qquad \beta_0(\omega)=1
\end{equation}
and
\begin{equation}
\label{gamma-gen}\frac{\omega}{\tilde\kappa(\omega)}\lambda(\omega)=\omega \sum_{j=0}^\infty \gamma_j(\omega) a^j, \qquad
\gamma_0(\omega)=1.
\end{equation}
These expansions, along with \eqref{wka-gen}, yield the following relation
\begin{equation}
\label{beta-gen}\beta_n(\omega)=\sum_{i+j=n} (-1)^j \gamma_i(\omega) \lambda_j(-\omega).
\end{equation}
Now, since $\tilde\kappa(\omega)$ is known explicitly, our strategy consists of determining $\lambda(\omega)$; then
the Helmholtz equation \eqref{fundamental-treq}, under the procedure described in the relevant section, provides the corresponding time-reverted wave equation.
So, in what follows our target is to describe a procedure for finding $\beta_n(\omega), \ n\in \mathbb{N}$, or equivalently the
terms of the asymptotic expansion of $\lambda(\omega)$.
Applying the previous expansions in the expressions \eqref{operator1} and \eqref{operator2}, we get
\begin{equation}
\label{op-gen} \mathcal{L}_a[\phi](t)=\sum_{k=0}^\infty f_k[\phi](t) a^k \text{ and }
\widetilde\Lcal_a^*[\phi](t)=\sum_{k=0}^\infty g_k[\phi](t) a^k,
\end{equation}
with $f_0 \equiv g_0 \equiv Id$, $\{f_k,g_k\}_{k=1}^n$ being operators that can be obtained explicitly in terms of
$\{\lambda_j,\mu_j,\psi_j,\widetilde\psi_j,\gamma_j\}_{j=1}^k$, for $k=1,\ldots,n,$ respectively. In particular,
$\{f_k,g_k\}_{k=1}^2$ are obtained explicitly with use of the expressions \eqref{mu0,1,2-gen}, \eqref{psi0,1,2-gen},
\eqref{wpsij-gen}, i.e.,
$$f_1[\phi(s)](t)=\mathcal{F}^{-1}\left[ -{\tt i}\omega\lambda_1(\omega)
\mathcal{F}[s\phi(s)](\omega)+\lambda_1(\omega)\mathcal{F}[\phi(s)](\omega)\right](t),$$
\begin{equation} \begin{split}f_2[\phi(s)](t)=\mathcal{F}^{-1}\bigg[\left( \lambda_1^2(\omega)-\lambda_2(\omega) \right)
\mathcal{F}[\phi(s)](\omega) -{\tt i} \omega \left( \lambda_1^2(\omega)-\lambda_2(\omega) \right) \mathcal{F}[s\phi(s)](\omega) \\
+\frac{1}{2}\left( {\tt i}\omega\lambda_1(\omega)\right)
^2\mathcal{F}[s^2\phi(s)](\omega)\bigg](t),\end{split}\nonumber\end{equation}
$$g_1[\phi(s)](t)=\mathcal{F}^{-1}\left[ \gamma_1(-\omega) \mathcal{F}[\phi(s)](\omega)\right] + t \mathcal{F}^{-1}\left[
{\tt i}\omega\lambda_1(\omega)\mathcal{F}[\phi(s)](\omega)\right](t),$$
\begin{equation} \begin{split}g_2[\phi(s)](t)=\mathcal{F}^{-1}\left[ \gamma_2(-\omega) \mathcal{F}[\phi(s)](\omega)\right] + t
\mathcal{F}^{-1}\left[ {\tt i}\omega \left( \gamma_1(-\omega)\lambda_1(\omega)- \lambda_2(\omega)\right)
\mathcal{F}[\phi(s)](\omega)\right](t) \\ + t^2 \mathcal{F}^{-1}\left[ \frac{1}{2}\left( {\tt i}\omega\lambda_1(\omega)\right)^2
\mathcal{F}[\phi(s)](\omega)\right](t).\end{split}\nonumber\end{equation}
Consequently, we get the following asymptotic expansion
\begin{equation}
\label{op-gen-id} \widetilde\Lcal_a^*\mathcal{L}_a[\phi](t)=\phi[t] +\bigg(f_1[\phi](t)+ g_1[\phi](t)\bigg) a+
\bigg(f_2[\phi](t)+ g_2[\phi](t)+g_1[f_1[\phi]](t)\bigg) a^2 + o(a^2),
\end{equation}
when $a\to 0$, with $\{f_k,g_k\}_{k=1}^2$ defined above. The identity
\begin{equation}
\label{id-gen} \widetilde\Lcal_a^*\mathcal{L}_a[\phi](t)=\phi(t) + o(a^n), \qquad a\to 0
\end{equation}
is satisfied up to first order (equivalently, the identity \eqref{id1} is satisfied) when
$$\mathcal{F}\left[ f_1[\phi] + g_1[\phi]\right](\omega)=0, \quad \forall \omega \in \mathbb{R}.$$ Using the definitions of the
operators $f_1$ and $g_1$ given above and the property \eqref{eq:diff}, this condition yields the following equation
\begin{equation} \label{cond-gen-1} \gamma_1(-\omega)=-2\lambda_1(\omega)-\omega\frac{d\lambda_1(\omega)}{d\omega},
\end{equation}
which is equivalent to the conditions \eqref{cond-0}.
By applying the condition \eqref{cond-gen-1} in the expression \eqref{beta-gen} for $n=1$, we get that
\begin{equation}\label{beta_1}\beta_1(-\omega)=-\lambda_1(\omega)+\gamma_1(-\omega)=-3\lambda_1(\omega)-\omega\frac{d\lambda_1(\omega)}{d\omega},\end{equation}
which is the first-order approximation of the function $\lambda(\omega)$ defined in \eqref{la-gen}. From the expressions \eqref{tkappa} and \eqref{la-gen}, one sees that $\nu_2\equiv \beta_1$, as introduced in
these equalities, respectively. Hence, equation \eqref{beta_1} is exactly the second of the conditions \eqref{cond-1}. Note
that the first of these conditions is satisfied as an identity by the choice of the expansion of
$\tilde\kappa(\omega)$ given in
\eqref{wka-gen}.
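As a consistency check, for the NSW coefficient $\lambda_1(\omega)=-{\tt i}\omega(r-\tilde r)/2$ of the previous section, formula \eqref{beta_1} indeed reproduces $\nu_2$ of \eqref{nsw-nu_1,2}; a minimal numerical sketch (illustrative values):

```python
# Check beta_1(-omega) = -3*lambda_1(omega) - omega*d(lambda_1)/d(omega)
# against nu_2 evaluated at -omega, for the NSW coefficients.
omega, r, r_tilde = 2.0, 1.3, 0.7  # illustrative values

def lam1(w):
    return -1j * w * (r - r_tilde) / 2  # NSW first-order coefficient

h = 1e-6
dlam1 = (lam1(omega + h) - lam1(omega - h)) / (2 * h)  # central difference
beta1_minus = -3 * lam1(omega) - omega * dlam1

nu2_minus = -2j * (-omega) * (r - r_tilde)  # nu_2 at -omega

print(abs(beta1_minus - nu2_minus))  # ~ 0
```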
Following the same procedure for the second order in the identity \eqref{id-gen}, the expression \eqref{op-gen-id} yields
$$\mathcal{F} [ f_2[\phi]+ g_2[\phi]+g_1[f_1[\phi]]](\omega)=0.$$ After some calculations, using the
definitions of the operators $f_2$ and $g_2$ and the fact that
\begin{equation} \begin{split} g_1[f_1[\phi(s)]](t)=\mathcal{F}^{-1}\big[ \gamma_1(-\omega)\lambda_1(\omega)
\mathcal{F}[\phi(s)](\omega)-{\tt i}\omega\gamma_1(-\omega)\lambda_1(\omega) \mathcal{F}[s\phi(s)](\omega)\big] \\ + t
\mathcal{F}^{-1}\left[ {\tt i}\omega \lambda_1^2(\omega)\mathcal{F}[\phi(s)](\omega)- \left( {\tt i}\omega\lambda_1(\omega)\right)^2
\mathcal{F}[s\phi(s)](\omega)\right](t)
\end{split} \nonumber\end{equation}
we obtain the following equation
\begin{equation} \label{cond-gen-2}
\begin{split}\gamma_2(-\omega)&=-\lambda_1^2(\omega)+\lambda_2(\omega)-\gamma_1(-\omega)\lambda_1(\omega)\\
&+\frac{d}{d\omega}\bigg(\omega \left( -\lambda_1^2(\omega)+\lambda_2(\omega)-\gamma_1(-\omega)\lambda_1(\omega)\right)
\bigg) -\frac{1}{2}\frac{d^2}{d\omega^2}\left( \big(\omega\lambda_1(\omega) \big) ^2\right).
\end{split}\end{equation}
Substitution of the conditions \eqref{cond-gen-1} and
\eqref{cond-gen-2} in the expression \eqref{beta-gen} provides the second order approximation for the function
$\lambda(\omega)$, i.e. $\beta_2(\omega)$, as defined in \eqref{la-gen}.
A general treatment of this problem would appear as follows: The general form of \eqref{op-gen-id} would be
$$ \widetilde\Lcal_a^* \mathcal{L}_a[\phi](t) = \phi(t) +\sum_{k=1}^n h_k[\phi](t) a^k + o(a^n) , \qquad a \to 0 , $$
where $\{h_k\}_{k=1}^n$ are operators expressed explicitly (after some calculations and the use of the previous expansions) in terms of $\{\lambda_j,\gamma_j\}_{j=1}^k$, for $k=1,\ldots,n$, respectively. Consequently, the condition
$$\mathcal{F}[h_k[\phi]](\omega)=0, \qquad \forall \omega \in \mathbb{R} $$ yields $\gamma_k(-\omega)$ as an
explicit expression of $\left\{ \lambda_j(\omega)\right\}_{j=1}^k$, $\left\{ \gamma_j(-\omega)\right\}_{j=1}^{k-1}$ and
their derivatives up to order $k$. Hence, the expression \eqref{beta-gen} yields the $k$-th order approximation for the
function $\lambda(\omega)$, i.e. $\beta_k(\omega)$.
\section{Introduction}
Using present data from neutrino oscillations, the $3 \times 3$ neutrino
mixing matrix is largely determined, together with the two mass-squared
differences \cite{data}. In the Standard Model of particle interactions,
there are 3 lepton families. The charged-lepton mass matrix linking
left-handed $(e, \mu, \tau)$ to their right-handed counterparts is in
general arbitrary, but may always be diagonalized by 2 unitary
transformations:
\begin{equation}
{\cal M}_l = U^l_L \pmatrix{m_e & 0 & 0 \cr 0 & m_\mu & 0 \cr 0 & 0 & m_\tau}
(U^l_R)^\dagger.
\end{equation}
Similarly, the neutrino mass matrix may also be diagonalized by 2 unitary
transformations if it is Dirac:
\begin{equation}
{\cal M}^D_\nu = U^\nu_L \pmatrix{m_1 & 0 & 0 \cr 0 & m_2 & 0 \cr 0 & 0 &
m_3} (U^\nu_R)^\dagger,
\end{equation}
or by just 1 unitary transformation if it is Majorana:
\begin{equation}
{\cal M}^M_\nu = U^\nu_L \pmatrix{m_1 & 0 & 0 \cr 0 & m_2 & 0 \cr 0 & 0 &
m_3} (U^\nu_L)^T.
\end{equation}
Notice that whereas the charged leptons have individual names, the
neutrinos are only labeled as $1,2,3$, waiting to be named.
The observed neutrino mixing matrix is the mismatch between
$U^l_L$ and $U^\nu_L$, i.e.
\begin{eqnarray}
U_{l\nu} = (U^l_L)^\dagger U^\nu_L \simeq \pmatrix{0.85 & 0.52 & 0.053
\cr -0.33 & 0.62 & -0.72 \cr -0.40 & 0.59 & 0.70} \simeq \pmatrix{\sqrt{2/3}
& 1/\sqrt{3} & 0 \cr -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \cr -1/\sqrt{6}
& 1/\sqrt{3} & 1/\sqrt{2}}.
\end{eqnarray}
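The closeness of the quoted fit to the exact tribimaximal pattern is easy to quantify; a minimal check (entries as quoted above):

```python
import math

s = math.sqrt
# Exact tribimaximal pattern (HPS).
tbm = [[s(2/3),  1/s(3),       0],
       [-1/s(6), 1/s(3), -1/s(2)],
       [-1/s(6), 1/s(3),  1/s(2)]]
# Best-fit values quoted in the text.
fit = [[0.85, 0.52, 0.053],
       [-0.33, 0.62, -0.72],
       [-0.40, 0.59, 0.70]]

# Orthogonality of the exact pattern: U^T U = 1.
for i in range(3):
    for j in range(3):
        dot = sum(tbm[k][i] * tbm[k][j] for k in range(3))
        assert abs(dot - (1 if i == j else 0)) < 1e-12

# Entry-by-entry deviation of the fit from tribimaximal.
dev = max(abs(fit[i][j] - tbm[i][j]) for i in range(3) for j in range(3))
print(dev)  # below 0.1; the largest deviation is in the (2,1) entry
```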
This approximate pattern has been dubbed tribimaximal by Harrison, Perkins,
and Scott \cite{hps}. Notice that the 3 vertical columns are evocative
of the mesons $(\eta_8,\eta_1,\pi^0)$ in their $SU(3)$ decompositions.\\
\noindent How can the HPS form of $U_{l\nu}$ be derived from a symmetry? The
difficulty comes from the fact that any symmetry defined in the basis
$(\nu_e,\nu_\mu,\nu_\tau)$ is automatically applicable to $(e,\mu,\tau)$
in the complete Lagrangian. To proceed, one usually assumes the canonical
seesaw mechanism and studies the Majorana neutrino mass matrix
\begin{equation}
{\cal M}_\nu = -{\cal M}^D_\nu {\cal M}_N^{-1} ({\cal M}^D_\nu)^T
\end{equation}
in the basis where ${\cal M}_l$ is diagonal; but the symmetry apparent
in ${\cal M}_\nu$ is often incompatible with a diagonal ${\cal M}_l$ with
3 very different eigenvalues.\\
\noindent In this talk, I will discuss first the pitfall of $\mu
\leftrightarrow \tau$ symmetry based on maximal $\nu_\mu-\nu_\tau$ mixing.
I will show how it can be done properly with the permutation symmetry $S_3$.
I will then spend most of the rest of my time on the tetrahedral symmetry
$A_4$ and a little on the permutation symmetry $S_4$. These are examples of
how exact and approximate tribimaximal mixing may be obtained.
\section{Maximal $\nu_\mu-\nu_\tau$ Mixing}
Consider just 2 families. Suppose we want maximal $\nu_\mu-\nu_\tau$
mixing; then we should have
\begin{equation}
{\cal M}_\nu = \pmatrix{a & b \cr b & a} = {1 \over \sqrt2} \pmatrix{1 & -1
\cr 1 & 1} \pmatrix{a+b & 0 \cr 0 & a-b} {1 \over \sqrt2} \pmatrix{1 & 1 \cr
-1 & 1}.
\end{equation}
This seems to require the exchange symmetry $\nu_\mu \leftrightarrow
\nu_\tau$, but since $(\nu_\mu,\mu)$ and $(\nu_\tau,\tau)$ are $SU(2)_L$
doublets, we must also have $\mu \leftrightarrow \tau$ exchange.
Nevertheless, we still have freedom in assigning $\mu^c$ and $\tau^c$.
If $\mu^c \leftrightarrow \tau^c$ exchange is also assumed, then
\begin{equation}
{\cal M}_l = \pmatrix{A & B \cr B & A} = {1 \over \sqrt2} \pmatrix{1 & -1
\cr 1 & 1} \pmatrix{A+B & 0 \cr 0 & A-B} {1 \over \sqrt2} \pmatrix{1 & 1 \cr
-1 & 1}.
\end{equation}
Hence $U_{l\nu} = (U^l_L)^\dagger U^\nu_L = 1$ and there is no mixing.
If $\mu^c$ and $\tau^c$ do not transform under this exchange, then
\begin{equation}
{\cal M}_l = \pmatrix{A & B \cr A & B} = {1 \over \sqrt2} \pmatrix{1 & -1
\cr 1 & 1} \pmatrix{\sqrt{2(A^2+B^2)} & 0 \cr 0 & 0} \pmatrix{c & s \cr
-s & c},
\end{equation}
where $c=A/\sqrt{A^2+B^2}$, $s=B/\sqrt{A^2+B^2}$. Again $U_{l\nu} =
(U^l_L)^\dagger U^\nu_L = 1$.
\section{Permutation Symmetry $S_3$}
To overcome the difficulty of obtaining maximal $\nu_\mu-\nu_\tau$ mixing,
consider the non-Abelian discrete symmetry $S_3$, i.e. the
permutation group of 3 objects, which is also the symmetry group of the
equilateral triangle. It has 6 elements divided into 3 equivalence
classes, with the irreducible representations \underline{1},
\underline{1}$'$, and \underline{2}. Let
\begin{equation}
\omega = \exp \left( {2\pi i \over 3} \right) = -{1 \over 2} + i
{\sqrt 3 \over 2},
\end{equation}
then the 6 matrices of the \underline{2} representation may be chosen as
follows.
\begin{eqnarray}
C_1: \pmatrix{1 & 0 \cr 0 & 1}, ~~~ C_2: \pmatrix{\omega & 0 \cr 0 &
\omega^2}, \pmatrix{\omega^2 & 0 \cr 0 & \omega}, ~~~
C_3: \pmatrix{0 & 1 \cr 1 & 0}, \pmatrix{0 & \omega \cr \omega^2 & 0},
\pmatrix{0 & \omega^2 \cr \omega & 0},
\end{eqnarray}
where $C_i$ refer to the 3 equivalence classes in the character table shown.
\begin{table}[htb]
\centering
\caption{Character table of $S_3$.}
\begin{tabular}{cccccc}
\hline
class&$n$&$h$&$\chi_1$&$\chi_{1'}$&$\chi_2$\\
\hline
$C_1$&1&1&1&1&2\\
$C_2$&2&3&1&1&--1\\
$C_3$&3&2&1&--1&0\\
\hline
\end{tabular}
\end{table}
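One can verify directly that the 6 matrices above close under multiplication and that their traces reproduce the $\chi_2$ column of the character table; a minimal sketch:

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # omega

# The six 2x2 matrices of the doublet representation, grouped by class.
C1 = [[[1, 0], [0, 1]]]
C2 = [[[w, 0], [0, w**2]], [[w**2, 0], [0, w]]]
C3 = [[[0, 1], [1, 0]], [[0, w], [w**2, 0]], [[0, w**2], [w, 0]]]
group = C1 + C2 + C3

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close_to(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12
               for i in range(2) for j in range(2))

# Closure: the product of any two elements is again in the set.
for A in group:
    for B in group:
        assert any(close_to(mul(A, B), C) for C in group)

# Characters chi_2: trace 2 on C1, -1 on C2, 0 on C3.
assert abs(C1[0][0][0] + C1[0][1][1] - 2) < 1e-12
for M in C2:
    assert abs(M[0][0] + M[1][1] + 1) < 1e-12
for M in C3:
    assert abs(M[0][0] + M[1][1]) < 1e-12
print("closure and characters verified")
```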
The fundamental multiplication rule is then
\begin{equation}
\underline{2} \times \underline{2} = \underline{1}(12+21) +
\underline{1}'(12-21) + \underline{2}(22,11).
\end{equation}
Let $(\nu_i,l_i) \sim \underline{2}$, $l^c_i \sim \underline{2}$,
$(\phi^0_1,\phi^-_1) \sim \underline{1}$, $(\phi^0_2,\phi^-_2) \sim
\underline{1}'$, then
\begin{equation}
{\cal M}_l = \pmatrix{0 & fv_1+f'v_2 \cr fv_1-f'v_2 & 0} = \pmatrix{m_\mu & 0
\cr 0 & m_\tau} \pmatrix{0 & 1 \cr 1 & 0}.
\end{equation}
Let $\xi_i = (\xi_i^{++},\xi_i^+,\xi_i^0) \sim \underline{2}$ and
$\xi_0 \sim \underline{1}$, then
\begin{equation}
{\cal M}_\nu = \pmatrix{hu_1 & h_0u_0 \cr h_0u_0 & hu_2} = \pmatrix
{a & b \cr b & a}
\end{equation}
for $u_1=u_2$. Thus
\begin{equation}
U_{l\nu} = (U^l_L)^\dagger U^\nu_L = {1 \over \sqrt 2} \pmatrix{1 & -1 \cr
1 & 1},
\end{equation}
i.e. maximal $\nu_\mu-\nu_\tau$ mixing may be achieved, despite having
a diagonal ${\cal M}_l$ with $m_\mu \neq m_\tau$.
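Since ${\cal M}_l{\cal M}_l^\dagger$ in Eq.~(12) is already diagonal, $U^l_L=1$ and $U_{l\nu}=U^\nu_L$; that $U^\nu_L$ is the maximal-mixing matrix can be confirmed numerically for ${\cal M}_\nu$ of Eq.~(13) (illustrative values):

```python
import math

# Illustrative entries for M_nu = [[a, b], [b, a]] (the u_1 = u_2 case).
a, b = 1.0, 0.4
M = [[a, b], [b, a]]

r = 1 / math.sqrt(2)
U = [[r, -r], [r, r]]  # claimed U^nu_L (maximal mixing)

# U^T M U should equal diag(a + b, a - b).
D = [[sum(U[k][i] * M[k][l] * U[l][j] for k in range(2) for l in range(2))
      for j in range(2)] for i in range(2)]

assert abs(D[0][0] - (a + b)) < 1e-12
assert abs(D[1][1] - (a - b)) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12
print("maximal mixing diagonalizes M_nu")
```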
\section{Tetrahedral Symmetry $A_4$}
For 3 families, we should look for a group with a \underline{3}
representation, the simplest of which is $A_4$, the group of even
permutations of 4 objects, which is also the symmetry group of the
tetrahedron.
\begin{table}[htb]
\centering
\caption{Character table of $A_4$.}
\begin{tabular}{ccccccc}
\hline
class&$n$&$h$&$\chi_1$&$\chi_{1'}$&$\chi_{1''}$&$\chi_3$\\
\hline
$C_1$&1&1&1&1&1&3\\
$C_2$&4&3&1&$\omega$&$\omega^2$&0\\
$C_3$&4&3&1&$\omega^2$&$\omega$&0\\
$C_4$&3&2&1&1&1&--1\\
\hline
\end{tabular}
\end{table}
\noindent The fundamental multiplication rule is
\begin{eqnarray}
\underline{3} \times \underline{3} &=& \underline{1}(11+22+33) +
\underline{1}'(11+\omega^222+\omega33) + \underline{1}''
(11+\omega22+\omega^233) \nonumber \\ &+& \underline{3}(23,31,12) +
\underline{3}(32,13,21).
\end{eqnarray}
Note that $\underline{3} \times \underline{3} \times \underline{3} =
\underline{1}$ is possible in $A_4$, i.e. $a_1 b_2 c_3 +$ permutations,
and $\underline{2} \times \underline{2} \times \underline{2} = \underline{1}$
is possible in $S_3$, i.e. $a_1 b_1 c_1 + a_2 b_2 c_2$.
\begin{table}[htb]
\centering
\caption{Perfect geometric solids.}
\begin{tabular}{ccccc}
\hline
solid&faces&vertices&Plato&group\\
\hline
tetrahedron&4&4&fire&$A_4$\\
octahedron&8&6&air&$S_4$\\
icosahedron&20&12&water&$A_5$\\
hexahedron&6&8&earth&$S_4$\\
dodecahedron&12&20&quintessence&$A_5$\\
\hline
\end{tabular}
\end{table}
\noindent The tetrahedron is one of five perfect geometric solids known to the
ancient Greeks. In order to match them to the 4 elements (fire, air,
water, and earth) already known, Plato invented a fifth (quintessence)
as that which pervades the cosmos and presumably holds it together.
Since a cube (hexahedron) may be embedded inside an octahedron and vice versa,
the two must have the same group structure and are thus dual to each other.
The same holds for the icosahedron and dodecahedron. The tetrahedron is
self-dual. Compare this first theory of everything to today's contender,
i.e. string theory. (A) There are 5 consistent string theories in 10
dimensions. (B) Type I is dual to Heterotic $SO(32)$, Type IIA
is dual to Heterotic $E_8 \times E_8$, and Type IIB is self-dual.
\subsection{Exact HPS}
Following the original papers \cite{mr01,bmv03} on $A_4$, let $(\nu_i,l_i)
\sim \underline{3}$, but $l^c_i \sim \underline{1}, \underline{1}',
\underline{1}''$, then with $(\phi_i^0,\phi_i^-) \sim \underline{3}$,
\begin{equation}
{\cal M}_l = \pmatrix{h_1v_1 & h_2v_1 & h_3v_1 \cr h_1 v_2 & h_2 v_2 \omega
& h_3 v_2 \omega^2 \cr h_1 v_3 & h_2 v_3 \omega^2 & h_3 v_3 \omega} =
{1 \over \sqrt 3} \pmatrix{1 & 1 & 1 \cr 1 & \omega & \omega^2
\cr 1 & \omega^2 & \omega} \pmatrix{h_1 & 0 & 0 \cr 0 & h_2 & 0 \cr 0 & 0
& h_3} \sqrt{3} v,
\end{equation}
for $v_1=v_2=v_3=v$.
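The factorization in Eq.~(16) can be verified numerically: with $v_1=v_2=v_3=v$, the matrix $(1/\sqrt3)\,U_\omega$ (where $U_\omega$ denotes the unnormalized $\omega$-matrix above) is unitary, and $U_\omega^\dagger{\cal M}_l/\sqrt3$ is diagonal with entries $h_j v\sqrt3/\sqrt3$; a sketch with illustrative Yukawas:

```python
import cmath, math

w = cmath.exp(2j * cmath.pi / 3)

# Illustrative Yukawas and common vev v_1 = v_2 = v_3 = v.
h = [0.3, 1.1, 2.5]
v = 0.7

# M_l of Eq. (15) with equal vevs: (M_l)_{ij} = h_j * v * w^{(i-1)(j-1)}.
Ml = [[h[j] * v * w ** (i * j) for j in range(3)] for i in range(3)]

# Claimed left unitary U = (1/sqrt3)*[[1,1,1],[1,w,w^2],[1,w^2,w]].
U = [[w ** (i * j) / math.sqrt(3) for j in range(3)] for i in range(3)]

# U is unitary: U^dagger U = 1.
for i in range(3):
    for j in range(3):
        dot = sum(U[k][i].conjugate() * U[k][j] for k in range(3))
        assert abs(dot - (1 if i == j else 0)) < 1e-12

# U^dagger M_l = diag(sqrt3 * h_j * v), as in Eq. (16).
P = [[sum(U[k][i].conjugate() * Ml[k][j] for k in range(3))
      for j in range(3)] for i in range(3)]
for i in range(3):
    for j in range(3):
        target = math.sqrt(3) * h[j] * v if i == j else 0
        assert abs(P[i][j] - target) < 1e-12
print("A4 charged-lepton factorization verified")
```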
Let $\xi_1 \sim \underline{1}$, $\xi_2 \sim \underline{1}'$, $\xi_3 \sim
\underline{1}''$, $\xi_{4,5,6} \sim \underline{3}$, with $\langle \xi_5
\rangle = \langle \xi_6 \rangle = 0$, then \cite{m04}
\begin{equation}
{\cal M}_\nu = \pmatrix{a+b+c & 0 & 0 \cr 0 & a + b\omega + c\omega^2 & d \cr
0 & d & a+b\omega^2+c\omega}.
\end{equation}
In the $(\nu_e,\nu_\mu,\nu_\tau)$ basis, it becomes
\begin{equation}
{\cal M}^{(e,\mu,\tau)}_\nu = \pmatrix{a+2d/3 & b-d/3 & c-d/3 \cr b-d/3 &
c+2d/3 & a-d/3 \cr c-d/3 & a-d/3 & b+2d/3}.
\end{equation}
If $b=c$, then the eigenvalues of this matrix are simply
\begin{equation}
m_1=a-b+d, ~~~ m_2=a+2b, ~~~ m_3=-a+b+d,
\end{equation}
and
\begin{equation}
U_{l\nu} = \pmatrix{\sqrt{2/3}
& 1/\sqrt{3} & 0 \cr -1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \cr -1/\sqrt{6}
& 1/\sqrt{3} & 1/\sqrt{2}},
\end{equation}
i.e. tribimaximal mixing is obtained as desired. If $b \neq c$, then
$U_{e3} \neq 0$, and $|U_{e3}| < 0.16$ implies $0.5 < \tan^2 \theta_{12}
< 0.52$, whereas experimentally, $\tan^2 \theta_{12} = 0.45 \pm 0.05$.\\
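The claims in Eqs.~(18)--(20) are easy to verify numerically for $b=c$: each column of the tribimaximal matrix is an eigenvector of ${\cal M}^{(e,\mu,\tau)}_\nu$ with the eigenvalues of Eq.~(19). A sketch with illustrative parameters:

```python
import math

# Illustrative parameters with b = c.
a, b, d = 1.0, 0.3, 0.5
M = [[a + 2*d/3, b - d/3,   b - d/3],
     [b - d/3,   b + 2*d/3, a - d/3],
     [b - d/3,   a - d/3,   b + 2*d/3]]

s = math.sqrt
U = [[s(2/3),  1/s(3),       0],
     [-1/s(6), 1/s(3), -1/s(2)],
     [-1/s(6), 1/s(3),  1/s(2)]]

masses = [a - b + d, a + 2*b, -a + b + d]  # Eq. (19)

# Each column of U is an eigenvector of M with the quoted eigenvalue.
for n in range(3):
    for i in range(3):
        Mv = sum(M[i][k] * U[k][n] for k in range(3))
        assert abs(Mv - masses[n] * U[i][n]) < 1e-12
print("tribimaximal columns diagonalize M_nu; masses:", masses)
```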
\noindent The above pattern involves 4 parameters $(a,b,c,d)$. If a model
can be constructed for which $b=c$ naturally, then the HPS {\it ansatz} of
tribimaximal mixing will be realized. Of course, the three masses are
not predicted, as shown in Eq.~(19). If $b \neq 0$ and $c \neq 0$, it is
difficult, if not impossible, to find an auxiliary symmetry which will
enforce their equality. On the other hand, they can both be zero, and thus
equal to each other, if $\xi_2$ and $\xi_3$ are absent in the above. This
is the essence of how the problem was first solved by Altarelli and
Feruglio \cite{af05}. In that case,
\begin{equation}
m_1=a+d, ~~~ m_2=a, ~~~ m_3=-a+d.
\end{equation}
The requirement $\Delta m^2_{12} \simeq \Delta m^2_{sol} \ll \Delta m^2_{atm}
\simeq \Delta m^2_{23}$ implies
\begin{equation}
|d| \simeq -2|a|\cos \phi, ~~~ |m_{1,2}|^2 \simeq |a|^2, ~~~ |m_3|^2 \simeq
|a|^2 (1+8\cos^2 \phi),
\end{equation}
i.e. normal ordering of neutrino masses with the sum rule \cite{m05}
\begin{equation}
|m_{\nu_e}|^2 \simeq |m_{ee}|^2 + \Delta m^2_{atm}/9,
\end{equation}
where $|m_{\nu_e}|$ is the kinematic $\nu_e$ mass measured in beta decay
and $|m_{ee}|$ is the effective Majorana neutrino mass measured in
neutrinoless double beta decay.\\
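Equations (21) and (22) can be checked directly: imposing $|d|=-2|a|\cos\phi$ makes $|m_1|=|m_2|=|a|$ and $|m_3|^2=|a|^2(1+8\cos^2\phi)$ exactly. A sketch with an illustrative phase:

```python
import cmath, math

# Illustrative choice: |a| = 1, relative phase phi with cos(phi) < 0.
phi = 2.0
a = 1.0
d = -2 * math.cos(phi) * cmath.exp(1j * phi)  # |d| = -2|a|cos(phi)

m1, m2, m3 = a + d, a, -a + d  # Eq. (21)

# Then |m1| = |m2| = |a| exactly, and |m3|^2 = |a|^2 (1 + 8 cos^2 phi).
assert abs(abs(m1) - 1.0) < 1e-12
assert abs(abs(m2) - 1.0) < 1e-12
assert abs(abs(m3)**2 - (1 + 8 * math.cos(phi)**2)) < 1e-12
print(abs(m1), abs(m3)**2)  # normal ordering: |m3| > |m1|, |m2|
```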
\noindent Another 2-parameter tribimaximal scenario \cite{m05} is to choose
$a=0$, $b=c$. In that case,
\begin{equation}
m_1=-b+d, ~~~ m_2=2b, ~~~ m_3=b+d.
\end{equation}
Here both normal and inverted ordering of neutrino
masses are possible with the sum rule
\begin{equation}
|m_{\nu_e}|^2 \simeq 3|m_{ee}|^2 - (2/3) \Delta m^2_{atm}.
\end{equation}
More recently, exact HPS was obtained by Babu and He \cite{bh05} with $A_4$,
using the canonical seesaw mechanism. Their solution may be considered
the ``inverse'' of Ref.~\cite{af05}. Another example of exact HPS was
obtained by Grimus and Lavoura \cite{gl05} with $S_3$ plus 1 commuting
and 6 noncommuting $Z_2$ symmetries.
\subsection{Approximate HPS}
An alternative $A_4$ assignment \cite{hmvv05} is to let $(\nu_i,l_i), l^c_i
\sim \underline{3}$ with $(\phi^0_i,\phi^-_i) \sim \underline{1},
\underline{1}', \underline{1}''$, then ${\cal M}_l$ is diagonal with
\begin{equation}
\pmatrix{m_e \cr m_\mu \cr m_\tau} = \pmatrix{1 & 1 & 1 \cr 1 & \omega
& \omega^2 \cr 1 & \omega^2 & \omega} \pmatrix{h_1v_1 \cr h_2v_2 \cr h_3v_3}.
\end{equation}
For the neutrino mass matrix, let $\xi_1 \sim \underline{1}$, $\xi_2 \sim
\underline{1}'$, $\xi_3 \sim \underline{1}''$, $\xi_{4,5,6} \sim
\underline{3}$, with $\langle \xi_4 \rangle = \langle \xi_5 \rangle =
\langle \xi_6 \rangle$, then
\begin{equation}
{\cal M}_\nu = \pmatrix{a+b+c & d & d \cr d & a+b\omega+c\omega^2 & d \cr
d & d & a+b\omega^2+c\omega}.
\end{equation}
Let $b=c$ and rotate to the basis $[\nu_e,(\nu_\mu+\nu_\tau)/\sqrt 2,
(-\nu_\mu+\nu_\tau)/\sqrt 2]$, then
\begin{equation}
{\cal M}_\nu = \pmatrix{a+2b & \sqrt 2 d & 0 \cr \sqrt 2 d & a-b+d & 0 \cr
0 & 0 & a-b-d},
\end{equation}
i.e. maximal $\nu_\mu - \nu_\tau$ mixing and $U_{e3}=0$. The solar mixing
angle is now given by $\tan 2 \theta_{12} = 2\sqrt 2 d/(d-3b)$. For
$b \ll d$, $\tan 2 \theta_{12} \to 2\sqrt 2$, i.e. $\tan^2 \theta_{12} \to
1/2$, but $\Delta m^2_{sol} \ll \Delta m^2_{atm}$ implies $2a+b+d \to 0$, so
that $\Delta m^2_{atm} \to 6bd \to 0$ as well. Therefore, $b \neq 0$ is
required, and $\tan^2 \theta_{12} \neq 1/2$, but should be close to it,
because $b=0$ enhances the symmetry of ${\cal M}_\nu$ from $Z_2$ to $S_3$.
Here $\tan^2 \theta_{12} < 1/2$ implies inverted ordering and
$\tan^2 \theta_{12} > 1/2$ implies normal ordering.
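\noindent For completeness, the solar angle quoted above follows from the
upper-left $2 \times 2$ block of the rotated ${\cal M}_\nu$: for a symmetric
matrix with diagonal entries $p,r$ and off-diagonal entry $q$, the mixing
angle obeys $\tan 2\theta = 2q/(r-p)$, so that here
\[
\tan 2 \theta_{12} = {2 \sqrt 2 d \over (a-b+d) - (a+2b)}
= {2 \sqrt 2 d \over d - 3b}.
\]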
\section{Permutation Symmetry $S_4$}
In the above application of $A_4$, approximate tribimaximal mixing involves
the {\it ad hoc} assumption $b=c$. This problem is overcome by using $S_4$ in
a supersymmetric seesaw model proposed recently \cite{s4}, yielding the
result
\begin{equation}
{\cal M}_\nu = \pmatrix{a+2b & c & c \cr c & a-b & d \cr
c & d & a-b}.
\end{equation}
Here $b=0$ and $c=d$ are related limits. The permutation group of 4
objects is $S_4$. It contains both $S_3$ and $A_4$. It is also the
symmetry group of the hexahedron (cube) and the octahedron.
\begin{table}[htb]
\centering
\caption{Character table of $S_4$.}
\begin{tabular}{cccccccc}
\hline
class&$n$&$h$&$\chi_1$&$\chi_{1'}$&$\chi_2$&$\chi_3$&$\chi_{3'}$\\
\hline
$C_1$&1&1&1&1&2&3&3\\
$C_2$&3&2&1&1&2&$-1$&$-1$\\
$C_3$&8&3&1&1&$-1$&0&0\\
$C_4$&6&4&1&$-1$&0&$-1$&1\\
$C_5$&6&2&1&$-1$&0&1&$-1$\\
\hline
\end{tabular}
\end{table}
\noindent The fundamental multiplication rules are
\begin{eqnarray}
\underline{3} \times \underline{3} &=& \underline{1}(11+22+33) +
\underline{2}(11+\omega^2\,22+\omega\,33,\;11+\omega\,22+\omega^2\,33) \nonumber \\
&+& \underline{3}(23+32,31+13,12+21) + \underline{3}'(23-32,31-13,12-21),\\
\underline{3}' \times \underline{3}' &=& \underline{1} +
\underline{2} + \underline{3}_S + \underline{3}'_A,\\
\underline{3} \times \underline{3}' &=& \underline{1}' +
\underline{2} + \underline{3}'_S + \underline{3}_A.
\end{eqnarray}
Note that both $\underline{3} \times \underline{3} \times \underline{3} =
\underline{1}$ and $\underline{2} \times \underline{2} \times \underline{2}
= \underline{1}$ are possible in $S_4$.
Let $(\nu_i,l_i),l^c_i,N_i \sim \underline{3}$ under $S_4$. Assume singlet
superfields $\sigma_{1,2,3} \sim \underline{3}$ and $\zeta_{1,2} \sim
\underline{2}$, then
\begin{equation}
{\cal M}_N = \pmatrix{A+f(\langle \zeta_2 \rangle + \langle \zeta_1 \rangle)
& h \langle \sigma_3 \rangle & h \langle \sigma_2 \rangle \cr
h \langle \sigma_3 \rangle & A + f(\langle \zeta_2 \rangle \omega +
\langle \zeta_1 \rangle \omega^2) & h \langle \sigma_1 \rangle \cr
h \langle \sigma_2 \rangle & h \langle \sigma_1 \rangle & A +
f(\langle \zeta_2 \rangle \omega^2 + \langle \zeta_1 \rangle \omega)}.
\end{equation}
The most general $S_4$-invariant superpotential of $\sigma$ and $\zeta$ is
given by
\begin{eqnarray}
W &=& M(\sigma_1 \sigma_1 + \sigma_2 \sigma_2 + \sigma_3 \sigma_3) +
\lambda \sigma_1 \sigma_2 \sigma_3 + m \zeta_1 \zeta_2 + \rho(\zeta_1
\zeta_1 \zeta_1 + \zeta_2 \zeta_2 \zeta_2) \nonumber \\
&+& \kappa[(\sigma_1 \sigma_1 + \sigma_2 \sigma_2 \omega + \sigma_3 \sigma_3
\omega^2) \zeta_2 + (\sigma_1 \sigma_1 + \sigma_2 \sigma_2 \omega^2 +
\sigma_3 \sigma_3 \omega) \zeta_1].
\end{eqnarray}
The resulting scalar potential has a minimum at $V=0$ (thus preserving
supersymmetry) only if $\langle \zeta_1 \rangle = \langle \zeta_2 \rangle$
and $\langle \sigma_2 \rangle = \langle \sigma_3 \rangle$, so that
\begin{equation}
{\cal M}_N = \pmatrix{A+2B & C & C \cr C & A-B & D \cr
C & D & A-B}.
\end{equation}
To obtain a diagonal ${\cal M}_l$, choose $\phi^l_{1,2,3} \sim
\underline{1} + \underline{2}$. To obtain a Dirac ${\cal M}_{\nu N}$
proportional to the identity, choose $\phi^N_{1,2,3} \sim \underline{1}
+ \underline{2}$ as well, but with zero vacuum expectation value for
$\phi^N_{2,3}$. This allows ${\cal M}_\nu$ to have the form of Eq.~(29),
and thus approximate tribimaximal mixing.
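\noindent The last step relies on the fact that, with ${\cal M}_{\nu N}$
proportional to the identity, the seesaw formula gives ${\cal M}_\nu =
-{\cal M}_{\nu N} {\cal M}_N^{-1} {\cal M}_{\nu N}^T \propto {\cal M}_N^{-1}$,
and the 2-3 symmetric pattern of ${\cal M}_N$ above survives inversion: if $P$
denotes the exchange of the second and third rows and columns, then
$P {\cal M}_N P = {\cal M}_N$ implies
\[
P {\cal M}_N^{-1} P = (P {\cal M}_N P)^{-1} = {\cal M}_N^{-1},
\]
so ${\cal M}_N^{-1}$, and hence ${\cal M}_\nu$, retains the same
$(A,B,C,D)$ pattern.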
\section{Conclusion and Outlook}
Since my talk on finite groups in Dubrovnik exactly two years ago (which was
itself exactly two years after my talk at the Gran Sasso Laboratory on that
fateful day), much progress has been made.\\
\noindent With the application of the non-Abelian discrete symmetry $A_4$,
a plausible theoretical understanding of the HPS form of the neutrino mixing
matrix has been achieved, i.e. $\tan^2 \theta_{23} = 1$, $\tan^2 \theta_{12}
= 1/2$, $\tan^2 \theta_{13} = 0$.\\
\noindent Another possibility is that $\tan^2 \theta_{12}$ is not 1/2, but
close to it. This has theoretical support in an alternative version of $A_4$,
but is much more natural in $S_4$.\\
\noindent In the future, this approach to lepton family symmetry should be
extended to include quarks, perhaps together in a consistent overall theory,
such as $SU(3)^3$ finite unification \cite{mmz04}.
\section*{Acknowledgement}
I thank George Zoupanos and the other organizers of Corfu2005 for
their great hospitality and a stimulating conference. This work is supported
by the EPEAEK programme ``Pythagoras II'' and co-funded by the European
Union (75\%) and the Hellenic state (25\%).
\section{Introduction}
Agent-based simulation is a powerful tool assisting life scientists in
better understanding complex biological systems.
In silico simulation is an inexpensive and efficient way to rapidly test hypotheses
about the (patho)physiology of cellular populations, tissues, organs, or
entire organisms \citep{Yankeelov2016, ji2017mathematical}.
However, the effectiveness of such computer simulations for scientific research
is often limited, mainly because of two reasons.
First, after the slowing down of Moore's law \citep{moores-law} and Dennard
scaling \citep{dennard_design_1974}, hardware has become increasingly parallel
and heterogeneous.
Most simulators do not take full advantage of these hardware enhancements.
The resulting limited computational power forces life scientists to
compromise either on the resolution of the model or on simulation size \citep{thorne2007abm}.
Second, existing simulators have often been developed with a specific use case
in mind.
This makes it challenging to implement the desired model, even if it deviates only
slightly from its original purpose.
To help researchers tackle these two major challenges,
we propose a novel open-source platform for biology dynamics modeling, BioDynaMo{}.
We alleviate both of these problems by emphasizing performance and
modularity.
BioDynaMo{} features a high-performance simulation engine which is fully
parallelized and able to offload computation to hardware accelerators.
The software comprises a set of fundamental biological functions, and a
flexible design that adapts to specific user requirements.
Currently, BioDynaMo{} implements the biological model presented in \cite{ZublerDouglas2009framework},
but this model can easily be extended, modified, or replaced.
Hence, BioDynaMo{} is well-suited for simulating a wide range of biological processes
including cell proliferation, migration, growth, etc.
BioDynaMo{} provides by design five system properties:
\begin{itemize}
\item \textbf{Agent-based.}
The BioDynaMo{} project is established to support developmental simulations of
biological dynamics.
A good approximation for such in silico simulations is agent-based modeling
\citep{railsback2019agent}.
Agents are modeled as discrete objects that perform actions based on
their current state, behavior, and the surrounding environment.
\item \textbf{General purpose.}
BioDynaMo{} is developed to become a general-purpose tool for agent-based
simulation.
To simulate models from various fields, BioDynaMo{}'s software
design is extensible and modular.
\item \textbf{Large scale.}
Biological systems contain a large number of agents.
The cerebral cortex, for example, comprises approximately 16 billion neurons
\citep{azevedo_equal_2009}.
Biologists should not be limited by the number of agents within a simulation.
Consequently, BioDynaMo{} is designed to take full advantage of modern hardware and use
memory efficiently to scale up simulations.
\item \textbf{Easily programmable.}
The success of a simulator depends, among other things, on how quickly a
scientist, not necessarily an expert in computer science or high-performance programming,
can translate an idea into a simulation.
This characteristic can be broken down into four key requirements that BioDynaMo{} is designed to fulfil:
First, BioDynaMo{} provides a wide range of common functionalities such as visualization, plotting,
parameter parsing, backups, etc.
Second, BioDynaMo{} provides simulation primitives that minimize the
programming effort necessary to build a use case.
Third, as outlined in item ``General purpose'', BioDynaMo{} has a modular and
extensible design.
Fourth, BioDynaMo{} provides a coherent API and hides implementation details
that are irrelevant for a computational model (e.g., details such as parallelization strategy,
synchronization, load balancing, or hardware optimizations).
\item \textbf{Quality assured.}
BioDynaMo{} establishes a rigorous, test-driven development process to foster
correctness, maintainability of the codebase, and reproducibility of results.
\end{itemize}
The main contribution of this paper is an open-source, high-performance, and general-purpose
simulation platform for agent-based simulations.
We provide the following evidence to support this claim:
(i) We detail the user-facing features of BioDynaMo{} that enable users to build
a simulation based on predefined building blocks and to define a model
tailored to their needs.
(ii) We present three basic use cases in the field of neuroscience, oncology, and
epidemiology to demonstrate BioDynaMo{}'s capabilities and modularity.
(iii) We show that BioDynaMo{} can produce biologically-meaningful simulation results
by validating these use cases against experimental data, or an analytical solution.
(iv) We present performance data on different systems and scale each use case to
one billion agents to demonstrate BioDynaMo{}'s performance.
\subsection{Prior work}
\label{sec:prior-work}
The history of agent-based modeling and simulation goes well before the 1990s;
however, it has seen widespread use in biological systems in the 2000s.
Several simulators have been published demonstrating the importance of agent-based models in computational biology research
\citep{tisue2004netlogo, emonet_agentcell:_2005,
ZublerDouglas2009framework, koene_netmorph:_2009,
richmond_high_2010, collier2011repasthpc, lardon_idynomics:_2011,
rudge_computational_2012, mirams_chaste:_2013,
torben-nielsen_context-aware_2014, kang_biocellion:_2014,
cytowski_large-scale_2014,
matyjaszkiewicz_bsim_2017,
ghaffarizadeh_physicell:_2018}.
In this section, we compare BioDynaMo{}'s most crucial system properties with prior work.
\textbf{Large-scale model support.}
The authors of BioCellion \citep{kang_biocellion:_2014}, PhysiCell
\citep{ghaffarizadeh_physicell:_2018}, Timothy \citep{cytowski_large-scale_2014},
and Repast HPC \citep{collier2011repasthpc}
recognize the necessity for efficient
implementations to enable large-scale models.
Although these tools can simulate a large number of agents, they do
not support neural development.
The NeuroMaC neuroscientific simulator \citep{torben-nielsen_context-aware_2014}
claims to be scalable, but the authors provide no performance data and show
simulations with only 100 neurons.
Therefore, BioDynaMo{}'s ability to simulate large-scale neural development,
which we demonstrate in the results section, is, to our knowledge, unrivaled.
\textbf{General-purpose platform.}
Many simulators focus on a specific application area: bacterial colonies
\citep{emonet_agentcell:_2005, matyjaszkiewicz_bsim_2017,
rudge_computational_2012, lardon_idynomics:_2011}, cell colonies
\citep{kang_biocellion:_2014, mirams_chaste:_2013, cytowski_large-scale_2014},
and neural development \citep{ZublerDouglas2009framework, koene_netmorph:_2009,
torben-nielsen_context-aware_2014}.
Pronounced specialization of a simulator may limit its capacity to adapt to
different use cases or simulation scenarios.
In contrast, BioDynaMo{} is a general-purpose platform for agent-based simulations by being both modular and extensible.
\textbf{Quality assurance.}
Automated software testing is the foundation of a modern development workflow.
Unfortunately, several simulation tools
\citep{ZublerDouglas2009framework, rudge_computational_2012,
lardon_idynomics:_2011, koene_netmorph:_2009, torben-nielsen_context-aware_2014,
cytowski_large-scale_2014}
omit these tests.
\cite{mirams_chaste:_2013} recognize this shortcoming and
describe a rigorous development workflow in their paper.
BioDynaMo{} has over 280 automated tests which are continuously executed on all supported operating systems to ensure high code quality.
BioDynaMo{}'s open-source code base, tutorials, and documentation not only help users get started, but also enable validation by external examiners.
\section{Design and implementation}
In this section, we present the main simulation concepts of BioDynaMo{} and describe our
approach to achieve modularity, extensibility, and high performance.
We provide further information about the biological model, software quality, and
features like web-based interactive notebooks, and backups in Supplementary File S1 Section~1.
\subsection{Simulation concepts}
BioDynaMo{} is implemented in the C++ programming language and supports simulations that follow an agent-based approach.
Figure~\ref{fig:simulation-concepts} gives an overview of BioDynaMo{}'s main concepts,
while Figure~\ref{fig:sw-design} illustrates its object-oriented design.
A characteristic property of agent-based simulations is the absence of a
central organization unit that orchestrates the behavior of all agents.
Quite to the contrary, each agent is an autonomous entity that
individually determines its behavior.
An agent (Figure~\ref{fig:simulation-concepts}A) has a 3D geometry, behavior, and environment.
There is a broad spectrum of entities that can be modeled as an agent.
In the results section we show examples where an agent represents a subcellular structure
(neuroscience use case), a cell (oncology use case), or an entire
person (epidemiology use case).
Currently, BioDynaMo{} supports agents with cylindrical and spherical geometry.
Figure~\ref{fig:simulation-concepts}B shows example agent behaviors such as growth factor secretion, chemotaxis, or cell division.
Like genes, behaviors can be activated or inhibited.
BioDynaMo{} achieves this by attaching them to or removing them from the corresponding
agent.
BioDynaMo{} simplifies the regulation of behaviors when new agents are created.
Depending on the event type, the user can control whether a behavior is copied
to a new agent or removed from the existing agent.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.7\linewidth]{fig-1}
\caption{{\bf Simulation concepts.}
Overview of the high-level simulation concepts of BioDynaMo{}.
Agents (A) have their own geometry, behavior (B), and environment (C).
(B) Agent behavior
is defined in separate classes, which are inserted into agents.
A few possible examples for agents and behaviors are displayed.
The update of an agent
is based on its current state and its surrounding environment. (C) The
environment is determined by radius $r$ and contains other agents or extracellular
substances.
The simulation algorithm (D) can be divided into two main parts: the definition of the
initial model and execution of the simulation.
}
\label{fig:simulation-concepts}
\end{figure}
The \textit{Environment} is the vicinity that the agent can
interact with (Figure~\ref{fig:simulation-concepts}C).
It comprises other agents and chemical substances in the
extracellular matrix.
Surrounding agents are, for example, needed to calculate mechanical
interactions among agent pairs.
BioDynaMo{} determines the environment
based on a uniform grid implementation.
The implementation divides the total simulation space into uniform boxes of
the same size and assigns agents to boxes based on the center of
mass of the agent.
Hence, the agents in the environment can be obtained by
iterating over the assigned box and all its surrounding boxes
(27 boxes in total).
The box size is chosen based on the largest agent in the simulation to
ensure all mechanical interactions are taken into account.
Currently, the user defines a simulation programmatically in C++
(Figure~\ref{fig:simulation-concepts}D).
There are two main steps involved: initialization and execution.
During initialization, the user places agents in space, sets
their attributes, and defines their behavior.
In the execution step, the simulation engine evaluates the defined model in the
simulated physical environment by executing a series of operations.
We distinguish between agent operations and standalone operations (Figure~\ref{fig:sw-design}).
At a high level, an agent operation is a function that: (i)
alters the state of an agent and potentially also its
environment, (ii) creates a new agent, or (iii) removes an
agent from the simulation.
Examples for agent operations are: execute all behaviors and calculate mechanical forces.
The simulation engine executes agent operations for each agent for each
time step.
Alternatively, standalone operations perform a specific task during one time step and are
therefore only invoked once.
Examples include the update of substance diffusion and the export of visualization data.
Supplementary File S1 Section~1.1.3 contains more details about how operations enable multi-scale simulations.
\subsection{Modularity}
BioDynaMo{} is a simulation platform that can be used to develop
simulations in various computational biology fields (e.g. neuroscience,
oncology, epidemiology, etc.).
Although agent-based models in these different fields may intrinsically
vary, there is a set of functionalities and definitions that they have in
common.
These commonalities, which consist of
simulation and support features, are part of the BioDynaMo{} core.
Simulation features include the physics between cellular bodies,
the diffusion of extracellular substances, and basic behavior, such
as proliferation and cell death.
Support features include visualization, data analysis, plotting,
parameter management, simulation backups, etc.
Functionalities that are field-specific are separated from the core and are
bundled as a separate module.
Figure~\ref{fig:sw-design} gives an overview of BioDynaMo{}'s software design.
\cite{DEMONTIGNY2020} demonstrated BioDynaMo{}'s modularity by coupling it with another simulator
to create a hybrid agent-based, continuum-based model.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{fig-2}
\caption{{\bf Software design and modularity.}
Overview of selected classes and functions that are important from the user's perspective.
Classes in white (BioDynaMo{} core) and green (BioDynaMo{}'s neuroscience module) are part of the current BioDynaMo{} installation.
The remaining classes illustrate how we extended BioDynaMo{} to implement the use cases and benchmarks shown in this paper
(purple: neuroscience use case, red: oncology use case, orange: epidemiology use case, blue: soma clustering benchmark, yellow: cell proliferation benchmark).
A complete list of BioDynaMo{} classes can be found at \href{https://biodynamo.org/}{https://biodynamo.org/}.
}
\label{fig:sw-design}
\end{figure*}
\textbf{Neuroscience module.}
The neuroscience module is an example of how to extend functionality in the core to
target BioDynaMo{} to a specific field.
The module adds two new agents \texttt{NeuronSoma} and
\texttt{NeuriteElement}, and models behavior like neurite extension from
the soma, neurite elongation, and neurite bifurcation.
The model closely follows the principles of Cortex3D
\citep{ZublerDouglas2009framework}.
Neurites are implemented as a binary tree.
Each neurite element can have up to two daughter elements.
The cylindrical neurite element is approximated as a spring with a point mass
on its distal end.
These springs are connected to each other to transmit forces along the chain of
neurite elements.
\textbf{User-defined components.}
If the desired functionality is missing, the user can create, extend, or modify
agents, behaviors, operations, and other classes as shown in
Figure~\ref{fig:sw-design}.
BioDynaMo{}'s software design focuses on loosely-coupled, well-defined components.
This focus not only serves the purpose of creating a clear separation of the
functionalities of BioDynaMo{}, but, perhaps even more significantly, allows users to integrate
user-defined components without significant changes to the underlying software
architecture. This facilitates collaboration and the creation of an open-model library.
We anticipate this library will help researchers in implementing their models more rapidly.
\subsection{Performance and parallelism}
BioDynaMo{}'s performance is based on the following seven enhancements:
(i) The whole simulation engine is parallelized using OpenMP \citep{openmp} compiler
directives.
OpenMP is a good fit since BioDynaMo{} exploits mostly loop parallelism (see
Figure~\ref{fig:simulation-concepts}A).
(ii) To increase the maximum theoretical speedup due to parallel processing
(as described by Amdahl's law
\citep{amdahl_validity_1967}), we minimize the number of serial code portions in
BioDynaMo{}.
(iii) We avoid unnecessary copying of data and optimize data access patterns on
machines with non-uniform memory access (NUMA) architecture.
Compute nodes with multiple NUMA nodes have different memory access latencies
depending on whether a thread accesses local or remote memory.
Therefore, we load-balance agents and their environment on
available NUMA nodes.
We built an optimized iterator over all agents to
minimize threads' memory accesses to non-local memory.
This is necessary because OpenMP does not have built-in support for such
functionality.
(iv) We detect stationary regions within the simulation and skip the expensive collision
detection for those agents.
(v) We perform just-in-time compilation to give the visualization engine ParaView direct access
to Agent attributes.
(vi) We develop an optimized memory allocator and concurrent hashmap.
(vii) We consider offloading computations to hardware accelerators in our software design
(see Figure~\ref{fig:sw-design}).
Our GPU code is implemented in NVIDIA CUDA and OpenCL, and can be executed on
graphics cards of different vendors (NVIDIA, AMD, or Intel).
More details on BioDynaMo{}'s performance enhancements and analyses are beyond this paper's scope,
and we aim to report them in a future publication.
\section{Results}
This section demonstrates BioDynaMo{}'s capacity to simulate disparate problems in
systems biology with simple yet representative use cases in neuroscience,
oncology, and epidemiology.
Furthermore, we compare BioDynaMo{}'s performance with an established serial neural simulator
\citep{ZublerDouglas2009framework}, analyze its scalability, and quantify the impact
of GPU acceleration.
For each use case we provide pseudocode for all agent behaviors, a table with
model parameters, and more detailed performance results in Supplementary File~S1 Section~2.
\subsection{Neuroscience use case}
\label{sec:pyramidal-cell}
This example illustrates the use of BioDynaMo{} to model neurite growth of pyramidal
cells using chemical cues.
Initially, a pyramidal cell, composed of a 10~$\mu$m cell body, three 0.5~$\mu$m long
basal dendrites, and one 0.5~$\mu$m long apical dendrite (all of them considered
here as agents), was created in 3D space.
Furthermore, two artificial growth factors were initialized, following a Gaussian
distribution along the z-axis.
The distribution of these growth factors guided dendrite growth and remained
unchanged during the simulation.
Dendritic development was dictated by a behavior defining growth
direction, speed, and branching behavior for apical and basal dendrites.
At each step of the simulation, the dendritic growth direction depended on
the local chemical growth factor gradient, the dendrite's previous direction,
and a randomly chosen direction.
In addition, the dendrite's diameter tapered as it grew (shrinkage), until it
reached a specified diameter, preventing it from growing any further.
The weight given to each of these components differed between apical and basal
dendrites.
Apical dendrites were driven more strongly by the chemical gradient and
grew at twice the speed of basal dendrites.
In contrast, basal dendrites were more conservative in their growth
direction; the weight of their previous direction was greater.
Likewise, branching behavior differed between apical and basal dendrites.
In addition to a higher branching probability (0.038 and 0.006 for apical and
basal dendrites, respectively), apical dendrites could branch only on the
main branch of the arbor.
In contrast, basal dendrites branched with a simple fixed probability at
each time step.
These simple rules gave rise to a straight long apical dendrite with a simple
branching pattern and more dispersed basal dendrites, as shown in
Figure~\ref{fig:pyramidal-cell}A, similar to what can be observed in real
pyramidal cell morphologies as shown in Figure~\ref{fig:pyramidal-cell}B or \cite{spruston2008pyramidal} (Figure~1A CA1).
Using our growth model, we were able to generate a large number of various
realistic pyramidal cell morphologies.
We used a publicly available database of real pyramidal cells from
\citep{mellstrom_specific_2016} for comparison and parameter tuning.
Two measures were used to compare our simulated neurons and the 69 neurons
composing the real morphologies database: the average number of branching points,
and the average length of dendritic trees.
No significant differences were observed between our simulated neurons and the
real ones (two-sample t-test).
These results are shown in Figure~\ref{fig:pyramidal-cell}D.
The simulation of the pyramidal cell growth consisted of 361 lines of C++ code.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\linewidth]{fig-3}
\caption{{\bf Pyramidal cell simulation.}
(A) Example pyramidal cell simulated with BioDynaMo{}.
(B) Real neuron (R67nr67b-CEL1.CNG) taken from \citep{mellstrom_specific_2016} and
visualized with \href{https://neuroinformatics.nl/HBP/morphology-viewer/}{https://neuroinformatics.nl/HBP/morphology-viewer/}
(C) Large-scale simulation.
The model started with 5000 initial pyramidal cell bodies and contained
9 million agents after simulating 500 iterations.
Simulation execution time was 46 seconds on a server with 72 CPU cores.
(D) Morphology comparison between simulated neurons and experimental data from
\citep{mellstrom_specific_2016}. Error bars represent the standard deviation.
(A,C) A video is available in Supplementary Information.}
\label{fig:pyramidal-cell}
\end{figure}
Figure~\ref{fig:pyramidal-cell}C shows a large scale simulation incorporating
5000 neurons similar to the one described above, and demonstrates the potential of
BioDynaMo{} for developmental, anatomical, and connectivity studies in the brain.
This simulation contained 9 million agents.
These 500 iterations correspond to approximately three weeks of pyramidal cell
growth in the rat.
\subsection{Oncology use case}
In this section, we present a tumor spheroid simulation to replicate
in vitro experiments from \citep{Gongetal2015invitroMCF7}.
Tumor spheroid experiments are typically employed to investigate the
pathophysiology of cancer, and are also being used for pre-clinical drug screening
\citep{Nunes_et_al:2019}.
Here we considered three in vitro test cases using a breast adenocarcinoma
MCF-7 cell line \citep{Gongetal2015invitroMCF7} with different initial cell
populations ($2000$, $4000$, and $8000$ MCF-7 cells).
Our goal was to simulate the growth of this mono cell culture embedded in a
collagenous (extracellular) matrix.
This approach, as opposed to a free suspension one, incorporates cell-matrix interactions
to mimic the tumor-host environment.
Initially, cancer cells (agents) were clustered in a spherical shape around the origin with a
diameter of $310$, $380$, or $460$ micrometers.
The three-dimensional extracellular matrix (ECM) was represented in
our simulations as an $8$~mm\textsuperscript{3} cube.
The fundamental cellular mechanisms modeled here include cell growth,
cell duplication, cell migration, and cell apoptosis.
A single behavior governed all these processes.
The cell growth rate was derived from published data \citep{Sutherland3998},
while cell migration (cell movement speed), cell survival, and apoptosis
parameters were fine-tuned by trial and error.
Since the in vitro study considered the same agarose gel matrix composition among the experiments,
the BioDynaMo{} model assumes identical parameters for the cell--matrix interactions in the simulations.
Considering the homogeneous ECM properties, tumor cell migration was
modeled as Brownian motion.
The in vitro experiments showed that instantaneous spheroid growth was
hindered by the compression of the surrounding agarose gel matrix (see
Figure~\ref{fig:tumor-spheroids}A), owing to cell reorganization at the onset of the cancer mass implantation into the gel.
As a result, the tumor spheroid diameter was initially decreasing.
However, the present simulation example focuses on modeling the growth of the spheroid after it had settled in the agarose gel matrix.
Therefore, as shown in Figure~\ref{fig:tumor-spheroids}A, BioDynaMo{} simulations are set to start on day two or three.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{fig-4}
\caption{{\bf Comparison between in vitro
MCF-7 tumor spheroid experiments and our in silico simulations using
BioDynaMo.
}
(A) Human breast adenocarcinoma tumor spheroid (MCF-7 cell line) development
during a 15 day period, where different initial cell populations were
considered (see Fig 3 in \citep{Gongetal2015invitroMCF7}).
Error bars denote standard deviation to the experimental data.
The mean of the in silico results is shown as a solid black line with
a grey band depicting minimum and maximum observed value.
(B) Qualitative comparison between the microscopy images and simulation
snap-shots is shown in the three boxes.
Scale bars correspond to 100$\mu$m.
A video is available in Supplementary Information.
}
\label{fig:tumor-spheroids}
\end{figure}
The in vitro experiments from
\citep{Gongetal2015invitroMCF7} and the simulations using
BioDynaMo{} are depicted in Figure~\ref{fig:tumor-spheroids}.
Each line plot in Figure~\ref{fig:tumor-spheroids}A compares the mean diameter
of the experiments and the simulations
over time, demonstrating the validity and accuracy of BioDynaMo{}.
The diameters of the spheroids in the simulations were deduced from the volume
of the convex hull that enclosed all cancer cells.
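A standard way to turn a convex-hull volume into a diameter is the volume-equivalent-sphere formula; the helper below is an illustrative sketch of that conversion, not necessarily the exact computation used in the example.

```python
import math

def equivalent_diameter(volume):
    """Diameter of the sphere whose volume equals the given convex-hull
    volume: V = (4/3)*pi*(d/2)**3  =>  d = 2 * (3V / (4*pi))**(1/3)."""
    return 2.0 * (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# A spheroid of diameter 100 um has volume (4/3)*pi*50**3 um^3;
# the conversion recovers the diameter (up to floating point).
v = (4.0 / 3.0) * math.pi * 50.0 ** 3
d = equivalent_diameter(v)
```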
The in vitro experiments used microscopy imaging to measure the spheroids'
diameters \citep{Gongetal2015invitroMCF7}.
Figure~\ref{fig:tumor-spheroids}B compares snapshots of the
simulated tumor spheroids (bottom row) against microscopy images of in vitro
spheroids (top row) at different time points.
The spheroid morphologies in the in vitro experiments and the
BioDynaMo{} simulations are in excellent agreement.
The example has 424 lines of C++ code, including the generation of the plot shown
in Figure~\ref{fig:tumor-spheroids}A.
Running one simulation took 0.98--3.39s on a laptop
and 1.24--4.16s on a server, both using one CPU core.
\subsection{Epidemiology use case}
This section presents an agent-based model that describes the spread of infectious diseases among humans.
The model divides the population into three groups: susceptible, infected, and recovered
(SIR) agents.
We compare our simulation results with the solution of the original SIR model from
\cite{kermack_1927}, which used the following three differential equations to describe the model dynamics:
$dS/dt = - \beta S I / N$,
$dI/dt = \beta S I / N - \gamma I$, and
$dR/dt = \gamma I$.
$S$, $I$, and $R$ are the number of susceptible, infected, and recovered individuals, $N$ is the
total number of individuals, $\beta$ is the mean transmission rate, and $\gamma$ the recovery rate.
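These coupled equations can be integrated numerically in a few lines; the forward-Euler sketch below uses an illustrative step size and example parameters ($R_0 = 2$, a five-day recovery period), not values from the paper.

```python
def integrate_sir(beta, gamma, n_total, i0, days, dt=0.1):
    """Forward-Euler integration of dS/dt = -beta*S*I/N,
    dI/dt = beta*S*I/N - gamma*I, and dR/dt = gamma*I."""
    s, i, r = n_total - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(round(days / dt)):
        new_inf = beta * s * i / n_total * dt   # newly infected this step
        new_rec = gamma * i * dt                # newly recovered this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

# Example: R0 = 2 and a 5-day recovery give gamma = 0.2, beta = 0.4.
curve = integrate_sir(beta=0.4, gamma=0.2, n_total=2000, i0=10, days=100)
```

Note that $S + I + R = N$ is conserved at every step, which is a quick sanity check on the integration.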
For our agent-based implementation (Figure~\ref{fig:epidemiology}C) we created a new agent
(representing a person) that encompasses three new behaviors, and extended an operation
to count the number of agents in each group (see Figure~\ref{fig:sw-design}).
Agents were randomly distributed in space and exhibited three behaviors.
Infection: a susceptible agent became infected with the infection probability if an infected agent was within the infection radius.
Recovery: an infected agent recovered with the recovery probability at every time step.
Random movement: all agents moved randomly in space; the absolute distance an agent could travel in each time step was limited.
In this agent-based model, the speed at which an infectious disease spread depended on the infection probability, the number of contacts each agent had
with other agents, and the recovery rate.
The number of contacts in turn depended on the infection radius, the maximum distance an agent could travel, and the density of agents in the simulation space.
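The three behaviors can be sketched with a minimal, self-contained step function; the parameter names and the deterministic setup below are illustrative and bear no relation to the actual BioDynaMo operations.

```python
import math
import random

SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2

def step(agents, infection_radius, infection_prob, recovery_prob,
         max_move, box_length, rng):
    """One time step: infection, then recovery, then random movement."""
    # Snapshot of currently infected positions for the infection check.
    infected_pos = [a["pos"] for a in agents if a["state"] == INFECTED]
    for a in agents:
        if a["state"] == SUSCEPTIBLE:
            near = any(math.dist(a["pos"], p) <= infection_radius
                       for p in infected_pos)
            if near and rng.random() < infection_prob:
                a["state"] = INFECTED
        elif a["state"] == INFECTED:
            if rng.random() < recovery_prob:
                a["state"] = RECOVERED
    for a in agents:  # random movement, clamped to the simulation box
        a["pos"] = tuple(
            min(box_length, max(0.0, x + rng.uniform(-max_move, max_move)))
            for x in a["pos"])

# Deterministic toy scenario: one infected agent, five susceptible
# neighbors within the radius, certain infection, no recovery, no movement.
rng = random.Random(0)
agents = [{"pos": (0.0, 0.0, 0.0), "state": INFECTED}] + \
         [{"pos": (1.0, 0.0, 0.0), "state": SUSCEPTIBLE} for _ in range(5)]
step(agents, infection_radius=5.0, infection_prob=1.0,
     recovery_prob=0.0, max_move=0.0, box_length=100.0, rng=rng)
```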
We selected two infectious diseases with different characteristics to verify our model: measles and seasonal influenza.
We obtained values for the basic reproduction number $R_0$ and recovery duration $T_R$ from the literature
(Measles: $R_0 = 12.9$, $T_R = 8$ days \citep{guerra_2017, who_measles}, Influenza:
$R_0 = 1.3$, $T_R = 4.1$ days \citep{chowell_2008})
and determined the parameters $\beta$ and $\gamma$ for the analytical model, based on $R_0 = \beta / \gamma$ and $\gamma = 1 / T_R$.
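Deriving the model parameters from the literature values is a two-line computation, sketched here with the measles and influenza numbers quoted above.

```python
def sir_parameters(r0, recovery_days):
    """From R0 = beta / gamma and gamma = 1 / T_R:
    gamma is the recovery rate, beta the transmission rate."""
    gamma = 1.0 / recovery_days
    return r0 * gamma, gamma

beta_measles, gamma_measles = sir_parameters(12.9, 8.0)   # ~1.6125, 0.125
beta_flu, gamma_flu = sir_parameters(1.3, 4.1)
```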
For the agent-based model we set the recovery probability to $\gamma$, and placed 2000 susceptible agents and a few infected agents randomly in a cubic space with side length 100.
The remaining parameters (infection radius, infection probability, and maximum movement in one time step) were determined using particle swarm optimization \citep{kennedy_1995}.
Figure~\ref{fig:epidemiology} shows that the agent-based model is in excellent agreement with the equation-based approach from \citep{kermack_1927} for measles and influenza.
The example has 566 lines of C++ code, including the generation of the plot shown
in Figure~\ref{fig:epidemiology}.
Running one simulation took 0.59--1.59s using one CPU core.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{fig-5}
\caption{{\bf Measles and seasonal influenza SIR model results.}
(A,B) Comparison between agent-based (solid lines) and analytical (dashed lines) model for
measles (A) and seasonal influenza (B). The agent-based simulation was repeated ten times. The
individual simulation results are shown as thin solid lines. The bold solid line represents the mean
from all simulations. The legend is shared between the two plots.
(C) Visualization of the measles and influenza model for different time steps in 3D space.
Susceptible persons are shown in white, infected persons in red, and recovered persons in blue.
Persons move randomly and follow the rules for infection and recovery.
}
\label{fig:epidemiology}
\end{figure}
\subsection{Performance}
Efficient usage of computing resources is paramount for large-scale simulations with billions of agents, reduced computational costs, and a low energy footprint.
To this end, we quantify the performance of BioDynaMo{} with three simulations: cell
growth and division, soma clustering, and pyramidal cell growth.
These simulations have different properties and are, therefore, well suited to
evaluate BioDynaMo{}'s simulation engine under a broad set of conditions.
Supplementary File S1 Section~2.2 contains more details about these benchmarks.
First, to demonstrate the performance improvements against established agent-based
simulators, we compared BioDynaMo{} with Cortex3D \citep{ZublerDouglas2009framework}.
Cortex3D has the highest similarity in terms of the underlying biological model
out of all the related works presented in Section~\ref{sec:prior-work}.
More specifically, BioDynaMo{} and Cortex3D use the same method to determine mechanical forces
between agents and the same model to grow neural morphologies.
This makes Cortex3D the best candidate with which to compare BioDynaMo{}
and ensure a fair comparison.
Figure~\ref{fig:performance}A shows the speedup of BioDynaMo{} for the three
simulations.
We observed a significant speedup between 18 and 78$\times$.
Note that we set the number of threads available to BioDynaMo{} to one since
Cortex3D is not parallelized.
The speedup was larger when the simulation was more dynamic or more complex.
\begin{figure}[!t]
\includegraphics[width=\linewidth]{fig-6}
\caption{{\bf BioDynaMo{} performance analysis.}
(A) Speedup of BioDynaMo{} compared to Cortex3D.
(B) Strong scaling behavior of BioDynaMo{} on a server with 72
physical cores, two threads per core, and four NUMA domains.
The grey area highlights hyper-threads.
}
\label{fig:performance}
\end{figure}
Second, to evaluate the scalability of BioDynaMo{}, we measured the simulation
time with an increasing number of threads.
We increased the number of agents used in the comparison with Cortex3D and
reduced the number of simulation time steps to 10.
Figure~\ref{fig:performance}B shows the strong scaling analysis.
All simulation parameters were kept constant, and the number of threads was
increased from one to the number of logical cores provided by the benchmark
server.
The maximum speedup ranged between 65$\times$ and 75$\times$, which corresponds to a parallel
efficiency between 0.90 and 1.04.
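Parallel efficiency here is simply the measured speedup divided by the number of physical cores; the sketch below reproduces the quoted endpoints and shows how hyper-threads can push the value slightly above 1.0.

```python
def parallel_efficiency(speedup, physical_cores):
    """Efficiency = speedup / cores; values above 1.0 indicate extra
    gains beyond the physical cores, e.g. from hyper-threading."""
    return speedup / physical_cores

low = parallel_efficiency(65, 72)    # ~0.90
high = parallel_efficiency(75, 72)   # ~1.04, thanks to hyper-threads
```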
Performance improved even after all physical cores were utilized and hyper-threads were
used.
Hyper-threads are highlighted in gray in Figure~\ref{fig:performance}B.
We want to emphasize that even the pyramidal cell growth benchmark scaled well,
despite the challenges of synchronization and load imbalance.
Third, we evaluated the impact of calculating the mechanical forces on the
GPU using the cell growth and division, and soma clustering simulations.
We excluded the pyramidal cell growth simulation because the current GPU kernel
does not support cylinder geometry yet.
The benchmarks were executed on System C (see Supplementary File S1 Table~4), comparing an NVIDIA Tesla V100 GPU
against 32 CPU cores (64 threads).
We observed a speedup of 1.27$\times$ for cell growth and division, and
5.04$\times$ for soma clustering.
The speedup correlated with the number of collisions in the simulation,
since the computational intensity is directly linked to the number of collisions
between agents.
In summary, in the scalability test, we observed a minimum speedup of 65$\times$.
Furthermore, we measured a minimum speedup of 18$\times$ comparing BioDynaMo{}
with Cortex3D both using a single thread.
Based on these two observations, we conclude that on System A (see Supplementary File S1 Table~4){} BioDynaMo{} is
more than three orders of magnitude faster than Cortex3D.
Based on these speedups, we executed the neuroscience, oncology, and epidemiology use cases with
one billion agents.
Using all 72 physical CPU cores on System B (see Supplementary File S1 Table~4), we measured a runtime of 1 hour 37 minutes,
6 hours 49 minutes, and 3 hours 54 minutes, respectively.
One billion agents, however, are not the limit.
The maximum depends on the available memory and accepted execution duration.
To be consistent across all use cases and to keep our pipeline's total execution
time manageable, we decided to run these benchmarks with one billion agents.
Table~5 in Supplementary File S1 shows that available memory would permit an
epidemiological simulation with three billion agents.
With enough memory, BioDynaMo{} is capable of supporting hundreds of billions of agents.
\section{Discussion}
This paper presented BioDynaMo{}, a novel open-source platform for agent-based
simulations.
Its modular software architecture allows researchers to implement models of distinctly different fields, of which neuroscience, oncology, and epidemiology were demonstrated in this paper.
Although the implemented models follow a simplistic set of rules, the results that emerge from the simulations are striking and highlight BioDynaMo{}'s capabilities.
We do not claim that these models are novel, but we rather want to emphasize that BioDynaMo{} enables scientists to (i) develop models in various computational biology fields in a modular fashion, (ii) obtain results rapidly with the parallelized execution engine, (iii) scale up the model to billions of agents on a single server, and (iv) produce results that are in agreement with validated experimental data.
Although BioDynaMo{} is modular, we currently offer a limited number of ready-to-use simulation primitives.
We are currently expanding our library of agents and behaviors to facilitate model development beyond the current capacity.
Ongoing work uses BioDynaMo{} to gain insights into retinal development, cryopreservation,
multiscale (organ-to-cell) cancer modelling, COVID-19 spreading in closed environments, radiation-induced tissue damage, and more.
Further efforts focus on accelerating drug development by replacing in vitro experiments with in silico simulations using BioDynaMo{}.
Our performance analysis showed improvements of up to three orders of magnitude over
state-of-the-art baseline simulation software, allowing us to scale up simulations to an unprecedented number of agents.
To the best of our knowledge, BioDynaMo{} is the first simulator of
neural development with cellular interactions that scales to more than one billion agents.
The same principles used to model axons and dendrites in the neuroscience use case could
also be applied to simulate blood and lymphatic vessels.
We envision BioDynaMo{} to become a valuable tool in computational biology, fostering faster and easier simulation of complex and large-scale systems,
interdisciplinary collaboration, and scientific reproducibility.
\section*{Funding}
This work was supported by the CERN Knowledge Transfer office [to L.B. and A.H.];
the Israeli Innovation Authority [to A.H.];
the Research Excellence Academy from the Faculty of Medical Science of the Newcastle University [to J.dM.];
the UCY StartUp Grant scheme [to V.V.];
the Medical Research Council of the United Kingdom [MR/N015037/1 to R.B., MR/T004347/1 to M.K.];
the Engineering and Physical Sciences Research Council of the UK [EP/S001433/1 to R.B., NS/A000026/1, EP/N031962/1 to M.K.];
a PhD studentship funded by Newcastle University’s School of Computing [to J.J.];
the Wellcome Trust [102037 to M.K.];
the Guangci Professorship Program of Ruijin Hospital (Shanghai Jiao Tong Univ.) [to M.K.];
and by several donations by SAFARI Research Group's industrial partners including Huawei, Intel, Microsoft, and VMware [to O.M.].
The authors have declared that no competing interests exist.
\section*{Acknowledgments}
We want to thank Giovanni De Toni for his work on the BioDynaMo{} build system.
\bibliographystyle{natbib}
\section{Introduction}
Let $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ be a triple of normalized newforms in $S^{\mathrm{new}}_{2}(\Gamma_{0}(N))^{3}$ of level $N\geq 5$. We assume that $N=N^{+}N^{-}$ is a factorization of $N$ such that $(N^{+}, N^{-})=1$ and $N^{-}$ is square-free with an odd number of prime factors. We fix a prime $l\geq 5$ which will serve as the residual characteristic of the coefficient rings throughout this article. For each $i=1, 2, 3$, let $\mathrm{V}_{i}=\mathrm{V}_{f_{i}, \lambda_{i}}$ be the $\lambda_{i}$-adic Galois representation attached to $f_{i}$ by the Eichler-Shimura construction, where $\lambda_{i}$ is a place over $l$ in the Hecke field $\mathbf{Q}(f_{i})$ of $f_{i}$. One can then attach a \emph{Garrett-Rankin type triple product $L$-function} $$L(f_{1}\otimes f_{2}\otimes f_{3}, s)$$ to the triple product Galois representation $\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}$. The parity of the order of vanishing of $L(f_{1}\otimes f_{2}\otimes f_{3}, s)$ at the central critical point $s=2$ is controlled by the \emph{global root number} $\epsilon\in\{1, -1\}$, which factors as a product $\epsilon=\prod_{v\mid N\infty}\epsilon_{v}$ of local root numbers $\epsilon_{v}\in\{\pm1\}$. In our setting, $\epsilon_{\infty}=-1$ since the weights $(2, 2, 2)$ of $\underline{\mathbf{f}}$ are \emph{balanced}. Throughout this article, we will work under the following assumption.
\begin{equation*}\tag{$\epsilon=1$}
\text{\emph{The product of local root numbers $\prod_{v\mid N}\epsilon_{v}=-1$ and therefore $\epsilon=1$.}}
\end{equation*}
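Since the local root numbers at the places $v\nmid N\infty$ are trivial, the assumption pins down the global sign by the one-line computation

```latex
\epsilon \;=\; \epsilon_{\infty}\cdot \prod_{v\mid N}\epsilon_{v}
\;=\; (-1)\cdot(-1) \;=\; +1.
```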
One can also attach to the triple tensor product Galois representation $\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}$ a more arithmetic object called the Bloch-Kato Selmer group, whose definition we recall below. Denote by $\mathrm{B_{cris}}$ the crystalline period ring with respect to $\mathbf{Q}_{l}$. The \emph{triple product Bloch-Kato Selmer group} $$\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))$$ consists of the classes $s\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))$ such that
${\rm{loc}}_{l}(s)\in \mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))$
where $\mathrm{loc}_{l}$ is the localization map to the local Galois cohomology group at $l$ and
$\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))$
is given by the Bloch-Kato local condition
\begin{equation*}
\ker[ \mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))\rightarrow \mathrm{H}^{1}(\mathbf{Q}_{l}, (\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3})\otimes\mathrm{B}_{\mathrm{cris}}(-1))].
\end{equation*}
The \emph{Bloch-Kato conjecture} predicts a relationship between the order of the vanishing of the triple product $L$-function $L(f_{1}\otimes f_{2}\otimes f_{3}, s)$ at $s=2$ and the rank of $\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))$. The present article will be concerned with the \emph{rank 0 case} of the Bloch-Kato conjecture for $\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}$. More precisely, we are concerned with the following conjecture.
\begin{conj}
Suppose that the central critical value $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ is non-zero, then we have
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}(-1))=0.
\end{equation*}
\end{conj}
One of the most successful approaches to proving such conjectures is via \emph{Euler-Kolyvagin system} arguments. In practice, one needs to impose various assumptions on the triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ to apply the Euler system argument. We will formulate a more precise conjecture with a few additional assumptions in the last section of this article, see Conjecture \ref{main-conj}. This article makes the first step towards the construction of such an Euler-Kolyvagin system for the triple product representation. More precisely, the main result of this article concerns an explicit reciprocity formula \`{a} la Bertolini-Darmon for the representation $\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}$. Such a formula, loosely speaking, relates cycle classes on Shimura varieties to period integrals that govern the central critical values of automorphic $L$-functions via level raising congruences among automorphic forms. We refer the reader to the recent article \cite{LTXZZ} for the most general results along this line. In this article, we relate the \emph{Gross-Schoen diagonal cycle class} on the triple product of Shimura curves at a place of bad reduction to a period integral that represents the algebraic part of the triple product $L$-function. Note that our results are not subsumed by \cite{LTXZZ}. In the companion article \cite{Wang}, we will study the Gross-Schoen diagonal cycle class on the triple product of Shimura curves at a place of good reduction, where the so-called second reciprocity law is proved. In the case when $\underline{\mathbf{f}}=(f, f, f)$ for a single modular form $f$ of weight two, our constructions here and in \cite{Wang} produce the desired Euler-Kolyvagin system for the symmetric cube representation of $f$, and we can give some evidence for the Bloch-Kato conjecture for the symmetric cube representation of $f$ in the rank $0$ and rank $1$ cases.
\subsection{Main results}
In order to state our results precisely, we introduce more notation. For $i=1, 2, 3$, let $\mathbf{Q}(f_{i})$ be the Hecke field of $f_{i}$. For simplicity, we assume that $E=\mathbf{Q}(f_{1})=\mathbf{Q}({f_{2}})=\mathbf{Q}({f_{3}})$ in this introduction. Let $\lambda$ be a place of $E$ above $l$ and $E_{\lambda}$ be the completion of $E$ at $\lambda$. Let $\mathcal{O}=\mathcal{O}_{E_{\lambda}}$ be the ring of integers of $E_{\lambda}$. We denote by $\varpi$ a uniformizer of $\mathcal{O}$ and set $\mathcal{O}_{n}=\mathcal{O}/\varpi^{n}$ for any $n\geq 1$. Let $\phi_{i}: \mathbb{T}\rightarrow \mathcal{O}$ be the natural morphism from the $l$-adic Hecke algebra to $\mathcal{O}$ corresponding to the Hecke eigensystem of $f_{i}$ and let $\phi_{i, n}: \mathbb{T}\rightarrow \mathcal{O}_{n}$ be the reduction of $\phi_{i}$ modulo $\varpi^{n}$. Here the $l$-adic Hecke algebra $\mathbb{T}$ is the $l$-adic completion of the Hecke algebra that acts faithfully on the subspace of $S_{2}(\Gamma_{0}(N))$ that is new at primes dividing $N^{-}$. In particular, $\phi_{i}$ sends the Hecke operator $T_{p}$ to the $p$-th Fourier coefficient $a_{p}(f_{i})$ of $f_{i}$ for $p\nmid N$. We denote by $I_{i, n}$ the kernel of $\phi_{i, n}$ and by $\mathfrak{m}_{i}$ the maximal ideal in $\mathbb{T}$ containing $I_{i, n}$. Let $\mathfrak{m}_{\underline{\mathbf{f}}}=(\mathfrak{m}_{1}, \mathfrak{m}_{2}, \mathfrak{m}_{3})$. We will always assume that the maximal ideals $\mathfrak{m}_{i}$ are \emph{residually irreducible} in the sense explained below. Let
\begin{equation*}
\rho_{i}: G_{\mathbf{Q}}\rightarrow \mathrm{GL}(\mathrm{V}_{i})
\end{equation*}
be the Galois representation attached to $f_{i}$. Then we denote by $\bar{\rho}_{i}$ the residual Galois representation of $\rho_{i}$. We say $\mathfrak{m}_{i}$ is residually irreducible if $\bar{\rho}_{i}$ is absolutely irreducible.
We introduce the notion of \emph{$n$-admissible primes} for $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ in Definition \ref{n-adm}. A prime $p$ is $n$-admissible for the triple $\underline{\mathbf{f}}$ if
\begin{enumerate}
\item $p\nmid Nl$;
\item $l\nmid p^{2}-1$;
\item $\varpi^{n}\mid p+1-\epsilon_{p, i}a_{p}(f_{i})$ for $i=1, 2, 3$, where $\epsilon_{p, i}\in \{\pm1\}$;
\item $\epsilon_{p, 1}\epsilon_{p, 2}\epsilon_{p, 3}=1$.
\end{enumerate}
This is an extension of the notion of $n$-admissible prime in \cite{BD-Main} to the triple product setting; loosely speaking, these are the primes $p$ for which one can find a triple $\underline{\mathbf{f}}^{[p]}=(f^{[p]}_{1}, f^{[p]}_{2}, f^{[p]}_{3})$ of weight $2$ newforms of level $pN$ that are congruent to $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ modulo $\varpi^{n}$. We highlight the additional condition $(4)$. This condition is imposed to produce a sign change for the triple product $L$-function attached to $\underline{\mathbf{f}}^{[p]}=(f^{[p]}_{1}, f^{[p]}_{2}, f^{[p]}_{3})$ at $p$. Indeed, by \cite[1.3]{GK92} the local sign at $p$ is given by $-\epsilon_{p,1}\epsilon_{p, 2}\epsilon_{p, 3}$, which is $-1$ by our assumption. This sign change makes it reasonable to consider a cycle class attached to the triple $\underline{\mathbf{f}}^{[p]}$ in light of the Bloch-Kato conjecture of odd rank. For the triple $\underline{\mathbf{f}}^{[p]}$ and $i=1, 2, 3$, we have morphisms $\phi^{[p]}_{i}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}$ and $\phi^{[p]}_{i,n}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}_{n}$ defined similarly as before, with $\mathbb{T}^{[p]}$ the $l$-adic Hecke algebra corresponding to the subspace of $S_{2}(\Gamma_{0}(pN))$ that is new at primes dividing $pN^{-}$. Let $I^{[p]}_{i, n}$ be the kernel of $\phi^{[p]}_{i,n}$ and let $\mathfrak{m}^{[p]}_{i}$ be the maximal ideal containing $I^{[p]}_{i, n}$ in $\mathbb{T}^{[p]}$. Let $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}=(\mathfrak{m}^{[p]}_{1}, \mathfrak{m}^{[p]}_{2}, \mathfrak{m}^{[p]}_{3})$. We fix such an $n$-admissible prime $p$ for $\underline{\mathbf{f}}$. For the cycle class, it is natural to consider the diagonal cycle on the triple product of Shimura curves. The diagonal cycles on the triple product of curves are generally referred to as the \emph{Gross-Schoen diagonal cycles}. These cycles were introduced and studied in \cite{GS95}.
Their connections to triple product $L$-functions are given in \cite{GK92}, \cite{YZZ-dia}. To define the Shimura curves in our case, we need to introduce the following quaternion algebras. Let $B$ be the definite quaternion algebra over $\mathbf{Q}$ of discriminant $N^{-}$ and $B^{\prime}$ be the indefinite quaternion algebra over $\mathbf{Q}$ of discriminant $pN^{-}$. Then one can associate a Shimura set $X^{B}=X^{B}_{N^{+}, N^{-}}$ to $B$ and a Shimura curve $X=X^{B^{\prime}}_{N^{+}, pN^{-}}$ over $\mathbf{Q}$ to $B^{\prime}$. We refer the reader to \S 2.1 for the constructions. In particular, we have an integral model $\mathfrak{X}$ of $X$ over $\mathbf{Z}_{(p)}$. Note that since $p$ is ramified in $B^{\prime}$, the completion of $\mathfrak{X}$ along its special fiber admits the \emph{Cerednik-Drinfeld uniformization}. We consider the diagonal morphism
\begin{equation*}
\theta: \mathfrak{X}\rightarrow \mathfrak{X}^{3}
\end{equation*} of $\mathfrak{X}$ into the triple fiber product $\mathfrak{X}^{3}$. We thus obtain a cycle class $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]\in \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes\mathbf{Q})$ in the Chow group of $\mathfrak{X}^{3}\otimes\mathbf{Q}$. Since the triple $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$ is residually irreducible, the K\"{u}nneth formula implies that
\begin{equation*}
\mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{{\mathbf{Q}}^{\mathrm{ac}}}, \mathcal{O}(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=(\otimes^{3}_{i=1}\mathrm{H}^{1}(X_{\mathbf{Q}^{\mathrm{ac}}}, \mathcal{O}(1))_{\mathfrak{m}^{[p]}_{i}})(-1)
\end{equation*}
and $\mathrm{H}^{*}(\mathfrak{X}^{3}\otimes{{\mathbf{Q}}^{\mathrm{ac}}}, \mathcal{O}(2))_{\mathfrak{m}_{\underline{\mathbf{f}}}^{[p]}}$ vanishes identically for $*\neq 3$. Thus the cycle class map and the Hochschild-Serre spectral sequence induce the following \emph{Abel-Jacobi map}
\begin{equation*}
\mathrm{AJ}_{\underline{\mathbf{f}}}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{{\mathbf{Q}}^{\mathrm{ac}}}, \mathcal{O}(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}).
\end{equation*}
We denote by $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})$ the Galois module over $\mathcal{O}_{n}$ given by
\begin{equation*}
\otimes^{3}_{i=1}\mathrm{H}^{1}(X_{{\mathbf{Q}}^{\mathrm{ac}}}, \mathcal{O}(1))/I^{[p]}_{i, n}.
\end{equation*}
The Abel-Jacobi map composed with the canonical map
\begin{equation*}
\mathrm{H}^{3}(X^{3}\otimes{{\mathbf{Q}}^{\mathrm{ac}}}, \mathcal{O}(2))_{\mathfrak{m}_{\underline{\mathbf{f}}}^{[p]}}\rightarrow \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)
\end{equation*}
gives rise to the following mod $\varpi^{n}$-version of the Abel-Jacobi map
\begin{equation*}
\mathrm{AJ_{\underline{\mathbf{f}}, n}}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)).
\end{equation*}
We thus obtain a global cohomology class $\Theta^{[p]}_{n}\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ by applying the map $\mathrm{AJ_{\underline{\mathbf{f}}, n}}$ to the cycle $\theta_{*}[\mathfrak{X}\otimes \mathbf{Q}]\in \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})$.
On the other hand, let $\underline{\mathbf{f}}^{B}=(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})\in S^{B}_{2}(N^{+}, \mathcal{O})^{\oplus 3}$ be the \emph{Jacquet-Langlands transfer} of $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ in the space of \emph{quaternionic modular forms} $S^{B}_{2}(N^{+}, \mathcal{O})^{\oplus 3}$ as in \cite[Definition 1.1]{BD-Main}. We consider the following period integral
\begin{equation*}
I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})=\sum_{z\in X^{B}} f^{B}_{1}(z)f^{B}_{2}(z)f^{B}_{3}(z).
\end{equation*}
By the main result of \cite{KH91}, which resolves a conjecture of Jacquet, this period integral is non-vanishing if $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ is non-vanishing. The first goal of this article is to provide an explicit relation between the cohomology class $\Theta^{[p]}_{n}$ and the period integral $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})$. To compare these two objects, we pass the class $\Theta^{[p]}_{n}$ living in the global Galois cohomology group $\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ to its \emph{singular quotient} $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ at $p$
whose definition is recalled in \eqref{fin-sing}. The singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ itself has a geometric description and this description is referred to as the \emph{ramified arithmetic level raising} in the recent literature \cite{LT}, \cite{LTXZZ}. Here the word ramified refers to the fact that the relevant Shimura variety has bad reduction at the prime $p$. Our first main result is such an arithmetic level raising theorem for the triple product of Shimura curves.
\begin{thm}[Ramified arithmetic level raising]\label{level-raise-intro}
Let $p$ be an $n$-admissible prime for the triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$. For each $i=1, 2, 3$, assume that
\begin{enumerate}
\item the maximal ideal $\mathfrak{m}_{i}$ is residually irreducible;
\item each $S^{B}_{2}(N^{+}, \mathcal{O})_{\mathfrak{m}_{i}}$ is a free rank $1$ module over $\mathbb{T}_{\mathfrak{m}_{i}}$.
\end{enumerate}
Then the singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ is free of rank $3$ over $\mathcal{O}_{n}$ and we have an isomorphism
\begin{equation}\label{intro-equ}
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})/I_{i,n}).
\end{equation}
\end{thm}
This theorem is proved in Theorem \ref{arithmetic-level-raising} and Corollary \ref{main-coro}. We remark that the hypothesis $(2)$ on the forms $f_{i}$ is called \emph{$l$-isolated} in \cite{BD-Main}. One can achieve hypothesis $(2)$ by imposing conditions on the residual Galois representation $\bar{\rho}_{i}$ of $\rho_{i}$ and then applying the Taylor-Wiles argument refined by Diamond in \cite{Diamond-TW}. For example, one can impose the following conditions in \cite[Hypothesis $\mathrm{CR}^{+}$]{CH-1}.
\begin{ass}[$\mathrm{CR}^{+}$] For $i=1, 2, 3$, the residual Galois representation $\bar{\rho}_{i}$ satisfies the following assumptions
\begin{enumerate}
\item $\bar{\rho}_{i}$ is absolutely irreducible when restricted to $G_{\mathbf{Q}(\sqrt{p^{*}})}$ where $p^{*}=(-1)^{\frac{p-1}{2}}p$;
\item If $q\mid N^{-}$ and $q\equiv \pm1\mod l$, then $\bar{\rho}_{i}$ is ramified at $q$;
\item If $q\mid \mid N^{+}$ and $q\equiv 1\mod l$, then $\bar{\rho}_{i}$ is ramified at $q$;
\item The Artin conductor $N_{\bar{\rho}_{i}}$ is prime to $N/N_{\bar{\rho}_{i}}$.
\end{enumerate}
\end{ass}
We refer the reader to \cite[Proposition 6.8]{CH-1} for an exposition of the Taylor-Wiles argument mentioned above to achieve hypothesis $(2)$ in Theorem \ref{level-raise-intro}. The proof of Theorem \ref{level-raise-intro} is geometric in nature and proceeds by analyzing the semistable model of $X^{3}$ at an $n$-admissible prime $p$. To calculate the singular quotient, we use the machinery developed in \cite{Liu-cubic}, in particular the so-called \emph{potential map}, to which we give a down-to-earth introduction in the first part of the article. The right-hand side of the isomorphism \eqref{intro-equ} can be described even more geometrically in terms of the \emph{component groups} of the Shimura curves at $p$. Let $\mathcal{J}$ be the N\'{e}ron model of the Jacobian $\mathrm{Jac}(X_{\mathbf{Q}_{p^{2}}})$ of $X_{\mathbf{Q}_{p^{2}}}$ and let $\Phi$ be the group of connected components of the special fiber of $\mathcal{J}$; then the isomorphism in \eqref{intro-equ} comes more canonically from the following isomorphism
\begin{equation}
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}\Phi_{\mathcal{O}} /I^{[p]}_{i,n}).
\end{equation}
Comparing with the pioneering works on arithmetic level raising \cite{BD-Main}, \cite{Liu-cubic}, \cite{LTXZZ}, one of the main differences in our setting is that the singular quotient has rank $3$, as opposed to rank $1$ in all the previously mentioned works. This interesting phenomenon suggests some richer structures hidden in our setting which we do not completely understand yet. Nevertheless, the above theorem still provides the natural setting for relating the cohomology class $\Theta^{[p]}_{n}$ to the period integral $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})=\sum_{z\in X^{B}} f^{B}_{1}(z)f^{B}_{2}(z)f^{B}_{3}(z).$ In order to state this relation, we introduce a natural pairing
\begin{equation*}
\begin{aligned}
(\hphantom{a},\hphantom{a}):\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})[I_{i, n}]\times \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})/(I_{i, n})\rightarrow \mathcal{O}_{n}\\
\end{aligned}
\end{equation*}
in \eqref{pairing}.
Let $\partial_{p}\Theta^{[p]}_{n}$ be the image of $\Theta^{[p]}_{n}$ in $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$. For $j=1, 2, 3$, we denote by
\begin{equation*}
\partial^{(j)}_{p}\Theta^{[p]}_{n}\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}){/I_{i,n}}
\end{equation*}
its projection to the $j$-th copy of
\begin{equation*}
\oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}){/I_{i,n}}).
\end{equation*}
Then our second main result is the following explicit reciprocity formula which we will call the \emph{first reciprocity law}.
\begin{thm}\label{recip-intro}
Let $p$ be an $n$-admissible prime for the triple $\underline{\mathbf{f}}$. We assume the assumptions in Theorem \ref{level-raise-intro} are satisfied. Let $\phi_{1}\otimes \phi_{2}\otimes \phi_{3}\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})[I_{i,n}]$. Then we have the following formula,
\begin{equation}\label{reci-formula}
(\partial^{(j)}_{p}\Theta^{[p]}_{n}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3})=(p+1)^{3}\sum_{z\in X^{B}}\phi_{1}(z)\phi_{2}(z)\phi_{3}(z)
\end{equation}
for $j=1, 2, 3$.
\end{thm}
This formula is the analogue of the \emph{first reciprocity law} in \cite[Theorem 4.1]{BD-Main} and of the \emph{congruence formulae} in \cite[Theorem 4.11]{Liu-HZ} and \cite[Theorem 4.5]{Liu-cubic}. The right-hand side of \eqref{reci-formula} clearly represents the period integral $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})$, and the formula suggests that the class $\Theta^{[p]}_{n}$ should be useful for bounding the Selmer group. However, as we explained before, the singular quotient is of rank $3$ and the class $\Theta^{[p]}_{n}$ by itself is not enough to bound the Selmer group. Combining the present work with the companion article \cite{Wang} suggests the following intuitive picture: the Selmer group of the level raised triple $\underline{\mathbf{f}}^{[p]}$ skips the possibility of being of rank $1$ and jumps directly to rank $3$ from the rank $0$ Selmer group of the original triple $\underline{\mathbf{f}}$. Therefore it is reasonable to expect three global classes that fill up the singular quotient. We invite the reader to compare this with the paper of Darmon-Rotger \cite{DR-2}. They considered the so-called \emph{unbalanced case}, where the weights of the triple of modular forms are $(2, 1, 1)$; the singular quotient at $p$ there is a direct sum of four one-dimensional spaces. Starting from a primitive diagonal cycle class similar to our $\Theta^{[p]}_{n}$ and using techniques from $p$-adic deformations (Hida families), they managed to find $4$ global classes which fill up the singular quotient, each class lying in exactly one of the four one-dimensional spaces. Using these classes, they can bound a certain equivariant part of the Mordell-Weil group of an elliptic curve, but not the Selmer group. This suggests that we should be able to modify the class $\Theta^{[p]}_{n}$ and construct three classes each of which sits in exactly one of the three direct summands of
\begin{equation*}
\oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}){/I_{i, n}}).
\end{equation*}
Moreover these three classes should satisfy a similar reciprocity law as the one we obtained for $\Theta^{[p]}_{n}$ in Theorem \ref{recip-intro}. Using these conjectural classes, we are indeed able to bound the Selmer group for the triple product motive. Therefore we regard the present work as the first step towards the rank $0$ case of the Bloch-Kato conjecture for the triple product motive of modular forms.
\subsection{The symmetric cube motive}
It is natural to consider the degenerate case when $f_{1}=f_{2}=f_{3}=f$ for a single modular form $f\in S^{\mathrm{new}}_{2}(\Gamma_{0}(N))$. We denote by $\mathrm{V}_{f}$ the Galois representation attached to $f$. In this case the triple tensor product representation factors as
\begin{equation*}
\mathrm{V}_{f}^{\otimes 3}(-1)= \mathrm{Sym}^{3} \mathrm{V}_{f}(-1)\oplus \mathrm{V}_{f}\oplus \mathrm{V}_{f}.
\end{equation*}
Correspondingly, we have a factorization of the $L$-function
\begin{equation*}
L(f\otimes f \otimes f, s)= L(\mathrm{Sym}^{3}f, s) L(f, s-1)^{2}.
\end{equation*}
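The factorization above can be traced back to the standard plethysm identity for a two-dimensional representation, which we record for the reader's convenience (a routine computation, not taken from the references cited here): for any two-dimensional space $V$ one has
\begin{equation*}
V^{\otimes 3}\cong \mathrm{Sym}^{3}V\oplus (V\otimes \wedge^{2}V)^{\oplus 2},
\end{equation*}
with dimensions $8=4+2+2$. Applying this with $V=\mathrm{V}_{f}$ and twisting by $\Lambda(-1)$ recovers the displayed decomposition: the two copies of $V\otimes \wedge^{2}V$ contribute the two summands $\mathrm{V}_{f}$, and hence the factor $L(f, s-1)^{2}$ in the factorization of the $L$-function.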
We introduce in this case the notion of \emph{$(n, 1)$-admissible primes} for $f$. A prime $p$ is an $(n, 1)$-admissible prime for $f$ if
\begin{enumerate}
\item $p\nmid Nl$;
\item $l\nmid p^{2}-1 $;
\item $\varpi^{n}\mid p+1-a_{p}(f)$.
\end{enumerate}
Compared with the definition of an $n$-admissible prime for a triple $(f_{1}, f_{2}, f_{3})$, we require here that $\epsilon_{p,i}=1$ for all $i=1, 2, 3$. We let $\underline{\mathbf{f}}=(f, f, f)$. Let $\mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)$ be the symmetric cube component of $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)$. Then we find that the singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ at $p$ is in fact of rank $1$, so the Euler system argument can be applied. However, the factorizations of the $L$-function and of the Galois representation suggest that the results concerning the rank $0$ case of the Bloch-Kato conjecture here will be conditional on the non-vanishing of $L(f, 1)$. Therefore we have the following result.
\begin{thm}\label{main-symm}
Suppose that the modular form $f$ satisfies the following assumptions:
\begin{enumerate}
\item the maximal ideals $\mathfrak{m}_{f}$ are all residually irreducible;
\item the $\mathbb{T}_{\mathfrak{m}_{f}}$-module $S^{B}_{2}(N^{+}, \mathcal{O})_{\mathfrak{m}_{f}}$ is free of rank $1$;
\item the residual Galois representation $\bar{\rho}_{f}$ is surjective;
\item the value $L(f, 1)$ is non-vanishing.
\end{enumerate}
Assume that the value $L(\mathrm{Sym}^{3}f, 2)$ is non-zero. Then the Bloch-Kato Selmer group for $\mathrm{Sym}^{3} \mathrm{V}_{f}(-1)$ vanishes:
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{Sym}^{3} \mathrm{V}_{f}(-1))=0.
\end{equation*}
\end{thm}
This theorem is proved in the last section of this article and is based on a familiar Euler system argument, see \cite{Gro-Koly}, \cite{BD-Main}, \cite{Liu-HZ} and \cite{Liu-cubic} for a few examples. A similar result in the rank $1$ case for the Bloch-Kato Selmer group of $\mathrm{Sym}^{3} \mathrm{V}_{f}(-1)$ will be proved in \cite{Wang}.
\subsection{Bipartite Euler system and other applications}
We close this introduction by discussing some applications and questions the present work points to.
In the companion work \cite{Wang}, we prove the \emph{second reciprocity law} in our setting, which should have applications to the rank $1$ case of the Bloch-Kato conjecture for the triple product motive of modular forms. It also implies that the diagonal cycle classes $\Theta^{[p]}_{n}$ and the period integrals $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})$ form an analogue of the Bipartite Euler system in the sense of Howard \cite{How}. Note that the definition there requires that the only Frobenius eigenvalues on the local Galois representation at an $n$-admissible prime $p$ be $p$ and $1$, and therefore that the singular quotient be of rank $1$. This is obviously different from our setting, so what we have for the triple product motive is only an analogue of his Bipartite Euler system. On the other hand, the classes $\Theta^{\diamond[p]}_{n}$ we produce for the symmetric cube motive of modular forms do form a Bipartite Euler system in his sense.
Secondly, by replacing the $l$-adic \'{e}tale cohomology and the $l$-adic Abel-Jacobi map in the present work with crystalline cohomology and the $p$-adic Abel-Jacobi map, one should be able to prove certain types of $p$-adic Gross-Zagier formulas for triple product $p$-adic $L$-functions. See \cite{Hsieh} for the relevant construction of $p$-adic $L$-functions and \cite{BD-unif} for this type of $p$-adic Gross-Zagier formula for Heegner points on Shimura curves.
\subsection{Notations and conventions} We will use common notations and conventions from algebraic number theory and algebraic geometry. The cohomology groups of schemes appearing in this article are understood to be computed over the \'{e}tale sites.
For a field $K$, we denote by $K^{\mathrm{ac}}$ a separable closure of $K$ and set $G_{K}:=\mathrm{Gal}(K^{\mathrm{ac}}/K)$, the Galois group of $K$. We let $\mathbf{A}$ be the ring of ad\`{e}les over $\mathbf{Q}$ and $\mathbf{A}^{\infty}$ the subring of finite ad\`{e}les. For a prime $p$, $\mathbf{A}^{\infty, (p)}$ is the prime-to-$p$ part of $\mathbf{A}^{\infty}$.
Let $K$ be a local field with ring of integers $\mathcal{O}_{K}$ and residue field $k$. We let $I_{K}$ be the inertia subgroup of $G_{K}$. Suppose $\mathrm{M}$ is a $G_{K}$-module. Then the finite part $\mathrm{H}^{1}_{\mathrm{fin}}(K, \mathrm{M})$ of $\mathrm{H}^{1}(K, \mathrm{M})$ is defined to be $\mathrm{H}^{1}(k, \mathrm{M}^{I_{K}})$ and the singular part $\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})$ of $\mathrm{H}^{1}(K, \mathrm{M})$ is defined to be the quotient of $\mathrm{H}^{1}(K, \mathrm{M})$ by the image of $\mathrm{H}^{1}_{\mathrm{fin}}(K, \mathrm{M})$.
Two quaternion algebras will be used in this article: the definite quaternion algebra $B$ with discriminant $N^{-}$ and the indefinite quaternion algebra $B^{\prime}$ with discriminant $pN^{-}$.
\subsection*{Acknowledgements} We would like to thank Henri Darmon and Liang Xiao for valuable discussions about this work. We would like to thank Minglun Hsieh for introducing the work of Bertolini-Darmon to the author in graduate school and for his interest in the present work. We also acknowledge the deep debt this article owes to the pioneering works of Bertolini-Darmon and Yifeng Liu. We are truly grateful to Pengfei Guan for his generous support in the unsettling period of time during which this article was written.
\section{Review of weight spectral sequence}
\subsection{Nearby cycles on semi-stable schemes} Let $K$ be a henselian discrete valuation field with residue field $k$ of characteristic $p$ and valuation ring $\mathcal{O}_{K}$. We fix a uniformizer $\pi$ of $\mathcal{O}_{K}$. We set $S=\mathrm{Spec}(\mathcal{O}_{K})$, $s=\mathrm{Spec}(k)$ and $\eta=\mathrm{Spec}(K)$. Let $K^{\mathrm{ac}}$ be a separable closure of $K$ and $K_{\mathrm{ur}}$ the maximal unramified extension of $K$ in $K^{\mathrm{ac}}$. We denote by $k^{\mathrm{ac}}$ the residue field of $K_{\mathrm{ur}}$.
Let $I_{K}=\mathrm{Gal}(K^{\mathrm{ac}}/K_{\mathrm{ur}})\subset G_{K}=\mathrm{Gal}(K^{\mathrm{ac}}/K)$ be the inertia group. Let $l$ be a prime different from $p$. We set $t_{l}: I_{K}\rightarrow \Lambda(1)$ to be the canonical surjection given by
\begin{equation*}
\sigma \mapsto (\sigma(\pi^{1/l^{m}})/\pi^{1/l^{m}})_{m}
\end{equation*}
for every $\sigma\in I_{K}$.
Let $\mathfrak{X}$ be a \emph{strict semi-stable scheme} over $S$ purely of relative dimension $n$ which we also assume to be proper. This means that $\mathfrak{X}$ is locally of finite presentation and Zariski locally \'{e}tale over $$\mathrm{Spec}(\mathcal{O}_{K}[X_{1}, \cdots, X_{n}]/(X_{1}\cdots X_{r}-\pi))$$ for some integer $1\leq r\leq n$. We let $X_{k}$ be the special fiber of $\mathfrak{X}$ and $X_{k^{\mathrm{ac}}}$ be its base-change to $k^{\mathrm{ac}}$. Let $X=\mathfrak{X}_{\eta}$ be the generic fiber of $\mathfrak{X}$ and $X_{K_{\mathrm{ur}}}$ be its base-change to $K_{\mathrm{ur}}$. We have the following natural maps
\begin{equation*}
\begin{aligned}
&i:X_{k}\rightarrow \mathfrak{X},\\
&j: X\rightarrow \mathfrak{X}, \\
&\bar{i}: X_{k^{\mathrm{ac}}}\rightarrow \mathfrak{X}_{\mathcal{O}_{K_{\mathrm{ur}}}},\\
& \bar{j}: X_{K_{\mathrm{ur}}}\rightarrow \mathfrak{X}_{\mathcal{O}_{K_{\mathrm{ur}}}}. \\
\end{aligned}
\end{equation*}
Throughout this paper, we fix a prime $l\geq 5$. Let $\Lambda$ be $\mathbf{Z}/l^{v}, \mathbf{Z}_{l}$ or a finite extension of $\mathbf{Z}_{l}$. We define
the \emph{Nearby cycle sheaf} by
\begin{equation*}
R^{q}\Psi(\Lambda)= \bar{i}^{*}R^{q}\bar{j}_{*}\Lambda
\end{equation*}
and the \emph{Nearby cycle complex} by
\begin{equation*}
R\Psi(\Lambda)= \bar{i}^{*}R\bar{j}_{*}\Lambda.
\end{equation*}
We regard the latter as an object in the derived category $D^{+}(X_{k^{\mathrm{ac}}}, \Lambda[I_{K}])$ of sheaves of $\Lambda$-modules with continuous $I_{K}$-actions. By proper base change, we always have
\begin{equation*}
\mathrm{H}^{*}(X_{k^{\mathrm{ac}}}, R\Psi(\Lambda))=\mathrm{H}^{*}(X_{K^{\mathrm{ac}}}, \Lambda).
\end{equation*}
Let $D_{1},\cdots, D_{m}$ be the irreducible components of $X_{k}$. For each index set $I\subset \{1, \cdots, m\}$ of cardinality $p+1$, we set $X_{I, k}=\cap_{i\in I} D_{i}$; this is a smooth scheme of dimension $n-p$. For $0\leq p \leq m-1$, let
\begin{equation}
X^{(p)}_{k}=\bigsqcup_{I\subset \{1, \cdots, m\}, \mathrm{Card}(I)=p+1} X_{I, k}
\end{equation}
and
\begin{equation}
a_{p}: X^{(p)}_{k}\rightarrow X_{k}
\end{equation}
be the projection. Then $a_{p *}\Lambda=\wedge^{p+1}a_{0 *}\Lambda$. Consider the Kummer exact sequence in the case $\Lambda=\mathbf{Z}/l^{v}$
\begin{equation}
0\rightarrow \Lambda(1)\rightarrow \mathcal{O}^{*}_{X} \xrightarrow{l^{v}} \mathcal{O}^{*}_{X}\rightarrow 0.
\end{equation}
Let $\partial(\pi)\in i^{*}R^{1}j_{*}\Lambda(1)$ be the image of $\pi$ under the coboundary map obtained by applying $i^{*}Rj_{*}$ to the above exact sequence. We let $\theta: \Lambda_{X_{k}}\rightarrow i^{*}R^{1}j_{*}\Lambda(1)$ be the map sending $1$ to $\partial(\pi)$ and $\delta:\Lambda_{X_{k}}\rightarrow a_{0*}\Lambda$ be the canonical map. Then we have the following results regarding the resolution of the Nearby cycle sheaf.
\begin{proposition}\label{nearby-cycle-q}
We have the following.
\begin{enumerate}
\item There is an isomorphism of exact sequences
\begin{equation*}
\begin{tikzcd}
\Lambda_{X_{k}}\arrow{r}{\delta}\arrow{d}&a_{0*}\Lambda\arrow{r}{\delta\wedge}\arrow{d}&\cdots\arrow{r}{\delta\wedge}\arrow{d}&a_{n*}\Lambda\arrow{r}\arrow{d}&0\\
\Lambda_{X_{k}}\arrow{r}{\theta}&i^{*}R^{1}j_{*}\Lambda(1)\arrow{r}{\theta\cup}&\cdots\arrow{r}{\theta\cup}&i^{*}R^{n+1}j_{*}\Lambda(n+1)\arrow{r}&0\\
\end{tikzcd}.
\end{equation*}
\item For $p\geq 0$, we have an exact sequence
\begin{equation*}
\begin{tikzcd}
R^{p}\Psi(\Lambda)\arrow{r}{\theta\cup}&i^{*}R^{p+1}j_{*}\Lambda(1)\arrow{r}{\theta\cup}&\cdots\arrow{r}{\theta\cup}&i^{*}R^{n+1}j_{*}\Lambda(n+1-p)\arrow{r}&0\\
\end{tikzcd}.
\end{equation*}
\item For $p\geq 0$, we have a quasi-isomorphism of complexes
\begin{equation*}
R^{p}\Psi(\Lambda)[-p]\xrightarrow{\sim} [a_{p*}\Lambda(-p)\xrightarrow{\delta\wedge}\cdots\xrightarrow{\delta\wedge} a_{n*}\Lambda(-p)\rightarrow0].
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
The first two statements are taken from \cite[Corollary 1.3]{Saito} and the last one is an immediate consequence of the first two.
\end{proof}
\subsection{Monodromy filtration and spectral sequence} Suppose $A$ is an object in an abelian category and $N$ is a nilpotent endomorphism on $A$. We define two filtrations on $A$.
\begin{itemize}
\item We define the kernel filtration $F_{\bullet}$ by putting $F_{p}A=\ker(N^{p+1}: A\rightarrow A)$ for $p\geq 0$.
\item We define the image filtration $G^{\bullet}$ by putting $G^{q}A=\mathrm{im}(N^{q}: A\rightarrow A)$ for $q>0$.
\end{itemize}
Using these two filtrations,
\begin{itemize}
\item we define the convolution filtration $M_{\bullet}A$ by $M_{r}A=\sum_{p-q=r}F_{p}A\cap G^{q}A$.
\end{itemize}
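The following toy example, which is our own sanity check rather than one taken from the references, computes these filtrations for a single nilpotent Jordan block of size $2$: take $A=\Lambda e_{1}\oplus \Lambda e_{2}$ with $Ne_{2}=e_{1}$ and $Ne_{1}=0$, so that $N^{2}=0$. Then
\begin{equation*}
\begin{aligned}
&F_{0}A=\ker N=\Lambda e_{1}, && F_{1}A=A,\\
&G^{1}A=\mathrm{im}\,N=\Lambda e_{1}, && G^{2}A=0,\\
&M_{-1}A=F_{0}A\cap G^{1}A=\Lambda e_{1}, && M_{0}A=\Lambda e_{1}, \quad M_{1}A=A,
\end{aligned}
\end{equation*}
so that $Gr^{M}_{1}A=A/\Lambda e_{1}$, $Gr^{M}_{0}A=0$ and $Gr^{M}_{-1}A=\Lambda e_{1}$, and $N$ induces an isomorphism $Gr^{M}_{1}A\rightarrow Gr^{M}_{-1}A$ sending $e_{2}$ to $e_{1}$, as expected.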
To calculate the graded piece of the convolution filtration, we define the induced $G^{\bullet}$-filtration on the graded piece of $F_{\bullet}$ by $G^{q}Gr^{F}_{p}A=\mathrm{im}(G^{q}A\cap F_{p}A\rightarrow Gr^{F}_{p}A)$. Then we see that the $r$-th graded piece of the convolution filtration $M_{\bullet}A$ is given by
\begin{equation*}
Gr^{M}_{r}A\cong\bigoplus_{p-q=r}Gr^{q}_{G}Gr^{F}_{p}A.
\end{equation*}
The convolution filtration in this case is known as the \emph{Monodromy filtration} and it is characterized by
\begin{enumerate}
\item $M_{n}A=A$ and $M_{-n-1}A=0$.
\item $N: A\rightarrow A$ sends $M_{r}A$ into $M_{r-2}A$ for $r\in\mathbf{Z}$.
\item $N^{r}: Gr^{M}_{r}A\rightarrow Gr^{M}_{-r}A$ is an isomorphism.
\end{enumerate}
Let $A=R\Psi(\Lambda)$. Let $T$ be an element of $I_{K}$ such that $t_{l}(T)$ is a generator of $\Lambda(1)$; then $T-1$ induces a nilpotent operator on $R\Psi(\Lambda)$. Let $N=(T-1)\otimes\breve{T}$ where $\breve{T}\in \Lambda(-1)$ is the dual of $t_{l}(T)$. Then, with respect to $N$, we have the following characterization of the Monodromy filtration on $R\Psi(\Lambda)$:
\begin{enumerate}
\item $M_{n}R\Psi(\Lambda)=R\Psi(\Lambda)$ and $M_{-n-1}R\Psi(\Lambda)=0$.
\item $N: R\Psi(\Lambda)(1)\rightarrow R\Psi(\Lambda)$ sends $M_{r}R\Psi(\Lambda)(1)$ into $M_{r-2}R\Psi(\Lambda)$ for $r\in\mathbf{Z}$.
\item $N^{r}: Gr^{M}_{r}R\Psi(\Lambda)(r)\rightarrow Gr^{M}_{-r}R\Psi(\Lambda)$ is an isomorphism.
\end{enumerate}
We can use Proposition \ref{nearby-cycle-q} to calculate the Monodromy filtration on $R\Psi(\Lambda)$. In addition to Proposition \ref{nearby-cycle-q}, we need the following results from \cite[Lemma 2.5, Corollary 2.6]{Saito}.
\begin{enumerate}
\item The kernel filtration $F_{p}R\Psi(\Lambda)$ is given by the canonical truncated filtration $\tau_{\leq p}R\Psi(\Lambda)$ and therefore
\begin{equation}\label{grad-ker}
Gr^{F}_{p}R\Psi(\Lambda)\cong R^{p}\Psi(\Lambda)[-p]\cong [a_{p*}\Lambda(-p)\xrightarrow{\delta\wedge}\cdots\xrightarrow{\delta\wedge} a_{n*}\Lambda(-p)\rightarrow0].
\end{equation}
\item The image filtration $G^{q}Gr^{F}_{p}R\Psi(\Lambda)$ on $Gr^{F}_{p}R\Psi(\Lambda)$ is given by the truncation of \eqref{grad-ker} at the $(p+q)$-th position
\begin{equation*}
G^{q}Gr^{F}_{p}R\Psi(\Lambda)= [a_{p+q*}\Lambda(-p)\xrightarrow{\delta\wedge}\cdots\xrightarrow{\delta\wedge} a_{n*}\Lambda(-p)\rightarrow0].
\end{equation*}
\item Combining the above two results, we have
\begin{equation*}
Gr^{q}_{G}Gr^{F}_{p}R\Psi(\Lambda)=a_{p+q*}\Lambda[-p-q](-p)
\end{equation*}
and therefore we arrive at the following equation
\begin{equation*}
Gr^{M}_{r}R\Psi(\Lambda)=\bigoplus_{p-q=r}a_{p+q*}\Lambda[-p-q](-p).
\end{equation*}
\end{enumerate}
The Monodromy filtration induces the \emph{weight spectral sequence}
\begin{equation}\label{wt-seq}
\mathrm{E}^{p,q}_{1}= \mathrm{H}^{p+q}(X_{k^{\mathrm{ac}}}, Gr^{M}_{-p}R\Psi(\Lambda))\Rightarrow \mathrm{H}^{p+q}(X_{k^{\mathrm{ac}}}, R\Psi(\Lambda))=\mathrm{H}^{p+q}(X_{K^{\mathrm{ac}}}, \Lambda).
\end{equation}
The $\mathrm{E}_{1}$-term of this spectral sequence can be made explicit by
\begin{equation*}
\begin{aligned}
\mathrm{H}^{p+q}(X_{k^{\mathrm{ac}}}, Gr^{M}_{-p}R\Psi(\Lambda))&=\bigoplus_{i-j=-p, i\geq0, j\geq0}\mathrm{H}^{p+q-(i+j)}(X^{(i+j)}_{k^{\mathrm{ac}}}, \Lambda(-i))\\
&=\bigoplus_{i\geq\mathrm{max}(0, -p)}\mathrm{H}^{q-2i}(X^{(p+2i)}_{k^{\mathrm{ac}}}, \Lambda(-i)).\\
\end{aligned}
\end{equation*}
This spectral sequence was first found by Rapoport-Zink in \cite{RZ} and is thus also known as the Rapoport-Zink spectral sequence.
\subsection{Examples in dimension $1$ and $3$} We will make the weight spectral sequence explicit in dimension $1$ and dimension $3$, which are the only cases that will be used in the computations later. The following convention will be used throughout this article: we will write $\mathrm{H}^{*}(a_{p*}\Lambda)$ instead of $\mathrm{H}^{*}(X_{k^{\mathrm{ac}}}, a_{p*}\Lambda)=\mathrm{H}^{*}(X^{(p)}_{k^{\mathrm{ac}}}, \Lambda)$.
\subsubsection{One dimensional case} Let $\mathfrak{X}$ be a relative curve over $\mathrm{Spec}(\mathcal{O}_{K})$. Then we immediately calculate that
\begin{equation}
\begin{aligned}
&Gr^{M}_{-1}R\Psi(\Lambda)=a_{1*}\Lambda[-1], \\
&Gr^{M}_{0}R\Psi(\Lambda)=a_{0*}\Lambda, \\
&Gr^{M}_{1}R\Psi(\Lambda)=a_{1*}\Lambda[-1](-1). \\
\end{aligned}
\end{equation}
The $\mathrm{E}_{1}$-page of the weight spectral sequence is given by
\begin{center}
\begin{tikzpicture}[thick,scale=0.8, every node/.style={scale=0.8}]
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=5ex,
minimum height=5ex,outer sep=-5pt},
column sep=1ex,row sep=1ex]{
& & & & \\
2 & \mathrm{H}^{0}(a_{1*}\Lambda)(-1) & \mathrm{H}^{2}(a_{0*}\Lambda) & & \\
1 & & \mathrm{H}^{1}(a_{0*}\Lambda) & & \\
0 & & \mathrm{H}^{0}(a_{0*}\Lambda) & \mathrm{H}^{0}(a_{1*}\Lambda) &\\
\quad\strut & -1 & 0 & 1 & \strut \\};
\draw[thick] (m-1-1.east) -- (m-5-1.east) ;
\draw[thick] (m-5-1.north) -- (m-5-5.north) ;
\end{tikzpicture}
\end{center}
and it clearly degenerates at the $\mathrm{E}_{2}$-page. We therefore have the Monodromy filtration
\begin{equation*}
0\subset^{\mathrm{E}^{1,0}_{2}} M_{1}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda)\subset^{\mathrm{E}^{0,1}_{2}} M_{0}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda)\subset^{\mathrm{E}^{-1,2}_{2}} M_{-1}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}},\Lambda)
\end{equation*}
with graded pieces given by
\begin{equation*}
\begin{aligned}
&Gr^{M}_{-1}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda)=\ker[\mathrm{H}^{0}(a_{1*}\Lambda(-1))\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda)]\\
&Gr^{M}_{0}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda)= \mathrm{H}^{1}(a_{0*}\Lambda);\\
&Gr^{M}_{1}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda)= \mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)];\\
\end{aligned}
\end{equation*}
where $\tau$ is the \emph{Gysin morphism} and $\rho$ is the \emph{restriction morphism}.
Note that we have the following commutative diagram
\begin{equation}\label{picard-lef}
\begin{tikzcd}
\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda(1)) \arrow[r] \arrow[d, "N"] & \ker[\mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda)(1)] \arrow[d, "N"] \\
\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda) & \mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)] .\arrow[l]
\end{tikzcd}
\end{equation}
In this case, we recover the \emph{Picard-Lefschetz formula} if we identify $\mathrm{H}^{0}(a_{1*}\Lambda)$ with the \emph{vanishing cycles} $\oplus_{x}R\Phi(\Lambda)_{x}$ on $X_{k^{\mathrm{ac}}}$, where $x$ runs through the set of singular points $X^{(1)}_{k}$. We refer the reader to \cite[Example 2.4.6]{Illusie} for the details of this case.
\subsubsection{Three dimensional case} Let $\mathfrak{X}$ be a relative threefold over $\mathrm{Spec}(\mathcal{O}_{K})$. We can also easily list the graded pieces of the Monodromy filtration on $R\Psi(\Lambda)$
\begin{equation*}
\begin{aligned}
&Gr^{M}_{-3}R\Psi(\Lambda)=a_{3*}\Lambda[-3], \\
&Gr^{M}_{-2}R\Psi(\Lambda)=a_{2*}\Lambda[-2], \\
&Gr^{M}_{-1}R\Psi(\Lambda)=a_{1*}\Lambda[-1]\oplus a_{3*}\Lambda[-3](-1), \\
&Gr^{M}_{0}R\Psi(\Lambda)=a_{0*}\Lambda\oplus a_{2*}\Lambda[-1](-1), \\
&Gr^{M}_{1}R\Psi(\Lambda)=a_{1*}\Lambda[-1](-1)\oplus a_{3*}\Lambda[-3](-2),\\
&Gr^{M}_{2}R\Psi(\Lambda)=a_{2*}\Lambda[-2](-2), \\
&Gr^{M}_{3}R\Psi(\Lambda)=a_{3*}\Lambda[-3](-3).\\
\end{aligned}
\end{equation*}
The $\mathrm{E}_{1}$-page of the weight spectral sequence is given below
\begin{equation}\label{E1-primitive}
\begin{tikzpicture}[thick,scale=0.6, every node/.style={scale=0.6}]
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=5ex,
minimum height=5ex,outer sep=-5pt},
column sep=1ex,row sep=1ex]{
& & & & \\
6 &\mathrm{H}^{0}(a_{3*}\Lambda)(-3) & \mathrm{H}^{2}(a_{2*}\Lambda)(-2) &\mathrm{H}^{4}(a_{1*}\Lambda)(-1) & \mathrm{H}^{6}(a_{0*}\Lambda) \\
5 & &\mathrm{H}^{1}(a_{2*}\Lambda)(-2) &\mathrm{H}^{3}(a_{1*}\Lambda)(-1) &\mathrm{H}^{5}(a_{0*}\Lambda)& \\
4 & &\mathrm{H}^{0}(a_{2*}\Lambda)(-2) &\mathrm{H}^{2}(a_{1*}\Lambda)(-1)\oplus\mathrm{H}^{0}(a_{3*}\Lambda)(-2) &\mathrm{H}^{4}(a_{0*}\Lambda)\oplus\mathrm{H}^{2}(a_{2*}\Lambda)(-1)&\mathrm{H}^{4}(a_{1*}\Lambda)\\
3 & & &\mathrm{H}^{1}(a_{1*}\Lambda)(-1) &\mathrm{H}^{3}(a_{0*}\Lambda)\oplus\mathrm{H}^{1}(a_{2*}\Lambda)(-1) &\mathrm{H}^{3}(a_{1*}\Lambda)\\
2 & & &\mathrm{H}^{0}(a_{1*}\Lambda)(-1) &\mathrm{H}^{2}(a_{0*}\Lambda)\oplus\mathrm{H}^{0}(a_{2*}\Lambda)(-1) &\mathrm{H}^{2}(a_{1*}\Lambda)\oplus \mathrm{H}^{0}(a_{3*}\Lambda)(-1) &\mathrm{H}^{2}(a_{2*}\Lambda)\\
1 & & & &\mathrm{H}^{1}(a_{0*}\Lambda) &\mathrm{H}^{1}(a_{1*}\Lambda) &\mathrm{H}^{1}(a_{2*}\Lambda)\\
0 & & & &\mathrm{H}^{0}(a_{0*}\Lambda) &\mathrm{H}^{0}(a_{1*}\Lambda) &\mathrm{H}^{0}(a_{2*}\Lambda)& \mathrm{H}^{0}(a_{3*}\Lambda)\\
\quad\strut & -3 & -2 & -1 & 0 &1 &2 &3 & \strut \\};
\draw[thick] (m-1-1.east) -- (m-9-1.east) ;
\draw[thick] (m-9-1.north) -- (m-9-9.north) ;
\end{tikzpicture}
\end{equation}
and it does not necessarily degenerate at $\mathrm{E}_{2}$.
\subsection{Potential map and Galois cohomology} Let $\mathrm{M}$ be a $G_{K}$-module over $\Lambda$. Then we have the following exact sequence of Galois cohomology groups
\begin{equation}\label{fin-sing}
0\rightarrow \mathrm{H}^{1}_{\mathrm{fin}}(K, \mathrm{M})\rightarrow \mathrm{H}^{1}(K, \mathrm{M})\xrightarrow{\partial_{p}} \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})\rightarrow 0
\end{equation}
where
\begin{equation}
\mathrm{H}^{1}_{\mathrm{fin}}(K, \mathrm{M})=\mathrm{H}^{1}(k, \mathrm{M}^{I_{K}})
\end{equation}
is called the \emph{unramified} or \emph{finite} part of the cohomology group $\mathrm{H}^{1}(K, \mathrm{M})$, and $\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})$, defined as the quotient of $\mathrm{H}^{1}(K, \mathrm{M})$ by its finite part, is called the \emph{singular quotient} of $\mathrm{H}^{1}(K, \mathrm{M})$. The natural quotient map $\mathrm{H}^{1}(K, \mathrm{M})\xrightarrow{\partial_{p}} \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})$ will be referred to as the \emph{singular quotient map}. For $x\in\mathrm{H}^{1}(K, \mathrm{M})$, we call the element $\partial_{p}(x)\in \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})$ the \emph{singular residue} of $x$. Let $\mathrm{M}=\mathrm{H}^{n}(X_{K^{\mathrm{ac}}},\Lambda(r))$ be the $r$-th twist of the middle degree cohomology of $X_{K^{\mathrm{ac}}}$. We review in this section how to calculate the singular quotient using the weight spectral sequence recalled above. In \cite{Liu-cubic}, the author postulated certain situations in which the singular quotient can be calculated by a formula similar to the Picard-Lefschetz formula. What is presented below is the same as his construction of the \emph{potential map}. We will not recall his general machinery, but only the case of a curve or a threefold. We need the following elementary lemma.
\begin{lemma}
Let $\mathrm{M}=\mathrm{H}^{n}(X_{K^{\mathrm{ac}}},\Lambda(r))$, then we have
\begin{equation}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})\cong (\frac{\mathrm{M}(-1)}{N\mathrm{M}})^{G_{k}}
\end{equation}
\end{lemma}
\begin{proof}
This is well-known. The details can be found in \cite[Lemma 2.6]{Liu-cubic} for example.
\end{proof}
\subsubsection{One dimensional case} In this case, let $\mathrm{M}=\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda(1))$; we can use the Picard-Lefschetz formula. Recall the diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda(1)) \arrow[r] \arrow[d, "N"] & \ker[\mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda(1))] \arrow[d, "N"] \\
\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda) & \mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)] \arrow[l]
\end{tikzcd}
\end{equation*}
in \eqref{picard-lef}.
Then we have
\begin{equation}\label{1-sing}
\begin{aligned}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}) &\cong (\frac{\mathrm{M}(-1)}{N\mathrm{M}})^{G_{k}}\cong (\frac{\mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)]}{N\ker[\mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda(1))]})^{G_{k}}
\end{aligned}
\end{equation}
Composing $\tau$ and $\rho$, we have
\begin{equation}\label{1-potential}
\begin{aligned}
(\frac{\mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)]}{N\ker[\mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda(1))]})^{G_{k}} \cong\mathrm{coker}[\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda(1))]^{G_{k}}.
\end{aligned}
\end{equation}
\subsubsection{Three dimensional case} In this case, in light of later applications we will only consider $\mathrm{M}=\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))$. We put the following assumptions on $\mathrm{M}$.
\begin{assumption}\label{assump-E}
We assume that the weight spectral sequence satisfies the following conditions.
\begin{enumerate}
\item The weight spectral sequence converging to $\mathrm{M}=\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))$ degenerates at its $\mathrm{E}_{2}$-page. Thus it induces the following filtration
\begin{equation*}
\begin{aligned}
&0\subset^{\mathrm{E}^{3,0}_{2}(2)}M_{3}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))\subset^{\mathrm{E}^{2,1}_{2}(2)} M_{2}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))\\&\subset^{\mathrm{E}^{1,2}_{2}(2)} M_{1}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2)) \subset^{\mathrm{E}^{0,3}_{2}(2)} M_{0}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))\\
&\subset^{\mathrm{E}^{-1,4}_{2}(2)}M_{-1}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))\subset^{\mathrm{E}^{-2,5}_{2}(2)} M_{-2}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))\\
&\subset^{\mathrm{E}^{-3,6}_{2}(2)} M_{-3}\mathrm{H}^{3}(X_{K^{\mathrm{ac}}},\Lambda(2))=\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2)).\\
\end{aligned}
\end{equation*}
\item The only term $\mathrm{E}^{i, 3-i}_{2}(2)$ in the above filtration that has a non-trivial $G_{k}$-invariant subquotient is $\mathrm{E}^{-1, 4}_{2}(2)$.
\item The only term $\mathrm{E}^{i, 3-i}_{2}(2)(-1)=\mathrm{E}^{i, 3-i}_{2}(1)$ in the above filtration that has a non-trivial $G_{k}$-invariant subquotient is $\mathrm{E}^{1, 2}_{2}(1)$.
\end{enumerate}
\end{assumption}
Under the assumptions above, we have the following commutative diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))^{G_{k}} \arrow[r] \arrow[d, "N"] & (\mathrm{E}^{-1,4}_{2}(2))^{G_{k}} \arrow[d, "N"] \\
\mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(1))^{G_{k}} & (\mathrm{E}^{1,2}_{2}(1))^{G_{k}} \arrow[l]
\end{tikzcd}
\end{equation*}
and from it we obtain
\begin{equation*}
\begin{aligned}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}) &\cong (\frac{\mathrm{M}(-1)}{N\mathrm{M}})^{G_{k}} \cong (\frac{\mathrm{E}^{1,2}_{2}(1)}{N \mathrm{E}^{-1,4}_{2}(2)})^{G_{k}}\\
\end{aligned}
\end{equation*}
We have
\begin{equation*}
\mathrm{E}^{-1,4}_{2}(2)=\frac{\ker[\mathrm{H}^{2}(a_{1*}\Lambda(1))\oplus \mathrm{H}^{0}(a_{3*}\Lambda)\xrightarrow{(\tau+\rho,\tau)}\mathrm{H}^{4}(a_{0*}\Lambda(2))\oplus \mathrm{H}^{2}(a_{2*}\Lambda(1))]}{\mathrm{im}[\mathrm{H}^{0}(a_{2*}\Lambda)\xrightarrow{(\tau, \rho)} \mathrm{H}^{2}(a_{1*}\Lambda(1))\oplus\mathrm{H}^{0}(a_{3*}\Lambda) ]}
\end{equation*}
and
\begin{equation*}
\mathrm{E}^{1,2}_{2}(1)=\frac{\ker[\mathrm{H}^{2}(a_{1*}\Lambda(1))\oplus \mathrm{H}^{0}(a_{3*}\Lambda)\xrightarrow{(\rho,\tau)}\mathrm{H}^{2}(a_{2*}\Lambda(1))]}{\mathrm{im}[\mathrm{H}^{2}(a_{0*}\Lambda(1))\oplus \mathrm{H}^{0}(a_{2*}\Lambda) \xrightarrow{(\rho,\tau+\rho)} \mathrm{H}^{2}(a_{1*}\Lambda(1))\oplus\mathrm{H}^{0}(a_{3*}\Lambda) ]}.
\end{equation*}
Then we find that
\begin{equation}\label{potential}
\begin{aligned}
& (\frac{\mathrm{E}^{1,2}_{2}(1)}{N \mathrm{E}^{-1,4}_{2}(2)})^{G_{k}}=(\frac{\mathrm{im}[\mathrm{H}^{2}(a_{1*}\Lambda(1))\xrightarrow{\tau}\mathrm{H}^{4}(a_{0*}\Lambda(2))]}{\tau\mathrm{im}[\mathrm{H}^{2}(a_{0*}\Lambda(1))\xrightarrow{\rho} \mathrm{H}^{2}(a_{1*}\Lambda(1))]})^{G_{k}}.\\
\end{aligned}
\end{equation}
We will denote by
\begin{equation*}
A^{2}(X_{k}, \Lambda)^{0}=\mathrm{im}[\mathrm{H}^{2}(a_{1*}\Lambda(1))\xrightarrow{\tau}\mathrm{H}^{4}(a_{0*}\Lambda(2))]^{G_{k}}
\end{equation*}
which appears in the numerator of \eqref{potential}, and by
\begin{equation*}
A_{2}(X_{k}, \Lambda)^{0}=\mathrm{im}[\mathrm{H}^{2}(a_{0*}\Lambda(1))\xrightarrow{\rho}\mathrm{H}^{2}(a_{1*}\Lambda(1))]^{G_{k}}
\end{equation*}
which appears in the denominator of \eqref{potential}.
Let
\begin{equation*}
A^{2}(X_{k}, \Lambda)^{0}\xrightarrow{\eta} \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2)))
\end{equation*}
be the natural map and we have the following exact sequence
\begin{equation}\label{sing-exact}
A_{2}(X_{k}, \Lambda)^{0}\xrightarrow{\nabla}A^{2}(X_{k}, \Lambda)^{0}\xrightarrow{\eta} \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2)))\rightarrow 0.
\end{equation}
The natural map $\nabla$, induced by $\tau$, is called the \emph{potential map}; see \cite[Definition 2.5]{Liu-cubic}.
We will now explain how this map is related to the image of the {Abel-Jacobi map}. Consider the cycle class map
\begin{equation*}
\mathrm{cl}: \mathrm{CH}^{2}(X_{K}, \Lambda)\rightarrow \mathrm{H}^{4}(X_{K}, \Lambda(2))
\end{equation*}
where $\mathrm{CH}^{2}(X_{K}, \Lambda)$ is the Chow group of $X_{K}$ with coefficient in $\Lambda$. Let $\mathrm{CH}^{2}(X_{K}, \Lambda)_{0}$ be the kernel of the composite map
\begin{equation*}
\mathrm{CH}^{2}(X_{K}, \Lambda)\xrightarrow{\mathrm{cl}} \mathrm{H}^{4}(X_{K}, \Lambda(2))\rightarrow \mathrm{H}^{4}(X_{K^{\mathrm{ac}}}, \Lambda(2)).
\end{equation*}
Then the cycle class map induces the following \emph{Abel-Jacobi map}
\begin{equation*}
\mathrm{AJ}_{\Lambda}: \mathrm{CH}^{2}(X_{K}, \Lambda)_{0}\rightarrow \mathrm{H}^{1}(K, \mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2))).
\end{equation*}
Suppose that $z\in \mathrm{CH}^{2}(X_{K}, \Lambda)_{0}$. Let $\mathcal{Z}$ be the Zariski closure of $z$ in $\mathfrak{X}$ and let $z^{*}$ be the cycle of codimension $2$ in $\mathfrak{X}$ supported on $\mathcal{Z}$ whose restriction to $X_{K}$ is $z$. Let $\tilde{z}\in \mathrm{H}^{4}(a_{0*}\Lambda(2))^{G_{k}} =\mathrm{H}^{4}(X^{(0)}_{k^{\mathrm{ac}}}, \Lambda(2))^{G_{k}}$ be the image of $z^{*}$ under the following composite map
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{4}_{\mathcal{Z}}(\mathfrak{X}, \Lambda(2))\rightarrow \mathrm{H}^{4}(\mathfrak{X}, \Lambda(2))\rightarrow \mathrm{H}^{4}(X_{k}, \Lambda(2))
\rightarrow \mathrm{H}^{4}(X_{k^{\mathrm{ac}}}, \Lambda(2))^{G_{k}}\rightarrow \mathrm{H}^{4}(X^{(0)}_{k^{\mathrm{ac}}}, \Lambda(2))^{G_{k}}.\\
\end{aligned}
\end{equation*}
We have the following result of Liu \cite{Liu-cubic}.
\begin{proposition}\label{cal-aj}
The element $\tilde{z}$ belongs to $ A^{2}(X_{k}, \Lambda)^{0}$. Moreover we have
\begin{equation*}
\partial_{p}\mathrm{AJ}_{\Lambda}(z)=\eta(\tilde{z})
\end{equation*}
in $\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{3}(X_{K^{\mathrm{ac}}}, \Lambda(2)))$.
\end{proposition}
\begin{proof}
The first statement follows easily from \cite[Lemma 2.17]{Liu-cubic} and the second is a special case of \cite[Theorem 2.18]{Liu-cubic}.
\end{proof}
\section{Arithmetic level raising for Shimura curves}
\subsection{Shimura curves and Shimura sets}
Let $N$ be a positive integer with a factorization $N=N^{+}N^{-}$ where $N^{+}$ and $N^{-}$ are coprime to each other. We assume that $N^{-}$ is square-free and is a product of an odd number of primes. Let $B$ be the definite quaternion algebra over $\mathbf{Q}$ with discriminant $N^{-}$ and let $B^{\prime}$ be the indefinite quaternion algebra over $\mathbf{Q}$ with discriminant $pN^{-}$. Let $\mathcal{O}_{B^{\prime}}$ be a maximal order of $B^{\prime}$ and let $\mathcal{O}_{B^{\prime}, N^{+}}$ be the Eichler order of level $N^{+}$ in $\mathcal{O}_{B^{\prime}}$. We let $G^{\prime}$ be the algebraic group over $\mathbf{Q}$ given by $B^{\prime \times}$ and let $K^{\prime}_{N^{+}}$ be the open compact subgroup of $G^{\prime}(\mathbf{A}^{\infty})$ defined by $\hat{\mathcal{O}}^{\times}_{B^{\prime}, N^{+}}$. We let $G$ be the algebraic group over $\mathbf{Q}$ given by $B^{\times}$. Note that we have an isomorphism $G^{\prime}(\mathbf{A}^{\infty, (p)})\xrightarrow{\sim} G(\mathbf{A}^{\infty, (p)})$, and via this isomorphism we will view $K^{\prime p}$ as an open compact subgroup of $G(\mathbf{A}^{\infty, (p)})$ for any open compact subgroup $K^{\prime}$ of $G^{\prime}(\mathbf{A}^{\infty})$.
Let $X=X^{B^{\prime}}_{N^{+}, pN^{-}}$ be the Shimura curve over $\mathbf{Q}$ with level $K^{\prime}=K^{\prime}_{N^{+}}$. The complex points of this curve are given by the following double coset
\begin{equation*}
X(\mathbf{C})=G^{\prime}(\mathbf{Q})\backslash \mathcal{H}^{\pm} \times G^{\prime}(\mathbf{A}^{\infty})/K^{\prime}.
\end{equation*}
There is a natural model $\mathfrak{X}$ of $X$ over $\mathbf{Z}_{(p)}$. Recall that it represents the following moduli problem: for a test scheme $S$ over $\mathbf{Z}_{(p)}$, $\mathfrak{X}(S)$ is the set of triples $(A, \iota, \bar{\eta})$ where
\begin{enumerate}
\item $A$ is an $S$-abelian scheme of relative dimension $2$;
\item $\iota: \mathcal{O}_{B^{\prime}}\hookrightarrow \End_{S}(A)$ is an embedding which is special in the sense of \cite[131-132]{BC-unifor};
\item $\bar{\eta}$ is an equivalence class of isomorphisms
\begin{equation*}
\eta: V^{p}(A)\xrightarrow{\sim} B^{\prime}(\mathbf{A}^{\infty,(p)})
\end{equation*}
up to multiplication by $K^{p}$, where $$V^{p}(A)=\prod_{q \neq p}T_{q}(A)\otimes \mathbf{Q}$$ is the prime to $p$ rational Tate module of $A$.
\end{enumerate}
It is well known that this moduli problem is representable by a projective scheme over $\mathbf{Z}_{(p)}$ of dimension $1$. We will usually consider the base change of $\mathfrak{X}$ to $\mathbf{Z}_{p^{2}}$ and we will denote it by the same notation. Let $\mathbf{F}_{q}$ be a finite extension of $\mathbf{F}_{p}$; we will denote by $X_{\mathbf{F}_{q}}$ the base change to $\mathbf{F}_{q}$ of the special fiber $X_{\mathbf{F}_{p}}$ of $\mathfrak{X}$.
Let $K$ be the open compact subgroup of $G(\mathbf{A}^{\infty})$ given by the Eichler order $\mathcal{O}_{B, N^{+}}$. We define the \emph{Shimura set} $X^{B}$ by the following double coset
\begin{equation}\label{Shi-set}
X^{B}= G(\mathbf{Q})\backslash G(\mathbf{A}^{\infty})/K.
\end{equation}
Let $K_{0}(p)$ be the open compact subgroup of $G(\mathbf{A}^{\infty})$ given by the Eichler order $\mathcal{O}_{B, pN^{+}}$ of level $pN^{+}$ in $B$. We define $X^{B}_{0}(p)$ by the double coset
\begin{equation}\label{Shi-set-p}
X^{B}_{0}(p)= G(\mathbf{Q})\backslash G(\mathbf{A}^{\infty})/K_{0}(p).
\end{equation}
Let $\Lambda$ be a ring. We will use the following notations throughout this article.
\begin{itemize}
\item We denote by $\mathrm{H}^{0}(X^{B}, \Lambda)$ the set of continuous functions on $X^{B}$ valued in $\Lambda$. We will also write this space as $S^{B}_{2}(N^{+}, \Lambda)$ and refer to it as the space of \emph{quaternionic modular forms} of level $N^{+}$.
\item In the same way, we define $\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)$. We will also write this space as $S^{B}_{2}(pN^{+}, \Lambda)$ and refer to it as the space of \emph{quaternionic modular forms} of level $pN^{+}$.
\item For a scheme $W$, we will let $\mathbf{P}^{d}(W)$ be the $\mathbf{P}^{d}$-bundle over $W$. For example, $\mathbf{P}^{1}(X^{B})$ is the $\mathbf{P}^{1}$-bundle over $X^{B}$.
\end{itemize}
\subsection{The $p$-adic upper half plane} Let $k^{\mathrm{ac}}$ be an algebraic closure of $\mathbf{F}_{p}$ and let $W_{0}=W(k^{\mathrm{ac}})$ be the ring of Witt vectors of $k^{\mathrm{ac}}$ with fraction field $K_{0}$. Let $D$ be the quaternion division algebra over $\mathbf{Q}_{p}$ and let $\mathcal{O}_{D}$ be the maximal order of $D$. We write
\begin{equation*}
\mathcal{O}_{D}=\mathbf{Z}_{p^{2}}[\Pi]/(\Pi^{2}-p)
\end{equation*}
where $\Pi a=\sigma(a)\Pi$ for $a\in \mathbf{Z}_{p^{2}}$.
We first recall the notion of a \emph{special formal $\mathcal{O}_{D}$-module}. Let $S$ be a $W_{0}$-scheme. A special formal $\mathcal{O}_{D}$-module over $S$ is a formal $p$-divisible group $X$ of dimension $2$ and height $4$ with an $\mathcal{O}_{D}$-action
\begin{equation*}
\iota: \mathcal{O}_{D}\rightarrow \End_{S}(X)
\end{equation*}
such that $\mathrm{Lie}(X)$ is a locally free $\mathbf{Z}_{p^{2}}\otimes \mathcal{O}_{S}$-module of rank $1$. We fix a special formal $\mathcal{O}_{D}$-module $\mathbb{X}$ over $k^{\mathrm{ac}}$ whose Dieudonn\'{e} module is denoted by $\mathbb{M}$. Consider the functor $\mathcal{M}$ on the category $\mathrm{Nilp}$ of $W_{0}$-schemes $S$ on which $p$ is locally nilpotent, such that $\mathcal{M}(S)$ classifies the isomorphism classes of pairs $(X, \rho_{X})$ where
\begin{enumerate}
\item $X$ is a special formal $\mathcal{O}_{D}$-module;
\item $\rho_{X}: X\times_{S}\bar{S}\rightarrow \mathbb{X}\times_{k^{\mathrm{ac}}} \bar{S}$ is a quasi-isogeny.
\end{enumerate}
The functor $\mathcal{M}$ is represented by a formal scheme over $W_{0}$ which we also denote by $\mathcal{M}$. The formal scheme $\mathcal{M}$ breaks into a disjoint union
\begin{equation*}
\mathcal{M}=\bigsqcup_{i\in\mathbf{Z}}\mathcal{M}_{i}
\end{equation*}
according to the height $i$ of the quasi-isogeny $\rho_{X}$. Each formal scheme $\mathcal{M}_{i}$ is isomorphic to the \emph{$p$-adic upper half plane} $\mathcal{H}_{p}$ base-changed to $W_{0}$. The group $\mathrm{GL}_{2}(\mathbf{Q}_{p})$ acts naturally on the formal scheme $\mathcal{M}$ and each $\mathcal{M}_{i}$ affords an action of the group $\mathrm{GL}^{0}_{2}(\mathbf{Q}_{p}):=\{g\in\mathrm{GL}_{2}(\mathbf{Q}_{p}): \mathrm{ord}_{p}(\mathrm{det}(g))=0\}$. We now review the description of the special fiber of the formal scheme $\mathcal{M}_{0}$. Since this description is well known, see \cite{KR00} for example, we content ourselves with explaining it on the level of points. Let $(X, \rho)\in \mathcal{M}(k^{\mathrm{ac}})$ and let $M$ be the covariant Dieudonn\'{e} module of $X$; the action of $\mathbf{Z}_{p^{2}}$ on $X$ induces a grading
\begin{equation}
M=M_{0}\oplus M_{1}
\end{equation}
that satisfies
\begin{equation}
\begin{aligned}
& pM_{0}\subset^{1} VM_{1}\subset^{1} M_{0},\hphantom{a}pM_{1}\subset^{1} VM_{0}\subset^{1} M_{1}\\
& pM_{0}\subset^{1} \Pi M_{1}\subset^{1} M_{0},\hphantom{a}pM_{1}\subset^{1} \Pi M_{0}\subset^{1} M_{1}\\
\end{aligned}
\end{equation}
Since the actions of $\Pi$ and $V$ commute, we have the induced maps
\begin{equation}
\begin{aligned}
& \Pi: M_{0}/VM_{1}\rightarrow M_{1}/VM_{0},\\
& \Pi: M_{1}/VM_{0}\rightarrow M_{0}/VM_{1}.\\
\end{aligned}
\end{equation}
Since both $M_{0}/VM_{1}$ and $M_{1}/VM_{0}$ are of dimension $1$ and the composite of the two maps is obviously zero, we conclude that there is an $i\in \mathbf{Z}/2\mathbf{Z}$ such that $\Pi M_{i}\subset VM_{i}$. Since both $\Pi M_{i}$ and $VM_{i}$ are of colength $1$ in $M_{i+1}$, we conclude that they are in fact equal. We say that $i$ is a \emph{critical index} if $VM_{i}=\Pi M_{i}$; by the above, a critical index always exists for $M$. We let $\tau=\Pi^{-1}V$; it acts as an automorphism of $M_{i}$ if $i$ is a critical index.
If $0$ is a critical index, then we set $L_{0}=M^{\tau=1}_{0}$ and we call it a \emph{vertex lattice of type $0$}. This is a $\mathbf{Z}_{p}$-lattice of rank $2$ and we associate to it the projective line $\mathbf{P}(L_{0}/pL_{0})$. Then $VM_{1}/pM_{0}\subset^{1} M_{0}/pM_{0}=L_{0}/pL_{0}\otimes k^{\mathrm{ac}}$ gives a point on $\mathbf{P}(L_{0}/pL_{0})(k^{\mathrm{ac}})$. If $1$ is a critical index, then we similarly put $L_{1}=\Pi M^{\tau=1}_{1}$ and we call it a \emph{vertex lattice of type $1$}. We again associate to it the projective line $\mathbf{P}(L_{1}/pL_{1})$. Similarly $VM_{0}/pM_{1}\subset^{1} M_{1}/pM_{1}=L_{1}/pL_{1}\otimes k^{\mathrm{ac}}$ gives a point on $\mathbf{P}(L_{1}/pL_{1})(k^{\mathrm{ac}})$. This construction gives all the irreducible components of the special fiber of $\mathcal{M}_{0}$. If both $0$ and $1$ are critical indices, then we identify the point on $\mathbf{P}(L_{1}/pL_{1})$ given by $VM_{0}/pM_{1}\subset^{1} M_{1}/pM_{1}=L_{1}/pL_{1}\otimes k^{\mathrm{ac}}$ with the point on $\mathbf{P}(L_{0}/pL_{0})$ given by $VM_{1}/pM_{0}\subset^{1} M_{0}/pM_{0}=L_{0}/pL_{0}\otimes k^{\mathrm{ac}}$. Notice that in this case the vertex lattices $L_{0}, L_{1}$ satisfy the inclusions $$pL_{0}\subset L_{1}\subset L_{0}.$$ We summarize the above discussion in the following proposition.
\begin{proposition}\label{Drinfeld} We have the following statements.
\begin{enumerate}
\item The irreducible components of the special fiber of $\mathcal{M}_{0}$ can be partitioned into two types according to whether $0$ or $1$ is a critical index. Each irreducible component corresponds to a vertex lattice and is isomorphic to a projective line.
\item Two irreducible components corresponding to vertex lattices of the same type do not intersect. Let $L_{0}$ be a vertex lattice of type $0$ and $L_{1}$ be a vertex lattice of type $1$; then the corresponding irreducible components intersect if and only if $pL_{0}\subset L_{1}\subset L_{0}$.
\item The irreducible components are parametrized by $\mathrm{GL}_{2}(\mathbf{Q}_{p})/\mathrm{GL}_{2}(\mathbf{Z}_{p})\times \mathbf{Z}/2\mathbf{Z}$. The intersection points of the irreducible components are parametrized by $\mathrm{GL}_{2}(\mathbf{Q}_{p})/\mathrm{Iw}_{p}$ where $\mathrm{Iw}_{p}$ is the Iwahori subgroup of $\mathrm{GL}_{2}(\mathbf{Q}_{p})$.
\end{enumerate}
\end{proposition}
\begin{proof}
The first two points follow from the previous discussions. For the third point, note that the stabilizer of a vertex lattice in $\mathrm{GL}_{2}(\mathbf{Q}_{p})$ is obviously $\mathrm{GL}_{2}(\mathbf{Z}_{p})$. The condition $pL_{0}\subset L_{1}\subset L_{0}$ defines a standard alcove in the Bruhat-Tits building of $\mathrm{PGL}_{2}(\mathbf{Q}_{p})$ and therefore the stabilizer of the pair $(L_{0}, L_{1})$ is the Iwahori subgroup $\mathrm{Iw}_{p}$.
\end{proof}
\subsection{Cerednick-Drinfeld uniformization} From here on, we will set $k=\mathbf{F}_{p^{2}}$, $K=\mathbf{Q}_{p^{2}}$ and $\mathcal{O}_{K}=\mathbf{Z}_{p^{2}}$. Recall that we have the integral model $\mathfrak{X}$ of the Shimura curve $X$ over $\mathcal{O}_{K}$, and we denote by $\mathfrak{X}^{\wedge}$ its completion along the ideal defined by $p$. The Cerednick-Drinfeld uniformization theorem asserts that $\mathfrak{X}^{\wedge}$ can be uniformized by the formal scheme $\mathcal{M}$:
\begin{equation}\label{p-unifor}
{\mathfrak{X}}^{\wedge} \xrightarrow{\sim} G(\mathbf{Q})\backslash \mathcal{M} \times G(\mathbf{A}^{\infty, (p)})/K^{p}.
\end{equation}
It follows from the descriptions in Proposition \ref{Drinfeld} and the above uniformization theorem that the irreducible components of the special fiber $X_{k}$ are projective lines. It also follows that $\mathfrak{X}$ has strict semistable reduction. More precisely, we have the following proposition.
\begin{proposition}\label{curve-red}
We have the following descriptions of the scheme $X_{k}$.
\begin{enumerate}
\item The scheme $X_{k}$ is a union of $\mathbf{P}^{1}$-bundles over Shimura sets
\begin{equation*}
X_{k}= \mathbf{P}^{1}(X^{B}_{+})\cup \mathbf{P}^{1}(X^{B}_{-}),
\end{equation*}
where both $X^{B}_{+}$ and $X^{B}_{-}$ are isomorphic to the Shimura set $X^{B}$ as in \eqref{Shi-set}.
\item The intersection points of the two $\mathbf{P}^{1}$-bundles $\mathbf{P}^{1}(X^{B}_{+})$ and $\mathbf{P}^{1}(X^{B}_{-})$ are given by
\begin{equation*}
\mathbf{P}^{1}(X^{B}_{+})\cap \mathbf{P}^{1}(X^{B}_{-})=X^{B}_{0}(p).
\end{equation*}
This set can also be identified with the set of singular points of $X_{k}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For $(1)$, recall that there are two types of irreducible components of the special fiber of $\mathcal{M}_{0}$, corresponding to vertex lattices of type $0$ and vertex lattices of type $1$. By the Cerednick--Drinfeld uniformization \eqref{p-unifor}, we only need to notice that the irreducible components corresponding to vertex lattices of type $0$ or of type $1$ are uniformized by
\begin{equation*}
G(\mathbf{Q})\backslash \mathrm{GL}_{2}(\mathbf{Q}_{p})/\mathrm{GL}_{2}(\mathbf{Z}_{p})\times G(\mathbf{A}^{\infty, (p)})/K^{p}.
\end{equation*}
This is just another way of writing $X^{B}$. We define $X^{B}_{+}$ to be the copy of $X^{B}$ corresponding to vertex lattices of type $0$ and $X^{B}_{-}$ to be the copy of $X^{B}$ corresponding to vertex lattices of type $1$. For $(2)$, the Cerednick--Drinfeld uniformization \eqref{p-unifor} shows that the set of intersection points is given by
\begin{equation*}
G(\mathbf{Q})\backslash \mathrm{GL}_{2}(\mathbf{Q}_{p})/\mathrm{Iw}_{p}\times G(\mathbf{A}^{\infty, (p)})/K^{p}.
\end{equation*}
Again this is another way of writing $X^{B}_{0}(p)$. The rest of the claims are clear.
\end{proof}
We denote by $\pi_{+}: X^{B}_{0}(p)\rightarrow X^{B}_{+}$ and $\pi_{-}: X^{B}_{0}(p)\rightarrow X^{B}_{-}$ the natural specialization maps. These two maps give the Hecke correspondence of $X^{B}$. This can be seen as follows: the map $\pi_{+}$ is induced by sending the pair of vertex lattices $(L_{1}\subset L_{0})$ to $L_{0}$ and the map $\pi_{-}$ is induced by sending $(L_{1}\subset L_{0})$ to $L_{1}$. The two lattices are related by the matrix
$
\begin{pmatrix}
0& p\\
1 & 0\\
\end{pmatrix}
$
that is, if one chooses a suitable basis $(e_{1}, e_{2})$ of $L_{0}$, then $(e_{2}, pe_{1})$ will be a basis of $L_{1}$.
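In terms of functions on the Shimura sets, this correspondence realizes the Hecke operator $T_{p}$. With one common normalization (we only sketch this; the precise convention varies from author to author), one has
\begin{equation*}
(\pi_{+*}\pi_{-}^{*}f)(L_{0})=\sum_{pL_{0}\subset L_{1}\subset L_{0}} f(L_{1}),
\end{equation*}
a sum over the $p+1$ index-$p$ sublattices of $L_{0}$, which is the operator $T_{p}$ on $S^{B}_{2}(N^{+}, \Lambda)$; this is the form in which $T_{p}$ enters the intersection matrix appearing later.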
The nontrivial element in $\mathrm{Gal}(k/\mathbf{F}_{p})$ acts on $X_{k}$ and it permutes the two $\mathbf{P}^{1}$-bundles $\mathbf{P}^{1}(X^{B}_{+})$ and $\mathbf{P}^{1}(X^{B}_{-})$. On the set $X^{B}_{0}(p)$ it acts by the classical \emph{Atkin-Lehner involution}. For this, see \cite[\S 1.7]{BD-Mumford} for example.
\subsection{Ramified level raising on Shimura curves} Recall that $N=N^{+}N^{-}$ with $N^{+}$ and $N^{-}$ as defined previously. Let $f\in \mathrm{S}^{\mathrm{new}}_{2}(\Gamma_{0}(N))$ be a newform of weight $2$. Let $E=\mathbf{Q}(f)$ be the Hecke field of $f$. Let $\lambda$ be a place of $E$ above $l$ and $E_{\lambda}$ be the completion of $E$ at $\lambda$. Let $\varpi$ be a uniformizer of $\mathcal{O}=\mathcal{O}_{E_{\lambda}}$ and write $\mathcal{O}_{n}=\mathcal{O}/\varpi^{n}$ for $n\geq 1$. We let $\mathbb{T}=\mathbb{T}_{N^{+}, N^{-}}$, respectively $\mathbb{T}^{[p]}=\mathbb{T}_{N^{+}, pN^{-}}$, be the $l$-adic Hecke algebra corresponding to the cusp forms of level $N=N^{+}N^{-}$, respectively of level $Np=N^{+}N^{-}p$, which are new at primes dividing $N^{-}$, respectively at primes dividing $pN^{-}$. Since $f$ is an eigenform, we have a morphism
$\phi_{f}: \mathbb{T}\rightarrow \mathcal{O}$
corresponding to the system of eigenvalues of $f$. More precisely, we have $\phi_{f}(T_{v})=a_{v}(f)$ for $v\nmid N$ and $\phi_{f}(U_{v})=a_{v}(f)$ for $v\mid N$.
\begin{definition}\label{n-adm}
Let $n\geq 1$ be an integer. We say that a prime $p$ is \emph{$n$-admissible} for $f$ if
\begin{enumerate}
\item $p\nmid Nl$;
\item $l\nmid p^{2}-1 $;
\item $\varpi^{n}\mid p+1-\epsilon_{p}(f)a_{p}(f)$ for some $\epsilon_{p}(f)\in\{-1, 1\}$.
\end{enumerate}
\end{definition}
Let $\phi_{f, n}: \mathbb{T}\rightarrow \mathcal{O}_{n}$ be the reduction of the map $\phi_{f}: \mathbb{T}\rightarrow \mathcal{O}$ modulo $\varpi^{n}$. We denote by $I_{f, n}$ the kernel of this map and by $\mathfrak{m}_{f}$ the unique maximal ideal of $\mathbb{T}$ containing $I_{f, n}$. We say that $\mathfrak{m}_{f}$ is residually irreducible if the residual Galois representation $\bar{\rho}_{f}$ attached to $f$ is irreducible. The following result is known as the \emph{ramified arithmetic level raising for Shimura curves} and was first proved in \cite[Theorem 5.15, Corollary 5.18]{BD-Main}. The proof of the theorem below is inspired by two lectures given by Liang Xiao at the Morningside center \cite{Xiao} on the subject.
\begin{theorem}\label{level-raise-curve}
Let $p$ be an $n$-admissible prime for $f$. We assume that
\begin{enumerate}[label=(\roman*)]
\item the maximal ideal $\mathfrak{m}_{f}$ is residually irreducible;
\item the module $S^{B}_{2}(N^{+}, \mathcal{O})_{\mathfrak{m}_{f}}$ is free of rank $1$ over $\mathbb{T}_{\mathfrak{m}_{f}}$.
\end{enumerate}
Then we have the following.
\begin{enumerate}
\item There exists a surjective homomorphism $\phi^{[p]}_{f, n}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}_{n}$ such that $\phi^{[p]}_{f, n}$ agrees with $\phi_{f, n}$ at all Hecke operators away from $p$ and sends $U_{p}$ to $\epsilon_{p}(f)$. We will denote by $I^{[p]}_{f, n}$ the kernel of $\phi^{[p]}_{f, n}$.
\item We have an isomorphism of $\mathcal{O}_{n}$-modules of rank $1$
\begin{equation*}
S^{B}_{2}(N^{+}, \mathcal{O}){/I_{f, n}}\xrightarrow{\cong}\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \mathcal{O}(1)){/I^{[p]}_{f, n}}).
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof}
In this proof we will set $\mathrm{M}=\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \mathcal{O}(1))$ and $\mathrm{M}_{n}=\mathrm{M}{/I_{f, n}}$. Then we use the following formula, proved in \eqref{1-sing} and \eqref{1-potential}:
\begin{equation}\label{sing-formula}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M})\cong \mathrm{coker}[\mathrm{H}^{0}(a_{0*}\mathcal{O})\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\mathcal{O})\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\mathcal{O}(1))]^{G_{k}}.
\end{equation}
Here
$$\mathrm{H}^{0}(a_{0*}\mathcal{O})=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \mathcal{O})\oplus\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \mathcal{O})$$
and we can identify it with the space
$S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}$. Similarly, under Poincar\'{e} duality, we can also identify
\begin{equation*}
\mathrm{H}^{2}(a_{0*}\mathcal{O}(1))=\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \mathcal{O}(1))\oplus\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \mathcal{O}(1))
\end{equation*}
with $S^{B}_{2}(N^{+}, \mathcal{O})^{\oplus 2}$. The space $\mathrm{H}^{0}(a_{1*}\mathcal{O})$ can be identified with $S^{B}_{2}(pN^{+},\mathcal{O})$. Under these identifications, the composition appearing in \eqref{sing-formula}
\begin{equation*}
\mathrm{H}^{0}(a_{0*}\mathcal{O})\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\mathcal{O}) \xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\mathcal{O}(1))
\end{equation*}
is given by the \emph{intersection matrix}
\begin{equation*}
\begin{pmatrix}
-(p+1) &T_{p}\\
T_{p} &-(p+1)\\
\end{pmatrix}
\end{equation*}
which we will also denote by $\nabla$. Since $p$ is $n$-admissible for $f$, the singular quotient of $\mathrm{M}_{n}$
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}_{n})\cong\mathrm{coker}[S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}\xrightarrow{\nabla}S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}]
\end{equation*}
is of rank one over $\mathcal{O}_{n}$ and is isomorphic to $S^{B}_{2}(N^{+},\mathcal{O})_{/I_{f, n}}$. Note that the isomorphism between
\begin{equation*}
\mathrm{coker}[S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}\xrightarrow{\nabla}S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}]
\end{equation*}
and $S^{B}_{2}(N^{+},\mathcal{O})_{/I_{f, n}}$ is induced by the map
\begin{equation*}
S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}\rightarrow S^{B}_{2}(N^{+},\mathcal{O})_{/I_{f, n}}, \hphantom{aa} (x, y)\mapsto \frac{1}{2}(x+\epsilon_{p}(f)y).
\end{equation*}
This proves the second part of the theorem.
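For completeness, let us check (a sketch, using only the relations defining $n$-admissibility) that the displayed map indeed annihilates the image of $\nabla$ modulo $I_{f, n}$. Write $\epsilon=\epsilon_{p}(f)$. Modulo $I_{f, n}$ the operator $T_{p}$ acts as the scalar $a_{p}(f)$ and $\varpi^{n}\mid p+1-\epsilon a_{p}(f)$, so
\begin{equation*}
\frac{1}{2}\big((-(p+1)x+T_{p}y)+\epsilon(T_{p}x-(p+1)y)\big)=\frac{1}{2}\big((\epsilon T_{p}-(p+1))x+(T_{p}-\epsilon(p+1))y\big)\equiv 0 \pmod{\varpi^{n}}
\end{equation*}
since $\epsilon T_{p}-(p+1)\equiv \epsilon a_{p}(f)-(p+1)\equiv 0$ and $T_{p}-\epsilon(p+1)\equiv a_{p}(f)-\epsilon^{2}a_{p}(f)=0$. Note also that $\frac{1}{2}$ makes sense, as $l\nmid p^{2}-1$ forces $l$ to be odd.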
By \cite[Theorem 5.8]{BD-Main} and \cite[\S3.5]{CH-2}, the natural $U_{p}$-action on
\begin{equation*}
\mathrm{H}^{2}(a_{0*}\mathcal{O}(1))\cong S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}
\end{equation*}
is given by $(x, y)\mapsto (-py, x+T_{p}y)$.
We consider the automorphism $$\delta: S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2} \rightarrow S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}$$ given by
$(x, y)\mapsto (x+T_{p}y, y)$.
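In matrix form (a direct computation from the formulas above), writing elements of $S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}$ as column vectors, the operators in play are
\begin{equation*}
U_{p}=\begin{pmatrix} 0 & -p\\ 1 & T_{p}\end{pmatrix}, \hphantom{aa} \delta=\begin{pmatrix} 1 & T_{p}\\ 0 & 1\end{pmatrix}, \hphantom{aa} \nabla=\begin{pmatrix} -(p+1) & T_{p}\\ T_{p} & -(p+1)\end{pmatrix},
\end{equation*}
so that
\begin{equation*}
\nabla\circ\delta=\begin{pmatrix} -(p+1) & -pT_{p}\\ T_{p} & T^{2}_{p}-(p+1)\end{pmatrix}, \hphantom{aa} U^{2}_{p}=\begin{pmatrix} -p & -pT_{p}\\ T_{p} & T^{2}_{p}-p\end{pmatrix}.
\end{equation*}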
Then a quick calculation gives us that $\nabla\circ \delta=U^{2}_{p}-1$. This means that the quotient
\begin{equation*}
\frac{S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}}{(I_{f, n},U^{2}_{p}-1)} \cong \mathrm{coker}[S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}\xrightarrow{\nabla}S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}]\end{equation*}
is of rank $1$. Since $p$ is $n$-admissible for $f$, we see immediately that $U_{p}+\epsilon_{p}(f)$ is invertible on $S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}$. Therefore we have
\begin{equation*}
\frac{S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}}{(I_{f, n},U^{2}_{p}-1)} \cong \frac{S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}}{(I_{f, n},U_{p}-\epsilon_{p}(f))}
\end{equation*}
and the latter quotient is of rank $1$ over $\mathcal{O}_{n}$. Then the action of $\mathbb{T}^{[p]}$ on this rank $1$ quotient gives the desired morphism
$\phi^{[p]}_{f, n}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}_{n}$.
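As for the invertibility of $U_{p}+\epsilon_{p}(f)$ used above, one can argue directly (a sketch): on $S^{B}_{2}(N^{+},\mathcal{O})^{\oplus 2}_{/I_{f, n}}$ we have
\begin{equation*}
\det(U_{p}+\epsilon_{p}(f))=\det\begin{pmatrix} \epsilon_{p}(f) & -p\\ 1 & T_{p}+\epsilon_{p}(f)\end{pmatrix}=\epsilon_{p}(f)T_{p}+1+p\equiv 2\epsilon_{p}(f)a_{p}(f) \pmod{\varpi^{n}},
\end{equation*}
and $2\epsilon_{p}(f)a_{p}(f)$ is a unit in $\mathcal{O}_{n}$: indeed $l\nmid p^{2}-1$ forces $l$ to be odd and implies $l\nmid p+1$, while $a_{p}(f)\equiv \epsilon_{p}(f)(p+1)$ is then a unit.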
\end{proof}
We will now compare the ingredients in the above proof with those in \cite{BD-Main}. Let $\mathcal{G}$ be the dual reduction graph of $X_{k}$. We denote by $\mathcal{V}(\mathcal{G})$ the set of vertices and $\mathcal{E}(\mathcal{G})$ the set of edges. Then we have the following identifications
\begin{equation*}
\mathrm{H}^{0}(a_{0*}\Lambda)=\Lambda[\mathcal{V}(\mathcal{G})]
\end{equation*}
and
\begin{equation*}
\mathrm{H}^{0}(a_{1*}\Lambda)=\Lambda[\mathcal{E}(\mathcal{G})]
\end{equation*}
for $\Lambda$ equal to $\mathbf{Z}/l^{n}$, $\mathbf{Z}_{l}$, or a finite extension of $\mathbf{Z}_{l}$.
Moreover under these identifications, the restriction map
\begin{equation*}
\mathrm{H}^{0}(a_{0*}\Lambda)\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\Lambda)
\end{equation*}
can be identified with
\begin{equation*}
\Lambda[\mathcal{V}(\mathcal{G})]\xrightarrow{d^{*}=-s^{*}+t^{*}} \Lambda[\mathcal{E}(\mathcal{G})].
\end{equation*}
Similarly, the map $$\mathrm{H}^{0}(a_{1*}\Lambda)\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\Lambda(1))$$ can be identified with $$\Lambda[\mathcal{E}(\mathcal{G})]\xrightarrow{d_{*}=-s_{*}+t_{*}} \Lambda[\mathcal{V}(\mathcal{G})].$$
Here $s, t$ are the source and target maps
\begin{equation*}
\mathcal{E}(\mathcal{G})\xrightarrow{s, t}\mathcal{V}(\mathcal{G})
\end{equation*}
defined in the obvious manner. Let $\mathcal{J}$ be the N\'{e}ron model of the Jacobian $\mathrm{Jac}(X_{K})$ of $X_{K}$ and let $\Phi$ be the group of connected components of the special fiber $\mathcal{J}_{k}$. Denote by $\mathcal{X}$, resp. $\mathcal{X}^{\vee}$, the \emph{character group}, resp. the \emph{cocharacter group}, of $\mathcal{J}_{k}$. The following proposition describes $\Phi$, $\mathcal{X}$ and $\mathcal{X}^{\vee}$ in terms of the dual reduction graph $\mathcal{G}$.
\begin{proposition} \label{component-grp}
We have the following statements
\begin{enumerate}
\item There is a Hecke module isomorphism
$$\mathbf{Z}[\mathcal{V}(\mathcal{G})]_{0}/d_{*}d^{*} \cong \Phi$$
where $\mathbf{Z}[\mathcal{V}(\mathcal{G})]_{0}$ is the image of $\mathbf{Z}[\mathcal{E}(\mathcal{G})]\xrightarrow{d_{*}}\mathbf{Z}[\mathcal{V}(\mathcal{G})]$.
\item The kernel of the map $$\mathbf{Z}[\mathcal{V}(\mathcal{G})]\xrightarrow{d^{*}}\mathbf{Z}[\mathcal{E}(\mathcal{G})]$$ is Eisenstein.
\item The cokernel of the map $$\mathbf{Z}[\mathcal{E}(\mathcal{G})]\xrightarrow{d_{*}}\mathbf{Z}[\mathcal{V}(\mathcal{G})]$$ is Eisenstein.
\item There is an isomorphism of Hecke modules $$\mathcal{X}= \ker[\mathbf{Z}[\mathcal{E}(\mathcal{G})]\xrightarrow{d_{*}}\mathbf{Z}[\mathcal{V}(\mathcal{G})]].$$
\item There is an isomorphism of Hecke modules $$\mathcal{X}^{\vee}= \mathrm{coker}[\mathbf{Z}[\mathcal{V}(\mathcal{G})]\xrightarrow{d^{*}}\mathbf{Z}[\mathcal{E}(\mathcal{G})]].$$
\end{enumerate}
\end{proposition}
\begin{proof}
The first statement follows from \cite[\S9.6, Theorem 1]{BLR1}. The second statement is the well-known ``Ihara's lemma'' for definite quaternion algebras, in view of what we have discussed before. See \cite[Theorem 3.15]{Ri-100} for the version we need.
\end{proof}
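Under the identifications above, the intersection matrix $\nabla$ appearing in the proof of Theorem \ref{level-raise-curve} is, up to sign, the composite $d_{*}d^{*}$. As a sketch (we take the bipartite orientation in which every edge runs from $X^{B}_{+}$ to $X^{B}_{-}$, and take for granted the identification of the off-diagonal maps $s_{*}t^{*}$ and $t_{*}s^{*}$ with $T_{p}$, as in the intersection matrix above): each vertex lies on exactly $p+1$ edges, so $s_{*}s^{*}$ acts as multiplication by $p+1$ on $\Lambda[X^{B}_{+}]$ and $t_{*}t^{*}$ as multiplication by $p+1$ on $\Lambda[X^{B}_{-}]$; hence
\begin{equation*}
d_{*}d^{*}=\begin{pmatrix} p+1 & -T_{p}\\ -T_{p} & p+1\end{pmatrix}=-\nabla
\end{equation*}
on $\Lambda[\mathcal{V}(\mathcal{G})]=\Lambda[X^{B}_{+}]\oplus \Lambda[X^{B}_{-}]$.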
\begin{corollary}[{\cite[Corollary 5.18]{BD-Main}}]
Under the assumptions in Theorem \ref{level-raise-curve}, we have a more canonical isomorphism
\begin{equation*}
\Phi_{\mathcal{O}}/I^{[p]}_{f, n} \cong \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \mathcal{O}(1))/I^{[p]}_{f, n}).
\end{equation*}
\end{corollary}
\begin{proof}
Recall the following formula coming out of the weight spectral sequence for Shimura curves
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \mathcal{O}(1))/I^{[p]}_{f, n})\cong \mathrm{coker}[\mathrm{H}^{0}(a_{0*}\mathcal{O})/I^{[p]}_{f, n}\xrightarrow{\rho} \mathrm{H}^{0}(a_{1*}\mathcal{O})/I^{[p]}_{f, n}\xrightarrow{\tau} \mathrm{H}^{2}(a_{0*}\mathcal{O}(1))/I^{[p]}_{f, n}]^{G_{k}}.
\end{equation*}
From the discussion before and Proposition \ref{component-grp} $(1)$, we see that the right-hand side of the above equation can be identified with
$\Phi_{\mathcal{O}}/I^{[p]}_{f,n}$.
\end{proof}
\begin{lemma}\label{comp-char}
Under the assumptions in Theorem \ref{level-raise-curve}, there are isomorphisms
\begin{equation*}
\mathcal{X}^{\vee}_{\mathcal{O}}/I^{[p]}_{f, n}\cong \Phi_{\mathcal{O}}/I^{[p]}_{f, n}\cong S^{B}_{2}(N^{+}, \mathcal{O})/I_{f, n}.
\end{equation*}
\end{lemma}
\begin{proof}
We have an exact sequence
\begin{equation*}
0\rightarrow \mathcal{X}_{\mathcal{O}} \rightarrow \mathcal{X}^{\vee}_{\mathcal{O}} \rightarrow \Phi_{\mathcal{O}}\rightarrow 0
\end{equation*}
by \cite[5.11]{BD-Main} which induces
\begin{equation*}
\mathcal{X}_{\mathcal{O}}/I^{[p]}_{f, n} \rightarrow \mathcal{X}^{\vee}_{\mathcal{O}}/I^{[p]}_{f, n} \rightarrow \Phi_{\mathcal{O}}/I^{[p]}_{f, n}\rightarrow 0.
\end{equation*}
Let $J$ be the Jacobian of the curve $X$. By the proof of \cite[Theorem 5.17]{BD-Main}, the Galois module $\mathrm{Ta}_{l}(J)/I^{[p]}_{f, n}$ is of rank $2$ over $\mathcal{O}_{n}$, and hence $\mathcal{X}^{\vee}_{\mathcal{O}}/I^{[p]}_{f, n}$ is of rank $1$ over $\mathcal{O}_{n}$ by the $p$-adic uniformization of $J$. The result follows as $\Phi_{\mathcal{O}}/I^{[p]}_{f, n}\cong S^{B}_{2}(N^{+}, \mathcal{O})/I_{f, n}$ is also of rank $1$ under our assumptions by the previous discussions.
\end{proof}
\section{Arithmetic level raising for triple product of Shimura curves}
\subsection{Semistable model of triple product of Shimura curves} Recall that in the last section we defined the Shimura curve $X=X^{B^{\prime}}_{N^{+}, pN^{-}}$ with discriminant $pN^{-}$ over $\mathbf{Q}$. Let $X^{3}$ be the triple fiber product of $X$ over $\mathbf{Q}$. Recall that we have the integral model $\mathfrak{X}$ of $X$ defined over $\mathcal{O}_{K}$. Let $\mathfrak{X}^{3}$ be the triple fiber product of $\mathfrak{X}$ over $\mathcal{O}_{K}$. First, we analyze the reduction of $\mathfrak{X}^{3}$. We denote by $X^{3}_{k}$ the special fiber of $\mathfrak{X}^{3}$. By Proposition \ref{curve-red}, we know that each $X_{k}$ can be described as the union $\mathbf{P}^{1}(X^{B}_{+})\cup \mathbf{P}^{1}(X^{B}_{-})$, and therefore the special fiber $X^{3}_{k}$ can be described by the cube given below.
\begin{equation*}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=2em,
column sep=2em]{
& {X}^{045}_{[+-+]} & & {X}^{024}_{[+++]}\\
{X}^{015}_{[--+]}& & {X}^{012}_{[-++]} & \\
& {X}^{345}_{[+--]}& & {X}^{234}_{[++-]} \\
{X}^{135}_{[---]} & & {X}^{123}_{[-+-]} & \\} ;
\path
(m-1-2) edge (m-1-4) edge (m-2-1) edge (m-3-2)
(m-1-4) edge (m-3-4) edge (m-2-3)
(m-2-1) edge [-,line width=6pt,draw=white] (m-2-3) edge (m-2-3) edge (m-4-1)
(m-3-2) edge (m-3-4) edge (m-4-1)
(m-4-1) edge (m-4-3)
(m-3-4) edge (m-4-3)
(m-2-3) edge [-,line width=6pt,draw=white] (m-4-3) edge (m-4-3) ;
\end{tikzpicture}
\end{equation*}
We will explain the meaning of the simplices in this cube.
\begin{itemize}
\item The $0$-simplices are the vertices of the cube. They correspond to $3$-dimensional strata in $X^{3}_{k}$. Consider the vertex labeled by $X^{123}_{[-+-]}$ for example. The superscript $123$ has no real meaning and is simply used for ordering the vertices. This ordering is inherited from Liu's paper \cite{Liu-cubic}, where the labels have real meanings in terms of archimedean places of a cubic field. The subscript $[-+-]$ means that $X^{123}_{[-+-]}$ is of the form
\begin{equation*}
\mathbf{P}^{1}(X^{B}_{-})\times \mathbf{P}^{1}(X^{B}_{+})\times \mathbf{P}^{1}(X^{B}_{-}).
\end{equation*}
\item The $1$-simplices are the edges of the cube. They correspond to $2$-dimensional strata in $X^{3}_{k}$. For example, we will label the edge between ${X}^{135}_{[---]}$ and ${X}^{123}_{[-+-]}$ by $X^{1235}_{[-\pm-]}$: we take the union of the superscripts and the intersection of the subscripts. Then $X^{1235}_{[-\pm-]}$ is of the form
\begin{equation*}
\mathbf{P}^{1}(X^{B}_{-})\times X^{B}_{0}(p) \times \mathbf{P}^{1}(X^{B}_{-}).
\end{equation*}
\item The $2$-simplices are the faces of the cube. They correspond to $1$-dimensional strata in $X^{3}_{k}$. We use a similar convention as in the previous point. For example
\begin{equation*}
X^{01235}_{[-\pm\pm]}=\mathbf{P}^{1}(X^{B}_{-})\times X^{B}_{0}(p) \times X^{B}_{0}(p).
\end{equation*}
\item Finally, the $3$-simplex corresponds to the zero dimensional stratum given by
\begin{equation*}
X^{012345}_{[\pm\pm\pm]}=X^{B}_{0}(p)\times X^{B}_{0}(p) \times X^{B}_{0}(p).
\end{equation*}
\end{itemize}
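As a further illustration of this convention, the edge between the vertices $X^{012}_{[-++]}$ and $X^{123}_{[-+-]}$ carries the label $X^{0123}_{[-+\pm]}$, obtained by taking the union of the superscripts and the intersection of the subscripts, and is of the form
\begin{equation*}
\mathbf{P}^{1}(X^{B}_{-})\times \mathbf{P}^{1}(X^{B}_{+})\times X^{B}_{0}(p).
\end{equation*}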
We will usually drop the subscript from the notations and put it back when we need to know the exact form of the strata. By an easy computation on the local rings, we see that $\mathfrak{X}^{3}$ is not semi-stable. Following the procedure in \cite[Example 6.15]{GS95}, we can obtain a semistable model, denoted by $\mathcal{Y}$, of $\mathfrak{X}^{3}$ over $\mathcal{O}_{K}$. More precisely, to obtain $\mathcal{Y}$, we blow up $\mathfrak{X}^{3}$ along the closed subscheme $X^{135}$, then we blow up the strict transform of $X^{024}$. We denote by $\pi: \mathcal{Y}\rightarrow \mathfrak{X}^{3}$ the natural morphism between these two schemes given by the aforementioned process. The generic fiber $\mathcal{Y}_{K}$ agrees with the generic fiber ${X}^{3}_{K}$ of $\mathfrak{X}^{3}$. The special fiber of $\mathcal{Y}$ will be denoted by $Y_{k}$ and it can be described by the following cube. The densely dotted lines on the cube correspond to the new intersections between three dimensional strata caused by the blow-ups, and they give new two dimensional strata.
\begin{equation}\label{primitive-cube}
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=2em,
column sep=2em]{
& {Y}^{045}_{[+-+]} \vphantom{f^{*}} & & {Y}^{024}_{[+++]}\hphantom{EEE}\\
{Y}^{015}_{[--+]} \vphantom{f^{*}}& & {Y}^{012}_{[-++]} \hphantom{Ee}& \\
& {Y}^{345}_{[+--]} \vphantom{U} & & {Y}^{234}_{[++-]} \vphantom{E}\\
{Y}^{135}_{[---]} \vphantom{M} & & {Y}^{123}_{[-+-]} \vphantom{N} & \\} ;
\path
(m-1-2) edge (m-1-4) edge (m-2-1) edge (m-3-2) edge[densely dotted](m-4-1)
(m-1-4) edge (m-3-4) edge (m-2-3) edge[densely dotted] (m-3-2) edge[densely dotted](m-4-1) edge[densely dotted] (m-2-1) edge[densely dotted] (m-4-3)
(m-2-1) edge [-,line width=6pt,draw=white] (m-2-3) edge (m-2-3) edge (m-4-1)
(m-3-2) edge (m-3-4) edge (m-4-1)
(m-4-1) edge (m-4-3)
(m-3-4) edge (m-4-3) edge[densely dotted](m-4-1)
(m-2-3) edge [-,line width=6pt,draw=white] (m-4-3) edge (m-4-3) edge[densely dotted](m-4-1) ;
\end{tikzpicture}
\end{equation}
We will explain the meaning of some of the simplices of this cube.
\begin{itemize}
\item The $0$-simplices are the vertices of the cube. They correspond to $3$-dimensional strata in ${Y}_{k}$. These are the strict transforms of the corresponding strata in $X^{3}_{k}$. For example, $Y^{012}$ is the strict transform of $X^{012}$ under $\pi$.
\item The $1$-simplices are the edges on the cube. They correspond to $2$-dimensional strata in ${Y}_{k}$. Notice that there are three types of edges:
\begin{enumerate}
\item those corresponding to the original edges in the cube for $X^{3}_{k}$, for example $Y^{0125}=Y^{012}\cap Y^{015}$;
\item those corresponding to the faces in the cube for $X^{3}_{k}$, for example $Y^{01235}=Y^{012}\cap Y^{135}$;
\item the one corresponding to the ``main diagonal" of the cube: $Y^{012345}=Y^{024}\cap Y^{135}$.
\end{enumerate}
\end{itemize}
\begin{proposition} \label{Yk}
We have the following descriptions of the $2$-dimensional and $3$-dimensional strata appearing in the above list.
\begin{enumerate}
\item The three dimensional stratum
\begin{equation*}
Y^{i(i+1)(i+2)}
\end{equation*}
is the blow-up of $X^{i(i+1)(i+2)}$ along the one dimensional stratum $X^{i(i+1)(i+2)(i+3)(i+5)}$ for $i\in \{0,1, 2, 3, 4, 5\}$.
\item The three dimensional stratum
\begin{equation*}
Y^{024}
\end{equation*}
is the blow-up of $X^{024}$ along the zero dimensional stratum $X^{012345}$, followed by the blow-up of the strict transform of $X^{01234}\cup X^{01245}\cup X^{02345}$.
\item The three dimensional stratum
\begin{equation*}
Y^{135}
\end{equation*}
is the blow-up of $X^{135}$ along the zero dimensional stratum $X^{012345}$, followed by the blow-up of the strict transform of $X^{01235}\cup X^{01345}\cup X^{12345}$.
\item The two dimensional stratum
\begin{equation*}
Y^{i(i+1)(i+2)(i+3)}
\end{equation*}
maps isomorphically to $X^{i(i+1)(i+2)(i+3)}$ for $i\in\{0, 1, 2, 3, 4, 5\}$.
\item The two dimensional stratum
\begin{equation*}
Y^{i(i+1)(i+2)(i+4)}
\end{equation*}
is the blow-up of $X^{i(i+1)(i+2)(i+4)}$ along $X^{012345}$ for $i\in\{0, 1, 2, 3, 4, 5\}$.
\item The two dimensional stratum
\begin{equation*}
Y^{i(i+1)(i+2)(i+3)(i+5)}
\end{equation*}
is a $\mathbf{P}^{1}$-bundle over $X^{i(i+1)(i+2)(i+3)(i+5)}$ for $i\in\{0, 1, 2, 3, 4, 5\}$. In fact, $Y^{i(i+1)(i+2)(i+3)(i+5)}$ is the exceptional divisor of the blow-up
$\pi: Y^{i(i+1)(i+2)}\rightarrow X^{i(i+1)(i+2)}$.
\item The two dimensional stratum
\begin{equation*}
Y^{012345}
\end{equation*}
is a $\mathbf{P}^{2}$-bundle over $X^{012345}$.
\end{enumerate}
Moreover all the strata of dimension $2$ or dimension $3$ are given in the above list.
\end{proposition}
\begin{proof}
The proofs of these statements are exactly the same as those given in \cite[Proposition B.39]{Liu-cubic}. Although we are working with different Shimura varieties, the underlying local models are the same.
\end{proof}
\subsection{Cohomology of blow-ups} We insert here a quick discussion of how to compute the cohomology of varieties obtained by blow-ups, following \cite[\S 3]{Ito-p-uni}. Let $F$ be an algebraically closed field of any characteristic and let $X$ be a projective smooth irreducible variety over $F$ of dimension $n$. Let $Y_{1}, \cdots, Y_{r}\subset X$ be mutually disjoint smooth closed irreducible subvarieties of codimension $d\geq 2$. Let $i: Y=\sqcup^{r}_{t=1} Y_{t} \hookrightarrow X$ be the closed immersion. Let $\pi: \tilde{X}\rightarrow X$ be the blow-up of $X$ along $Y$ and let $\tilde{Y}$ be the exceptional divisor, that is, the preimage of $Y$ in $\tilde{X}$. These maps fit into the following cartesian diagram
\begin{equation}
\begin{tikzcd}
\tilde{Y} \arrow[r, "\tilde{i}"] \arrow[d, "\pi_{\mid \tilde{Y}}"] & \tilde{X} \arrow[d, "\pi"] \\
Y \arrow[r, "i"] & X .
\end{tikzcd}
\end{equation}
Since $Y$ is a disjoint union of smooth irreducible closed subvarieties of codimension $d$, $\tilde{Y}$ is a $\mathbf{P}^{d-1}$-bundle over $Y$.
Let $\xi=c_{1}(\mathcal{O}_{\tilde{Y}}(1))\in \mathrm{H}^{2}(\tilde{Y}, \Lambda(1))$ be the first Chern class of $\mathcal{O}_{\tilde{Y}}(1)$ of the $\mathbf{P}^{d-1}$-bundle $\tilde{Y}$. \begin{proposition}\label{bl-coho}
We have the following formula
\begin{equation*}
\begin{aligned}
\mathrm{H}^{k}(\tilde{X}, \Lambda(n))&\cong \mathrm{H}^{k-2}(Y, \Lambda(n-1))\oplus \cdots \oplus \mathrm{H}^{k-2-2(d-2)}(Y, \Lambda(n-(d-1)))\xi^{d-2}
\oplus \mathrm{H}^{k}(X, \Lambda(n)). \\
\end{aligned}
\end{equation*}
\end{proposition}
\begin{proof}
This is a well-known result from \cite{sga5} and one can find a nice exposition of it in \cite[(3.3)]{Ito-p-uni}.
\end{proof}
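For instance, when $d=2$, which is the case of the blow-ups along one dimensional strata appearing below, the formula specializes to
\begin{equation*}
\mathrm{H}^{k}(\tilde{X}, \Lambda(n))\cong \mathrm{H}^{k}(X, \Lambda(n))\oplus \mathrm{H}^{k-2}(Y, \Lambda(n-1)),
\end{equation*}
so each blow-up center contributes a single shifted and twisted copy of its cohomology.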
\subsection{Level raising on triple product of Shimura curves} As in \S 2.1, $N$ is a positive integer which admits a factorization $N=N^{+}N^{-}$ with $(N^{+}, N^{-})=1$ such that $N^{-}$ is square free and has an odd number of prime factors. Let
\begin{equation*}
\begin{aligned}
&f_{1}=\sum_{n\geq 1} a_{n}(f_{1})q^{n},\\
&f_{2}=\sum_{n\geq 1} a_{n}(f_{2})q^{n},\\
&f_{3}=\sum_{n\geq 1} a_{n}(f_{3})q^{n} \\
\end{aligned}
\end{equation*}
be a triple of weight $2$ normalized newforms in $ S^{\mathrm{new}}_{2}(\Gamma_{0}(N))$. We write $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$.
For each $i=1, 2, 3$, let $E_{i}=\mathbf{Q}(f_{i})$ be the Hecke field of $f_{i}$. Let $\mathcal{O}_{i}$ be a finite extension of $\Lambda$ containing the ring of integers of $E_{i}$ with uniformizer $\varpi_{i}$. We will write $\mathbf{F}_{i}$ for the residue field of $\mathcal{O}_{i}$. Let $p$ be a prime away from $N$. Recall that $\mathbb{T}=\mathbb{T}_{N^{+}, N^{-}}$, respectively $\mathbb{T}^{[p]}=\mathbb{T}_{N^{+}, N^{-}p}$, is the $l$-adic Hecke algebra corresponding to the subspace of the cusp forms of level $N=N^{+}N^{-}$, respectively of level $Np=N^{+}N^{-}p$, which are new at primes dividing $N^{-}$, respectively at $pN^{-}$. Since $(f_{1}, f_{2}, f_{3})$ is a triple of eigenforms, we have morphisms
$\phi_{i}: \mathbb{T}\rightarrow \mathcal{O}_{i}$ corresponding to $f_{i}$ and $\phi_{i, n}: \mathbb{T}\rightarrow \mathcal{O}_{i, n}=\mathcal{O}_{i}/\varpi^{n}_{i}$ corresponding to the reduction of $\phi_{i}$ modulo $\varpi^{n}_{i}$ for $i=1, 2, 3$.
\begin{definition}
Let $n\geq 1$ be an integer. We say that a prime $p$ is \emph{$n$-admissible} for $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ if
\begin{enumerate}
\item $p\nmid Nl$;
\item $l\nmid p^{2}-1$;
\item $\varpi_{i}^{n}\mid p+1-\epsilon_{p,i}a_{p}(f_{i})$ with $\epsilon_{p,i}=\pm1$ for $i=1, 2, 3$;
\item $\epsilon_{p,1}\epsilon_{p,2}\epsilon_{p,3}=1$.
\end{enumerate}
\end{definition}
Let $p$ be an \emph{$n$-admissible} prime for $\underline{\mathbf{f}}$. We know by Theorem \ref{level-raise-curve} that there are morphisms
$\phi^{[p]}_{i, n}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}_{i, n}$ that agree with $\phi_{i, n}: \mathbb{T}\rightarrow \mathcal{O}_{i, n}$
at all Hecke operators except those at $p$ and such that $\phi^{[p]}_{i, n}(U_{p})=\epsilon_{p, i}$ for $i=1,2,3$. We denote by $I_{{i}, n}$, respectively $I^{[p]}_{{i}, n}$, the kernel of the map $\phi_{i, n}$, respectively the kernel of the map $\phi^{[p]}_{i, n}$. We also let $\mathfrak{m}_{i}$ be the maximal ideal of $\mathbb{T}$ containing $I_{{i}, n}$ and let $\mathfrak{m}^{[p]}_{i}$ be the maximal ideal of $\mathbb{T}^{[p]}$ containing $I^{[p]}_{{i}, n}$. We will always assume that the maximal ideal $\mathfrak{m}_{i}$ is residually irreducible. Let $\mathfrak{m}_{\underline{\mathbf{f}}}=(\mathfrak{m}_{1}, \mathfrak{m}_{2}, \mathfrak{m}_{3})$ and $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}=(\mathfrak{m}^{[p]}_{1}, \mathfrak{m}^{[p]}_{2}, \mathfrak{m}^{[p]}_{3})$.
There is an action of $\mathbb{T}^{[p]}\times \mathbb{T}^{[p]}\times \mathbb{T}^{[p]}$ on the $l$-adic cohomology of $\mathfrak{X}^{3}$ which extends to that of $\mathcal{Y}$. Let $\Lambda=\mathbf{Z}_{l}$ from now on. By the K\"{u}nneth formula it makes sense to localize the cohomology
\begin{equation*}
\mathrm{H}^{3}(\mathcal{Y}\otimes{K^{\mathrm{ac}}}, \Lambda(2))=\mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))
\end{equation*}
at the triple $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$:
\begin{equation}\label{Kunneth}
\mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\otimes^{3}_{i=1}\mathrm{H}^{1}(X_{K^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{i}}(-1).
\end{equation}
This follows from the K\"{u}nneth formula and the fact that the $\mathrm{H}^{0}$ and $\mathrm{H}^{2}$ of the Shimura curve $X_{K^{\mathrm{ac}}}$ are Eisenstein as Hecke modules. The same reasoning implies that $\mathrm{H}^{*}(\mathfrak{X}^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(r))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ is non-zero only when $*=3$ for any integer $r$. To analyze $\mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))$, we will localize its weight spectral sequence, whose first page is given by \eqref{E1-primitive}, at the triple $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$ and denote the resulting spectral sequence by $\mathbb{E}(2)$. The ``untwisted" spectral sequence will be denoted by $\mathbb{E}$.
\begin{lemma}
The $\mathrm{E}_{1}$-page of the spectral sequence $\mathbb{E}(2)$ is given below
\begin{equation*}\label{E1}
\begin{tikzpicture}[thick,scale=0.9, every node/.style={scale=0.6}]
\matrix (m) [matrix of math nodes,
nodes in empty cells,nodes={minimum width=5ex,
minimum height=5ex,outer sep=-5pt},
column sep=1ex,row sep=1ex]{
& & & & \\
6 &\mathrm{H}^{0}(a_{3*}\Lambda(-1))_{*} & \mathrm{H}^{2}(a_{2*}\Lambda)_{*}&\mathrm{H}^{4}(a_{1*}\Lambda(1))_{*} & \mathrm{H}^{6}(a_{0*}\Lambda(2))_{*} \\
4 & &\mathrm{H}^{0}(a_{2*}\Lambda)_{*} &\mathrm{H}^{2}(a_{1*}\Lambda(1))_{*}\oplus \mathrm{H}^{0}(a_{3*}\Lambda)_{*} &\mathrm{H}^{4}(a_{0*}\Lambda(2))_{*}\oplus\mathrm{H}^{2}(a_{2*}\Lambda(1))_{*}&\mathrm{H}^{4}(a_{1*}\Lambda(2))_{*}\\
2 & & &\mathrm{H}^{0}(a_{1*}\Lambda(1))_{*} &\mathrm{H}^{2}(a_{0*}\Lambda(2))_{*}\oplus\mathrm{H}^{0}(a_{2*}\Lambda(1))_{*} &\mathrm{H}^{2}(a_{1*}\Lambda(2))_{*}\oplus\mathrm{H}^{0}(a_{3*}\Lambda(1))_{*}&\mathrm{H}^{2}(a_{2*}\Lambda(2))_{*}\\
0 & & & &\mathrm{H}^{0}(a_{0*}\Lambda(2))_{*} &\mathrm{H}^{0}(a_{1*}\Lambda(2))_{*} & \mathrm{H}^{0}(a_{2*}\Lambda(2))_{*}&\mathrm{H}^{0}(a_{3*}\Lambda(2))_{*}&\\
\quad\strut & -3 & -2 & -1 & 0 &1 &2 &3 & \strut \\};
\draw[thick] (m-1-1.east) -- (m-6-1.east) ;
\draw[thick] (m-6-1.north) -- (m-6-9.north) ;
\end{tikzpicture}
\end{equation*}
where the subscript ${*}$ means localization at $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$. In particular it follows that $\mathbb{E}(2)$ degenerates on its $\mathrm{E}_{2}$-page.
\end{lemma}
\begin{proof}
This follows from Proposition \ref{bl-coho} and the explicit description of $Y_{k}$ in \eqref{primitive-cube}, which allow us to remove all the odd degree cohomology terms.
\end{proof}
\begin{lemma}
The spectral sequence $\mathbb{E}(2)$ satisfies Assumption \ref{assump-E}.
\end{lemma}
\begin{proof}
We need to analyze the cohomology term $\mathrm{H}^{2}(a_{1*}\Lambda(1))$. First we have
\begin{equation}\label{a1}
\begin{aligned}
\mathrm{H}^{2}(a_{1*}\Lambda(1))= &\oplus^{5}_{i=0}\mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+3)}_{k^{\mathrm{ac}}}, \Lambda(1))\\
&\oplus\oplus^{5}_{i=0}\mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+4)}_{k^{\mathrm{ac}}}, \Lambda(1))\\
&\oplus\oplus^{5}_{i=0} \mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+3)(i+5)}_{k^{\mathrm{ac}}}, \Lambda(1))\\
&\oplus\mathrm{H}^{2}(Y^{012345}_{k^{\mathrm{ac}}}, \Lambda(1))\\
\end{aligned}
\end{equation}
by Proposition \ref{Yk}.
We explicate the terms in the direct sum on the right-hand side of \eqref{a1}.
\begin{itemize}
\item For $\mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+3)}_{k^{\mathrm{ac}}}, \Lambda(1))$, we only make it explicit for $i=0$. By Proposition \ref{Yk} $(4)$, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(Y^{0123}_{k^{\mathrm{ac}}}, \Lambda(1))&= \mathrm{H}^{2}(X^{0123}_{[-+\pm], k^{\mathrm{ac}}}, \Lambda(1))\\
&=\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\\
&\oplus\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)\otimes \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda).\\
\end{aligned}
\end{equation*}
\item For $\mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+4)}_{k^{\mathrm{ac}}}, \Lambda(1))$, we only make it explicit for $i=0$. By Proposition \ref{Yk} $(5)$, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(Y^{0124}_{k^{\mathrm{ac}}}, \Lambda(1))&= \mathrm{H}^{2}(X^{0124}_{[\pm++], k^{\mathrm{ac}}}, \Lambda(1))\oplus \mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda)\\
&=\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\\
&\oplus\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))\\
& \oplus \mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda).\\
\end{aligned}
\end{equation*}
\item For $\mathrm{H}^{2}(Y^{i(i+1)(i+2)(i+3)(i+5)}_{k^{\mathrm{ac}}}, \Lambda(1))$, we again only make it explicit for $i=0$. By Proposition \ref{Yk} $(6)$, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(Y^{01235}_{k^{\mathrm{ac}}}, \Lambda(1))&= \mathrm{H}^{2}(\mathbf{P}^{1}(X^{01235}_{[-\pm\pm], k^{\mathrm{ac}}}), \Lambda(1))\\
&= \mathrm{H}^{0}(X^{01235}_{[-\pm\pm], k^{\mathrm{ac}}}, \Lambda) \\
&=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda).\\
\end{aligned}
\end{equation*}
\item By Proposition \ref{Yk} $(7)$, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(Y^{012345}_{k^{\mathrm{ac}}}, \Lambda(1))&=\mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda)\\
&=\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda).
\end{aligned}
\end{equation*}
\end{itemize}
It follows from the above calculation that $G_{k}$ acts trivially on $\mathrm{H}^{2}(a_{1*}\Lambda(1))$. Using this, we can immediately verify that Assumption \ref{assump-E} is satisfied for the spectral sequence $\mathbb{E}(2)$.
\end{proof}
The above lemma allows us to apply the exact sequence in \eqref{sing-exact} to calculate
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{3}(X^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}).
\end{equation*}
For $i=1, 2, 3$, we will set
\begin{equation*}
\mathrm{T}^{[p]}_{i}=\mathrm{H}^{1}(X_{\mathbf{Q}^{\mathrm{ac}}}, \mathcal{O}_{i}(1))_{\mathfrak{m}^{[p]}_{i}}
\end{equation*}
and let $\mathrm{T}^{[p]}_{i, n}= \mathrm{T}^{[p]}_{i}/I^{[p]}_{i, n}$ be the natural quotient.
Let
\begin{equation*}
\mathrm{M}^{[p]}(\underline{\mathbf{f}})=\mathrm{T}^{[p]}_{1}\otimes \mathrm{T}^{[p]}_{2}\otimes \mathrm{T}^{[p]}_{3}
\end{equation*}
which is a $G_{\mathbf{Q}}$-module over $\mathcal{O}=\mathcal{O}_{1}\otimes\mathcal{O}_{2}\otimes\mathcal{O}_{3}$ and let
\begin{equation*}
\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})=\mathrm{T}^{[p]}_{1, n}\otimes \mathrm{T}^{[p]}_{2, n}\otimes \mathrm{T}^{[p]}_{3, n}
\end{equation*}
which is a $G_{\mathbf{Q}}$-module over $\mathcal{O}_{n}=\mathcal{O}_{1, n}\otimes \mathcal{O}_{2, n}\otimes \mathcal{O}_{3, n}$.
\begin{lemma}\label{3-fin}
Let $p$ be an $n$-admissible prime for $\underline{\mathbf{f}}$. The module $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})$ is unramified when viewed as an $\mathcal{O}_{n}[G_{K}]$-module and is isomorphic to
\begin{equation*}
\mathcal{O}_{n}\oplus\mathcal{O}^{\oplus 3}_{n}(1)\oplus\mathcal{O}^{\oplus 3}_{n}(2) \oplus \mathcal{O}_{n}(3).
\end{equation*}
The singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ is free of rank $3$ over $\mathcal{O}_{n}$.
\end{lemma}
\begin{proof}
The first part follows from the fact that $p$ is an $n$-admissible prime for $\underline{\mathbf{f}}$. In particular, $(3)$ in the definition of an $n$-admissible prime for $\underline{\mathbf{f}}$ implies that $\mathrm{T}^{[p]}_{i, n}\cong \mathcal{O}_{i, n}\oplus\mathcal{O}_{i, n}(1)$ as an $\mathcal{O}_{i, n}[G_{K}]$-module. See also Lemma \ref{p-lower}. For the second part, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) &\cong\mathrm{H}^{1}(I_{K}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))^{G_{k}} \\
&\cong\mathrm{Hom}(I_{K}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))^{G_{k}} \\
&\cong\mathrm{Hom}(\Lambda(1), \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))^{G_{k}} .\\
\end{aligned}
\end{equation*}
Then the result follows from the first part.
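Indeed, the first part gives
\begin{equation*}
\mathrm{Hom}(\Lambda(1), \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-2)\cong \mathcal{O}_{n}(-2)\oplus\mathcal{O}^{\oplus 3}_{n}(-1)\oplus\mathcal{O}^{\oplus 3}_{n}\oplus \mathcal{O}_{n}(1),
\end{equation*}
and since $l\nmid p^{2}-1$, the Frobenius acts on each summand $\mathcal{O}_{n}(j)$ with $j\neq 0$ by a unit not congruent to $1$, so the $G_{k}$-invariants are exactly the untwisted part $\mathcal{O}^{\oplus 3}_{n}$, which is free of rank $3$.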
\end{proof}
The following theorem is the \emph{ramified arithmetic level raising for the triple product of Shimura curves} and is one of the main results in this article. Recall that we denote by $\Phi$ the group of connected components of the special fiber of the N\'{e}ron model of the Jacobian of the Shimura curve $X_{K}$.
\begin{theorem}\label{arithmetic-level-raising}
Let $p$ be an $n$-admissible prime for the triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$. For $i=1, 2, 3$, assume that
\begin{enumerate}
\item the maximal ideal $\mathfrak{m}_{i}$ is residually irreducible;
\item each $S^{B}_{2}(N^{+}, \mathcal{O}_{i})_{\mathfrak{m}_{i}}$ is free of rank $1$ over $\mathbb{T}_{\mathfrak{m}_{i}}$.
\end{enumerate}
Then we have an isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}){/I_{i,n}}).
\end{equation*}
More canonically, we have the isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}\Phi_{\mathcal{O}_{i}}/I^{[p]}_{i,n}).
\end{equation*}
\end{theorem}
\subsection{Proof of the arithmetic level raising} To prove the preceding theorem, we need a different presentation of the potential map. Recall that we have the following exact sequence
\begin{equation*}
A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\nabla}A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\eta} \mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{H}^{3}(X^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}})\rightarrow 0
\end{equation*}
where
\begin{equation*}
\begin{aligned}
&A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{im}[\mathrm{H}^{2}(a_{1*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\tau}\mathrm{H}^{4}(a_{0*}\Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}]^{G_{k}}\\
&A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{im}[\mathrm{H}^{2}(a_{0*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\rho}\mathrm{H}^{2}(a_{1*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}]^{G_{k}}.\\
\end{aligned}
\end{equation*}
\begin{lemma}
We have the following statements.
\begin{enumerate}
\item $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong\mathrm{coker}[\mathrm{H}^{0}(a_{1*}\Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\tau}\mathrm{H}^{2}(a_{0*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}]^{G_{k}}$.
\item $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong\ker[\mathrm{H}^{4}(a_{0*}\Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\rho}\mathrm{H}^{4}(a_{1*}\Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}]^{G_{k}}$.
\item The map $\nabla$ is induced by the composite $\tau\circ\rho$ under the above isomorphisms.
\end{enumerate}
\end{lemma}
\begin{proof}
By the previous reasoning, $\mathrm{H}^{2}(X^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ vanishes as it is Eisenstein, and therefore $\mathbb{E}^{0, 2}_{2}(1)$ is zero. It follows then that
\begin{equation*}
\ker[\mathrm{H}^{2}(a_{0*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\rho}\mathrm{H}^{2}(a_{1*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}]\cong \mathrm{im}[\mathrm{H}^{0}(a_{1*}\Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\tau}\mathrm{H}^{2}(a_{0*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}].
\end{equation*}
Then $(1)$ is clear. The proof of $(2)$ is completely analogous, using this time the fact that $\mathrm{H}^{4}(X^{3}\otimes{K^{\mathrm{ac}}}, \Lambda(2))$ and thus $\mathbb{E}^{0, 4}_{2}(2)$ vanish. The rest of the claims follow by construction.
\end{proof}
\begin{lemma}\label{feature-cycle}
We have the following isomorphisms
\begin{enumerate}
\item $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong\oplus^{5}_{i=0}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$;
\item $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong\oplus^{5}_{i=0}\mathrm{H}^{2}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The second part follows from the first part by duality, so we only prove the first part. By Proposition \ref{Yk}, we have
\begin{equation*}
\mathrm{H}^{2}(a_{0*}\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\oplus^{5}_{i=0}\mathrm{H}^{2}(Y^{i(i+1)(i+2)}_{k^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{2}(Y^{024}_{k^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{2}(Y^{135}_{k^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}.
\end{equation*}
By Proposition \ref{Yk} $(1)$ and Proposition \ref{bl-coho}, we have
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{2}(Y^{i(i+1)(i+2)}_{k^{\mathrm{ac}}}, \Lambda(1))=\mathrm{H}^{2}(X^{i(i+1)(i+2)}_{k^{\mathrm{ac}}}, \Lambda(1))\oplus \mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k^{\mathrm{ac}}}, \Lambda);\\
&\mathrm{H}^{2}(Y^{024}_{k^{\mathrm{ac}}}, \Lambda(1))=\mathrm{H}^{2}(X^{024}_{k^{\mathrm{ac}}}, \Lambda(1))\oplus \mathrm{H}^{0}(X^{01234}_{k^{\mathrm{ac}}}, \Lambda) \oplus \mathrm{H}^{0}(X^{01245}_{k^{\mathrm{ac}}}, \Lambda) \oplus \mathrm{H}^{0}(X^{02345}_{k^{\mathrm{ac}}}, \Lambda)\oplus \mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda);\\
&\mathrm{H}^{2}(Y^{135}_{k^{\mathrm{ac}}}, \Lambda(1))=\mathrm{H}^{2}(X^{135}_{k^{\mathrm{ac}}}, \Lambda(1))\oplus \mathrm{H}^{0}(X^{01235}_{k^{\mathrm{ac}}}, \Lambda) \oplus \mathrm{H}^{0}(X^{01345}_{k^{\mathrm{ac}}}, \Lambda) \oplus \mathrm{H}^{0}(X^{12345}_{k^{\mathrm{ac}}}, \Lambda)\oplus \mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda).
\end{aligned}
\end{equation*}
On the other hand, we have
\begin{equation*}
\begin{aligned}
\mathrm{H}^{0}(a_{1*}\Lambda)= &\oplus^{5}_{i=0}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)}_{k^{\mathrm{ac}}}, \Lambda)\\
&\oplus\oplus^{5}_{i=0}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+4)}_{k^{\mathrm{ac}}}, \Lambda)\\
&\oplus\oplus^{5}_{i=0} \mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k^{\mathrm{ac}}}, \Lambda)\\
&\oplus\mathrm{H}^{0}(X^{012345}_{k^{\mathrm{ac}}}, \Lambda).\\
\end{aligned}
\end{equation*}
We claim that the terms
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(X^{i(i+1)(i+2)}_{k^{\mathrm{ac}}}, \Lambda(1)), \hphantom{a} \mathrm{H}^{2}(X^{024}_{k^{\mathrm{ac}}}, \Lambda(1)),\hphantom{b}\mathrm{H}^{2}(X^{135}_{k^{\mathrm{ac}}}, \Lambda(1))\\
\end{aligned}
\end{equation*}
in $\mathrm{H}^{2}(a_{0*}\Lambda(1))$ are cancelled by the terms
\begin{equation*}
\begin{aligned}
\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)}_{k^{\mathrm{ac}}}, \Lambda),\hphantom{c}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+4)}_{k^{\mathrm{ac}}}, \Lambda)\\
\end{aligned}
\end{equation*}
in $\mathrm{H}^{0}(a_{1*}\Lambda)$ under the Gysin map $\tau$ after localizing at $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$. For example, consider the term
\begin{equation*}
\begin{aligned}
\mathrm{H}^{2}(X^{012}_{[-++],k^{\mathrm{ac}}}, \Lambda(1))&=\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\\
&\oplus \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)\otimes \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\\
&\oplus \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))\\
\end{aligned}
\end{equation*}
then it is cancelled by
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{0}(X^{0124}_{[\pm++], k^{\mathrm{ac}}}, \Lambda)=\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda) \otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\\
&\mathrm{H}^{0}(X^{0245}_{[+\pm+], k^{\mathrm{ac}}}, \Lambda)=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda) \otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\\
&\mathrm{H}^{0}(X^{0234}_{[++\pm], k^{\mathrm{ac}}}, \Lambda)=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda) \otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\\
\end{aligned}
\end{equation*}
after localizing at $\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}$. This follows from Proposition \ref{component-grp} $(4)$, which shows that the Gysin map is surjective up to an Eisenstein part. Similar computations in the other cases prove the claim. Then it is clear that
\begin{equation*}
A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong\oplus^{5}_{i=0}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}.
\end{equation*}
This finishes the proof of the first part.
\end{proof}
\begin{remark}
Those surfaces of the form $Y^{i(i+1)(i+2)(i+3)(i+5)}$ and $Y^{012345}$ should be considered as the \emph{featuring cycles} in the terminology of \cite[\S 3.2]{Liu-cubic}. To compute the potential map, we need to understand the intersection matrix of these featuring cycles. The computation below can be viewed as a down-to-earth way of computing such intersection matrix although the intersection numbers do not appear explicitly. \end{remark}
\begin{myproof}{Theorem}{\ref{arithmetic-level-raising}}
The first isomorphism in the statement of the theorem is clearly a consequence of the second more canonical isomorphism in light of what we have explained in the Shimura curve case in Lemma \ref{comp-char}. Therefore we will prove the more canonical isomorphism.
Recall that we have by Lemma \ref{feature-cycle}
\begin{enumerate}
\item $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\oplus^{5}_{i=0}\mathrm{H}^{0}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$;
\item $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\oplus^{5}_{i=0}\mathrm{H}^{2}(X^{i(i+1)(i+2)(i+3)(i+5)}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$.
\end{enumerate}
with the potential map $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\nabla} A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ induced by $\tau\circ\rho$.
We consider the terms
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{2}(X^{01235}_{[-\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\\
&\mathrm{H}^{2}(X^{02345}_{[+\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda(1))_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\\
\end{aligned}
\end{equation*}
in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ and consider the images of the terms
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{0}(X^{01235}_{[-\pm\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\\
&\mathrm{H}^{0}(X^{02345}_{[+\pm\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}=\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\\
\end{aligned}
\end{equation*}
from $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ via $\nabla$. The map $\nabla$ restricted to $\mathrm{H}^{0}(X^{01235}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ fits into the following diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{2}(Y^{135}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r, "\rho"] &\mathrm{H}^{2}(Y^{012345}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r,"\tau"] & \mathrm{H}^{4}(Y^{135}_{k}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \\
\mathrm{H}^{0}(X^{01235}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\arrow[u, hook] \arrow[r, "\rho"] & \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook] \arrow[r, "\tau"] & \mathrm{H}^{2}(X^{01235}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook]
\end{tikzcd}
\end{equation*}
and similarly the map $\nabla$ restricted to $\mathrm{H}^{0}(X^{02345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ fits into the following diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{2}(Y^{024}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r, "\rho"] &\mathrm{H}^{2}(Y^{012345}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r,"\tau"] & \mathrm{H}^{4}(Y^{024}_{k}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \\
\mathrm{H}^{0}(X^{02345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\arrow[u, hook] \arrow[r, "\rho"] & \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook] \arrow[r, "\tau"] & \mathrm{H}^{2}(X^{02345}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}. \arrow[u, hook]
\end{tikzcd}
\end{equation*}
Here the top row of the diagram identifies the locations of the featuring cycles and the bottom row is the actual restriction of the map $\nabla$ to the featuring cycles. Putting these diagrams together, the restriction of $\nabla$ at
\begin{equation*}
\mathrm{H}^{0}(X^{01235}_{[-\pm\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus\mathrm{H}^{0}(X^{02345}_{[+\pm\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\xrightarrow{\nabla} \mathrm{H}^{2}(X^{01235}_{[-\pm\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus\mathrm{H}^{2}(X^{02345}_{[+\pm\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
is given by the composite
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\arrow[d] \\
\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}} \arrow[d] \\
\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda(1))_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\\
\end{tikzcd}
\end{equation*}
where we write $\mathrm{H}^{*}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda)$ for $\mathrm{H}^{*}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)\oplus \mathrm{H}^{*}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)$.
Notice that the composite of the above maps is given by the intersection matrix
\begin{equation*}
\begin{pmatrix}
-(p+1) &T_{p}\\
T_{p} &-(p+1)\\
\end{pmatrix}
\end{equation*}
on the first factors of the tensor products and by the identity map on the remaining two factors, by what we have explained in the Shimura curve case treated in Theorem \ref{level-raise-curve}. After taking the quotient by the ideals $(I^{[p]}_{1, n}, I^{[p]}_{2, n}, I^{[p]}_{3, n})$, we see that the quotient of
\begin{equation}
\mathrm{H}^{2}(X^{01235}_{[-\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus \mathrm{H}^{2}(X^{02345}_{[+\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation}
in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ by the image of
\begin{equation}
\mathrm{H}^{0}(X^{01235}_{[-\pm\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus\mathrm{H}^{0}(X^{02345}_{[+\pm\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation}
in $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$
can be identified with
\begin{equation*}
\Phi_{\Lambda}/I^{[p]}_{1, n}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}/I^{[p]}_{2,n}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}/I^{[p]}_{3,n}.
\end{equation*}
Next we consider the term
\begin{equation*}
\mathrm{H}^{2}(X^{01235}_{[-\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ and the image of the terms
\begin{equation*}
\mathrm{H}^{0}(X^{01345}_{[\pm-\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\hphantom{a}\text{and}\hphantom{a}\mathrm{H}^{0}(X^{01234}_{[\pm+\pm]}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
in $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ via $\nabla$.
Then we have diagrams similar to those in the previous case
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{2}(Y^{135}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r, "\rho"] &\mathrm{H}^{2}(Y^{012345}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r,"\tau"] & \mathrm{H}^{4}(Y^{135}_{k},\Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \\
\mathrm{H}^{0}(X^{01345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\arrow[u, hook] \arrow[r, "\rho"] & \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook] \arrow[r, "\tau"] & \mathrm{H}^{2}(X^{01235}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook]
\end{tikzcd}
\end{equation*}
and
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{2}(Y^{024}_{k},\Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r, "\rho"] &\mathrm{H}^{2}(Y^{012345}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[r,"\tau"] & \mathrm{H}^{4}(Y^{135}_{k}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \\
\mathrm{H}^{0}(X^{01234}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\arrow[u, hook] \arrow[r, "\rho"] & \mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \arrow[u, hook] \arrow[r, "\tau"] & \mathrm{H}^{2}(X^{01235}_{k}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}. \arrow[u, hook]
\end{tikzcd}
\end{equation*}
Putting these together, the restriction of $\nabla$ at
\begin{equation*}
\mathrm{H}^{0}(X^{01345}_{[\pm-\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus\mathrm{H}^{0}(X^{01234}_{[\pm+\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}} \xrightarrow{\nabla} \mathrm{H}^{2}(X^{01235}_{[-\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
is given by
\begin{equation*}
\begin{tikzcd}
\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}\arrow[d] \\
\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}} \arrow[d] \\
\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))_{\mathfrak{m}^{[p]}_{1}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{3}}.\\
\end{tikzcd}
\end{equation*}
In the above diagram, the first arrow is given by the restriction morphism
\begin{equation*}
\mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\oplus \mathrm{H}^{0}(\mathbf{P}^{1}(X^{B}_{+}), \Lambda)_{\mathfrak{m}^{[p]}_{2}}\rightarrow \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{2}}
\end{equation*}
on the second factor of the tensor product and by the identity map on the remaining factors. The second arrow is given by the Gysin morphism
\begin{equation*}
\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)_{\mathfrak{m}^{[p]}_{1}}\xrightarrow{\tau} \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))_{\mathfrak{m}^{[p]}_{1}}
\end{equation*}
on the first factor of the tensor product, which is surjective up to the Eisenstein part by Proposition \ref{component-grp} $(2)$. Therefore by Proposition \ref{component-grp} $(3)$, the quotient of the term
\begin{equation*}
\mathrm{H}^{2}(X^{01235}_{[-\pm\pm]}, \Lambda(1))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ by the images of the terms
\begin{equation*}
\mathrm{H}^{0}(X^{01345}_{[\pm-\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\oplus\mathrm{H}^{0}(X^{01234}_{[\pm+\pm], k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}
\end{equation*}
in $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ is given by
\begin{equation*}
\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{-}), \Lambda(1))/I^{[p]}_{1, n}\otimes\mathcal{X}^{\vee}_{\Lambda}/I^{[p]}_{2, n}\otimes \mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)/I^{[p]}_{3, n}
\end{equation*}
after taking the quotient by the ideals $(I^{[p]}_{1, n}, I^{[p]}_{2, n}, I^{[p]}_{3, n})$.
Finally, we consider the term $\mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ in $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ and the same cohomology group $\mathrm{H}^{0}(X^{012345}_{k}, \Lambda)_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$. It is not difficult to see, by reasoning similar to the above, that the restriction of $\nabla$ to this term is given by the identity map. Therefore this term does not contribute to the quotient.
Making similar computations for all the terms in $A_{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ and $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$, we obtain by \eqref{sing-exact} the following isomorphism
\begin{equation*}
\begin{aligned}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \cong& \Phi_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes \mathcal{X}^{\vee}_{\mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes \mathcal{X}^{\vee}_{\mathcal{O}_{3}}/I^{[p]}_{3,n}\\
&\oplus\ \mathcal{X}^{\vee}_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes\Phi_{\mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes \mathcal{X}^{\vee}_{\mathcal{O}_{3}}/I^{[p]}_{3,n}\\
&\oplus\ \mathcal{X}^{\vee}_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes \mathcal{X}^{\vee}_{\mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes\Phi_{\mathcal{O}_{3}}/I^{[p]}_{3,n}.
\end{aligned}
\end{equation*}
We then apply Lemma \ref{comp-char} and arrive at the desired isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(K, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \cong (\Phi_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes \Phi_{\mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes \Phi_{\mathcal{O}_{3}}/I^{[p]}_{3,n})^{\oplus 3}.
\end{equation*}
\end{myproof}
\begin{corollary}\label{main-coro}
Let $p$ be an $n$-admissible prime for $\underline{\mathbf{f}}$. Under the assumptions of Theorem \ref{arithmetic-level-raising}, we have the following statements.
\begin{enumerate}
\item The singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ is free of rank $3$ over $\mathcal{O}_{n}$.
\item We have a canonical isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}\Phi_{\mathcal{O}_{i}}/I^{[p]}_{i,n})
\end{equation*}
which induces an isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong \oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i})/I_{i,n}).
\end{equation*}
\end{enumerate}
\end{corollary}
\begin{proof}
The first part follows from the same proof as in Lemma \ref{3-fin}. For the second part, it follows from the discussion in \cite[\S 1.7]{BD-Mumford} that the non-trivial element in the Galois group $\mathrm{Gal}(K/\mathbf{Q}_{p})$ acts by $U_{p}$ on the group of connected components $\Phi$ for the Shimura curve $X$. Thus it acts by the product of $(\epsilon_{p,1}, \epsilon_{p,2}, \epsilon_{p,3})$ on each copy of
$\otimes^{3}_{i=1}\Phi_{\mathcal{O}_{i}}/I^{[p]}_{i,n}$
in $\oplus^{3}_{j=1}(\otimes^{3}_{i=1}\Phi_{\mathcal{O}_{i}}/I^{[p]}_{i,n})$.
Note that the product of $(\epsilon_{p,1}, \epsilon_{p,2}, \epsilon_{p,3})$ is $1$ by the definition of an $n$-admissible prime $p$ for $\underline{\mathbf{f}}$. The result follows.
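In symbols, writing $\sigma$ for the non-trivial element of $\mathrm{Gal}(K/\mathbf{Q}_{p})$ and granting, as above, that $U_{p}$ acts on each factor $\Phi_{\mathcal{O}_{i}}/I^{[p]}_{i,n}$ through the scalar $\epsilon_{p, i}$, the computation reads
\begin{equation*}
\sigma\cdot(x_{1}\otimes x_{2}\otimes x_{3})=(U_{p}x_{1})\otimes (U_{p}x_{2})\otimes (U_{p}x_{3})=\epsilon_{p,1}\epsilon_{p,2}\epsilon_{p,3}\,(x_{1}\otimes x_{2}\otimes x_{3})=x_{1}\otimes x_{2}\otimes x_{3},
\end{equation*}
so $\sigma$ acts trivially on each copy of the tensor product.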
\end{proof}
\subsection{Diagonal cycle classes and the first reciprocity law} Recall that $X=X^{B^{\prime}}_{N^{+}, pN^{-}}$ is the Shimura curve associated to $B^{\prime}$, defined over $\mathbf{Q}$, with integral model $\mathfrak{X}$ over $\mathbf{Z}_{(p)}$. We let
\begin{equation*}
\theta: \mathfrak{X}\rightarrow \mathfrak{X}^{3}
\end{equation*}
be the diagonal embedding of $\mathfrak{X}$ in the triple fiber product of $\mathfrak{X}$ over $\mathbf{Z}_{(p)}$. Then we obtain a class
\begin{equation*}
\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]\in \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})
\end{equation*}
which we will refer to as the \emph{Gross-Schoen diagonal cycle}. Since $\mathrm{H}^{*}(\mathfrak{X}^{3}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ vanishes unless $*=3$, as we have assumed that the maximal ideals in ${\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$ are residually irreducible, the cycle class map and the Hochschild-Serre spectral sequence give rise to the following Abel-Jacobi map
\begin{equation*}
\mathrm{AJ}^{\circ}_{\underline{\mathbf{f}}}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes\mathbf{Q})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}).
\end{equation*}
Note that by \eqref{Kunneth} we have the following isomorphism
\begin{equation*}
\mathrm{H}^{3}(\mathfrak{X}^{3}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \Lambda(2))_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}\cong \otimes^{3}_{i=1}\mathrm{H}^{1}(\mathfrak{X}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{i}}.
\end{equation*}
For $i=1, 2, 3$, recall that
\begin{itemize}
\item $\mathrm{T}^{[p]}_{i}=\mathrm{H}^{1}(\mathfrak{X}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \mathcal{O}_{i}(1))_{\mathfrak{m}^{[p]}_{i}}$;
\item $\mathrm{T}^{[p]}_{i, n}= \mathrm{T}^{[p]}_{i}/I^{[p]}_{i, n}$;
\item $\mathrm{M}^{[p]}(\underline{\mathbf{f}})=\mathrm{T}^{[p]}_{1}\otimes \mathrm{T}^{[p]}_{2}\otimes \mathrm{T}^{[p]}_{3}$;
\item $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})=\mathrm{T}^{[p]}_{1, n}\otimes \mathrm{T}^{[p]}_{2, n}\otimes \mathrm{T}^{[p]}_{3, n}$.
\end{itemize}
Note that we have a natural map
\begin{equation*}
\otimes^{3}_{i=1}\mathrm{H}^{1}(\mathfrak{X}\otimes{\mathbf{Q}^{\mathrm{ac}}}, \Lambda(1))_{\mathfrak{m}^{[p]}_{i}}\rightarrow \mathrm{M}^{[p]}(\underline{\mathbf{f}})=\otimes^{3}_{i=1}\mathrm{T}^{[p]}_{i}
\end{equation*}
which can be composed with the Abel-Jacobi map $\mathrm{AJ}^{\circ}_{\underline{\mathbf{f}}}$ to obtain the following Abel-Jacobi map for $\mathrm{M}^{[p]}(\underline{\mathbf{f}})(-1)$
\begin{equation*}
\mathrm{AJ}_{\underline{\mathbf{f}}}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}(\underline{\mathbf{f}})(-1)).\\
\end{equation*}
We also have a natural quotient map $\mathrm{M}^{[p]}(\underline{\mathbf{f}})(-1)\rightarrow \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)$ which we can further compose with $\mathrm{AJ}_{\underline{\mathbf{f}}}$ to obtain the Abel-Jacobi map for $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)$
\begin{equation*}
\mathrm{AJ}_{\underline{\mathbf{f}}, n}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)).\\
\end{equation*}
Now we consider the following diagram
\begin{equation*}
\begin{tikzcd}
\mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q}) \arrow[r, "\mathrm{AJ}_{\underline{\mathbf{f}}, n}"] \arrow[rdd, bend right, "\partial_{p} \mathrm{AJ}_{\underline{\mathbf{f}}, n}"'] & \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \arrow[d, "\mathrm{loc}_{p}"] \\
& \mathrm{H}^{1}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)) \arrow[d, "\partial_{p}"] \\
& \mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))
\end{tikzcd}
\end{equation*}
where $\partial_{p}\mathrm{AJ}_{\underline{\mathbf{f}}, n}$ is the composite of the right part of the diagram. Then we define the \emph{Gross-Schoen diagonal cycle class} to be the image of $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]$ under the map $\mathrm{AJ}_{\underline{\mathbf{f}}}$ and we denote this element by
\begin{equation*}
\Theta^{[p]}\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}(\underline{\mathbf{f}})(-1)).
\end{equation*}
Similarly we denote the image of $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]$ under the map $\mathrm{AJ}_{\underline{\mathbf{f}}, n}$ by
\begin{equation*}
\Theta^{[p]}_{n}\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)).
\end{equation*}
In what follows we will be concerned with the singular residue at $p$ of the element $\Theta^{[p]}_{n}$. By definition, this is the image of the cycle $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]$ under the map $\partial_{p}\mathrm{AJ}_{\underline{\mathbf{f}}, n}$, which we will denote by
\begin{equation*}
\partial_{p}\Theta^{[p]}_{n }\in \mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)).
\end{equation*}
By Corollary \ref{main-coro}, we can view $\partial_{p}\Theta^{[p]}_{n}$ as an element of
\begin{equation*}
\oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}){/I_{i,n}}).
\end{equation*}
For $j=1, 2, 3$, we denote by
\begin{equation*}
\partial^{(j)}_{p}\Theta^{[p]}_{n}\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}){/I_{i,n}}
\end{equation*}
its projection to the $j$-th direct summand in
\begin{equation*}
\oplus^{3}_{j=1}(\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}){/I_{i,n}}).
\end{equation*}
We define a pairing
\begin{equation}\label{pairing}
(\hphantom{a},\hphantom{a}):\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i})\times \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}) \rightarrow \mathcal{O}
\end{equation}
by the following formula. Let
\begin{equation*}
\zeta_{1}\otimes \zeta_{2}\otimes \zeta_{3}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3}\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i})
\end{equation*}
then we define
\begin{equation*}
(\zeta_{1}\otimes \zeta_{2}\otimes \zeta_{3}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3})=\sum_{z_{1}, z_{2}, z_{3}} \zeta_{1}\phi_{1}(z_{1})\otimes\zeta_{2}\phi_{2}(z_{2})\otimes\zeta_{3}\phi_{3}(z_{3})
\end{equation*}
where $(z_{1}, z_{2}, z_{3})$ runs through all the elements in the finite set $(X^{B})^{3}$. Note that when $\mathcal{O}_{1}=\mathcal{O}_{2}=\mathcal{O}_{3}=\mathcal{O}$, the pairing, which a priori takes values in $\mathcal{O}\otimes \mathcal{O}\otimes \mathcal{O}$, can be taken to be valued in $\mathcal{O}$ by using the natural multiplication map. This pairing subsequently induces a pairing
\begin{equation*}
\begin{aligned}
(\hphantom{a},\hphantom{a}):\otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i})[I_{i, n}]\times \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i}){/I_{i, n}} \rightarrow \mathcal{O}_{n}.\\
\end{aligned}
\end{equation*}
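Note that the defining formula factors through the three variables separately: for pure tensors we have
\begin{equation*}
(\zeta_{1}\otimes \zeta_{2}\otimes \zeta_{3}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3})=\Bigl(\sum_{z_{1}\in X^{B}}\zeta_{1}\phi_{1}(z_{1})\Bigr)\otimes\Bigl(\sum_{z_{2}\in X^{B}}\zeta_{2}\phi_{2}(z_{2})\Bigr)\otimes\Bigl(\sum_{z_{3}\in X^{B}}\zeta_{3}\phi_{3}(z_{3})\Bigr),
\end{equation*}
so the pairing \eqref{pairing} is simply the triple tensor product of the pairing $(\zeta, \phi)=\sum_{z\in X^{B}}\zeta(z)\phi(z)$ on each factor $S^{B}_{2}(N^{+}, \mathcal{O}_{i})$.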
The following theorem is the analogue of the \emph{first reciprocity law} for Heegner points on Shimura curves \cite[Theorem 4.1]{BD-Main} and of the \emph{congruence formulae} in \cite[Theorem 4.11]{Liu-HZ} and \cite[Theorem 4.5]{Liu-cubic}.
\begin{theorem}[First reciprocity law]\label{recip}
Let $p$ be an $n$-admissible prime for $\underline{\mathbf{f}}$. We assume that the assumptions in Theorem \ref{arithmetic-level-raising} are satisfied. Let $\phi_{1}\otimes \phi_{2}\otimes \phi_{3}\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O}_{i})[I_{i,n}]$. Then for $j=1, 2, 3$, the following formula holds
\begin{equation*}
(\partial^{(j)}_{p}\Theta^{[p]}_{n}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3})=(p+1)^{3}\sum_{z\in X^{B}}\phi_{1}(z)\otimes\phi_{2}(z)\otimes\phi_{3}(z).
\end{equation*}
\end{theorem}
\begin{proof}
We only prove the formula for $j=1$; the other cases are proved in exactly the same way. Consider the diagonal embedding
\begin{equation*}
\theta: \mathfrak{X}\rightarrow \mathfrak{X}^{3}
\end{equation*}
of the model of $X$ over $\mathcal{O}_{K}$ into its threefold fiber product. Since $\mathfrak{X}$ is regular, the map $\theta$ extends to a map $\tilde{\theta}: \mathfrak{X}\rightarrow \mathcal{Y}$ such that $\pi\circ\tilde{\theta}=\theta$ by the universal property of the blow-up. We use the same notation $\tilde{\theta}: X_{k}\rightarrow Y_{k}$ to denote the map induced on the special fiber. We apply Proposition \ref{cal-aj} to calculate $\partial^{(1)}_{p}\Theta^{[p]}_{n}$. Thus we need to find the image of
\begin{equation*}
Y^{(0)}_{k}\times_{Y_{k}} \tilde{\theta}_{*}X_{k}
\end{equation*}
under the cycle class map in $A^{2}(Y_{k}, \Lambda)^{0}_{\mathfrak{m}^{[p]}_{\underline{\mathbf{f}}}}$. By Lemma \ref{feature-cycle} and the proof of Theorem \ref{arithmetic-level-raising}, the class $\partial^{(1)}_{p}\Theta^{[p]}_{n}$ is represented by the image of the characteristic function $\mathbf{1}_{0}(p)$ on $X^{B}_{0}(p)$ under the map
\begin{equation}
\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\rightarrow \mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda(1))\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)
\end{equation}
which is induced by the Gysin map on the first factor of the tensor product and by the identity maps on the remaining two factors of the tensor product. Recall that we have two natural transition maps of Shimura sets
\begin{equation*}
\pi_{+}: X^{B}_{0}(p)\rightarrow X^{B} \text{\hphantom{a}and\hphantom{b}} \pi_{-}: X^{B}_{0}(p)\rightarrow X^{B}.
\end{equation*}
We now make explicit the composite of maps
\begin{equation*}
\begin{aligned}
&\mathrm{H}^{2}(\mathbf{P}^{1}(X^{B}_{\pm}), \Lambda(1))\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\otimes\mathrm{H}^{0}(X^{B}_{0}(p), \Lambda)\\
&\rightarrow \Phi_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes \mathcal{X}^{\vee}_{ \mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes \mathcal{X}^{\vee}_{ \mathcal{O}_{3}}/I^{[p]}_{3,n}\\
&\rightarrow \Phi_{\mathcal{O}_{1}}/I^{[p]}_{1,n}\otimes \Phi_{ \mathcal{O}_{2}}/I^{[p]}_{2,n}\otimes \Phi_{ \mathcal{O}_{3}}/I^{[p]}_{3,n}\\
&\cong S^{B}_{2}(N^{+}, \mathcal{O}_{1}){/I_{1,n}}\otimes S^{B}_{2}(N^{+}, \mathcal{O}_{2}){/I_{2,n}}\otimes S^{B}_{2}(N^{+}, \mathcal{O}_{3}){/I_{3,n}}.\\
\end{aligned}
\end{equation*}
The element $\partial^{(1)}_{p}\Theta^{[p]}_{n}\in S^{B}_{2}(N^{+}, \mathcal{O}_{1}){/I_{1,n}}\otimes S^{B}_{2}(N^{+}, \mathcal{O}_{2}){/I_{2,n}}\otimes S^{B}_{2}(N^{+}, \mathcal{O}_{3}){/I_{3,n}}$ is given by
\begin{equation*}
\frac{\theta_{*}(\pi_{+, *}+\epsilon_{p,1}\pi_{-, *})}{2}(\mathbf{1}_{0}(p))\otimes\frac{\theta_{*}(\pi_{+, *}+\epsilon_{p,2}\pi_{-, *})}{2}(\mathbf{1}_{0}(p))\otimes \frac{\theta_{*}(\pi_{+, *}+\epsilon_{p,3}\pi_{-, *})}{2}(\mathbf{1}_{0}(p))
\end{equation*}
where we abuse notation and denote by $\theta: X^{B}\rightarrow (X^{B})^{3}$ the diagonal embedding of the Shimura set $X^{B}$.
Since $\pi_{+, *}\mathbf{1}_{0}(p)=\epsilon_{p, i}\pi_{-, *}\mathbf{1}_{0}(p)$ is the constant function on $X^{B}$ with value $p+1$, we have
\begin{equation*}
\begin{aligned}
&(\partial^{(1)}_{p}\Theta^{[p]}_{n}, \phi_{1}\otimes \phi_{2}\otimes \phi_{3})\\
&= (\frac{\theta_{*}(\pi_{+, *}+\epsilon_{p,1}\pi_{-,*})}{2}(\mathbf{1}_{0}(p))\otimes\frac{\theta_{*}(\pi_{+, *}+\epsilon_{p, 2}\pi_{-, *})}{2}(\mathbf{1}_{0}(p))\otimes\frac{\theta_{*}(\pi_{+, *}+\epsilon_{p, 3}\pi_{-, *})}{2}(\mathbf{1}_{0}(p)), \phi_{1}\otimes \phi_{2}\otimes \phi_{3})\\
&=(p+1)^{3}\sum_{z\in X^{B}}\phi_{1}(z)\otimes\phi_{2}(z)\otimes\phi_{3}(z).\\
\end{aligned}
\end{equation*}
The formula is proved.
\end{proof}
\section{The Bloch-Kato conjecture for the triple product motive}
\subsection{Selmer groups of triple product motive} Let $f=\sum_{n\geq 1}a_{n}(f)q^{n}\in S^{\mathrm{new}}_{2}(\Gamma_{0}(N))$ be a normalized newform of weight $2$. We assume that $N$ admits a factorization $N=N^{+}N^{-}$ such that $(N^{+}, N^{-})=1$ and $N^{-}$ is square-free and a product of an odd number of primes. Let $E=\mathbf{Q}(f)$ be the Hecke field of $f$ and let $\lambda$ be a place of $E$ over $l$. We denote by $E_{\lambda}$ the completion of $E$ at $\lambda$. Let $\mathcal{O}=\mathcal{O}_{E_{\lambda}}$ be its ring of integers. To the newform $f$, we can attach a Galois representation
\begin{equation*}
\rho_{f}: G_{\mathbf{Q}}\rightarrow \mathrm{GL}(\mathrm{V}_{f})
\end{equation*}
over $E_{\lambda}$ with determinant the $l$-adic cyclotomic character and satisfying
\begin{equation*}
\mathrm{tr}(\rho_{f}(\mathrm{Frob}_{p}))=a_{p}(f) \text{ for all } p\nmid N.
\end{equation*}
Let $\mathrm{T}_{f}$ be a Galois stable $\mathcal{O}$-lattice in $\mathrm{V}_{f}$ and for each $n\geq 1$ we put
\begin{equation*}
\mathrm{T}_{f, n}=\mathrm{T}_{f}/\varpi^{n}.
\end{equation*}
Let $\bar{\rho}_{f}$ be the residual representation of $\rho_{f}$. Let $\mathrm{A}_{f}=\mathrm{V}_{f}/\mathrm{T}_{f}$. We set $\mathrm{A}_{f, n}=\ker[\mathrm{A}_{f}\xrightarrow{\varpi^{n}}\mathrm{A}_{f}]$. We denote by $\mathbf{Q}(\bar{\rho}_{f})$ the field extension of $\mathbf{Q}$ in ${\mathbf{Q}}^{\mathrm{ac}}$ cut out by $\bar{\rho}_{f}$. To the newform $f$, we can associate maps $\phi_{f}: \mathbb{T}\rightarrow \mathcal{O}$ and $\phi_{f, n}: \mathbb{T}\rightarrow \mathcal{O}_{n}$ corresponding to the Hecke eigensystem of $f$. Recall that we denote by $I_{f, n}$ the kernel of the map $\phi_{f, n}$. Let $p$ be an $n$-admissible prime for $f$; by Theorem \ref{level-raise-curve}, we have a morphism $\phi^{[p]}_{f, n}: \mathbb{T}^{[p]}\rightarrow \mathcal{O}_{n}$ with kernel $I^{[p]}_{f, n}$. We denote by $\mathfrak{m}_{f}=I_{f, 1}$ the maximal ideal in $\mathbb{T}$ containing $I_{f, n}$ and by $\mathfrak{m}^{[p]}_{f}=I^{[p]}_{f, 1}$ the maximal ideal in $\mathbb{T}^{[p]}$ containing $I^{[p]}_{f, n}$.
Now we consider a triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ of newforms in $S^{\mathrm{new}}_{2}(\Gamma_{0}(N))^{3}$. We will denote the representation $\mathrm{V}_{f_{i}}$ simply by $\mathrm{V}_{i}$ for $i=1, 2, 3$. They are defined over the $l$-adic Hecke field $E_{\lambda_{i}}$ of $f_{i}$ where $\lambda_{i}$ is a place of $\mathbf{Q}(f_{i})$ over $l$. Let $\mathcal{O}_{i}$ be the ring of integers of $E_{\lambda_{i}}$ with uniformizer $\varpi_{i}$, and we put $\mathcal{O}_{i, n}=\mathcal{O}_{i}/\varpi^{n}_{i}$. As before, we have the $\mathcal{O}_{i}$-lattice $\mathrm{T}_{i}$ and the $\mathcal{O}_{i, n}$-module $\mathrm{T}_{i,n}$. Let $I_{i, n}=I_{f_{i}, n}$ and $\mathfrak{m}_{i}=\mathfrak{m}_{f_{i}}$. We have the fields $\mathbf{Q}(\bar{\rho}_{i}):=\mathbf{Q}(\bar{\rho}_{f_{i}})$ defined by the residual Galois representations $\bar{\rho}_{f_{i}}$. Let
\begin{equation*}
\mathrm{V}(\underline{\mathbf{f}})=\mathrm{V}_{1}\otimes \mathrm{V}_{2}\otimes \mathrm{V}_{3}
\end{equation*}
be the triple tensor product representation associated to the triple $\underline{\mathbf{f}}$. Similarly, we put
\begin{equation*}
\mathrm{M}(\underline{\mathbf{f}})=\mathrm{T}_{1}\otimes \mathrm{T}_{2}\otimes \mathrm{T}_{3}
\end{equation*}
and
\begin{equation*}
\mathrm{M}_{n}(\underline{\mathbf{f}})=\mathrm{T}_{1, n}\otimes \mathrm{T}_{2, n}\otimes \mathrm{T}_{3, n} .
\end{equation*}
We have the following result about the Galois cohomology of the representation $\mathrm{V}(\underline{\mathbf{f}})(-1)$, which says that the representation $\mathrm{V}(\underline{\mathbf{f}})(-1)$ is tamely pure in the sense of \cite[Definition 3.3]{Liu-HZ}.
\begin{lemma}
For all places $v\neq l$ of $\mathbf{Q}$, we have
\begin{equation*}
\mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{V}(\underline{\mathbf{f}})(-1))=0.
\end{equation*}
\end{lemma}
\begin{proof}
The proof of this lemma is the same as that of \cite[Lemma 4.6]{Liu-cubic}, replacing the elliptic curve by the Jacobians of suitable Shimura curves.
\end{proof}
\begin{definition}\label{BK-grp}
The \emph{Bloch-Kato Selmer group} $\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{V}(\underline{\mathbf{f}})(-1))$ is the subspace of classes $s\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{V}(\underline{\mathbf{f}})(-1))$ such that
\begin{equation*}
{\rm{loc}}_{l}(s)\in \mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{V}(\underline{\mathbf{f}})(-1)):=\ker[ \mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{V}(\underline{\mathbf{f}})(-1))\rightarrow \mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{V}(\underline{\mathbf{f}})\otimes\mathrm{B}_{\mathrm{cris}}(-1))].
\end{equation*}
\end{definition}
Let $(\pi_{1}, \pi_{2}, \pi_{3})$ be the triple of automorphic representations of $\mathrm{GL}_{2}(\mathbf{A})$ associated to the triple $(f_{1}, f_{2}, f_{3})$. Then one can attach to it the \emph{triple product L-function}
\begin{equation*}
L(f_{1}\otimes f_{2}\otimes f_{3}, s)=L(s-\frac{3}{2}, \pi_{1}\otimes\pi_{2}\otimes \pi_{3}, r)
\end{equation*}
where $L(s-\frac{3}{2}, \pi_{1}\otimes\pi_{2}\otimes \pi_{3}, r)$ is the Langlands $L$-function attached to $r$, the natural eight-dimensional representation of the $L$-group of $\mathrm{GL}_{2}\times\mathrm{GL}_{2}\times\mathrm{GL}_{2}$. We will be concerned with the case when the \emph{global root number}
\begin{equation*}
\epsilon(\pi_{1}\otimes\pi_{2}\otimes\pi_{3}, r)=1
\end{equation*}
that is, the order of vanishing of the triple product $L$-function $L(f_{1}\otimes f_{2}\otimes f_{3}, s)$ at the central critical point $s=2$ is even. The following proposition relates $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ to a certain explicit period integral which appeared in the statement of Theorem \ref{recip}.
\begin{proposition}\label{period}
Let $(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})\in S^{B}_{2}(N^{+}, \mathcal{O})$ be the Jacquet-Langlands transfer of the triple $(f_{1}, f_{2}, f_{3})$. If the value $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ is non-zero then
\begin{equation}
I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})=\sum_{z\in X^{B}} f^{B}_{1}(z)\otimes f^{B}_{2}(z)\otimes f^{B}_{3}(z)
\end{equation}
is non-zero.
\end{proposition}
\begin{proof}
This follows from the main result of \cite{KH91}, which resolves a conjecture of Jacquet. See also \cite{GK92} and \cite{Ichino} for refined formulas relating the central critical value $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ to the period integral $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})$.
\end{proof}
We make the following conjecture for the motive attached to the triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$, which we hope to address in future work.
\begin{conjecture}\label{main-conj}
Suppose that the triple $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})$ satisfies the following assumptions: for each $i=1,2, 3$
\begin{enumerate}
\item the maximal ideals $\mathfrak{m}_{i}$ are all residually irreducible;
\item the $\mathbb{T}_{\mathfrak{m}_{i}}$-module $S^{B}_{2}(N^{+}, \mathcal{O}_{i})_{\mathfrak{m}_{i}}$ is free of rank $1$;
\item the residual Galois representations $\bar{\rho}_{i}$ are surjective and the fields $\mathbf{Q}(\bar{\rho}_{i})$ are linearly disjoint.
\end{enumerate}
If the central critical value $L(f_{1}\otimes f_{2}\otimes f_{3}, 2)$ is non-zero, then the Bloch-Kato Selmer group vanishes
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{V}(\underline{\mathbf{f}})(-1))=0.
\end{equation*}
\end{conjecture}
Note that the assumptions made in the conjecture guarantee that there is an abundance of $n$-admissible primes for $\underline{\mathbf{f}}$.
\begin{lemma}\label{infty-adm}
Under the assumptions of Conjecture \ref{main-conj}, there are infinitely many $n$-admissible primes $p$ for $\underline{\mathbf{f}}$.
\end{lemma}
\begin{proof}
Let $\rho_{i, n}: G_{\mathbf{Q}}\rightarrow \mathrm{GL}_{2}(\mathcal{O}_{i, n})$ be the representation on $\mathrm{T}_{i, n}$ defined by reducing $\mathrm{T}_{i}$ modulo $\varpi_{i}^{n}$. Consider the direct sum representation
\begin{equation*}
\rho_{1, n}\oplus \rho_{2, n}\oplus \rho_{3, n}: G_{\mathbf{Q}}\rightarrow \mathrm{GL}_{2}(\mathcal{O}_{1, n})\times \mathrm{GL}_{2}(\mathcal{O}_{2, n})\times \mathrm{GL}_{2}(\mathcal{O}_{3, n}).
\end{equation*}
Since $\bar{\rho}_{i}$ is surjective and the fields $\mathbf{Q}(\bar{\rho}_{i})$ are linearly disjoint, we can find infinitely many primes $p$ such that
\begin{equation*}
\rho_{1, n}\oplus \rho_{2, n}\oplus \rho_{3, n}(\mathrm{Frob}_{p})=\begin{pmatrix}\epsilon_{1}p&0\\0&\epsilon_{1}\\ \end{pmatrix}\times \begin{pmatrix}\epsilon_{2}p&0\\0&\epsilon_{2}\\ \end{pmatrix}\times \begin{pmatrix}\epsilon_{3}p&0\\0&\epsilon_{3}\\ \end{pmatrix}\in \mathrm{GL}_{2}(\mathcal{O}_{1, n})\times \mathrm{GL}_{2}(\mathcal{O}_{2, n})\times \mathrm{GL}_{2}(\mathcal{O}_{3, n})
\end{equation*}
with $l\nmid p^{2}-1$ and $\epsilon_{1}\epsilon_{2}\epsilon_{3}=1$ by the Chebotarev density theorem. By definition one can check that $p$ is an $n$-admissible prime for $(f_{1}, f_{2}, f_{3})$.
\end{proof}
For $i=1, 2, 3$, recall that $\mathrm{T}^{[p]}_{i}=\mathrm{H}^{1}(X_{\mathbf{Q}^{\mathrm{ac}}}, \mathcal{O}_{i}(1))_{\mathfrak{m}^{[p]}_{i}}$
and $\mathrm{T}^{[p]}_{i, n}= \mathrm{T}^{[p]}_{i}/I^{[p]}_{i, n}$. We have the following lemma which we have already implicitly used in Lemma \ref{3-fin}.
\begin{lemma}\label{p-lower}
Let $p$ be an $n$-admissible prime for $\underline{\mathbf{f}}$. There is an isomorphism of $G_{\mathbf{Q}}$-representations
\begin{equation*}
\mathrm{T}^{[p]}_{i, n}\cong \mathrm{T}_{i,n}.
\end{equation*}
\end{lemma}
\begin{proof}
This follows from the same proof given for \cite[Theorem 5.17]{BD-Main}.
\end{proof}
\begin{remark}
This lemma implies that we have an isomorphism $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})\cong \mathrm{M}_{n}(\underline{\mathbf{f}})$ of $G_{\mathbf{Q}}$-modules. Under the assumptions of Conjecture \ref{main-conj}, in particular the non-vanishing of $L(f_{1}\otimes f_{2} \otimes f_{3}, s)$ at $s=2$, the period integral $I(f^{B}_{1}, f^{B}_{2}, f^{B}_{3})$ is non-vanishing by Proposition \ref{period}. If $p$ is an $n$-admissible prime for $\underline{\mathbf{f}}$, then the cohomology class $\Theta^{[p]}_{n}$ has the property that $\partial_{p}\Theta^{[p]}_{n}$ is non-zero in $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))\cong\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}_{n}(\underline{\mathbf{f}})(-1))$ for some $n$ by the first reciprocity law in Theorem \ref{recip}. Therefore the class $\Theta^{[p]}_{n}$ can be viewed as an annihilator of the Selmer group. However, as we have seen in Lemma \ref{3-fin}, the singular quotient at $p$ has rank $3$, so the class $\Theta^{[p]}_{n}$ alone cannot fill up the whole singular quotient and is therefore not enough to bound the Selmer group. We conjecture here that there exist three global cohomology classes in $\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}_{n}(\underline{\mathbf{f}})(-1))$ which are intimately related to $\Theta^{[p]}_{n}$ and satisfy a reciprocity law similar to that of Theorem \ref{recip}. Using these three classes, one can indeed prove Conjecture \ref{main-conj}.
\end{remark}
\subsection{The symmetric cube motive} We specialize the discussions in this article to the case when $\underline{\mathbf{f}}=(f_{1}, f_{2}, f_{3})=(f, f, f)$ for a single modular form $f$. In this case we have a factorization
\begin{equation*}
\mathrm{V}_{f}^{\otimes 3}(-1)= \mathrm{Sym}^{3} \mathrm{V}_{f}(-1)\oplus \mathrm{V}_{f}\oplus \mathrm{V}_{f}
\end{equation*}
where we will refer to $\mathrm{Sym}^{3} \mathrm{V}_{f}(-1)$ as the \emph{symmetric cube component} of $\mathrm{V}_{f}^{\otimes 3}(-1)$. Corresponding to this factorization, we have a factorization of the $L$-function
\begin{equation*}
L(f\otimes f \otimes f, s)= L(\mathrm{Sym}^{3}f, s) L(f, s-1)^{2}.
\end{equation*}
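For the reader's convenience, the factorization above can be checked by a Clebsch-Gordan bookkeeping, which we sketch here; we assume the normalization $\wedge^{2}\mathrm{V}_{f}\cong \mathbf{Q}_{l}(1)$, which is consistent with the twist appearing in the definition $\mathrm{T}^{[p]}_{i}=\mathrm{H}^{1}(X_{\mathbf{Q}^{\mathrm{ac}}}, \mathcal{O}_{i}(1))_{\mathfrak{m}^{[p]}_{i}}$.

```latex
% Clebsch-Gordan for a rank-2 representation V:
%   V \otimes V = \mathrm{Sym}^2 V \oplus \wedge^2 V                      (4 = 3 + 1)
%   V \otimes \mathrm{Sym}^2 V = \mathrm{Sym}^3 V \oplus (V \otimes \wedge^2 V)   (6 = 4 + 2)
% Hence, with \wedge^2 V_f \cong \mathbf{Q}_l(1) (the Weil pairing in this normalization):
\mathrm{V}_{f}^{\otimes 3}
 = \mathrm{Sym}^{3}\mathrm{V}_{f}\oplus\bigl(\mathrm{V}_{f}\otimes\wedge^{2}\mathrm{V}_{f}\bigr)^{\oplus 2}
 = \mathrm{Sym}^{3}\mathrm{V}_{f}\oplus\mathrm{V}_{f}(1)^{\oplus 2},
% and twisting by (-1) recovers the displayed decomposition; on L-functions,
% the twist V_f(1) accounts for the shift L(f, s-1) in the factorization.
```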
We can define the Bloch-Kato Selmer group
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{Sym}^{3} \mathrm{V}_{f}(-1))
\end{equation*}
for $\mathrm{Sym}^{3} \mathrm{V}_{f}(-1)$ in exactly the same way as we did for the triple product representation in Definition \ref{BK-grp}. We will prove the following result towards the rank $0$ case of the Bloch-Kato conjecture for $\mathrm{Sym}^{3} \mathrm{V}_{f}(-1)$.
\begin{theorem}\label{main-symm}
Suppose that the modular form $f$ satisfies the following assumptions:
\begin{enumerate}
\item the maximal ideal $\mathfrak{m}_{f}$ is residually irreducible;
\item the $\mathbb{T}_{\mathfrak{m}_{f}}$-module $S^{B}_{2}(N^{+}, \mathcal{O})_{\mathfrak{m}_{f}}$ is free of rank $1$;
\item $\bar{\rho}_{f}$ is surjective;
\item the value $L(f, 1)$ is non-vanishing.
\end{enumerate}
If the central critical value $L(\mathrm{Sym}^{3}f, 2)$ is non-zero, then the Bloch-Kato Selmer group vanishes:
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{Sym}^{3} \mathrm{V}_{f}(-1))=0.
\end{equation*}
\end{theorem}
\begin{remark}
Let $f^{B}$ be the Jacquet-Langlands transfer of $f$ to $S^{B}_{2}(N^{+}, \mathcal{O})$. Then $(4)$ in the assumptions of the above theorem implies that the period integral $I(f^{B}, f^{B}, f^{B})$ is non-vanishing if $L(\mathrm{Sym}^{3}f, 2)$ is non-vanishing.
\end{remark}
\subsection{Proof of Theorem \ref{main-symm}}
We prove the theorem in this subsection, using the following notation:
\begin{itemize}
\item $\mathrm{N}^{\diamond}(\underline{\mathbf{f}})(-1)=\mathrm{Sym}^{3}\mathrm{A}_{f}(-1)$;
\item $\mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)=\mathrm{Sym}^{3}\mathrm{A}_{f, n}(-1)$;
\item $\mathrm{M}^{\diamond}(\underline{\mathbf{f}})(-1)=\mathrm{Sym}^{3}\mathrm{T}_{f}(-1)$;
\item $\mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)=\mathrm{Sym}^{3}\mathrm{T}_{f, n}(-1)$.
\end{itemize}
We need to slightly modify the notion of $n$-admissible primes for $f$ in this case to incorporate the sign-change phenomenon in the triple product setting.
\begin{definition}\label{n-adm}
Let $n\geq 1$ be an integer. A prime $p$ is \emph{$(n, 1)$-admissible} for $f$ if
\begin{enumerate}
\item $p\nmid Nl$;
\item $l\nmid p^{2}-1$;
\item $\varpi^{n}\mid p+1-\epsilon_{p}(f)a_{p}(f)$ with $\epsilon_{p}(f)=1$.
\end{enumerate}
\end{definition}
It is easy to see, following the proof of Lemma \ref{infty-adm}, that under the assumptions in Theorem \ref{main-symm} there are infinitely many $(n, 1)$-admissible primes for $f$. The $G_{\mathbf{Q}}$-equivariant pairing
\begin{equation*}
\mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)\times \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)\rightarrow \mathcal{O}_{n}(1)
\end{equation*}
induces for each place $v$ of $\mathbf{Q}$ a local Tate duality
\begin{equation*}
(\hphantom{a}, \hphantom{b})_{v}: \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\times \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)) \rightarrow \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathcal{O}_{n}(1))\cong \mathcal{O}_{n}.
\end{equation*}
For $s\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ and $t\in \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$, we will write the pairing $(s, t)_{v}$ instead of $(\mathrm{loc}_{v}(s), \mathrm{loc}_{v}(t))_{v}$. Let
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}(\underline{\mathbf{f}})(-1))
\end{equation*}
be the pullback of $\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{Sym}^{3}\mathrm{V}_{f}(-1))$ under the natural map
$$\mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}(\underline{\mathbf{f}})(-1))\rightarrow \mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{Sym}^{3}\mathrm{V}_{f}(-1)).$$
We define
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))
\end{equation*}
to be the reduction of $\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}(\underline{\mathbf{f}})(-1))$ modulo $\varpi^{n}$. Similarly, we let
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}(\underline{\mathbf{f}})(-1))
\end{equation*}
be the image of $\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{Sym}^{3}\mathrm{V}_{f}(-1))$ in $\mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}(\underline{\mathbf{f}})(-1))$. Then we define
\begin{equation*}
\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))
\end{equation*}
to be the pullback of
$\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}(\underline{\mathbf{f}})(-1))$
under the natural map
\begin{equation*}
\mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\rightarrow \mathrm{H}^{1}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}(\underline{\mathbf{f}})(-1)).
\end{equation*}
\begin{lemma}\label{sel-pairing}
We have the following statements.
\begin{enumerate}
\item The sum $\sum_{v}(\hphantom{a},\hphantom{b})_{v}$ restricted to $\mathrm{H}^{1}(\mathbf{Q}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\times \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ is trivial. Here $v$ runs through all the places of $\mathbf{Q}$.
\item For every $v\neq l$, there exists an integer $n_{v}\geq 1$, independent of $n$, such that the image of the pairing
\begin{equation*}
(\hphantom{a}, \hphantom{b})_{v}: \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\times \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)) \rightarrow \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathcal{O}_{n}(1))\cong \mathcal{O}_{n}
\end{equation*}
is annihilated by $\varpi^{n_{v}}$.
\item For every $v\neq l$, $\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{v}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ is orthogonal to $\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ under the pairing $(\hphantom{a}, \hphantom{b})_{v}$. Similarly, $\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ is orthogonal to $\mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$.
\item Let $p$ be an $(n, 1)$-admissible prime for $f$. Then we have a perfect pairing
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{p}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\times \mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\rightarrow \mathcal{O}_{n}
\end{equation*}
of free $\mathcal{O}_{n}$-modules of rank $1$.
\end{enumerate}
\end{lemma}
\begin{proof}
The statement $(1)$ follows from global class field theory. Part $(2)$ follows from the fact that $\mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{V}^{\diamond}(\underline{\mathbf{f}}))=0$ for all $v\nmid l$ and thus $\mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}(\underline{\mathbf{f}}))$ is torsion for all $v\nmid l$, see \cite[Lemma 4.3]{Liu-HZ}. Part $(3)$ is well-known, see \cite[Theorem 2.17(e)]{DDT} for the first statement and \cite[Lemma 4.8]{Liu-HZ} for the second statement.
For $(4)$, it follows from the definition of an $(n,1)$-admissible prime for $f$ that $\mathrm{M}_{n}(\underline{\mathbf{f}})$ is unramified at $p$ and
$\mathrm{M}_{n}(\underline{\mathbf{f}})\cong\mathcal{O}_{n}\oplus \mathcal{O}^{\oplus 3}_{n}(1) \oplus \mathcal{O}^{\oplus 3}_{n}(2) \oplus\mathcal{O}_{n}(3)$
as a Galois representation of $G_{\mathbf{Q}_{p}}$. Then it follows from a simple computation that $ \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})\cong\mathcal{O}_{n}\oplus \mathcal{O}_{n}(1) \oplus \mathcal{O}_{n}(2) \oplus\mathcal{O}_{n}(3)$. From this, it follows immediately that both $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ and $\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ are of rank $1$ over $\mathcal{O}_{n}$. The last claim is also clear from this.
\end{proof}
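For the reader's convenience, here is the eigenvalue bookkeeping behind the "simple computation" invoked in the proof above; we use a basis $e_{1}, e_{2}$ of $\mathrm{T}_{f,n}$ on which $\mathrm{Frob}_{p}$ acts diagonally, which is available since $p$ is $(n,1)$-admissible with $\epsilon_{p}(f)=1$.

```latex
% Frob_p acts on T_{f,n} = O_n e_1 \oplus O_n e_2 by
%   Frob_p e_1 = p e_1,   Frob_p e_2 = e_2,
% i.e. T_{f,n} \cong O_n(1) \oplus O_n as a G_{Q_p}-module. On the monomial
% basis of Sym^3 T_{f,n} the Frobenius eigenvalues are
%   e_1^3 \mapsto p^3,   e_1^2 e_2 \mapsto p^2,   e_1 e_2^2 \mapsto p,   e_2^3 \mapsto 1,
% whence  Sym^3 T_{f,n} \cong O_n(3) \oplus O_n(2) \oplus O_n(1) \oplus O_n.
```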
Let $p$ be an $(n, 1)$-admissible prime for $f$. Recall the class $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]\in \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})$ and the Abel-Jacobi map
\begin{equation*}
\mathrm{AJ}_{\underline{\mathbf{f}}, n}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q}^{\mathrm{ac}})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))
\end{equation*}
defined in \S 4.5. By Lemma \ref{p-lower}, we have an isomorphism $\mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1)\cong \mathrm{M}_{n}(\underline{\mathbf{f}})(-1)$ of $G_{\mathbf{Q}}$-modules, and hence we can project $\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{[p]}_{n}(\underline{\mathbf{f}})(-1))$ to the symmetric component $\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$. We therefore arrive at the following Abel-Jacobi map for $\mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)$:
\begin{equation*}
\mathrm{AJ}^{\diamond}_{\underline{\mathbf{f}}, n}: \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q}^{\mathrm{ac}})\rightarrow \mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))
\end{equation*}
by composing $\mathrm{AJ}_{\underline{\mathbf{f}}, n}$ with the projection map. We will denote by $\Theta^{\diamond [p]}_{n}$ the image of $\theta_{*}[\mathfrak{X}\otimes\mathbf{Q}]\in \mathrm{CH}^{2}(\mathfrak{X}^{3}\otimes \mathbf{Q})$ under $\mathrm{AJ}^{\diamond}_{\underline{\mathbf{f}}, n}$. We denote by $\partial_{p}\Theta^{\diamond[p]}_{n}$ the image of $\Theta^{\diamond [p]}_{n}$ in the singular quotient $\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$. The following proposition summarizes the arithmetic level raising and the first explicit reciprocity law for the symmetric cube representation $\mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)$.
\begin{proposition}\label{reci-symm}
Let $p$ be an $(n, 1)$-admissible prime for $f$, and suppose that the assumptions in Theorem \ref{main-symm} are satisfied. Then we have the following.
\begin{enumerate}
\item There is an isomorphism
\begin{equation*}
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))\cong \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})/I_{f,n}.
\end{equation*}
\item Let $\phi\otimes \phi\otimes \phi\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})[I_{f,n}]$. Then we have the following reciprocity formula: \begin{equation*}
(\partial_{p}\Theta^{\diamond[p]}_{n}, \phi\otimes \phi\otimes \phi)=(p+1)^{3}\sum_{z\in X^{B}}\phi(z)\phi(z)\phi(z).
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
These follow from our main results Corollary \ref{main-coro} and Theorem \ref{recip} by projecting from $\mathrm{M}_{n}(\underline{\mathbf{f}})(-1)$ to $\mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1)$. See also the calculation in the proof of Lemma \ref{sel-pairing} $(4)$.
\end{proof}
\begin{myproof}{Theorem}{\ref{main-symm}} We prove the theorem by contradiction. Suppose that $\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{Sym}^{3}\mathrm{V}_{f}(-1))$ has dimension $>0$. Then one can find a free $\mathcal{O}_{n}$-module $S$ of rank $1$ that is contained in $\mathrm{H}^{1}_{f}(\mathbf{Q}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ by \cite[Lemma 5.9]{Liu-HZ}. Let $s$ be a generator of $S$. By the same argument as in \cite[Lemma 4.14, Lemma 4.16]{Liu-cubic}, we can choose an $(n, 1)$-admissible prime $p$ for $f$ with the property that $\mathrm{loc}_{p}(s)\neq 0\in \mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{p}, \mathrm{N}^{\diamond}_{1}(\underline{\mathbf{f}})(-1))$. Moreover by \cite[Lemma 3.4, Remark 4.7]{Liu-HZ}, we have
\begin{itemize}
\item $\mathrm{loc}_{v}(s)\in \mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{v}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ for $v\not\in\{l, N\}$;
\item $\mathrm{loc}_{l}(s)\in \mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$.
\end{itemize}
The class $\Theta^{\diamond[p]}_{n} \in\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ satisfies the following properties:
\begin{itemize}
\item $\mathrm{loc}_{v}(\Theta^{\diamond[p]}_{n})\in \mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$ for all $v\not\in \{p, l, N\}$,
\item $\mathrm{loc}_{l}(\Theta^{\diamond[p]}_{n})\in \mathrm{H}^{1}_{f}(\mathbf{Q}_{l}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$.
\end{itemize}
These properties follow from the fact that the integral model $\mathfrak{X}^{3}$ has good reduction at every place $v\not\in \{p, l, N\}$, together with the results of \cite{Nekovar} at $l$. By Proposition \ref{period} and the assumptions in the theorem, there exist an integer $n_{I}< n$ and an element $\phi\otimes\phi\otimes\phi\in \otimes^{3}_{i=1}S^{B}_{2}(N^{+}, \mathcal{O})[I_{f,n}]$ such that
\begin{equation*}
\varpi^{n_{I}}\nmid \sum_{z\in X^{B}}\phi(z)\phi(z)\phi(z).
\end{equation*}
It follows that $\varpi^{n_{I}}\nmid(\partial_{p}\Theta^{\diamond[p]}_{n}, \phi\otimes \phi\otimes \phi)$ by Proposition \ref{reci-symm} and therefore $\varpi^{n_{I}}\nmid \partial_{p}\Theta^{\diamond[p]}_{n}$.
One can also find an integer $n_{N}< n$, depending on the level $N$ and the class $s$, such that $(\varpi^{n_{N}}s, \kappa)_{v}=0$ for any place $v\mid N$ and any $\kappa\in \mathrm{H}^{1}(\mathbf{Q}_{v}, \mathrm{M}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$. This follows from Lemma \ref{sel-pairing} $(2)$. We choose $n$ such that $n> n_{I}+n_{N}$. By
Lemma \ref{sel-pairing} $(1)$, we have
\begin{equation*}
\sum_{v}(\varpi^{n_{N}}s, \Theta^{\diamond[p]}_{n})_{v}=0.
\end{equation*}
By the properties of the classes $s$ and $\Theta^{\diamond[p]}_{n}$ listed above, this implies that
\begin{equation*}
(\varpi^{n_{N}}s, \Theta^{\diamond[p]}_{n})_{p}=0.
\end{equation*}
It follows that $\varpi^{n_{I}}\mid (s, \Theta^{\diamond[p]}_{n})_{p}$ as $n_{I}< n-n_{N}$. This is a contradiction: since
\begin{equation*}
(s, \Theta^{\diamond[p]}_{n})_{p}= (\mathrm{loc}_{p}(s), \partial_{p}\Theta^{\diamond[p]}_{n})_{p}
\end{equation*}
and $\mathrm{loc}_{p}(s)$ generates the rank-$1$ module $\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{p}, \mathrm{N}^{\diamond}_{n}(\underline{\mathbf{f}})(-1))$, the perfect pairing of Lemma \ref{sel-pairing} $(4)$ implies that $\varpi^{n_{I}}\mid \partial_{p}\Theta^{\diamond[p]}_{n}$, contradicting $\varpi^{n_{I}}\nmid \partial_{p}\Theta^{\diamond[p]}_{n}$ established above.
\end{myproof}
\section{Introduction}
The discovery of the integer quantum Hall effect (IQHE) has stimulated
novel fundamental concepts in condensed matter physics, such as gauge
invariance,\cite{prb23.5632} edge states,\cite{prb25.2185} and the
Chern number.\cite{prl49.405} In particular, Y. Hatsugai revealed the
relationship between the Chern number and edge states in the
IQHE,\cite{prl71.3697,prb48.11851} which provides another way to
understand the topological order of finite systems through their edge
states.\cite{njp11.123014} Moreover, the successful synthesis of
novel nanomaterials, such as the triangular organic material
$\kappa$-(BEDT-TTF)$_{2}$Cu$_{2}$(CN)$_{3}$,\cite{prl91.107001} the kagome-lattice compound
herbertsmithite,\cite{jacs127.13462} and the three-dimensional
hyperkagome magnet Na$_{4}$Ir$_{3}$O$_{8}$,\cite{prl99.137202}
provides many opportunities to examine theoretically and
experimentally novel physical properties, including
topological properties,\cite{prl71.3697,prb48.11851} fractionalized
excitations,\cite{prl101.197202,prl101.197201} singlet valence-bond
solid states,\cite{jap69.5962,prb68.214415,prb76.180407} and edge
states. Theoretically, these materials can be mapped to novel
geometric lattices, such as the star lattice, which is also called
the triangle-honeycomb lattice,\cite{ap321.2,prl99.247203} Fisher
lattice, or decorated honeycomb lattice.\cite{prb81.104429} These
geometric lattice models provide a new view for understanding
geometric effects and spin
frustration.\cite{prb81.134418,prb62.R6065,njp11.123014} In
particular, spin models on the star and honeycomb lattices have been
shown to host many novel phases, including Abelian and non-Abelian
anyons, chiral spin-liquid phases, topological
orders,\cite{ap321.2,prl99.247203,prb81.104429} magnetic
orders,\cite{prb80.064404,prb81.134418} and topological
insulators.\cite{prb81.205115,prb82.075125}
The IQHE reveals novel transport properties of electrons in
two-dimensional (2D) systems, in which the edge states play a key
role and the Hall conductance can be expressed in terms of the Chern
number.\cite{prb25.2185,prl49.405} Interestingly, the bulk-edge
correspondence discovered by Y. Hatsugai
\cite{prl71.3697,prb48.11851} has become an efficient method to explore
the edge states in various 2D systems, such as the honeycomb lattice
(graphene)\cite{prb74.205414} and the spin-chiral ferromagnetic kagome
lattice.\cite{prb77.125119,njp11.123014} The star lattice, however,
has a special lattice geometry, which can be mapped to a
class of materials and to cold atoms in optical lattices. The mean-field
study of the Heisenberg model demonstrates the existence of
several spin liquid phases, which depend on the flux configurations
of the two triangles and one dodecagon in the unit
cell.\cite{prb81.134418} A natural question arises: what is the
relationship between the spin liquid phases and the edge states on the
star lattice with boundaries?
\begin{figure}[t]
\includegraphics[width=3.0in]{Fig1.jpg}
\caption{(Color online) A star lattice with basis $e_{1}$ and
$e_{2}$. There are six sites in the unit cell. $t_{c}$ represents
the hopping amplitude inside the triangle (the bond in black), and
$t_{d}$ corresponds to the hopping amplitude between different
triangles (the bond in blue).}
\end{figure}
In this paper, we focus on the edge states and their topological
orders on the star lattice with boundaries. We begin with a 2D
tight-binding model with Hund's rule coupling, which can be mapped to an
effective spinless tight-binding model.\cite{prb62.R6065} The mean-field
approach predicts that there exist several spin liquid phases
in the ground state with local time-reversal symmetry
breaking.\cite{prb81.134418} We use the bulk-edge correspondence
method to analyze the edge states and their topological orders on
the star lattice with boundaries.
This paper is organized as follows. In Sec. II, we introduce the
tight-binding model with boundaries and map it to a spinless
tight-binding model. In Sec. III, we give the bulk-edge
correspondence for the star lattice. We present the edge states and
their corresponding Chern numbers in various phases in Sec. IV.
Finally, we give the discussion and conclusions.
\begin{figure}[t]
\includegraphics[width=3.5in]{Fig2.jpg}
\caption{(Color online) The elementary plaquette of the star lattice
contains two inequivalent triangles $\vartriangleleft$,
$\vartriangleright$, and one dodecagon. The magnetic flux
configurations are labeled by
$SL[\phi_{\vartriangleleft},\phi_{\vartriangleright},\phi_{12}]$
following Ref. \cite{prb81.134418}.}
\end{figure}
\section{Model and spin liquid phases}
In order to understand the relationship between the lattice
geometry, edge states, and their topological properties, we consider
the star lattice with boundaries, in which the conducting electrons
move in a background of local spins and couple to them through the
Hund's rule coupling to form a double-exchange system. The corresponding
tight-binding Hamiltonian can be written as,
\begin{equation}
H=\sum_{\langle i,j\rangle \sigma}t_{ij} (c_{j\sigma}^{\dagger}c_{i\sigma}+H.c.)
-J\sum_{i}c_{i\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}\cdot\boldsymbol{S}_{i}c_{i\beta}
\label{eq:originalHamilton}
\end{equation}
where $t_{ij}$ is the hopping amplitude between two nearest-neighboring
sites $\langle i,j\rangle$;
$c_{i\sigma}^{\dagger}(c_{i\sigma})$ is the creation (annihilation)
operator on site $i$ with spin $\sigma$; and $\boldsymbol{S}_{i}$ is the
local spin on site $i$, which couples to the conducting electron
spins with the effective coupling constant $J$. We treat the
local spins as approximately classical and assume the coupling $J$ is
strong enough that the hopping electrons align with the
local spin $\boldsymbol{S}_{i}$ on each site, described by the spinon function
$|\chi\rangle=(e^{ia_i}\cos(\theta_{i}/2),e^{i(a_{i}+\phi_{i})}\sin(\theta_{i}/2))$,
where $(\theta_{i},\phi_{i})$ are the spinon parameters. In this
spinon representation, the Hamiltonian
Eq.(\ref{eq:originalHamilton}) can be mapped to an effective
tight-binding Hamiltonian,
\begin{equation}
H_{eff}=\sum_{\langle i,j\rangle}(t_{ij}^{eff}c_{i}^{\dagger}c_{j}+H.c.)
\label{effH}
\end{equation}
where the effective hopping amplitude\cite{prb62.R6065}
\begin{eqnarray}
t_{ij}^{eff}&=&t_{ij}\left[\cos(\frac{\theta_{i}}{2})\cos(\frac{\theta_{j}}{2})+e^{-i(\phi_{i}-\phi_{j})}\sin(\frac{\theta_{i}}{2})\sin(\frac{\theta_{j}}{2})\right]e^{ia_{ij}}\nonumber\\
&=&t(\theta_{ij},\phi_{ij})e^{ia_{ij}}
\end{eqnarray}
where the phase $a_{ij}$ is the vector potential generated by the spins
and corresponds to the Berry phase felt by the hopping electron. Note
that the unit cell of the star lattice contains two triangular
plaquettes and one 12-site dodecagonal plaquette. The mean-field
study of the Heisenberg model on the star lattice has revealed the
existence of several spin liquid phases,\cite{prb81.134418} which
depend on the flux configuration of the unit cell, labeled by the notation
$SL[\phi_{\vartriangleleft},\phi_{\vartriangleright},\phi_{12}]$.\cite{prb81.134418}
In terms of the original spin variables, the fluxes on the triangular
plaquettes correspond to the scalar spin chiralities
$\boldsymbol{S}_{1}\cdot\boldsymbol{S}_{2}\times\boldsymbol{S}_{3}$,
while $\phi_{12}$ is related to the 12 spins around the dodecagon
loop. In the uniform spin liquid phase $SL[0,0,0]$,
$t_{ij}^{eff}=t(\theta_{ij},\phi_{ij})\in\mathbb{R}$. For a
non-uniform spin liquid state, the spin chirality arises and the
fermion hopping acquires a phase,
$t_{ij}^{eff}=t(\theta_{ij},\phi_{ij})e^{ia_{ij}}$, rendering a
nonzero flux $\phi=\sum_{loop}a_{ij}$ for a fermion moving around a
loop. In principle, $t(\theta_{ij},\phi_{ij})$ depends on the angles
between the spins $\boldsymbol{S}_{i}$ and $\boldsymbol{S}_{j}$.
In the mean-field approximation, however, $t(\theta_{ij},\phi_{ij})$
should be independent of the angles $(\theta_{ij},\phi_{ij})$ for the
spin liquid phases, because the fluxes through the plaquettes are
periodic in the whole lattice.\cite{prb81.134418} Thus the hopping
parameters $t_{ij}^{eff}$ reduce to two independent variables on
the star lattice: $t_{c}$ labels the hopping amplitude inside the
triangles, and $t_{d}$ is the hopping amplitude between different
triangles. For convenience, we introduce
$r\equiv\frac{t_{d}}{t_{c}}$ to measure the ratio of these two
hopping amplitudes.
For a given flux configuration $\phi=\sum_{loop}a_{ij}$, there are
different phase configurations $a_{ij}$. A change of the phase
configuration can shift the whole energy band in $k$ space, but does
not modify the energy band structure. Namely, different choices of
the phase configuration do not change the edge states or their
topological properties on the star lattice. This allows us to adopt a
simple choice of the phase configuration for the various spin liquid
phases when studying their edge states and topological orders.
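This gauge statement can be checked numerically on a single loop: two phase configurations $a_{ij}$ with the same total flux yield identical spectra. The following minimal sketch is an illustration only (not part of the mean-field calculation); the 12-site ring is a stand-in for the dodecagon plaquette.

```python
import numpy as np

def ring_hamiltonian(phases):
    """Tight-binding ring whose bond j -> j+1 carries the hopping phase
    phases[j]; the total flux through the loop is sum(phases)."""
    N = len(phases)
    h = np.zeros((N, N), dtype=complex)
    for j in range(N):
        h[j, (j + 1) % N] = np.exp(1j * phases[j])
    return h + h.conj().T  # Hermitian completion adds the reverse hoppings

flux = 1.3
# Same total flux, two different phase configurations a_{ij}:
spec_concentrated = np.linalg.eigvalsh(ring_hamiltonian([flux] + [0.0] * 11))
spec_uniform = np.linalg.eigvalsh(ring_hamiltonian([flux / 12.0] * 12))
assert np.allclose(spec_concentrated, spec_uniform)
```

The equality of the two spectra reflects the gauge freedom: a transformation $c_{j}\rightarrow e^{i\chi_{j}}c_{j}$ redistributes the bond phases while preserving the loop flux.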
\begin{figure}[t]
\includegraphics[scale=0.25]{Fig3.jpg}
\caption{(Color online) The intersection number between the
canonical loop on the complex-energy surface (Riemann surface) and
the trace of the edge-state energy, after Ref.~\cite{prb48.11851}.}
\end{figure}
\section{Edge states and topological orders}
\subsection{Bulk-edge correspondence theory of the star lattice}
In general, we consider a strip of the star lattice with open
boundaries in the $\boldsymbol{e}_{1}$ direction and periodic boundary
conditions in the $\boldsymbol{e}_{2}$ direction, as shown in Fig. 1. We
assume the system is in a spin liquid phase
$SL[\phi_{\vartriangleleft},\phi_{\vartriangleright},\phi_{12}]$,
where the fluxes satisfy the constraint
$\phi_{\vartriangleleft}+\phi_{\vartriangleright}+\phi_{12}=0$.
Using the Bloch theorem in the $\boldsymbol{e}_{2}$ direction, we write
$c_{j}=\frac{1}{L_{2}}\sum_{\boldsymbol{k}}e^{i\boldsymbol{k}\cdot\boldsymbol{e}_{2}}c_{n\ell}(k)$,
where $n$ labels the unit cell and $\ell$ labels the sites in
the unit cell. We set $\boldsymbol{k}\cdot\boldsymbol{e}_{2}=k$ for convenience. The Hamiltonian in Eq.(\ref{effH}) can then be written as
\begin{equation}
H_{eff}=\sum_{k}\boldsymbol{C}^{\dagger}(k)\boldsymbol{h}(k)\boldsymbol{C}(k),
\label{effHK}
\end{equation}
where $\boldsymbol{C}^{\dagger}(k)=(c_{1,1}^{\dagger}(k)...c_{1,6}^{\dagger}(k) c_{2,1}^{\dagger}(k)...c_{2,6}^{\dagger}(k)...c_{N_{1},6}^{\dagger}(k))$, and
\begin{equation}
\boldsymbol{h}(k)=\left[\begin{array}{ccccc}
d(k) & v & 0 & \cdots & 0 \\
v^{\intercal} & d(k) & v & 0 & \vdots\\
0 & v^{\intercal} & \ddots & v & 0\\
\vdots & 0 & v^{\intercal} & d(k) & v\\
0 & \cdots & 0 & v^{\intercal} & d(k)
\end{array}\right]_{N_{1}\times N_{1}}
\label{unitcell}
\end{equation}
with
\begin{eqnarray}
d(k)&=&t_{c}\left[\begin{array}{cccccc}
0 & e^{-i\phi_{1}} & 1 & 0 & 0 & 0\\
e^{i\phi_{1}} & 0 & 1 & 0 & re^{-ik} & 0\\
1 & 1 & 0 & r & 0 & 0\\
0 & 0 & r & 0 & 1 & 1\\
0 & re^{ik} & 0 & 1 & 0 & e^{i\phi_{2}}\\
0 & 0 & 0 & 1 & e^{-i\phi_{2}} & 0
\end{array}\right]
\\
v&=&\left[
\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
r & 0 & 0 & 0 & 0 & 0%
\end{array}%
\right],
\end{eqnarray}
where $N_{1}$ is the number of unit cells along the $\boldsymbol{e}_{1}$ direction.
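As a concreteness check, the block-tridiagonal $\boldsymbol{h}(k)$ can be assembled and diagonalized numerically. The sketch below is an illustration only; the site ordering follows the matrix $d(k)$ above, and we use the Hermitian convention $d_{52}=re^{ik}$ so that $\boldsymbol{h}(k)=\boldsymbol{h}(k)^{\dagger}$.

```python
import numpy as np

def d_block(k, r=1.0, phi1=0.0, phi2=0.0, tc=1.0):
    """Intra-cell 6x6 Bloch block d(k); only the upper triangle is
    entered, the lower triangle follows by Hermiticity."""
    d = np.zeros((6, 6), dtype=complex)
    d[0, 1] = np.exp(-1j * phi1)
    d[0, 2] = 1.0
    d[1, 2] = 1.0
    d[1, 4] = r * np.exp(-1j * k)
    d[2, 3] = r
    d[3, 4] = 1.0
    d[3, 5] = 1.0
    d[4, 5] = np.exp(1j * phi2)
    return tc * (d + d.conj().T)

def strip_hamiltonian(k, N1, **kw):
    """h(k): d(k) blocks on the diagonal; the inter-cell block v has a
    single entry r linking site 6 of cell n to site 1 of cell n+1."""
    r = kw.get("r", 1.0)
    h = np.zeros((6 * N1, 6 * N1), dtype=complex)
    for n in range(N1):
        h[6 * n:6 * n + 6, 6 * n:6 * n + 6] = d_block(k, **kw)
    for n in range(N1 - 1):
        h[6 * n + 5, 6 * (n + 1)] = r   # v block
        h[6 * (n + 1), 6 * n + 5] = r   # v^T block
    return h

h = strip_hamiltonian(k=0.7, N1=10, r=0.5)
assert np.allclose(h, h.conj().T)   # Hermitian as required
energies = np.linalg.eigvalsh(h)    # 6*N1 real strip energies at this k
```

Scanning $k$ over the Brillouin zone and plotting all $6N_{1}$ eigenvalues reproduces the strip spectra of the type shown in Fig. 4.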
The Bloch wave function can be written as
$|\Psi(k)\rangle=\sum_{n,\ell}\psi_{n,\ell}c^{\dagger}_{n,\ell}(k)|0\rangle$, where $n$ runs over the unit cells in the $\boldsymbol{e}_{1}$ direction. Inserting it into the Schr\"odinger equation $H|\Psi\rangle=E|\Psi\rangle$, the problem reduces to a set of equations (the Harper equations)
\begin{equation}
\left\{ \begin{array}{c}
\psi_{n,2}e^{-i\phi_{1}}+\psi_{n,3}+r\psi_{n-1,6}=\varepsilon\psi_{n,1}\\
\psi_{n,1}e^{i\phi_{1}}+\psi_{n,3}+r\psi_{n,5}e^{-ik}=\varepsilon\psi_{n,2}\\
\psi_{n,1}+\psi_{n,2}+r\psi_{n,4}=\varepsilon\psi_{n,3}\\
r\psi_{n,3}+\psi_{n,5}+\psi_{n,6}=\varepsilon\psi_{n,4}\\
\psi_{n,4}+r\psi_{n,2}e^{ik}+\psi_{n,6}e^{i\phi_{2}}=\varepsilon\psi_{n,5}\\
\psi_{n,4}+r\psi_{n+1,1}+\psi_{n,5}e^{-i\phi_{2}}=\varepsilon\psi_{n,6}
\end{array}\right.
\label{Harper}
\end{equation}
where $\varepsilon=\frac{E}{t_{c}}$. Rewriting Eq.(\ref{Harper}) in
matrix form, we can express it in terms of a transfer matrix,
\begin{equation}
\left(\begin{array}{c}
\psi_{n+1,1}\\
\psi_{n,6}
\end{array}\right)=M(\varepsilon)\left(\begin{array}{c}
\psi_{n,1}\\
\psi_{n-1,6}
\end{array}\right)
\end{equation}
where $M$ is a $2\times2$ matrix and its elements are
\begin{eqnarray*}
M_{11}(\varepsilon)&=&e^{i\frac{k+\phi_1-\phi_2}{2}}\frac{l_4 l_5-r^2 l_1^2}{r^2 l_1 l_2}\\
M_{12}(\varepsilon)&=&-e^{i\frac{k+\phi_1-\phi_2}{2}} \frac{l_5}{r l_1} \\
M_{21}(\varepsilon)&=& e^{i\frac{k+\phi_1-\phi_2}{2}} \frac{l_4}{r l_1}\\
M_{22}(\varepsilon)&=& -e^{i\frac{k+\phi_1-\phi_2}{2}} \frac{l_2}{l_1}
\label{M}
\end{eqnarray*}
where
\begin{eqnarray*}
l_1&=&2[(\varepsilon ^2-r^2)\cos\frac{k+\phi_1-\phi_2}{2}+ \\
&&2\varepsilon\cos\frac{k}{2} \cos\frac{\phi_1+\phi_2}{2}+\cos\frac{k-\phi_1+\phi_2}{2}] \\
l_2&=&1+r^4-2 (1+r^2) \varepsilon ^2+\varepsilon ^4-2 r^2 \cos k\\
l_3&=&\varepsilon(3+2 r^2+r^4-2(2+r^2) \varepsilon ^2+\varepsilon ^4)-2 r^2 \varepsilon\cos k\\
l_4&=&l_3+2 (1-\varepsilon ^2) \cos{\phi_1}-2 r^2 \cos(k+\phi_1)\\
l_5&=&l_3+2 (1-\varepsilon ^2) \cos\phi_2-2 r^2 \cos(k-\phi_2)
\label{P}
\end{eqnarray*}
Assuming that the width $L_{1}$ of the star lattice contains an
integer number of unit cells, we obtain the reduced transfer
matrix
\begin{equation}
\left(\begin{array}{c}
\psi_{L_{1}+1,1}\\
\psi_{L_{1},6}
\end{array}\right)=(M(\varepsilon))^{L_{1}}\left(\begin{array}{c}
\psi_{1,1}\\
\psi_{0,6}
\end{array}\right)
\end{equation}
Considering the boundary condition $\psi_{L_{1},6}=\psi_{0,6}=0$,
the edge energy $\varepsilon_{edge}$ satisfies $((M(\varepsilon))^{L_{1}})_{21}=0$.\cite{prl71.3697,prb48.11851}
For $L_{1}\gg 1$, the criterion for edge states follows \cite{prl71.3697,prb48.11851}
\begin{equation}
|(M(\mu_{j}))_{11}|\begin{cases}
<1 & \textrm{edge states localized at site 1}\\
>1 & \textrm{edge states localized at site \ensuremath{L_{1}}}\\
=1 & \textrm{coincide with bulk states}
\end{cases}
\end{equation}
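As a consistency check, the matrix $M(\varepsilon)$ built from $l_{1},\ldots,l_{5}$ can be implemented directly. One can verify from the quoted elements that $\det M=e^{i(k+\phi_{1}-\phi_{2})}$, so $|\det M|=1$ for any real parameters, and the criterion $|M_{11}|\lessgtr 1$ can then be evaluated numerically. The sketch below (Python; the parameter values are illustrative and not from the text) encodes the quoted matrix elements.

```python
import cmath
import math

def transfer_matrix(eps, r, k, phi1, phi2):
    """2x2 transfer matrix M(eps) built from the polynomials l_1..l_5
    quoted in the text (all inputs real)."""
    chi = (k + phi1 - phi2) / 2
    l1 = 2*((eps**2 - r**2)*math.cos(chi)
            + 2*eps*math.cos(k/2)*math.cos((phi1 + phi2)/2)
            + math.cos((k - phi1 + phi2)/2))
    l2 = 1 + r**4 - 2*(1 + r**2)*eps**2 + eps**4 - 2*r**2*math.cos(k)
    l3 = (eps*(3 + 2*r**2 + r**4 - 2*(2 + r**2)*eps**2 + eps**4)
          - 2*r**2*eps*math.cos(k))
    l4 = l3 + 2*(1 - eps**2)*math.cos(phi1) - 2*r**2*math.cos(k + phi1)
    l5 = l3 + 2*(1 - eps**2)*math.cos(phi2) - 2*r**2*math.cos(k - phi2)
    ph = cmath.exp(1j*chi)
    return [[ph*(l4*l5 - r**2*l1**2)/(r**2*l1*l2), -ph*l5/(r*l1)],
            [ph*l4/(r*l1),                         -ph*l2/l1]]

def edge_criterion(eps, r, k, phi1, phi2):
    """|M_11|: <1 / >1 selects the boundary at which an edge state at
    energy eps*t_c would localize; =1 coincides with bulk states."""
    return abs(transfer_matrix(eps, r, k, phi1, phi2)[0][0])
```

The unimodularity of $M$ is independent of the detailed form of the $l_{i}$ and provides a quick sanity check of any implementation.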
The quantum Hall conductance of the system can be expressed in terms of
the Chern number of the U(1) bundle over the magnetic Brillouin
zone.\cite{prl49.405} The bulk-edge correspondence theory reveals
that the Chern number $C(\mu_{j})$ is equivalent to the winding
number of the edge state moving around the hole of the Riemann surface
(the complex-energy surface), namely the intersection number between
the canonical loop on the Riemann surface and the trace of the edge
state energy $\mu_{j}$ (see Fig. 3).\cite{prl71.3697,prb48.11851,njp11.123014}
Thus, the quantum Hall conductance is given by the
winding number of the edge states,
$\sigma_{xy}^{edge}=-\frac{e^{2}}{h}C(\mu_{j})$, when the Fermi
energy lies in the $j$th energy gap. Therefore, we can count the
Chern number from the energy spectrum of the system.
Mean-field studies of the star lattice with the Hamiltonian of
Eq.~(\ref{eq:originalHamilton}) yield various spin liquid
phases.\cite{prb81.134418} It is worth studying the topological
properties of these spin liquid phases.
\begin{figure}[t]
\includegraphics[width=3.5in]{Fig4.jpg}
\caption{(Color online) The energy spectra of $SL[0,0,0]$ for
$r=1/2$ in (a), $r=1$ in (b), and $r=2$ in (c).}
\end{figure}
\subsection{Uniform spin liquid phase: $SL[0,0,0]$}
The uniform spin liquid phase $SL[0,0,0]$ respects all the
space-group symmetries of the lattice as well as time reversal symmetry. The
energy spectra of the $SL[0,0,0]$ phase for several sets of
parameters $(r,\phi)$ are plotted in Fig. 4. It can be seen that
both the bulk energy bands and the edge states are $k$-symmetric,
$E(k)=E(-k)$, due to space inversion symmetry, but the edge states
are either embedded in the bulk states or isolated in the gap,
namely there is no nontrivial bulk gap. Interestingly, there exist
two flat bands lying in the energy-band gap and touching an
edge-state band, which is caused by interference.
\cite{prb54.R17296} This is similar to a uniform spin liquid on the
kagome lattice and can be spoiled by perturbations, such as the next
nearest neighbor hopping. \cite{prb81.134418} Thus, the Chern number
is not well defined, and the system corresponds to a common metal or insulator
without IQHE. Actually, interactions between spinons can destabilize the
uniform spin liquid phase toward
phases with broken time reversal symmetry.\cite{prb81.134418}
\subsection{Nematic spin liquid phase: $SL[\phi,-\phi,0]$}
The spin liquid phases are characterized by a set of spin chirality
operators.\cite{prb81.134418} Different chiral spin phases exhibit
different local magnetic fluxes. For the nematic spin liquid phase
$SL[\phi,-\phi,0]$, time reversal symmetry is broken
spontaneously.\cite{prb81.134418} We plot the energy spectra for
several parameter sets $(r,\phi)$ in Fig. 5. It can be seen that the
$k$-symmetries of both the bulk bands and the edge states are broken,
$E(k)\neq E(-k)$. This shows that time reversal symmetry breaking
can induce space inversion symmetry breaking, which breaks the
$k$-symmetries of the bulk bands and edge states. However, the edge states
are again either embedded in the bulk states or isolated in the gap.
This implies that the systems are common metals or insulators
without IQHE, as in the $SL[0,0,0]$ phase.
\begin{figure}[t]
\includegraphics[width=3.5in]{Fig5.jpg}
\caption{(Color online) The energy spectra of $SL[-\phi,\phi,0]$ for
different parameters $(r,\phi)$.}
\end{figure}
\subsection{Chiral spin liquid phase I: $SL[\phi,\phi,-2\phi]$ }
For the chiral spin liquid phase I $SL[\phi,\phi,-2\phi]$, time
reversal symmetry is also broken spontaneously. \cite{prb81.134418}
In principle, the Chern number in the $j$th gap between the bulk
energy bands depends on the parameters $r$ and $\phi$,
$C_{j}(r,\phi)$ (here we use this symbol for Chern number). However,
we find from numerical investigations that the Chern number
obeys the following symmetries in the range of $r$ and $\phi$:
(1) $C_{j}(r,2\pi-\phi)=-C_{j}(r,\phi)$ for $\phi\in(0,\pi)$;
(2) $C_{j}(r,\pi+\phi)=-C_{5-j}(r,\phi)$ for $\phi\in(0,\pi)$;
(3) $C_{j}(r,\phi)=C_{j}(-r,\phi)$ for $\phi\in(0,2\pi)$.
Thus, we can restrict the parameters to the range
$\phi\in(0,\frac{\pi}{2})$ and $r>0$. In Fig. 6 we plot the energy
spectrum for some typical parameters $(r,\phi)$ and $L_{1}=10$. The
Chern number can be counted from the winding number on the torus
formed by two Riemann surfaces\cite{prl71.3697} (see Fig. 3). The
numbers on the right-hand side of each panel in Fig. 6 are the
Chern numbers of the system when the Fermi energy lies in the
corresponding energy gap. It can be seen that the bulk energy
spectrum is $k$-symmetric, $E(k)=E(-k)$, even though $\phi\neq 0$, but
the edge states have no $k$-symmetry. This implies that the space
inversion symmetry still holds, as indicated by the bulk band $k$-symmetry,
while the spontaneous time reversal breaking breaks the edge-state
$k$-symmetry.
\begin{figure}[t]
\includegraphics[width=3.5in]{Fig6.jpg}
\caption{(Color online) The energy spectra of $SL[\phi,\phi,-2\phi]$
for different parameters $(r,\phi)$.}
\end{figure}
Different Chern numbers imply different topological orders of the
system. In order to obtain the phase diagram in the parameter space,
we need to find the critical lines in the parameter space. When
an energy gap closes, the Chern number must change, namely a phase
transition happens. Since the bulk spectrum has the degeneracy
$E(k)=E(-k)$, the closing of a bulk band gap
can happen only at the $\Gamma$ point ($k=0$).
Thus, we solve for the eigenenergies of the unit cell Hamiltonian (bulk
band) at $k=0$. Numerical investigation indicates that the bulk energy
bands at $k=0$ are linear in $r$, which allows us to suppose that the
eigenenergies of the bulk energy band at $k=0$ have the form
$E_{i}(k=0)=\pm r+b_{i}(\phi)$, where $i=1,2,3$. Substituting this
form into the eigenvalue equation of the Hamiltonian of the
two-dimensional star lattice in $k$ space, we find that
$b_{i}(\phi)$ satisfies a cubic equation,
\begin{equation}
b^{3}-3b-2\cos\phi=0
\end{equation}
When we consider $\phi\in(0,\frac{\pi}{2})$, the solutions of the above cubic equation are
\begin{equation}
\left\{ \begin{array}{c}
b_{1}=2\cos\frac{\phi+2\pi}{3}\\
b_{2}=-2\cos\frac{\phi+\pi}{3}\\
b_{3}=2\cos\frac{\phi}{3}
\end{array}\right.
\end{equation}
where we set $b_{1}<b_{2}<b_{3}$. The closings of the bulk energy band gaps yield the critical lines in the parameter space,
\begin{equation}
\left\{ \begin{array}{c}
r_{c,1}=\sqrt{3}\sin\frac{\phi}{3}\\
r_{c,2}=\sqrt{3}\sin\frac{\pi-\phi}{3}\\
r_{c,3}=\sqrt{3}\sin\frac{\phi+\pi}{3}
\end{array}\right.
\label{critical.line}
\end{equation}
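The trigonometric roots and the critical lines above can be verified numerically: each $b_{i}$ should satisfy the cubic $b^{3}-3b-2\cos\phi=0$, and, since $E_{i}(0)=\pm r+b_{i}(\phi)$, a bulk gap at $k=0$ closes exactly when $2r$ equals a spacing between roots. The sketch below (Python; not part of the original text) checks both statements for an arbitrary $\phi\in(0,\frac{\pi}{2})$.

```python
import math

def cubic_roots(phi):
    """Roots b_1 < b_2 < b_3 of b^3 - 3b - 2*cos(phi) = 0
    (trigonometric solution of the depressed cubic)."""
    b1 = 2*math.cos((phi + 2*math.pi)/3)
    b2 = -2*math.cos((phi + math.pi)/3)
    b3 = 2*math.cos(phi/3)
    return b1, b2, b3

def critical_lines(phi):
    """Critical values r_c at which bulk gaps close at k = 0."""
    rc1 = math.sqrt(3)*math.sin(phi/3)
    rc2 = math.sqrt(3)*math.sin((math.pi - phi)/3)
    rc3 = math.sqrt(3)*math.sin((phi + math.pi)/3)
    return rc1, rc2, rc3
```

With $E_{i}(0)=\pm r+b_{i}$, the identities $r_{c,1}=(b_{2}-b_{1})/2$, $r_{c,2}=(b_{3}-b_{2})/2$ and $r_{c,3}=(b_{3}-b_{1})/2$ follow from standard sum-to-product formulas.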
These three functions separate the $r$--$\phi$ parameter space into
several regions with different topological orders,
shown in Fig. 7. Different regions in the phase diagram represent
ground states with different Chern number configurations, in
which the Chern number at each position corresponds to the
Fermi energy lying in the corresponding gap, as in Fig. 6. The phases with
different Chern numbers reflect different integer quantum Hall
conductances of the system.
\begin{figure}[t]
\includegraphics[scale=0.5]{Fig7.jpg}
\caption{The phase diagram of $SL[\phi,\phi,-2\phi]$. Different
regions represent different Chern numbers for various filling
fractions, which are even functions of $r$ and periodic functions of
$\phi$ with periodicity $2\pi$. The energy spectra along line 1 and
line 2 are shown in
(a)$\rightarrow$(d)$\rightarrow$(b)$\rightarrow$(c) and
(d)$\rightarrow$(e)$\rightarrow$(f) of Fig. 6, respectively. }
\end{figure}
\subsection{Chiral spin liquid phase II: $SL[\phi_{1},\phi_{2},-(\phi_{1}+\phi_{2})]$ }
For the more general chiral spin liquid phase II,
$SL[\phi_{1},\phi_{2},-(\phi_{1}+\phi_{2})]$, the time reversal and
space inversion symmetries are broken spontaneously. For example, the
energy spectra of the spin liquid phase
$SL[\frac{\phi}{3},\frac{2\phi}{3},-\phi]$ for some specific
parameters are shown in Fig. 8. It can be seen that the $k$-symmetries
of both the bulk bands and the edge states are broken, $E(k)\neq E(-k)$, and
the Chern numbers in some energy gaps are not well defined, such as in
the top three gaps for $r=1$, $\phi_{1}=\frac{2\pi}{3}$ and
$\phi_{2}=\frac{4\pi}{3}$. Because the degenerate points of the bulk
bands and edge states are not at the $k=0$ point, the phase transition
lines cannot be obtained easily.
\begin{table}[b]
\caption{The energy band properties of different spin liquid phases}
\label{table1} \centering
\begin{tabular}{lcccccccc}\hline\hline
& EKS & BKS & TRS & SIS & & \multicolumn{3}{c}{Chern number} \\\hline
& & & & & $r$: &\ \ \ \ $\frac{1}{2}$\ \ \ \ &\ \ \ \ $1$\ \ \ \ & $2$ \\\hline
$SL[0,0,0]$ & yes & yes &yes & yes & & $-1$ & $-1$ & x \\
$SL[\frac{\pi}{3},-\frac{\pi}{3},0]$ & no & no &no & no & & x & x & $0$ \\
$SL[\frac{\pi}{3},\frac{\pi}{3},-\frac{2\pi}{3}]$ & no & yes & no & yes & & $-1$ & $-1$ & $0$\\
$SL[\frac{\pi}{3},\frac{2\pi}{3},-\pi]$ & no & no & no & no & & $0$
& $-1$ & $0$ \\\hline\hline
\multicolumn{9}{l}{EKS: Edge-state k-symmetry; BKS: Bulk band k-symmetry;}\\
\multicolumn{9}{l}{TRS: Time reversal symmetry; SIS: Space inverse symmetry;}\\
\multicolumn{9}{l}{x: non-well-defined Chern number.}
\end{tabular}
\end{table}
\section{Discussion}
To compare the basic energy-band properties of the different spin liquid
phases, we assume that the Fermi energy lies in the middle of the
central energy-band gap. The energy band symmetries and Chern numbers
for some specific cases are listed in Table I.
It can be seen from Table I that for $SL[0,0,0]$ both the edge-state
$k$-symmetry (EKS) and the bulk $k$-symmetry (BKS) hold due to time
reversal invariance and space inversion invariance. When $r>2$, the
Chern number becomes non-well-defined. The cases in lines 2--4 of
Table I indicate that there is no EKS, but BKS remains, for
$SL[\frac{\pi}{3},\frac{\pi}{3},-\frac{2\pi}{3}]$. This implies that
spontaneous time reversal symmetry breaking does not always
induce space inversion symmetry breaking, and that the EKS can be
broken by time reversal symmetry breaking alone.
The results in Table I reveal that the Chern number depends not
only on the time reversal and space inversion symmetries, but also on the
parameters $(r,\phi_{1},\phi_{2},\phi_{12})$ of the star lattice.
The ground states become normal metals or semiconductors for the
phases without a well-defined Chern number. Interestingly, there
is a topological invariance under the exchange of the magnetic fluxes
in the two triangles, $SL[\phi_{1},\phi_{2},-(\phi_{1}+\phi_{2})]$
and $SL[\phi_{2},\phi_{1},-(\phi_{1}+\phi_{2})]$: they have the same
Chern numbers, and their bulk energy bands are $k$-asymmetric. These
findings indicate new phases in two-dimensional
materials.\cite{Nagaosa}
\begin{figure}[t]
\includegraphics[width=3.3in]{Fig8.jpg}
\caption{(Color online) The energy spectra of
$SL[\phi_{1},\phi_{2},-(\phi_{1}+\phi_{2})]$ for different
parameters $(r,\phi)$.}
\end{figure}
\section{Conclusions}
In summary, we have studied the edge states and their topological
orders in the different spin liquid phases of the star lattice by using
the bulk-edge correspondence theory. The bulk and edge-state energy
structures and Chern numbers depend on the spin liquid phase and the
hopping parameters, because the local spontaneous magnetic fluxes in
the spin liquid phases break the time reversal and space inversion
symmetries. We have given the characteristics of the bulk and edge
energy structures and their corresponding Chern numbers in the
uniform, nematic and chiral spin liquid phases. In particular, we
have obtained analytically the phase transition lines of the different
topological phases and the corresponding phase diagrams for the
chiral spin liquid states $SL[\phi,\phi,-2\phi]$. We have also found
a topological invariance between the spin liquid
phases $SL[\phi_{1},\phi_{2},-(\phi_{1}+\phi_{2})]$ and
$SL[\phi_{2},\phi_{1},-(\phi_{1}+\phi_{2})]$. These results clarify
the relationship between the energy-band and edge-state structures
of the star lattice and their topological orders. Notably, the
star lattice has recently been synthesized in a material called iron
acetate.\cite{acie46.6076} Therefore, our results provide an
experimental guideline, based on the Hall conductance, for discriminating the spin
liquid phases in real materials and in cold atoms in optical lattices.
Changes of the filling fraction could be implemented by tuning an
applied gate voltage. These results may also give some hints for
understanding the Heisenberg model on the star lattice.
\begin{acknowledgments}
We thank Ming-Liang Tong and Xiao-Ming Chen for helpful discussions.
G.-Y. Huang thanks An Zhao and Jie-Sen Li for useful discussions on
numerical calculations. This work is supported by the Fundamental
Research Funds for the Central Universities of China (11lgjc12 and
10lgzd09), NSFC-11074310, MOST of China 973 program (2012CB821400),
Specialized Research Fund for the Doctoral Program of Higher
Education (20110171110026), and NCET-11-0547.
\end{acknowledgments}
\section{Introduction}
Why is the Higgs boson important?
The Higgs field couples to all the particles in the standard model (SM).
The Higgs field obtains its vacuum expectation value (VEV) $v$ through
electroweak symmetry breaking (EWSB), triggered by some unknown dynamics.
The weak gauge bosons become massive as a consequence of the
Higgs mechanism.
All the quarks and the charged leptons acquire masses via the Yukawa interactions
when the Higgs field is replaced by $v$.
Even neutrinos (although this is physics beyond the SM) can acquire their tiny masses
through dimension-five operators or neutrino Yukawa couplings after the Higgs
field obtains $v$.
The Higgs field is indeed the origin of mass.
It is also known that the Higgs field is necessary to preserve the unitarity
of partial wave amplitudes of elastic scatterings of longitudinally polarized
weak bosons, such as $W_L^+W_L^- \to W_L^+W_L^-$, at high energies.
Without the Higgs field, the S-wave amplitude $a^0(W_L^+W_L^- \to W_L^+W_L^-)$
grows with energy, $a^0 \sim G_F s/(8\pi\sqrt{2})$, where $G_F$ is the Fermi constant and
$\sqrt{s}$ is the collision energy, and unitarity is violated at the TeV scale.
The introduction of the Higgs field cancels this behavior, and $a^0$ approaches
a constant at high energies, $a^0 \sim -G_F m_h^2/(4\pi\sqrt{2})$, where $m_h$ is the mass of the Higgs boson.
Therefore, the Higgs field is necessary to save unitarity.
The condition that the perturbative calculation does not violate unitarity gives
an upper bound, $m_h \lesssim 1$ TeV~\cite{Lee:1977eg}.
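The scale of this bound is easy to reproduce numerically. The sketch below (Python; not part of the original text) takes the single-channel amplitude $a^0=-G_F m_h^2/(4\pi\sqrt{2})$ quoted above and imposes the standard partial-wave condition $|\mathrm{Re}\,a^0|\le 1/2$; the coupled-channel analysis of Ref.~\cite{Lee:1977eg} modifies the numerical coefficient but gives the same $\mathcal{O}(1\,\mathrm{TeV})$ scale.

```python
import math

# Fermi constant in GeV^-2
G_F = 1.1663787e-5

def a0(m_h):
    """High-energy S-wave W_L W_L -> W_L W_L amplitude in the SM
    with a Higgs boson of mass m_h (GeV)."""
    return -G_F * m_h**2 / (4*math.pi*math.sqrt(2))

# Single-channel bound: |a0| <= 1/2  =>  m_h^2 <= 2*pi*sqrt(2)/G_F
m_h_max = math.sqrt(2*math.pi*math.sqrt(2)/G_F)  # saturates |a0| = 1/2
```

Evaluating `m_h_max` gives roughly $0.9$ TeV, consistent with the $m_h \lesssim 1$ TeV statement in the text.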
\begin{figure}[t]
\begin{center}
\includegraphics[width=50mm]{mass-coupling1TeV.eps}
\end{center}
\caption{
Relation between the mass and the coupling with the Higgs boson in the standard model.
The expected error precision in the full ILC program is also indicated~\cite{ILC_TDR}.}
\label{fig:c-m}
\end{figure}
There is no theoretical principle to determine the structure of the Higgs sector within the SM.
One isospin doublet scalar field $\Phi$ is simply introduced as the minimum form in the SM.
Under the renormalizability, its potential can be uniquely written as
\begin{eqnarray}
V(\Phi) = + \mu^2 |\Phi|^2 + \lambda |\Phi|^4.
\end{eqnarray}
Assuming $\mu^2 < 0$ and $\lambda > 0$,
the potential takes the shape of
a Mexican hat, and the electroweak symmetry is spontaneously broken at the
vacuum $\langle \Phi \rangle = (0, v/\sqrt{2})^T$, where $v \simeq 246$ GeV.
Consequently, all SM particles but photons and gluons obtain masses from the unique
VEV $v$. In Fig.~\ref{fig:c-m}, the universal relation between couplings and masses is shown.
The SM gives a simple description of EWSB. However, the following questions immediately arise.
Why is it the minimal form? How do we obtain $\mu^2 <0$? What is the origin of the Higgs force $\lambda$?
Now that a Higgs boson has been found with a mass of about 125 GeV,
the time has come to consider these questions more seriously.
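The location of the minimum can be checked with a few lines of code. The sketch below (Python; the value $\lambda=0.13$ is illustrative, chosen so that $m_h=\sqrt{2\lambda}\,v\approx 125$ GeV in this convention) evaluates the potential along the neutral direction, $|\Phi|^2=v^2/2$, and verifies that it is minimized at $v=\sqrt{-\mu^2/\lambda}$.

```python
import math

def potential(v, mu2, lam):
    """SM Higgs potential along the neutral direction, |Phi|^2 = v^2/2:
    V = mu^2 v^2/2 + lambda v^4/4."""
    return mu2*v**2/2 + lam*v**4/4

lam = 0.13                 # illustrative quartic coupling
v_target = 246.0           # GeV
mu2 = -lam*v_target**2     # chosen so the minimum lands at v = 246 GeV

# with mu^2 < 0 and lambda > 0 the minimum sits at v = sqrt(-mu^2/lambda)
v_min = math.sqrt(-mu2/lam)
```

With these parameters the Higgs mass in this convention, $m_h=\sqrt{2\lambda}\,v$, comes out near the observed 125 GeV.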
\section{Extended Higgs sectors and new physics models}
As there is no guiding principle for the SM Higgs sector, there are many possibilities for
non-minimal Higgs sectors.
Notice that while the current LHC data do not contradict the predictions
of the SM, most of the extended Higgs sectors can satisfy the current data as well.
These extended Higgs sectors are sometimes introduced
to provide sources for solving problems beyond the SM, such as
baryogenesis, dark matter and tiny neutrino masses.
Each scenario has a specific Higgs sector.
It is also well known that the introduction of an elementary scalar field is
problematic, predicting a quadratic divergence in the radiative correction
to its mass. This quadratic divergence causes the hierarchy problem.
Many scenarios have been proposed to solve this problem, such as
supersymmetry, dynamical symmetry breaking, extra dimensions and so on.
Many of the models based on these new paradigms predict specific Higgs sectors
in their low energy effective theories.
Therefore, determining the Higgs sector experimentally is essential,
not only to clarify the mechanism of EWSB but also as a window to new physics beyond the SM.
The discovery of the 125 GeV Higgs boson at the LHC is surely a great step
for determination of the structure of the Higgs sector.
From the detailed study of the Higgs sector, we can determine the model of new physics.
What kinds of extended Higgs sectors can we consider?
As the SM Higgs sector does not contradict the current data within the errors,
we may assume that there is at least one isospin doublet field.
An extended Higgs sector can contain additional isospin multiplets
beyond the doublet of the SM. In principle, there can be
infinitely many kinds of extended Higgs sectors.
As a simple example, we may consider models with one additional
singlet field, one additional doublet field, one additional triplet field and so on.
These extended Higgs sectors can receive constraints from the current data
of many experiments including those for the electroweak rho parameter and
for flavor changing neutral currents (FCNCs).
The rho parameter for a Higgs sector with $N$ multiplets is given at the tree level by
\begin{eqnarray}
\rho = \frac{m_W^2}{m_Z^2 \cos^2\theta_W} = \frac{\sum_i \left\{ 4 T_i (T_i+1)- Y_i^2 \right\} |v_i|^2 c_i}
{\sum_i 2 Y_i^2 |v_i|^2},
\end{eqnarray}
where $T_i$ and $Y_i$ ($i=1, \cdots , N$) are the isospin and hypercharge of the
$i$-th multiplet ($Q_i=T_i+Y_i/2$), and $c_i =1/2$ for real fields ($Y_i=0$)
and $1$ for complex fields.
The data show $\rho=1.0004^{+0.0003}_{-0.0004}$~\cite{PDG}.
It is found that Higgs sectors with additional doublets $(T_i, Y_i) = (1/2, 1)$
(and singlets) predict $\rho=1$ at the tree level, like the SM Higgs sector.
Hence, multi-doublet extensions can be regarded as natural extensions.
On the other hand, the introduction of higher representation fields,
except for the septet field, causes deviations of the rho parameter from unity at the tree level.
For example, in the model with a triplet field $\Delta$ with $(T,Y)=(1,2)$ and VEV $v_\Delta$,
one obtains $\rho \simeq 1 - 2(v_\Delta/v)^2$, so that a tuning
$(v_\Delta/v)^2 \ll 1$ is required to satisfy the data.
Thus, such models are relatively exotic.
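The tree-level formula above is straightforward to evaluate for specific Higgs sectors. The sketch below (Python; not part of the original text) confirms that any number of doublets gives $\rho=1$, while adding a complex triplet with $(T,Y)=(1,2)$ and a small VEV $v_\Delta$ reproduces $\rho\simeq 1-2(v_\Delta/v)^2$.

```python
def rho_tree(multiplets):
    """Tree-level rho parameter from the formula in the text.
    multiplets: list of (T, Y, vev, is_real) tuples; c_i = 1/2 for
    real fields and 1 for complex fields."""
    num = sum((4*T*(T + 1) - Y**2) * v**2 * (0.5 if is_real else 1.0)
              for T, Y, v, is_real in multiplets)
    den = sum(2 * Y**2 * v**2 for T, Y, v, is_real in multiplets)
    return num / den
```

For a doublet, $4T(T+1)-Y^2 = 3-1 = 2$ exactly cancels the denominator factor $2Y^2=2$, which is why any purely doublet (plus singlet) sector is automatically consistent with $\rho\approx 1$.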
It is well known that a multi-Higgs structure receives severe constraints from the results
of FCNC experiments.
FCNC processes such as $K^0 \to \mu^+\mu^-$ and $B^0$--$\bar{B}^0$ mixing are strongly suppressed~\cite{PDG}.
In the SM with one doublet Higgs field,
the suppression of FCNC processes is perfectly explained by the GIM mechanism~\cite{GIM}.
In multi-Higgs doublet models where several Higgs doublets couple to the same quark
or charged lepton, Higgs boson mediated FCNCs can easily occur.
In order to avoid FCNCs at tree level, each type of fermion should couple to only one
Higgs doublet, which can be ensured by assigning appropriate quantum numbers~\cite{GW}.
\begin{table}[t]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline
& $\Phi_1$ & $\Phi_2$ & $u_R^i$ & $d_R^i$ & $e_R^i$ & $Q_L^i$, $L_L^i$ \\
\hline
Type I &$+$&$-$&$-$&$-$&$-$&$+$ \\
Type II &$+$&$-$&$-$&$+$&$+$&$+$ \\
Type X &$+$&$-$&$-$&$-$&$+$&$+$\\
Type Y &$+$&$-$&$-$&$+$&$-$&$+$\\
\hline
\end{tabular}
\end{center}
\caption{
Four types of Yukawa interaction in the 2HDM.}
\label{tbl_4type}
\end{table}
\section{Two Higgs doublet model}
Let us discuss the two Higgs doublet model (2HDM) with $\Phi_1$ and $\Phi_2$,
the minimal extension with multi-doublet structure.
For avoiding FCNC, a softly-broken discrete symmetry under
$\Phi_1\to +\Phi_1$ and $\Phi_2 \to - \Phi_2$ is imposed~\cite{GW}.
The Higgs potential is then given by
\begin{eqnarray}
V &=& + \mu_1^2 |\Phi_1|^2 + \mu_2^2 |\Phi_2|^2 - \mu_{3}^2 (\Phi_1^\dagger \Phi_2 + {\rm h.c.}) \nonumber \\
&& +\lambda_1 |\Phi_1|^4 + \lambda_2 |\Phi_2|^4 +\lambda_3 |\Phi_1|^2|\Phi_2|^2
+\lambda_4 |\Phi_1^\dagger \Phi_2|^2
+ \frac{1}{2} \left\{ \lambda_5 (\Phi_1^\dagger \Phi_2)^2 + {\rm h.c.}\right\}.
\end{eqnarray}
The doublet fields are parameterized as
\begin{eqnarray}
\Phi_{i} = \left(\begin{array}{c}
\omega_{i}^{+} \\
\frac{1}{\sqrt{2}}(v_{i} + h_i + i z_i ) \\
\end{array}
\right), (i=1,2)
\end{eqnarray}
where vacuum expectation values $v_1$ and $v_2$ are expressed by $v$ ($\simeq246$ GeV)
and $\tan\beta$ by $v^2=v_1^2+v_2^2$ and $\tan\beta=v_2/v_1$.
The mass matrix of the CP-even scalars is diagonalized by introducing the mixing angle $\alpha$,
and two mass eigenstates $h$ and $H$ are obtained. The mass matrices of CP-odd and charged
scalars are diagonalized by $\beta$, and physical mass eigenstates $A$ and $H^\pm$ are obtained, respectively.
Their masses are given in the decoupling regime ($M \gg v$) by
\begin{eqnarray}
&&\hspace{-0.6cm}m_h^2 =
\left(\lambda_1 \cos^4\beta +\lambda_2 \sin^4\beta + \frac{1}{2}(\lambda_3+\lambda_4+\lambda_5)\sin^22\beta \right) v^2 + {\mathcal{O}} \left( \frac{v^4}{M^2} \right) ,\nonumber \\
&&\hspace{-0.6cm}m_H^2= M^2+\left(\lambda_1+\lambda_2-2(\lambda_3+\lambda_4+\lambda_5)\right)\sin^2\beta\cos^2\beta \,v^2
+ {\mathcal{O}} \left(\frac{v^4}{M^2}\right) ,\nonumber \\
&&\hspace{-0.6cm}m_{H^\pm}^2= M^2 - \frac{\lambda_4+\lambda_5}{2} v^2,
\hspace{0.6cm}m_A^2= M^2 - \lambda_5 v^2,
\end{eqnarray}
where
$M$ ($=\sqrt{\mu_3^2/\sin\beta\cos\beta}$) represents
the soft breaking scale of the discrete symmetry.
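For illustration, the decoupling-regime mass formulas can be evaluated numerically. The sketch below (Python; the coupling values are illustrative choices, not fits, and only the leading terms quoted above are kept) shows how a single light, SM-like $h$ emerges while $H$, $A$ and $H^\pm$ cluster at the scale $M$.

```python
import math

def thdm_masses(lam, M, tan_beta, v=246.0):
    """Leading-order 2HDM mass spectrum in the decoupling regime
    (M >> v), using the expansion quoted in the text.
    lam = (l1, l2, l3, l4, l5)."""
    l1, l2, l3, l4, l5 = lam
    b = math.atan(tan_beta)
    cb, sb = math.cos(b), math.sin(b)
    mh2 = (l1*cb**4 + l2*sb**4
           + 0.5*(l3 + l4 + l5)*math.sin(2*b)**2) * v**2
    mH2 = M**2 + (l1 + l2 - 2*(l3 + l4 + l5)) * sb**2 * cb**2 * v**2
    mHpm2 = M**2 - 0.5*(l4 + l5) * v**2
    mA2 = M**2 - l5 * v**2
    return {name: math.sqrt(m2) for name, m2 in
            (("h", mh2), ("H", mH2), ("H+", mHpm2), ("A", mA2))}
```

For example, with $\lambda_{1}=\lambda_{2}=\lambda_{3}=0.26$, $\lambda_{4}=\lambda_{5}=0$, $\tan\beta=1$ and $M=600$ GeV, the light state sits near 125 GeV and the heavy states are degenerate at $M$.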
Under the discrete symmetry, there are four possible charge assignments for
the quarks and charged leptons, listed in Table~\ref{tbl_4type}~\cite{Berger}.
In Type I, all the quarks and charged leptons obtain their masses from $\Phi_2$.
In Type II, $\Phi_1$ gives masses to the down-type quarks and charged leptons, while $\Phi_2$
does to the up-type quarks. In Type X, $\Phi_2$ gives masses to the quarks and $\Phi_1$
does to the charged leptons.
The remaining possibility is called Type Y.
The phenomenology of the different types of Yukawa interaction has been
studied in Refs.~\cite{Aoki:2009ha, Mahmoudi:2009zx}.
There are two possibilities to explain the current data, which show SM-like behavior.
When $M^2 \gg v^2$, the additional Higgs bosons are as heavy as $\sqrt{M^2}$, and
only $h$ stays at the electroweak scale, behaving as the SM-like Higgs boson.
The effective Lagrangian is
\begin{eqnarray}
{\mathcal{L}}_{\rm eff} = {\mathcal{L}}_{\rm SM} + {\mathcal O}\left(\frac{v^2}{M^2}\right).
\end{eqnarray}
Another case is $\sqrt{M^2} \sim v$. In the limit where the $hWW$ coupling takes
the same value as the SM prediction, $\sin(\beta-\alpha)=1$,
all the Yukawa couplings of $h$ take the SM values, and the $HWW$ coupling vanishes.
In this case, $h$ behaves as the SM-like Higgs boson.
When $\sin(\beta-\alpha)$ is slightly smaller than unity, the couplings
$hVV$ ($V=W$, $Z$) and $hff$ ($f=t$, $b$, $c$, $\cdots$)
deviate from the SM predictions depending on the type of Yukawa interaction.
By detecting the pattern of deviations in the Higgs boson couplings,
we can distinguish the type of Yukawa interaction in the 2HDM.
\section{Fingerprinting of models with future precision data at the ILC}
In 2015, the LHC experiment will restart at its highest energy of 14 TeV.
Extra Higgs bosons in extended Higgs sectors
can be discovered as long as their masses are
not too large compared to the electroweak scale.
On the other hand, at the International Linear Collider (ILC)~\cite{ILC_TDR},
these extended Higgs sectors can also be tested by accurately
measuring the coupling constants of the discovered Higgs boson $h$.
In non-minimal Higgs models, the relation in Fig.~\ref{fig:c-m} does not hold,
so that we can test the SM by using this relation.
This is complementary to the direct searches at the LHC.
\begin{figure}[t]
\begin{center}
\includegraphics[width=55mm]{KdKe.eps}
\hspace{1cm}
\includegraphics[width=55mm]{KVKF.eps}
\caption{Left: The scaling factors in the 2HDM with the four types of Yukawa interaction.
Right: The scaling factors in models with universal Yukawa couplings. The current LHC
bounds and the expected LHC and ILC sensitivities are also shown at the 68.27\% C.L.
For details, see the text and Ref.~\cite{Asner:2013psa}.}
\end{center}
\label{fingerprint}
\end{figure}
The gauge couplings and Yukawa interactions of $h$ are given by
\begin{eqnarray}
{\mathcal L}^{\rm int}
= +\kappa_W \frac{2m_W^2}{v} hW^{+\mu}W^-_\mu + \kappa_Z \frac{m_Z^2}{v} hZ^\mu Z_\mu
-\sum_f\kappa_f\frac{m_f}{v} {\overline f}fh + \cdots ,
\end{eqnarray}
where $\kappa_V$ ($V=W$ and $Z$) and $\kappa_f$ ($f=t,b,c, \cdots$) are the scaling factors measuring
the deviation from the SM predictions. In the SM, we have $\kappa_V=\kappa_f=1$.
In the 2HDM, $\kappa_V$ are given by
$\kappa_V=\sin(\beta-\alpha)$, while those for the Yukawa interactions are
given depending on the type of Yukawa interaction~\cite{Aoki:2009ha}.
For the SM-like limit $\kappa_V^{}=1$, all the scaling factors $\kappa_f$ become unity.
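As a numerical illustration of the SM-like limit, the sketch below (Python) uses the standard tree-level Type-II expressions $\kappa_u=\cos\alpha/\sin\beta$ and $\kappa_{d,\ell}=-\sin\alpha/\cos\beta$ (these explicit forms are not quoted in the text and are supplied here as an assumption) together with $\kappa_V=\sin(\beta-\alpha)$, and checks that all scaling factors approach unity as $\sin(\beta-\alpha)\to 1$, while away from the limit the Yukawa factors deviate more strongly than $\kappa_V$.

```python
import math

def kappas_type2(alpha, beta):
    """Tree-level scaling factors in the Type-II 2HDM (standard
    expressions, assumed here; the text only quotes
    kappa_V = sin(beta - alpha))."""
    return {"V": math.sin(beta - alpha),        # hWW, hZZ
            "u": math.cos(alpha)/math.sin(beta),   # up-type quarks
            "d": -math.sin(alpha)/math.cos(beta)}  # down-type / leptons
```

In the SM-like limit, $\alpha=\beta-\pi/2$, every factor reduces to 1, reproducing the statement in the text.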
In Fig.~\ref{fingerprint} (Left), the scaling factors $\kappa_f$ in the 2HDM
with the softly-broken discrete symmetry are shown on the $\kappa_\ell$--$\kappa_d$ plane
for various values of $\tan\beta$ and $\kappa_V^{}$ ($=\sin(\beta-\alpha)$).
The points and the dashed curves denote changes of $\tan\beta$ in steps of one.
$\kappa_V$ ($=\kappa_W=\kappa_Z$) is taken as $\kappa_V^2 = 0.99$, $0.95$ and $0.90$.
The current LHC constraints as well as the expected LHC and ILC sensitivities
for $\kappa_d$ and $\kappa_\ell$ are also shown at the 68.27\% Confidence Level (C.L.).
For the current LHC constraints (LHC30), we take the numbers
from the universal fit in Eq.~(18) of Ref.~\cite{Giardino:2013bma}.
For the future LHC sensitivities (LHC300 and LHC3000),
the expected numbers are taken from Scenario 1 in Table 1 of Ref.~\cite{CMS:2012zoa}.
The central values and the correlations are assumed to be the same as in LHC30.
The ILC sensitivities are taken from Table 2.6 in Ref.~\cite{ILC_TDR}.
The same central value without correlation is assumed for the ILC sensitivity curves.
For more details see Ref.~\cite{Asner:2013psa}, and for some revisions see Ref.~\cite{KTYY}.
The analysis including radiative corrections has been performed recently~\cite{Kanemura:2014dja}.
The analysis including radiative corrections has been done recently~\cite{Kanemura:2014dja}.
Precision measurements of the couplings of the SM-like Higgs boson $h$ at the ILC
can also discriminate exotic Higgs sectors.
In a model where $h$ mixes with a singlet Higgs field, we have
a universal suppression of the coupling constants, $\kappa_F^{} = \kappa_V^{} = \cos\theta$,
with $\theta$ being the mixing angle between the doublet field and the singlet field.
However, $\kappa_F^{} \neq \kappa_V^{}$ is predicted in more complicated
Higgs sectors such as the 2HDM, the Georgi-Machacek model~\cite{Georgi:1985nv} and
the doublet-septet model~\cite{Hisano:2013sn}.
Notice that in exotic models with higher representation
scalar fields, such as the Georgi-Machacek model and the doublet-septet model,
$\kappa_V$ can be greater than 1.
This can be a signature of exotic Higgs sectors.
In Fig.~\ref{fingerprint} (Right), the predictions for the scaling factors of the universal
Yukawa coupling, $\kappa_F$, and the gauge coupling, $\kappa_V$, are plotted
for exotic Higgs sectors for each set of mixing angles.
The current LHC bounds and the expected LHC and ILC sensitivities
for $\kappa_F$ and $\kappa_V$ are also shown at the 68.27\% C.L.
Therefore, exotic Higgs sectors can be discriminated by measuring $\kappa_V$ and $\kappa_F$
precisely. For details, see Refs.~\cite{Asner:2013psa, KTYY}.
\section{Conclusion}
Extended Higgs sectors appear in new physics models beyond the SM.
We can explore new physics through the structure of the Higgs sector.
The Higgs sector can be determined by precisely measuring the properties of $h$
at the LHC and the ILC.
In particular, using the high capability of the ILC for measuring the Higgs boson couplings,
we can discriminate extended Higgs sectors
and consequently narrow down the new physics models.
\acknowledgments
This talk is partially based on the work with K. Tsumura, H. Yokoya and K. Yagyu~\cite{KTYY}.
\section{Introduction}
In Bayesian inference, complete knowledge about a vector of model parameters, $\theta\in\Theta$, obtained by fitting a model $\mathcal{M}$, is contained in the posterior distribution.
Here, prior beliefs about the model parameters, expressed through the prior distribution $\pi(\theta)$, are updated by observing data $y_{obs}\in\mathcal{Y}$ through the likelihood function $p(y_{obs}|\theta)$ of the model. Using Bayes' theorem, the resulting posterior distribution
\[
\pi(\theta|y_{obs})=\frac{p(y_{obs}|\theta)\pi(\theta)}{\int_\Theta p(y_{obs}|\theta)\pi(\theta)d\theta},
\]
contains all necessary information required for analysis of the model, including model checking and validation, predictive inference and decision making. Typically, the complexity of the model and/or prior means that the posterior distribution, $\pi(\theta|y_{obs})$, is not available in closed form, and so numerical methods are needed to proceed with the inference. A common approach makes use of Monte Carlo integration to enumerate the necessary integrals. This relies on the ability to draw samples $\theta^{(1)},\theta^{(2)},\ldots,\theta^{(N)}\sim\pi(\theta|y_{obs})$ from the posterior distribution so that a finite sample approximation to the posterior is given by the empirical measure
\[
\pi(\theta|y_{obs})\approx \frac{1}{N}\sum_{i=1}^N\delta_{\theta^{(i)}}(\theta),
\]
where $\delta_Z(z)$ denotes the Dirac measure, defined as $\delta_Z(z)=1$ if $z\in Z$ and $\delta_Z(z)=0$ otherwise.
As the size of the sample from the posterior gets large, the finite sample approximation approaches the true posterior, $\frac{1}{N}\sum_{i=1}^N\delta_{\theta^{(i)}}(\theta) \rightarrow \pi(\theta|y_{obs})$ as $N\rightarrow\infty$, by the law of large numbers.
As a result, the expectation of a function $a(\theta)$ under $\pi(\theta|y_{obs})$ can be estimated as
\begin{eqnarray*}
\mathbb{E}_\pi[a(\theta)] &=&\int_\Theta a(\theta)\pi(\theta|y_{obs})d\theta\\
& \approx & \int_\Theta a(\theta)\frac{1}{N}\sum_{i=1}^N\delta_{\theta^{(i)}}(\theta)d\theta
= \frac{1}{N}\sum_{i=1}^Na(\theta^{(i)}).
\end{eqnarray*}
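As a toy illustration of this Monte Carlo estimator (Python; the Gaussian "posterior" below is a stand-in and not part of the original text), drawing $\theta^{(i)}\sim N(2,1)$ and averaging $a(\theta)=\theta^2$ should recover $\mathbb{E}[\theta^2]=\sigma^2+\mu^2=5$.

```python
import random

# Pretend the sampler has handed us N posterior draws; here the
# "posterior" is a toy N(2, 1) so the truth is known exactly.
random.seed(1)
N = 200_000
draws = [random.gauss(2.0, 1.0) for _ in range(N)]

# Monte Carlo estimate of E_pi[a(theta)] with a(theta) = theta^2
estimate = sum(t**2 for t in draws) / N
```

The Monte Carlo error of the estimate shrinks as $N^{-1/2}$, so increasing the sample size by a factor of 100 gains one decimal digit of accuracy.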
There are a number of popular algorithms available for generating samples from posterior distributions, such as importance sampling, Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) \shortcite{brooksgjm11,chen+si00,doucet+dg01,delmoral+dj06}.
Inherent in such Monte Carlo algorithms is the need to numerically evaluate the posterior distribution, $\pi(\theta|y_{obs})$, up to a normalisation constant, commonly many thousands or millions of times. For example, in the Metropolis-Hastings algorithm, an MCMC algorithm, this arises through computing the probability that the Markov chain accepts the proposed move from a current point $\theta$ to a proposed point $\theta'\sim q(\theta,\theta')$ where $q$ is some proposal density, given by $\alpha(\theta,\theta')=\min\left\{1,\frac{\pi(\theta'|y_{obs})q(\theta',\theta)}{\pi(\theta|y_{obs})q(\theta,\theta')}\right\}$. Similarly in SMC algorithms, the incremental particle weight is given by $w_t(\theta_t)=\frac{\pi_t(\theta_t|y_{obs})L_{t-1}(\theta_t,\theta_{t-1})}{\pi_{t-1}(\theta_{t-1}|y_{obs})M_t(\theta_{t-1},\theta_t)}$, where $M_t$ and $L_{t-1}$ are transition kernels, and $\pi_t$ denotes a function strongly related to the posterior distribution, such as $\pi_t(\theta_t|y_{obs})=[\pi(\theta_t|y_{obs})]^{t/T}\pi(\theta_t)^{1-t/T}$. Evaluating acceptance probabilities or particle weights clearly requires evaluation of the likelihood function.
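A minimal random-walk Metropolis sampler makes the role of the acceptance probability concrete (a sketch on an assumed toy $N(2,1)$ target; with a symmetric proposal, the ratio $q(\theta',\theta)/q(\theta,\theta')$ cancels, leaving only a posterior ratio, which must be evaluated at every iteration):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_post, theta0, n_iter, step=1.0):
    """Random-walk Metropolis: with a symmetric proposal q, the acceptance
    probability min{1, pi(theta')q(theta',theta)/[pi(theta)q(theta,theta')]}
    reduces to the posterior ratio pi(theta')/pi(theta)."""
    theta, chain = theta0, np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()   # theta' ~ q(theta, .)
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                              # accept the move
        chain[i] = theta
    return chain

# Toy target: an N(2, 1) "posterior", evaluated only up to a constant.
chain = metropolis_hastings(lambda t: -0.5 * (t - 2.0)**2, 0.0, 50_000)
```

Note that `log_post` is called twice per iteration: tens of thousands of likelihood evaluations even for this trivial example.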
However, for an increasing range of scientific problems -- see Section \ref{section:FurtherReading} for a selection -- numerical evaluation of the likelihood function, $p(y_{obs}|\theta)$, is either computationally prohibitive, or simply not possible. Examples of the former can occur where the size of the observed dataset, $y_{obs}$, is sufficiently large that, in the absence of low-dimensional sufficient statistics, evaluating the likelihood function even once is impracticable. This can easily occur in the era of Big Data, for example, through large genomic datasets.
Partial likelihood intractability can arise, for instance, in models for Markov random fields. Here, the likelihood function can be written as
$p(y_{obs}|\theta) = \frac{1}{Z_{\theta}}\tilde{p}(y_{obs}|\theta)$
where $\tilde{p}(y_{obs}|\theta)$ is a function that can be evaluated, and where the normalisation constant, $Z_{\theta}=\sum_{\mathcal{Y}} \tilde{p}(y|\theta)$, depends on the parameter vector $\theta$.
Except for trivial datasets, the number of possible data configurations in the set $\mathcal{Y}$ means that brute-force enumeration of $Z_\theta$ is typically infeasible \shortcite{grelaud+rmrt09,moller+prb06}.
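To see the difficulty concretely, consider a toy one-dimensional Ising-type chain (an assumed example for illustration, not a model taken from the references above). Even when $\tilde{p}(y|\theta)$ is trivial to evaluate, brute-force computation of $Z_\theta$ requires a sum over all $2^n$ configurations:

```python
import itertools
import numpy as np

def unnorm_density(y, theta):
    """The tractable part p~(y|theta) of a toy 1-D Ising-type chain,
    y in {-1, +1}^n, with interaction strength theta."""
    y = np.asarray(y)
    return np.exp(theta * np.sum(y[:-1] * y[1:]))

def Z_brute_force(n, theta):
    """Normalising constant Z_theta by enumerating all 2^n configurations
    in the sample space -- feasible only for very small n."""
    return sum(unnorm_density(y, theta)
               for y in itertools.product([-1, 1], repeat=n))

# For this simple chain Z_theta = 2(2 cosh(theta))^(n-1) in closed form, so
# the enumeration can be checked.  But the cost doubles with every extra
# site: n = 30 already needs ~10^9 terms, and a lattice of realistic size
# is hopeless.
```
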
While there are algorithmic techniques available that arrange for the intractable normalising constants to cancel out within e.g. Metropolis-Hastings acceptance probabilities \shortcite{moller+prb06}, or that numerically approximate $Z_\theta$ through e.g. path sampling or thermodynamic integration, these are not viable when $\tilde{p}(y|\theta)$ itself is also computationally intractable.
Instances when the complete likelihood function is unavailable can also occur when the model density function is only implicitly defined, for example, through quantile or characteristic functions \shortcite{drovandi+p11,peters+sf12}. Similarly, the likelihood function may only be implicitly defined as a data generation process.
In these scenarios, if the preferred model is computationally intractable, the need to repeatedly evaluate the posterior distribution to draw samples from the posterior makes the implementation of standard Bayesian simulation techniques impractical.
Faced with this challenge,
one option is simply to fit a different model that is more amenable to statistical computations. The disadvantage of this approach is that the model could then be less realistic, and not permit inference on the particular questions of interest for the given analysis. A more attractive alternative may be to consider an approximation to the preferred model, so that modelling realism is maintained at the expense of some approximation error. While various posterior approximation methods are available, ``likelihood-free'' Bayesian methods, of which approximate Bayesian computation (ABC) is a particular case, have emerged as an effective and intuitively accessible way of performing an approximate Bayesian analysis.
In this Chapter, we aim to give an intuitive exploration of the basics of ABC methods, illustrated wherever possible by simple examples. The scope of this exploration is deliberately limited: for example, we focus only on simple rejection sampling based ABC samplers, so that this Chapter provides an accessible introduction to a subject that is given more detailed and advanced treatment in the rest of this Handbook.
\section{Likelihood-free intuition}%
The basic mechanism of likelihood-free methods can be fairly easily understood at an intuitive level.
For the moment, we assume that data generated under the model, $y\sim p(y|\theta)$, are discrete.
Consider the standard rejection sampling algorithm for sampling from a density $f(\theta)$:
\begin{table}[tbh]
\caption{\bf Standard Rejection Sampling Algorithm}
\noindent {\it Inputs:}
\begin{itemize}
\item A target density $f(\theta)$.
\item A sampling density $g(\theta)$, with $g(\theta)>0$ if $f(\theta)>0$.
\item An integer $N>0$.
\\
\end{itemize}
\noindent {\it Sampling:}\\
\noindent For $i=1, \ldots, N$:
\begin{enumerate}
\item \label{alg:rejection:step1} Generate $\theta^{(i)}\sim g(\theta)$ from sampling density $g$.
\item \label{alg:modified-rejection:step3}
Accept $\theta^{(i)}$ with probability $\frac{f(\theta^{(i)})}{Kg(\theta^{(i)})}$ where $K\geq\max_\theta \frac{f(\theta)}{g(\theta)}$.\\
Else go to \ref{alg:rejection:step1}.
\\
\end{enumerate}
\noindent {\it Output:}\\
A set of parameter vectors $\theta^{(1)},\ldots,\theta^{(N)}$ which are samples from $f(\theta)$.
\end{table}
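The algorithm above translates directly into code. The following sketch uses an assumed toy target, a Beta$(2,2)$ density with a $U(0,1)$ sampling density, for which $K=\max_\theta f(\theta)/g(\theta)=f(1/2)=1.5$:

```python
import numpy as np

rng = np.random.default_rng(2)

def rejection_sample(f, g_sample, g_density, K, N):
    """Standard rejection sampler: draw theta ~ g and accept it with
    probability f(theta) / (K g(theta))."""
    samples = []
    while len(samples) < N:
        theta = g_sample()
        if rng.uniform() < f(theta) / (K * g_density(theta)):
            samples.append(theta)
    return np.array(samples)

# Toy example: target f = Beta(2, 2) density, sampling density g = U(0, 1),
# so that K = max_theta f(theta)/g(theta) = f(1/2) = 1.5.
f = lambda t: 6.0 * t * (1.0 - t)
draws = rejection_sample(f, rng.uniform, lambda t: 1.0, K=1.5, N=20_000)
```

The overall acceptance rate is $1/K$, so a tight bound $K$ matters for efficiency.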
If we specify $f(\theta)=\pi(\theta|y_{obs})$, and suppose that
the prior is used as the sampling distribution,
then the acceptance probability is proportional to the likelihood, as then $f(\theta)/[Kg(\theta)]\propto p(y_{obs}|\theta)$. While direct evaluation of this acceptance probability is not available if the likelihood is computationally intractable, it is possible to stochastically determine whether to accept or reject a draw from the sampling density, {\it without} numerical evaluation of the acceptance probability. (The following discussion assumes that the data $y$ are discrete; this assumption will be relaxed later.)
This can be achieved by noting that the acceptance probability is proportional to the probability of generating the observed data, $y_{obs}$, under the model $p(y|\theta)$ for a fixed parameter vector, $\theta$. That is, suitably normalised, the likelihood function $p(y|\theta)$ can be considered as a probability mass function for the data. Put another way, for fixed $\theta$, if we generate a dataset from the model $y\sim p(y|\theta)$, then the probability of generating our observed dataset exactly, so that $y=y_{obs}$, is precisely $p(y_{obs}|\theta)$. From this observation, we can use the Bernoulli event of generating $y=y_{obs}$ (or not) to determine whether to accept (or reject) a draw from the sampling distribution, in lieu of directly evaluating the probability $p(y_{obs}|\theta)$.
This insight permits a rewriting of the simple rejection sampling algorithm, as given below. A critical aspect of this modified algorithm is that it does not require numerical evaluation of the acceptance probability (i.e. the likelihood function). Note that if sampling is from $g(\theta)$ rather than the prior $\pi(\theta)$, then the acceptance probability is proportional to $p(y_{obs}|\theta)\pi(\theta)/g(\theta)$. In this case, deciding whether to accept a draw from $g(\theta)$ can be split into two stages: firstly, as before, if we generate $y\sim p(y|\theta)$ such that $y\neq y_{obs}$ then we reject the draw from $g(\theta)$. If however, $y=y_{obs}$, then we accept the draw from $g(\theta)$ with probability
$\pi(\theta)/[Kg(\theta)]$, where $K\geq\max_\theta \pi(\theta)/g(\theta)$. (These two steps may be interchanged, so that the step with the lower computational overhead is performed first.) Importance sampling versions of this and later algorithms are examined in \shortciteN{fan+s18}.
\begin{table}[tbh]
\caption{\bf Likelihood-Free Rejection Sampling Algorithm}
\noindent {\it Inputs:}
\begin{itemize}
\item A target posterior density $\pi(\theta|y_{obs})\propto p(y_{obs}|\theta)\pi(\theta)$, consisting of a prior distribution $\pi(\theta)$ and a procedure for generating data under the model, $y\sim p(y|\theta)$.
\item A proposal density $g(\theta)$, with $g(\theta)>0$ if $\pi(\theta|y_{obs})>0$.
\item An integer $N>0$.
\\
\end{itemize}
\noindent {\it Sampling:}\\
\noindent For $i=1, \ldots, N$:
\begin{enumerate}
\item \label{alg:modified-rejection:step1} Generate $\theta^{(i)}\sim g(\theta)$ from sampling density $g$.
\item Generate $y\sim p(y|\theta^{(i)})$ from the likelihood.
\item \label{alg:modified-rejection:step3} If $y=y_{obs}$ then accept $\theta^{(i)}$ with probability $\frac{\pi(\theta^{(i)})}{Kg(\theta^{(i)})}$,\\ where $K\geq\max_\theta\frac{\pi(\theta)}{g(\theta)}$.
Else go to \ref{alg:modified-rejection:step1}.
\\
\end{enumerate}
\noindent {\it Output:}\\
A set of parameter vectors $\theta^{(1)},\ldots,\theta^{(N)}$ which are samples from $\pi(\theta|y_{obs})$.
\end{table}
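A direct implementation of this algorithm can be checked on an assumed toy conjugate model for which the exact posterior is known: $y\sim\mbox{Poisson}(\theta)$ with an Exponential$(1)$ prior, so that $\pi(\theta|y_{obs})$ is Gamma with shape $y_{obs}+1$ and rate 2. Sampling from the prior ($g=\pi$) means that the second acceptance step is automatic ($K=1$):

```python
import numpy as np

rng = np.random.default_rng(3)

def lf_rejection(prior_sample, model_sample, y_obs, N):
    """Likelihood-free rejection sampling with g = pi (the prior), so the
    accept probability pi/(K g) equals 1: keep theta whenever the dataset
    simulated from the model exactly matches y_obs."""
    accepted = []
    while len(accepted) < N:
        theta = prior_sample()          # step 1: theta ~ g = pi
        y = model_sample(theta)         # step 2: y ~ p(y | theta)
        if y == y_obs:                  # step 3: exact match required
            accepted.append(theta)
    return np.array(accepted)

# Toy conjugate model: y ~ Poisson(theta), theta ~ Exponential(1), so the
# true posterior is Gamma(shape = y_obs + 1, rate = 2), with mean 2 here.
y_obs = 3
draws = lf_rejection(lambda: rng.exponential(1.0),
                     lambda th: rng.poisson(th), y_obs, 5_000)
```

No likelihood evaluation occurs anywhere: only prior draws and forward simulations from the model.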
\section{A practical illustration: Stereological extremes}%
\label{section:stereological}
In order to illustrate the performance of the likelihood-free rejection sampling algorithm, we perform a re-analysis of a stereological dataset with a computationally intractable model first developed by \shortciteN{bortot+cs07}.
\subsection{Background and model}
Interest is in the distribution of the size of {\em inclusions}, microscopically small particles introduced during the production of steel. The steel strength is thought to be directly related to the size of the largest inclusion.
Commonly, the sampling of inclusions involves measuring the maximum cross-sectional diameter of each observed inclusion, $y_{obs}=(y_{obs,1}, \ldots, y_{obs,n})^\top$, obtained from a two-dimensional planar slice through the steel block. Each cross-sectional inclusion size is greater than some measurement threshold, $y_{obs,i}>u$.
The inferential problem is to analyse the unobserved distribution of the largest inclusion in the block, based on the information in the cross-sectional slice, $y_{obs}$. The focus on the size of the largest inclusion means that this is an extreme value variation on the standard stereological problem \cite{baddeley+j05}.
Each observed cross-sectional inclusion diameter, $y_{obs,i}$, is associated with an unobserved true inclusion diameter $V_i$.
\citeN{anderson+c02} proposed a mathematical model
assuming that the inclusions were spherical with diameters $V$, and that their centres followed a homogeneous Poisson process with rate $\lambda>0$ in the volume of steel.
The distribution of the largest inclusion diameters, $V|V>v_0$
was assumed to follow a generalised Pareto distribution, with distribution function
\begin{equation}
\label{eqn:gpd}
\mbox{Pr}(V\leq v|V>v_0) = 1-\left[1+\frac{\xi(v-v_0)}{\sigma}\right]^{-1/\xi}_+,
\end{equation}
for $v>v_0$, where $[a]_+=\max\{0,a\}$,
following standard extreme value theory arguments \cite{coles01}.
However, the probability of observing the cross-sectional diameter $y_{obs,i}$ (where $y_{obs,i}\leq V_i$) is dependent on the value of $V_i$, as larger inclusion diameters give a greater chance that the inclusion will be observed in the two-dimensional planar cross-section. This means that the number of observed inclusions, $n$, is also a random variable.
Accordingly the parameters of the full spherical inclusion model are $\theta=(\lambda, \sigma, \xi)^\top$.
\citeN{anderson+c02} were able to construct a tractable likelihood function for this model by adapting the solution to Wicksell's corpuscle problem \cite{wicksell25}.
However, while their model assumptions of a Poisson process
are not unreasonable, the assumption that the inclusions are spherical is not plausible in practice.
\shortciteN{bortot+cs07} generalised this model to a family of ellipsoidal inclusions.
While this model is more realistic than the spherical inclusion model, there are analytic and computational difficulties in extending likelihood-based inference to more general families of inclusion \shortcite{baddeley+j05,bortot+cs07}. As a result ABC methods are a good candidate procedure to approximate the posterior distribution in this case.
\subsection{Analysis}
\label{sec:extremesAnalysis}
For simplicity, suppose that we are interested in the spherical inclusions model, so that the true posterior distribution can be estimated directly. Suppose also that the parameters of the generalised Pareto distribution are known to be $\sigma=1.5$ and $\xi=0.1$, so that interest is in the Poisson rate parameter, $\lambda$, only. In this setting, a sufficient statistic for the rate parameter is $n_{obs}$, the observed number of inclusions, so that $\pi(\theta|y_{obs})=\pi(\lambda|n_{obs})$ is the distribution of interest.
Accordingly we can replace $y_{obs}=n_{obs}$ in the likelihood-free rejection sampling algorithm.
For the dataset considered by \shortciteN{bortot+cs07}, $n_{obs}=112$.
Figure \ref{chapter3:intro-bortot}(a) shows scaled density estimates of $\pi(\lambda|n_{obs})$ (solid lines) obtained using the likelihood-free rejection sampling algorithm, for varying numbers of observed inclusions, $n_{obs}=92, 102, 112, 122$ and $132$. As the observed number of inclusions increases, accordingly so does the location and scale of the posterior of the rate parameter. The dashed lines in Figure \ref{chapter3:intro-bortot}(a) denote the same density estimates of $\pi(\lambda|n_{obs})$, but obtained using a conditional version of the standard MCMC sampler developed by \citeN{anderson+c02}, which makes use of numerical evaluations of the likelihood. These estimates are known to correspond to the true posterior. The likelihood-free rejection algorithm estimates clearly coincide with the true posterior distribution.
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{intro-bortot.pdf}
\caption{\small Posterior density estimates of $\pi(\lambda|n_{obs})$ for the stereological extremes example, based on spherical inclusions.
(a) Density estimates using the likelihood-free rejection sampler (solid lines) and standard MCMC algorithm (dashed lines), with $n_{obs}=92, 102, 112, 122$ and $132$.
(b) Density estimates for $n_{obs}=112$, with the relaxed criterion that $\|y-y_{obs}\|\leq h$ for $h=0, 10$ and $20$.
}
\label{chapter3:intro-bortot}
\end{figure}
The density estimates obtained under the likelihood-free algorithm are each based on approximately 25,000 accepted samples, obtained from 5 million draws from the $U(0,100)$ prior. That is, the acceptance rate of the algorithm is approximately 0.5\%.
This algorithm is clearly very inefficient. The computational overheads are partially influenced by the mismatch between the prior and posterior distributions, but are primarily dominated by the low probability of generating data from the model that exactly match the observed data, $n_{obs}$. This is the price for avoiding likelihood evaluation.
On balance, the computational inefficiency is practically acceptable for this specific case. However, this raises the question of how viable this approach will be for more complex analyses, when the probability of generating data such that $y=y_{obs}$ becomes even lower. Further, the acceptance probability will be exactly zero if the data generated under the model, $y\sim p(y|\theta)$, are continuous, which is likely to be the case in general.
In order to alleviate such computational overheads,
one possible variation on the likelihood-free rejection algorithm would be to adjust the potentially very low (or zero) probability requirement that $y=y_{obs}$ exactly. Instead, the acceptance criterion could require that the generated data is simply ``close'' to the observed data. For example, this might require that $\|y-y_{obs}\|\leq h$ for some $h\geq 0$ and distance measure $\|\cdot\|$, such as Euclidean distance. This would also permit a relaxation of our previous assumption that data generated under the model, $y\sim p(y|\theta)$, are discrete. In this way, step \ref{alg:modified-rejection:step3} of the {\it Sampling} stage of the likelihood-free rejection algorithm would become:
\begin{table}
\caption{\bf Likelihood-Free Rejection Sampling Algorithm}
\begin{enumerate}
\item[3.] If $\|y-y_{obs}\|\leq h$ then accept $\theta^{(i)}$ with probability $\frac{\pi(\theta^{(i)})}{Kg(\theta^{(i)})}$,\\ where $K\geq\max_\theta\frac{\pi(\theta)}{g(\theta)}$.
Else go to \ref{alg:modified-rejection:step1}.
\end{enumerate}
\end{table}
Of course, the output samples would no longer be draws from $\pi(\theta|y_{obs})$ unless $h=0$, but will instead be draws from an approximation of $\pi(\theta|y_{obs})$.
The logic behind this modification is that increasing $h$ will considerably improve the acceptance rate of the algorithm. The hope is that, if $h$ remains small, then the resulting estimate of the posterior will still be close to the true posterior. An illustration of this is shown in Figure \ref{chapter3:intro-bortot}(b), which shows density estimates obtained using the adjusted requirement that $\|n-n_{obs}\|\leq h$ for $h=0$ (i.e. $n=n_{obs}$), $10$ and $20$. Computationally there is a marked improvement in algorithmic efficiency: the low $0.5\%$ acceptance rate for $h=0$ increases to $10.5\%$ and $20.5\%$ for $h=10$ and $20$ respectively.
However, there are now some clear deviations in the density estimate resulting from the likelihood-free algorithm, compared to the actual posterior, $\pi(\lambda|n_{obs})$ (solid lines). In fact, it is more accurate to refer to these density estimates as an approximation of the posterior. On one hand, the location and shape of the density are broadly correct, and for some applications, this level of approximation may be adequate. On the other hand, however, the scale of the approximation is clearly overestimated for larger values of $h$. Intuitively this makes sense: the adjusted criterion $\|y-y_{obs}\|\leq h$ accepts $\theta\sim g(\theta)$ draws if the generated data $y$ is merely ``close'' to $y_{obs}$. As such, for many values of $\theta$ where it was previously very unlikely to generate data such that $y=y_{obs}$, it may now be possible to satisfy the more relaxed criterion. This will accordingly result in a greater range of $\theta$ values that will be accepted, and thereby increase the variability of the posterior approximation. The more relaxed the criterion (i.e. the larger the value of $h$), the greater the resulting variability.
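This trade-off can be illustrated with a deliberately simplified stand-in for the inclusion model (an assumption for illustration only, not the model of \shortciteN{bortot+cs07}): take $n\sim\mbox{Poisson}(\lambda)$ and, to keep the sketch free of prior boundary effects, widen the uniform prior to $U(0,200)$. Increasing $h$ raises the acceptance rate but also inflates the spread of the accepted $\lambda$ values:

```python
import numpy as np

rng = np.random.default_rng(4)

def abc_counts(n_obs, h, num_draws):
    """Accept prior draws of lambda whenever |n - n_obs| <= h, where
    n ~ Poisson(lambda) is a hypothetical stand-in for the observed
    inclusion count (not the actual stereological model)."""
    lam = rng.uniform(0.0, 200.0, size=num_draws)   # prior draws
    n = rng.poisson(lam)                            # simulated datasets
    accepted = lam[np.abs(n - n_obs) <= h]
    return accepted, len(accepted) / num_draws

results = {h: abc_counts(n_obs=112, h=h, num_draws=200_000)
           for h in (0, 10, 20)}
# The acceptance rate grows with h, while the standard deviation of the
# accepted lambda values also grows: the approximation broadens.
```
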
It is possible to be more precise about the exact form of the posterior obtained through this adjusted procedure -- this will be discussed in detail in the next Section. However, for this particular analysis, based on samples $\lambda^{(1)},\ldots,\lambda^{(N)}$ and datasets $n^{(1)},\ldots,n^{(N)}$ obtained from the likelihood-free rejection algorithm, it can be seen that as the posterior approximation is constructed from those values of $\lambda=\lambda^{(i)}$ such that $\|n^{(i)}-n_{obs}\|\leq h$, then the posterior approximation can firstly be expressed as
\begin{eqnarray*}
\hat{\pi}(\lambda|n_{obs}) = \frac{1}{N} \sum_{i=1}^N\delta_{\lambda^{(i)}}(\lambda)
&=&
\frac{1}{N}\sum_{\lambda^{(i)}:\|n^{(i)}-n_{obs}\|\leq h}\delta_{\lambda^{(i)}}(\lambda)\\
& = &
\sum_{h'=-h^*}^{h^*}\left(\frac{1}{N}\sum_{\lambda^{(i)}:(n^{(i)}-n_{obs})=h'}\delta_{\lambda^{(i)}}(\lambda)\right),
\end{eqnarray*}
where $h^*$ is the largest integer such that $|h^*|\leq h$.
It then follows that
\begin{equation}
\label{eqn:discreteMixturePost}
\lim_{N\rightarrow\infty} \hat{\pi}(\lambda|n_{obs})
=
\sum_{h'=-h^*}^{h^*} \mbox{Pr}(n=n_{obs}+h')\pi(\lambda|n_{obs}+h').
\end{equation}
That is, the ``likelihood-free'' approximation of the posterior, $\pi(\theta|y_{obs})$, is precisely an average of the individual posterior distributions $\pi(\lambda|n_{obs}+h')$ for $h'=-h^*,\ldots,h^*$, weighted according to $\mbox{Pr}(n=n_{obs}+h')$, the probability of observing the dataset, $n_{obs}+h'$, based on samples drawn from the (prior predictive) distribution $p(n|\lambda)\pi(\lambda)$.
This can be loosely observed from Figure \ref{chapter3:intro-bortot}, in which the approximations for $h=10$ and $h=20$ in panel (b) respectively correspond to rough visual averages of the centre three and all five displayed posteriors in panel (a). For $h=0$ we obtain $\lim_{N\rightarrow\infty}\hat{\pi}(\lambda|n_{obs})=\pi(\lambda|n_{obs})$ as for standard Monte Carlo algorithms.
Similar interpretations and conclusions arise when the data $y$ are continuous, as we examine for a different model in the following Subsection. This also allows us to introduce a fundamental concept in ABC methods -- the use of summary statistics.
\section{A $g$-and-$k$ distribution analysis}%
\label{section:gandk}
The univariate $g$-and-$k$ distribution is a flexible unimodal distribution that is able to describe data with significant amounts of skewness and kurtosis.
Originally developed by
\citeN{tukey77} (see also \citeNP{martinez+i84,hoaglin85} and
\citeNP{rayner+m02}), the $g$-and-$k$ and related distributions have been analysed in the ABC setting by \shortciteN{peters+s06}, \shortciteN{allingham+km09}, \shortciteN{drovandi+p11} and \shortciteN{fearnhead+p12} among others. Its density function has no closed form, but is alternatively defined through its quantile function as
\begin{eqnarray}
\label{eqn:g&k}
Q(q|A,B,g,k) = A + B\left[1+c\frac{1-\exp\{-gz(q)\}}{1+\exp\{-gz(q)\}}\right] (1+z(q)^2)^k z(q)
\end{eqnarray}
for $B>0, k>-1/2$,
where $z(q)=\Phi^{-1}(q)$ is the $q$-th quantile of the standard normal distribution function. The parameter $c$ measures overall asymmetry, and is conventionally fixed at $c=0.8$ (resulting in $k>-1/2$) \cite{rayner+m02}. This distribution is very flexible, with many common distributions obtained or well approximated by particular parameter settings, such as the normal distribution when $g=k=0$. Given $\theta=(A,B,g,k)^\top$, simulations $z(q)\sim N(0,1)$ drawn from a standard normal distribution can be transformed into samples from the $g$-and-$k$ distribution through equation (\ref{eqn:g&k}).
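Simulation from the model is then straightforward to code: since $z(q)=\Phi^{-1}(q)$ with $q\sim U(0,1)$ is simply a standard normal draw, pushing $z\sim N(0,1)$ through the quantile function yields $g$-and-$k$ samples. A sketch (the $g=k=0$ case recovers $N(A,B^2)$, which provides a simple check):

```python
import numpy as np

rng = np.random.default_rng(5)

def gk_sample(A, B, g, k, size, c=0.8):
    """Draw from the g-and-k distribution by pushing standard normal draws
    z = z(q), q ~ U(0,1), through the quantile function Q(q|A,B,g,k)."""
    z = rng.standard_normal(size)
    return (A
            + B * (1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z)))
            * (1 + z**2)**k * z)

# With g = k = 0 the quantile function reduces to A + B z, i.e. N(A, B^2).
y = gk_sample(A=3.0, B=1.0, g=0.0, k=0.0, size=200_000)
```
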
Figure \ref{image:gandk} shows a scatterplot of samples from the likelihood-free approximation of the posterior $\pi(\theta|y_{obs})$ (grey dots), based on a simulated dataset $y_{obs}$ of length $n=1,000$ generated from the $g$-and-$k$ distribution with parameter vector $\theta_0=(3,1,2,0.5)^\top$.
This analysis was based on defining
$\|y-y_{obs}\| = (y-y_{obs})^\top\hat{\Sigma}^{-1}(y-y_{obs})$ as (squared) Mahalanobis distance, with $h$ given by the 0.005 quantile of the distances $\|y^{(i)}-y_{obs}\|$, where each $y^{(i)}\sim p(y|\theta^{(i)})$ is generated from one of $N=100{,}000$ Monte Carlo samples $\theta^{(i)}$ drawn from the joint prior $\pi(\theta)=\pi(A)\pi(B)\pi(g)\pi(k)=N(1,5)\times N(0.25,2)\times U(0,10)\times U(0,1).$
The matrix $\hat{\Sigma}$ was determined as the sample covariance matrix of $y$ using 2,000 samples generated under the model $y|\theta_0$ with $\theta=\theta_0$ fixed at its true value.
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{g-and-k.pdf}
\caption{{\protect\small Pairwise scatterplots of samples from the likelihood-free approximation to the posterior using the full dataset (grey dots), and four summary statistics (black dots). True parameter values $(A,B,g,k)=(3,1,2,0.5)$ are indicated by the cross $\times$.
}}
\label{image:gandk}
\end{figure}
As is apparent from Figure \ref{image:gandk}, the likelihood-free approximation to $\pi(\theta|y_{obs})$ (grey dots) is particularly poor -- the true parameter vector $\theta_0$ is not even close to the estimated posterior samples. This outcome is a direct result of the dimension of the comparison $y-y_{obs}$. The chance of generating an $n=1,000$-dimensional vector $y$ that is close to $y_{obs}$, even if $\theta=\theta_0$, is vanishingly small.
The odds of matching $y$ with $y_{obs}$ can be increased by redefining both in terms of their order statistics, although the chances still remain extremely low (see Example 3 in Section \ref{section:summaryStatisticBasics} for an illustration).
This means that $h$ must be relatively large, which results in accepting samples $\theta^{(i)}$ that generate data $y^{(i)}$ that are not actually close to $y_{obs}$, and thereby producing a poor approximation to $\pi(\theta|y_{obs})$.
The obvious way to avoid this problem is to reduce the dimension of the data comparison $y-y_{obs}$. Suppose that lower dimensional statistics $s=S(y)$ and $s_{obs}=S(y_{obs})$ are available, such that $S(y)$ is sufficient for, or highly informative for $\theta$ under the model, but where $\dim(S(y))\ll\dim(y)$. Then the comparison $\|y-y_{obs}\|$ might be replaced by $\|s-s_{obs}\|$ without too much loss of information, but with the advantage that the dimension of $S(y)$ is now much lower. That is, step 3 in the likelihood-free rejection sampling algorithm could be further replaced by:
\begin{table}
\caption{\bf Likelihood-Free Rejection Sampling Algorithm}
\begin{enumerate}
\item[3.] Compute $s=S(y)$.\\
If $\|s-s_{obs}\|\leq h$ then accept $\theta^{(i)}$ with probability $\frac{\pi(\theta^{(i)})}{Kg(\theta^{(i)})}$\\ where $K\geq\max_\theta \frac{\pi(\theta)}{g(\theta)}$.
Else go to \ref{alg:modified-rejection:step1}.
\end{enumerate}
\end{table}
Using this idea, \shortciteN{drovandi+p11} suggested the statistics
\begin{eqnarray*}
S_A&=&E_4,
\quad
S_B=E_6-E_2,
\quad
S_g=(E_6+E_2-2E_4)/S_B,\\
\mbox{and }
S_k&=&(E_7-E_5+E_3-E_1)/S_B
\end{eqnarray*}
as informative for $A, B, g$ and $k$ respectively, so that $S(y)=(S_A,S_B,S_g,S_k)^\top$, where
$E_1\leq E_2\leq\ldots\leq E_7$ are the octiles of $y$. Repeating the above $g$-and-$k$ analysis but using the 4-dimensional comparison $\|s-s_{obs}\|$ rather than $\|y-y_{obs}\|$ (and recomputing $\hat{\Sigma}$ and $h$ under the same conditions), the resulting posterior samples are shown in Figure \ref{image:gandk} (black dots).
The difference in the quality of the approximation to $\pi(\theta|y_{obs})$ when using $S(y)$ rather than $y$, is immediately apparent. The true parameter value $\theta_0$ is now located firmly in the centre of each pairwise posterior sample, several parameters (particularly $A$ and $g$) are more precisely estimated, and evidence of dependence between parameters (as is to be expected) is now
clearly seen.
While it is unreasonable to expect that there has been no loss of information in moving from $y$ to $S(y)$, clearly the overall gain in the quality of the approximation to the likelihood-free posterior
has been worth it in this case. This suggests that the use of summary statistics $S(y)$ is a useful tool more generally in approximate Bayesian computational techniques.
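The octile-based summaries above are straightforward to compute. A sketch using NumPy's empirical quantiles (for a standard normal sample the corresponding normal quantiles give $S_A\approx 0$, $S_B\approx 1.35$, $S_g\approx 0$ and $S_k\approx 1.23$, which provides a check):

```python
import numpy as np

def octile_summaries(y):
    """Summary statistics S(y) = (S_A, S_B, S_g, S_k) of Drovandi and
    Pettitt, built from the octiles E_1, ..., E_7 of the dataset y."""
    E1, E2, E3, E4, E5, E6, E7 = np.quantile(y, np.arange(1, 8) / 8.0)
    S_A = E4                         # location (the median)
    S_B = E6 - E2                    # scale (the interquartile range)
    S_g = (E6 + E2 - 2 * E4) / S_B   # asymmetry
    S_k = (E7 - E5 + E3 - E1) / S_B  # kurtosis-type measure
    return np.array([S_A, S_B, S_g, S_k])
```
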
\section{Likelihood-free methods or approximate Bayesian computation (ABC)?}%
The terms {\em likelihood-free} methods and {\em approximate Bayesian computation} are both commonly used to describe Bayesian computational methods developed for when the likelihood function is computationally intractable, or otherwise unavailable. Of course, ``likelihood-free'' is arguably a misnomer -- in no sense is the likelihood function not involved in the analysis. It is the function used to generate the data $y\sim p(y|\theta)$, and it accordingly must exist, whether or not it can be numerically evaluated or written down. Rather, in this context, ``likelihood-free'' refers to any likelihood-based analysis that proceeds without direct numerical evaluation of the likelihood function. There are several techniques that could be classified according to this description.
``Approximate Bayesian computation'', commonly abbreviated to ``ABC'', was first coined by \shortciteN{beaumont+zb02} in the context of Bayesian statistical techniques in population genetics (although see \citeNP{tavare17}, this volume), and refers to the specific type of likelihood-free methods considered in this book. In particular, given the ``approximate'' in ABC, it refers to those likelihood-free methods that produce an approximation to the posterior distribution resulting from the imperfect matching of data $\|y-y_{obs}\|$ or summary statistics $\|s-s_{obs}\|$.
Thus, the likelihood-free rejection algorithm described above with $h=0$, which only accepts samples, $\theta$, which have exactly reproduced the observed data $y_{obs}$, is not an ABC algorithm, as the method produces exact samples from the posterior distribution -- there is no approximation. (The Monte Carlo approximation of the posterior is not considered an approximation in this sense.) It is, however, a likelihood-free method. Whereas, the likelihood-free rejection algorithm which may accept samples if $\|y-y_{obs}\|\leq h$, for $h>0$, is an ABC algorithm, as the samples will be drawn from an approximation to the posterior distribution.
Similarly, when the sampler may alternatively accept samples if $\|s-s_{obs}\|\leq h$, for any $h\geq 0$ (including $h=0$), the resulting samples are also drawn from an approximate posterior distribution. As such, this is also an ABC algorithm. The only exception to this is the case where $h=0$ and the summary statistics are sufficient: here there is no posterior approximation -- the algorithm is then likelihood-free but not an ABC method.
With a few exceptions (such as indirect inference, see \shortciteNP{drovandi18}) all of the methods considered in this book are both ABC and (by definition) likelihood-free methods. The aim of any ABC analysis is to find a practical way of performing the Bayesian analysis, while keeping the Approximation and the Computation to a minimum.
\section{The approximate posterior distribution}%
\label{chapter3:section:TheApproximatePosteriorDistribution}
In contrast to the intuitive development of likelihood-free methods in the previous Sections,
we now describe the exact form of the ABC approximation to the posterior distribution that is produced from the likelihood-free rejection algorithm.
The procedure of (i) generating $\theta$ from the sampling distribution, $g(\theta)$, (ii) generating data, $y$, from the likelihood, $p(y|\theta)$, conditional on $\theta$, and (iii) rejecting $\theta$ unless $\|y-y_{obs}\|\leq h$,
is equivalent to drawing a sample $(\theta,y)$ from the joint distribution proportional to
\[
I(\|y-y_{obs}\|\leq h)p(y|\theta)g(\theta),
\]
where $I$ is the indicator function, with $I(Z)=1$ if $Z$ is true, and $I(Z)=0$ otherwise.
If this sample $(\theta,y)$ is then further accepted with probability proportional to $\pi(\theta)/g(\theta)$, this implies that the likelihood-free rejection algorithm is sampling from the joint distribution proportional to
\begin{equation}
\label{eqn:simplejoint}
I(\|y-y_{obs}\|\leq h)p(y|\theta)g(\theta)\frac{\pi(\theta)}{g(\theta)}
=
I(\|y-y_{obs}\|\leq h)p(y|\theta)\pi(\theta).
\end{equation}
Note that as $h\rightarrow 0$, the $\theta$ marginal of (\ref{eqn:simplejoint}) recovers the true posterior distribution, as
\begin{eqnarray*}
\lim_{h\rightarrow 0}\int I(\|y-y_{obs}\|\leq h)p(y|\theta)\pi(\theta) dy
& = &
\int\delta_{y_{obs}}(y)p(y|\theta)\pi(\theta) dy\\
& = &
p(y_{obs}|\theta)\pi(\theta).
\end{eqnarray*}
That is, for $h=0$, the likelihood-free rejection algorithm draws samples, $(\theta,y)$, for which the marginal distribution of the parameter vector is the true posterior, $\pi(\theta|y_{obs})$. (The marginal distribution of the auxiliary dataset $y$ is a point mass at $\{y=y_{obs}\}$ in this case.)
It is useful in the following to generalise the above formulation slightly. In (\ref{eqn:simplejoint}), the indicator term $I(\|y-y_{obs}\|\leq h)$ only takes the values 0 or 1. This is useful in the sense that it allows clear ``{\it If $\|y-y_{obs}\|\leq h$ then \ldots }'' statements to be made in any algorithm, which can simplify implementation. However, it is intuitively wasteful of information, as it does not discriminate between those samples, $\theta$, for which the associated dataset $y$ exactly equals the observed dataset $y_{obs}$, and those samples, $\theta$, for which the associated dataset is the furthest away from $y_{obs}$, i.e. $\|y-y_{obs}\|=h$. As the former case produces samples that are exact draws from the true posterior distribution, whereas the latter case does not, this motivates a more continuous scaling from 1 (when $y=y_{obs}$) down to 0 (as $\|y-y_{obs}\|$ becomes large).
This can be achieved by replacing the indicator function, $I(\|y-y_{obs}\|\leq h)$, with a standard smoothing kernel function, $K_h(u)$, with $u=\|y-y_{obs}\|$, where
\[
K_h(u)=\frac{1}{h}K\left(\frac{u}{h}\right).
\]
Kernels are symmetric functions such that $K(u)\geq 0$ for all $u$, $\int K(u)du=1$, $\int uK(u)du=0$ and $\int u^2K(u)du<\infty$.
Here, $h>0$ corresponds to the scale parameter, or ``bandwidth'' of the kernel function. Several common forms for kernel functions are given in Table \ref{Chapter3:table:StandardKernels}, and these are illustrated in Figure \ref{chapter3:figure:StandardKernels}. Following convention, we define $\lim_{h\rightarrow 0}K_h(u)$ as a point mass at the origin ($u=0$).
\begin{table}[tbh!]
\caption{\small The functional forms of several common kernel functions.}
\label{Chapter3:table:StandardKernels}
\setlength{\tabcolsep}{0.25 cm}
\begin{center}
\begin{tabular}{ll}
Kernel & $K(u)$ \\ \hline
\vspace{1mm}
Uniform & $\frac{1}{2}I(|u|\leq 1)$\\
\vspace{1mm}
Triangular & $(1-|u|)I(|u|\leq 1)$\\
\vspace{1mm}
Epanechnikov & $\frac{3}{4}(1-u^2)I(|u|\leq 1)$ \\
\vspace{1mm}
Biweight & $\frac{15}{16}(1-u^2)^2I(|u|\leq 1)$ \\
Gaussian & $\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}u^2}$\\
\end{tabular}
\end{center}
\end{table}
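As a quick check of the listed forms, the moment conditions stated above can be verified numerically for each kernel in Table \ref{Chapter3:table:StandardKernels}. The following sketch is illustrative only; the quadrature grid and tolerances are our own choices, and the biweight (quartic) kernel is $\tfrac{15}{16}(1-u^2)^2$ on $[-1,1]$:

```python
import math

# The kernels of the table, as plain functions of u.
kernels = {
    "uniform":      lambda u: 0.5 if abs(u) <= 1 else 0.0,
    "triangular":   lambda u: (1 - abs(u)) if abs(u) <= 1 else 0.0,
    "epanechnikov": lambda u: 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0,
    "biweight":     lambda u: (15.0 / 16.0) * (1 - u * u) ** 2 if abs(u) <= 1 else 0.0,
    "gaussian":     lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi),
}

def moment(K, p, lo=-8.0, hi=8.0, n=20_000):
    """Midpoint-rule approximation to the integral of u^p K(u) du."""
    w = (hi - lo) / n
    return sum(((lo + (i + 0.5) * w) ** p) * K(lo + (i + 0.5) * w)
               for i in range(n)) * w

for name, K in kernels.items():
    assert abs(moment(K, 0) - 1.0) < 5e-3   # integrates to one
    assert abs(moment(K, 1)) < 1e-8         # symmetric: zero mean
```

The second moments $\sigma_K^2=\int u^2K(u)du$ recovered this way ($1/3$, $1/6$, $1/5$, $1/7$ and $1$ respectively) are all finite, as required.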
\begin{figure}[tbh!]
\centering
\includegraphics[width=9cm]{kernel.pdf}
\caption{\small Standard kernel functions, $K(u)$, listed in Table \ref{Chapter3:table:StandardKernels} plotted on a common scale (with maximum at $1$).
}
\label{chapter3:figure:StandardKernels}
\end{figure}
An alternative specification of a smoothing kernel for multivariate datasets is obtained by writing $u=y-y_{obs}$, where $u=(u_1,\ldots,u_n)^\top$, $y=(y_1,\ldots,y_n)^\top$ and $y_{obs}=(y_{obs,1},\ldots,y_{obs,n})^\top$, so that $u_i=y_i-y_{obs,i}$. Then we can write
$K_h(u)=\prod_{i=1}^n K_{h_i}(u_i)$, where the scale parameter of each individual kernel function, $K_{h_i}(u_i)$, may vary. A further, more general specification may determine $K_h(u)$ as a fully multivariate, smooth and symmetric function, satisfying the above moment constraints. One such example is a multivariate $N(0,\Sigma)$ distribution, for some fixed covariance matrix $\Sigma$.
Substituting
the kernel function, $K_h(u)$, into the likelihood-free rejection algorithm results in the ABC Rejection Sampling Algorithm:
\begin{table}
\caption{\bf ABC Rejection Sampling Algorithm}
\noindent {\it Inputs:}
\begin{itemize}
\item A target posterior density $\pi(\theta|y_{obs})\propto p(y_{obs}|\theta)\pi(\theta)$, consisting of a prior distribution $\pi(\theta)$ and a procedure for generating data under the model $p(y_{obs}|\theta)$.
\item A proposal density $g(\theta)$, with $g(\theta)>0$ if $\pi(\theta|y_{obs})>0$.
\item An integer $N>0$.
\item A kernel function $K_h(u)$ and scale parameter $h>0$.
\\
\end{itemize}
\noindent {\it Sampling:}\\
\noindent For $i=1, \ldots, N$:
\begin{enumerate}
\item \label{chapter3:alg:ABC-rejection:step1} Generate $\theta^{(i)}\sim g(\theta)$ from sampling density $g$.
\item Generate $y\sim p(y|\theta^{(i)})$ from the likelihood.
\item Accept $\theta^{(i)}$ with probability $\frac{K_h(\|y-y_{obs}\|)\pi(\theta^{(i)})}{Kg(\theta^{(i)})}$\\ where $K\geq K_h(0)\max_\theta \frac{\pi(\theta)}{g(\theta)}$.
Else go to \ref{chapter3:alg:ABC-rejection:step1}.
\\
\end{enumerate}
\noindent {\it Output:}\\
A set of parameter vectors $\theta^{(1)},\ldots,\theta^{(N)}$ $\sim$ $\pi_{ABC}(\theta|y_{obs})$.
\end{table}
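The sampler above is straightforward to implement. The following Python sketch is illustrative only: it takes $g(\theta)=\pi(\theta)$, so that the acceptance probability reduces to $K_h(\|y-y_{obs}\|)/K_h(0)$, and it exercises the sampler on a toy model ($y\sim N(\theta,1)$ with a diffuse $N(0,10^2)$ prior) for which the posterior is known; none of these particular choices come from the text.

```python
import math
import random

def abc_rejection(prior_sample, simulate, y_obs, h, N):
    """ABC rejection sampling with proposal g = prior, so the acceptance
    probability is K_h(|y - y_obs|) / K_h(0); here K_h is a Gaussian kernel."""
    accepted = []
    while len(accepted) < N:
        theta = prior_sample()                   # step 1: theta ~ g = pi
        y = simulate(theta)                      # step 2: y ~ p(y | theta)
        u = abs(y - y_obs)
        if random.random() < math.exp(-0.5 * (u / h) ** 2):   # step 3: accept/reject
            accepted.append(theta)
    return accepted

# Toy model: y ~ N(theta, 1), prior theta ~ N(0, 10^2), y_obs = 2.
# The ABC posterior is then approximately N(1.98, 1.03): close to the true
# posterior, with variance inflated by roughly h^2.
random.seed(1)
draws = abc_rejection(lambda: random.gauss(0.0, 10.0),
                      lambda t: random.gauss(t, 1.0),
                      y_obs=2.0, h=0.2, N=2000)
```

Other kernels from Table \ref{Chapter3:table:StandardKernels} can be substituted for the Gaussian acceptance step, normalised by their value at zero.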
In order to determine the form of the target distribution, $\pi_{ABC}(\theta|y_{obs})$, of this algorithm, we can
follow the same argument as before. By (i) generating $\theta$ from the importance distribution, $g(\theta)$, (ii) generating data, $y$, from the likelihood, $p(y|\theta)$, conditional on $\theta$, and then (iii) accepting the sample $(\theta,y)$ with probability proportional to $K_h(\|y-y_{obs}\|)\pi(\theta)/g(\theta)$, we obtain samples from the joint distribution
\begin{equation}
\label{Chapter3:eqn:ABCjointposterior}
\pi_{ABC}(\theta,y|y_{obs}) \propto K_h(\|y-y_{obs}\|)p(y|\theta)\pi(\theta).
\end{equation}
When $K_h(u)$ is the uniform kernel (see Table \ref {Chapter3:table:StandardKernels}), then (\ref{Chapter3:eqn:ABCjointposterior}) reduces to (\ref{eqn:simplejoint}).
Accordingly, we define the ABC approximation to the true posterior distribution as
\begin{equation}
\label{Chapter3:eqn:ABCposterior}
\pi_{ABC}(\theta|y_{obs}) = \int \pi_{ABC}(\theta,y|y_{obs}) dy,
\end{equation}
where $\pi_{ABC}(\theta,y|y_{obs})$ is given by (\ref{Chapter3:eqn:ABCjointposterior}).
As before, as $h\rightarrow 0$, so that only those samples, $\theta$, that generate data for which $y=y_{obs}$ are retained,
then (\ref{Chapter3:eqn:ABCjointposterior}) becomes
\begin{eqnarray*}
\lim_{h\rightarrow 0} \pi_{ABC}(\theta,y|y_{obs})
& \propto & \lim_{h\rightarrow 0} K_h(\|y-y_{obs}\|)p(y|\theta)\pi(\theta)\\
& = & \delta_{y_{obs}}(y)p(y|\theta)\pi(\theta),
\end{eqnarray*}
and so
$\lim_{h\rightarrow 0}\pi_{ABC}(\theta|y_{obs})\propto\int\delta_{y_{obs}}(y)p(y|\theta)\pi(\theta)dy
=p(y_{obs}|\theta)\pi(\theta).$
That is, samples from the true posterior distribution are obtained as $h\rightarrow 0$. However, $h=0$ is not a viable choice in practice, as for continuous $y_{obs}$ it corresponds to an algorithm with an acceptance rate of zero.
For $h>0$, the ABC rejection algorithm therefore samples $\theta$ from the marginal distribution obtained by integrating $\pi_{ABC}(\theta,y|y_{obs})$ over the auxiliary data margin, $y$, as in (\ref{Chapter3:eqn:ABCposterior}).
A natural question to ask is: how accurate is this approximation? Re-writing the right hand side of (\ref{Chapter3:eqn:ABCjointposterior}) without the prior distribution, $\pi(\theta)$, we can similarly define the ABC approximation to the true likelihood, $p(y|\theta)$, for a fixed value of $\theta$, as
\begin{equation}
\label{Chapter3:eqn:ABClikelihood}
p_{ABC}(y_{obs}|\theta) = \int K_h(\|y-y_{obs}\|)p(y|\theta) dy.
\end{equation}
In this manner, ABC can be interpreted as a regular Bayesian analysis, but with an approximated likelihood function.
Working in the univariate case for simplicity of illustration, so that $y,y_{obs}\in{\mathcal Y}=\mathbb{R}$ and $\|u\|=|u|$, we can obtain
\begin{eqnarray}
p_{ABC}(y_{obs}|\theta) & = & \int K_h(|y-y_{obs}|)p(y|\theta)dy\nonumber \\
& = &
\int K(u)p(y_{obs}-uh|\theta)du\nonumber\\
& = &
\int K(u)\left[p(y_{obs}|\theta) - uhp'(y_{obs}|\theta) + \frac{u^2h^2}{2}p''(y_{obs}|\theta) - \ldots \right] du\nonumber\\
& = &
p(y_{obs}|\theta)+ \frac{1}{2}h^2p''(y_{obs}|\theta)\int u^2K(u)du - \ldots\label{eqn:conDenEst}
\end{eqnarray}
using the substitution $u=(y_{obs}-y)/h$, a Taylor expansion of $p(y_{obs}-uh|\theta)$ around the point $y_{obs}$, and the kernel function properties of $K_h(u)=K(u/h)/h$, $\int K(u)du=1$, $\int uK(u)du=0$ and $K(u)=K(-u)$.
The above is a standard smoothing kernel density estimation expansion, and assumes that the likelihood, $p(y|\theta)$, is infinitely differentiable. As with kernel density estimation, the choice of scale parameter is more important than the choice of kernel function in terms of the quality of the approximation.
Then, the pointwise bias in the likelihood approximation for fixed $\theta$ can be expressed as
\begin{equation}
\label{Chapter3:eqn:b}
b_h(y|\theta) := p_{ABC}(y|\theta)-p(y|\theta),
\end{equation}
as a function of $y$,
which to second order can be written as
\begin{equation*}
\hat{b}_h(y|\theta) = \frac{1}{2}h^2\sigma^2_Kp''(y|\theta),
\end{equation*}
where $\sigma^2_K=\int u^2K(u)du$ is the variance of the kernel function. Accordingly,
the magnitude of the bias is reduced if $h$ is small, corresponding to better approximations. Clearly, the second derivative of the likelihood function, $p''(y|\theta)$, is typically also unavailable if the likelihood function itself is computationally intractable.
When the data $y,y_{obs}\in{\mathcal Y}$ are multivariate, a similar derivation to the above is available. In either case, the ABC approximation to the true posterior is defined through (\ref{Chapter3:eqn:ABCposterior}).
In a similar manner, we can determine the pointwise bias in the resulting ABC posterior approximation.
From (\ref{Chapter3:eqn:b}) we can write
\begin{eqnarray}
\label{Chapter3:eqn:bit}
b_h(y_{obs}|\theta)\pi(\theta) & = & p_{ABC}(y_{obs}|\theta)\pi(\theta)-p(y_{obs}|\theta)\pi(\theta)\nonumber\\
& = & \pi_{ABC}(\theta|y_{obs})c_{ABC}-\pi(\theta|y_{obs})c,
\end{eqnarray}
where $c_{ABC}=\int p_{ABC}(y_{obs}|\theta)\pi(\theta)d\theta>0$ and $c=\int p(y_{obs}|\theta) \pi(\theta)d\theta>0$. Rearranging (\ref{Chapter3:eqn:bit}), we obtain
\begin{eqnarray}
\label{eqn:ahat}
a_h(\theta|y_{obs}) & := &
\pi_{ABC}(\theta|y_{obs})-\pi(\theta|y_{obs})\nonumber\\
& = & \frac{b_h(y_{obs}|\theta)\pi(\theta) + \pi(\theta|y_{obs})c}{c_{ABC}}-\pi(\theta|y_{obs})\\
& = & \frac{b_h(y_{obs}|\theta)\pi(\theta)}{c_{ABC}}+\pi(\theta|y_{obs})\left(\frac{c}{c_{ABC}}-1\right),\nonumber
\end{eqnarray}
as a function of $\theta$. As $h\rightarrow 0$, then $b_h(y_{obs}|\theta)\rightarrow 0$ from (\ref{Chapter3:eqn:b}), and so $p_{ABC}(y_{obs}|\theta)\rightarrow p(y_{obs}|\theta)$ pointwise, for fixed $\theta$. Further, $c/c_{ABC}\rightarrow 1$ as $h$ gets small, so that $a_h(\theta|y_{obs})\rightarrow 0$.
\subsection{Simple examples}%
In many simple cases, the ABC approximation to the posterior distribution can be derived exactly.
\\
\noindent {\bf Example 1:}\\
\noindent Suppose that the observed data, $y_{obs}$, is a single draw from a univariate density function $p(y|\theta)$, and that $\theta$ is a scalar. If we consider the particular case where $K_h(\|u\|)$ is the uniform kernel on $[-h,h]$ (see Table \ref{Chapter3:table:StandardKernels}), and $\|u\|=|u|$, then we have
\begin{eqnarray}
\pi_{ABC}(\theta|y_{obs}) &\propto & \pi(\theta)\int_{-\infty}^{\infty} K_h(|y-y_{obs}|)p(y|\theta)dy\nonumber\\
& = &
\frac{\pi(\theta)}{2h}\int_{y_{obs}-h}^{y_{obs}+h}p(y|\theta)dy\nonumber\\
& = &
\pi(\theta)\frac{\left[P(y_{obs}+h|\theta)-P(y_{obs}-h|\theta)\right]}{2h}, \label{easyex1}
\end{eqnarray}
where $P(y|\theta)=\int_{-\infty}^y p(z|\theta)dz$ is the cumulative distribution function of $y|\theta$. Noting that as
$\lim_{h\rightarrow 0}[P(y_{obs}+h|\theta)-P(y_{obs}-h|\theta)]/2h = p(y_{obs}|\theta)$
via l'Hopital's rule, then $\pi_{ABC}(\theta|y_{obs})\rightarrow \pi(\theta|y_{obs})$ as $h\rightarrow 0$, as required. Also,
$[P(y_{obs}+h|\theta)-P(y_{obs}-h|\theta)]/2h\approx 1/2h$ for large $h$, and so $\pi_{ABC}(\theta|y_{obs})\rightarrow \pi(\theta)$ as $h\rightarrow\infty$.
Suppose now that $p(y|\theta)=\theta e^{-\theta y}$, for $\theta, y\geq 0$, is the density function of an {\em Exp}$(\theta)$ random variable, and that the prior $\pi(\theta)\propto\theta^{\alpha-1}e^{-\beta\theta}$ is given by a {\em Gamma}$(\alpha, \beta)$ distribution with shape and rate parameters $\alpha>0$ and $\beta>0$. Then from (\ref{easyex1}), and for $0<h<y_{obs}+\beta$, we can directly obtain
\begin{eqnarray*}
p_{ABC}(y_{obs}|\theta) & = & \frac{1}{2h}e^{-\theta y_{obs}}(e^{\theta h}-e^{-\theta h})\\
\hat{b}_h(y_{obs}|\theta) & = & \frac{1}{6}h^2\theta^3e^{-\theta y_{obs}}\\
\pi_{ABC}(\theta|y_{obs}) &=& \frac{
\theta^{\alpha-1}e^{-\theta(y_{obs}+\beta)}\left(e^{\theta h}-e^{-\theta h}\right)
}{
\frac{\Gamma(\alpha)}{(y_{obs}+\beta-h)^\alpha}-\frac{\Gamma(\alpha)}{(y_{obs}+\beta+h)^\alpha}
},
\end{eqnarray*}
where $\Gamma(\alpha)=\int_0^\infty z^{\alpha-1}e^{-z}dz$ is the gamma function.
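The closed form expressions above are easily verified numerically. The sketch below (the grid size is our own choice; $\theta=2$, $h=0.91$ match Figure \ref{chapter3:toy}(a), and we evaluate at $y=2$) compares $p_{ABC}(y|\theta)$ against direct uniform-kernel smoothing of the {\em Exp}$(\theta)$ density, and computes the exact and second order bias terms:

```python
import math

def p_abc_closed(y, theta, h):
    # Closed form from the text, valid when y - h >= 0.
    return math.exp(-theta * y) * (math.exp(theta * h) - math.exp(-theta * h)) / (2 * h)

def p_abc_numeric(y, theta, h, n=50_000):
    # Uniform-kernel smoothing of the Exp(theta) density, by the midpoint rule.
    w = 2 * h / n
    total = 0.0
    for i in range(n):
        z = (y - h) + (i + 0.5) * w
        if z >= 0:                      # Exp(theta) density is zero for z < 0
            total += theta * math.exp(-theta * z)
    return total * w / (2 * h)

theta, h, y = 2.0, 0.91, 2.0
p_true = theta * math.exp(-theta * y)                    # true likelihood at y
b_exact = p_abc_closed(y, theta, h) - p_true             # exact bias b_h
b_hat = h ** 2 * theta ** 3 * math.exp(-theta * y) / 6   # second order bias
```

At these values the second order term captures most, but not all, of the exact bias, consistent with the discussion of Figure \ref{chapter3:toy}(a).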
Figure \ref{chapter3:toy}(a) illustrates the true likelihood function, $p(y|\theta)$, (black dashed line) and the ABC approximation to the true likelihood function, $p_{ABC}(y|\theta)$, (solid grey line) as a function of $y$ for $h=0.91$ and $\theta=2$. Also shown (grey dashed line), is the second order approximation to the ABC likelihood function, $p(y|\theta)+\hat{b}_h(y|\theta)$. In this case, the second order approximation provides a reasonable representation of the ABC likelihood, $p_{ABC}(y|\theta)$. For other choices of $h$ and $\theta$, the quality of this representation will vary.
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{toy1.pdf}
\caption{\small
Approximations involved in the ABC analysis of the Exponential-Gamma example.
(a) Various likelihood functions with $h=0.91$ and $\theta=2$. The true likelihood function, $p(y|\theta)$, and the ABC approximation to the likelihood, $p_{ABC}(y|\theta)$, are denoted by black-dashed and solid grey lines respectively. The second order approximation to $p_{ABC}(y|\theta)$, given by $p(y|\theta)+\hat{b}_h(y|\theta)$, is illustrated by the grey-dashed line.
(b) The ABC posterior approximation, $\pi_{ABC}(\theta|y_{obs})$ with $y_{obs}=2$ for various values of $h=0.01, 0.91, 1.80, 2.70$.
(c) Approximation bias in the ABC posterior as a function of $h$ for $y_{obs}=2$. Dashed lines indicate the exact bias $a_h(\theta|y_{obs})$ for each $h$, whereas solid lines denote the second order bias $\hat{a}_h(\theta|y_{obs})$.
}
\label{chapter3:toy}
\end{figure}
The ABC approximation, $\pi_{ABC}(\theta|y_{obs})$, to the true posterior, $\pi(\theta|y_{obs})$, given $y_{obs}=2$ and $\alpha=\beta=1.2$, is shown in Figure \ref{chapter3:toy}(b) for various values of $h=0.01, \ldots, 2.7$ (grey lines). The true posterior is illustrated by the black dashed line. For small $h$ ($h=0.01$), $\pi_{ABC}(\theta|y_{obs})$ is indistinguishable from the true posterior. As $h$ increases, so does the scale of the approximate posterior, which begins to exhibit a large loss of precision compared to the true posterior. Both the mean and the mode of $\pi_{ABC}(\theta|y_{obs})$ increase with $h$.
Finally, Figure \ref{chapter3:toy}(c) shows the resulting
bias, $a_h(\theta|y_{obs})$, in the ABC posterior approximation as a function of $\theta$ and $h$. Dashed and solid lines respectively show the exact bias $a_h(\theta|y_{obs})$ and the second order bias $\hat{a}_h(\theta|y_{obs})$ (defined as $a_h(\theta|y_{obs})$ in (\ref{eqn:ahat}) but with $\hat{b}_h(y|\theta)$ substituted for $b_h(y|\theta)$).
Clearly, the bias in the main body of the distribution, particularly in the region around the mode, is well described by the second order approximation, $\hat{a}_h(\theta|y_{obs})$, whereas the bias in the distributional tails is more heavily influenced by terms of higher order than two.
\\
\noindent{\bf Example 2:}\\
Suppose that the observed data, $y_{obs}=(y_{obs,1},\ldots,y_{obs,n})^\top$, are $n$ independent draws from a univariate ${N}(\theta,\sigma_0^2)$ distribution, where the standard deviation, $\sigma_0>0$, is known.
For this model we know that $p(y_{obs}|\theta)\propto p(\bar{y}_{obs}|\theta)$, where $\bar{y}_{obs}=\frac{1}{n}\sum_i y_{obs,i}$, as the sample mean is a sufficient statistic for $\theta$.
If we specify $K_h(u)$ as a Gaussian ${N}(0,h^2)$ kernel (see Table \ref{Chapter3:table:StandardKernels}), then the ABC approximation to the likelihood, $p(\bar{y}_{obs}|\theta)$ is given by
\begin{eqnarray*}
p_{ABC}(\bar{y}_{obs}|\theta)
& = & \int_{-\infty}^{\infty} K_h(|\bar{y}-\bar{y}_{obs}|)p(\bar{y}|\theta)d\bar{y}\nonumber\\
& = & \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}h}\exp\left\{-\frac{(\bar{y}-\bar{y}_{obs})^2}{2h^2}\right\}
\frac{\sqrt{n}}{\sqrt{2\pi}\sigma_0}\exp\left\{-\frac{n(\bar{y}-\theta)^2}{2\sigma_0^2}\right\} d\bar{y}\\
& \propto & \exp\left\{ -\frac{(\theta-\bar{y}_{obs})^2}{2(\sigma^2_0/n+h^2)}\right\}
\end{eqnarray*}
for $h\geq0$.
That is, $\bar{y}_{obs}\sim{N}(\theta,\sigma_0^2/n+h^2)$ under the ABC approximation to the likelihood. In comparison to the true likelihood, for which $\bar{y}_{obs}\sim{N}(\theta,\sigma_0^2/n)$,
the variance is inflated by $h^2$, the variance of the Gaussian kernel.
Accordingly, if the prior for $\theta$ is given by a ${N}(m_0,s_0^2)$ distribution, where $m_0$ and $s_0>0$ are known, then
\[
\pi_{ABC}(\theta|y_{obs})=\phi\left(\frac{m_0s_0^{-2}+\bar{y}_{obs}(\sigma_0^2/n+h^2)^{-1}}{s_0^{-2}+(\sigma_0^2/n+h^2)^{-1}},
\frac{1}{s_0^{-2}+(\sigma_0^2/n+h^2)^{-1}} \right),
\]
where $\phi(a,b^2)$ denotes the density of a $N(a,b^2)$ distributed random variable.
Clearly $\pi_{ABC}(\theta|y_{obs})\rightarrow \pi(\theta|y_{obs})$ as $h\rightarrow 0$. However, the approximation will be quite reasonable if $\sigma_0^2/n$ is the dominating component of the variance, so that $h$ is small in comparison \cite{drovandi12}.
A similar result to the above is available in the case of a multivariate parameter vector, $\theta$.
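The Gaussian convolution identity underlying this result (smoothing a $N(\theta,\sigma_0^2/n)$ density with a $N(0,h^2)$ kernel yields a $N(\theta,\sigma_0^2/n+h^2)$ density) can be checked by direct quadrature; the particular values of $\theta$, $\sigma_0^2/n$ and $h$ below are our own illustrative choices:

```python
import math

def npdf(x, m, v):
    """Density of a N(m, v) random variable at x."""
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def p_abc(ybar_obs, theta, v_lik, h, lo=-10.0, hi=10.0, n=40_000):
    """Midpoint-rule convolution of the N(theta, v_lik) likelihood of ybar
    with a N(0, h^2) smoothing kernel, evaluated at ybar_obs."""
    w = (hi - lo) / n
    total = 0.0
    for i in range(n):
        yb = lo + (i + 0.5) * w
        total += npdf(yb, ybar_obs, h * h) * npdf(yb, theta, v_lik)
    return total * w

theta, v_lik, h = 0.7, 0.05, 0.3   # e.g. sigma0^2/n = 0.05
# p_abc(., theta, v_lik, h) should match npdf(., theta, v_lik + h^2).
```

This confirms numerically that the ABC likelihood is exactly the true likelihood with variance inflated by $h^2$.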
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{toy-normal.pdf}
\caption{\small
ABC posterior approximations, $\pi_{ABC}(\theta|y_{obs})$, for a $N(0,1)$ target distribution (dashed lines) for various values of the kernel scale parameter $h$. The posterior approximations are based on (a) $N(0,h^2)$ and (b) uniform over $[-h,h]$ kernel functions, $K_h(u)$.
}
\label{chapter3:toy-normal}
\end{figure}
Figure \ref{chapter3:toy-normal}(a) illustrates the resulting ABC posterior approximation $\pi_{ABC}(\theta|y_{obs})$ with $\bar{y}_{obs}=0$ when $\sigma_0^2/n=1$ for the improper prior given by $m_0=0$, $s^2_0\rightarrow\infty$, so that the true posterior distribution is $N(0,1)$ (dashed line). The approximation is clearly quite reasonable for $h=0.5$ and $h=0.1$, as then $h^2<\sigma^2_0/n$.
Figure \ref{chapter3:toy-normal}(b) shows the same posterior approximations but based on a uniform kernel over $[-h, h]$ for $K_h(u)$, rather than the Gaussian $N(0,h^2)$ kernel. This ABC posterior is derived from (\ref{easyex1}). The resulting forms for $\pi_{ABC}(\theta|y_{obs})$ are no longer within the Gaussian family for $h>0$, exhibit a flatter behaviour around the mean, and are more concentrated around the mean due to the compact support of the uniform kernel. The approximations with either kernel perform well for small $h$.
This example additionally provides some insight into the asymptotic behaviour of the ABC posterior approximation. Following standard likelihood asymptotic results, when the amount of data, $n$, becomes large, the true likelihood function, $p(y|\theta)$, will approximately behave as a Gaussian distribution. As most prior distributions will have little impact in this setting (they will be approximately constant over the region of high posterior density), it follows that the ABC posterior approximation, $\pi_{ABC}(\theta|y_{obs})$ will follow a Gaussian distribution with a variance that is inflated by an $h^2$ term. Consequently, the ABC posterior approximation, $\pi_{ABC}(\theta|y_{obs})$ may then in principle be improved simply by rescaling the posterior variance to remove this term \cite{drovandi12}.
\section{The use of summary statistics}
\subsection{Summary statistic basics}
\label{section:summaryStatisticBasics}
\noindent Despite the development in the previous Section, the ABC posterior approximation $\pi_{ABC}(\theta|y_{obs})\propto\int K_h(\|y-y_{obs}\|)p(y|\theta)p(\theta)dy$ is rarely used in practice. This is because, except in very specific scenarios (such as when $y_{obs}$ is very low dimensional, or when the likelihood function $p(y|\theta)$ factorises into very low dimensional components), it is highly unlikely that $y\approx y_{obs}$ can be generated from $p(y|\theta)$ for any choice of $\theta$ for realistic datasets. This results in the need to use a large value of the kernel scale parameter $h$ in order to achieve viable rejection sampling algorithm acceptance rates (or a similar loss of performance in other algorithms), and in doing so produce poorer ABC posterior approximations.
In the stereological extremes analysis in Section \ref{section:stereological} we replaced the full dataset $y_{obs}$ with a sufficient statistic $n_{obs}$ for the model parameter $\lambda$ when estimating $\pi(\theta|y_{obs})=\pi(\lambda|n_{obs})$. As sufficient statistics can be much lower dimensional than the full dataset, it is clear that greater approximation accuracy can be achieved for the same computational overheads when using low dimensional statistics (which is hinted at in the $g$-and-$k$ distribution analysis in Section \ref{section:gandk}).
The following example, based on \citeN{drovandi12}, highlights the computational benefits in using lower dimensional, and less variable sufficient statistics.
\\
\noindent {\bf Example 3:}\\
Suppose that $y=(y_1,y_2)^\top$, where $y_i\sim${\em Binomial}$(n,\theta)$ with $\theta\sim U(0,1)$. Consider three possible vectors of sufficient statistics: $s^1=(y_1,y_2)^\top$ is the full dataset, $s^2=(y_{(1)},y_{(2)})^\top$ are the order statistics $y_{(1)}\leq y_{(2)}$, and $s^3=y_1+y_2$ is the sum of the two individual values. All three vectors of statistics are sufficient for this simple model.
It is easy to compute the marginal distribution of each summary statistic $p_i(s^i) = \int_0^1p(s^i|\theta)\pi(\theta)d\theta$ as follows:
\begin{eqnarray*}
p_1(s^1) & = &\int_0^1\prod_{i=1}^2\left(\begin{array}{c}n\\y_i\end{array}\right)
\theta^{y_i}(1-\theta)^{n-y_i} d\theta\\
& = & \left(\begin{array}{c}n\\y_1\end{array}\right)\left(\begin{array}{c}n\\y_2\end{array}\right)
B(y_1+y_2+1,2n-y_1-y_2+1),\\
p_2(s^2) & = & \left[2-I(y_{(1)}=y_{(2)})\right]\int_0^1\prod_{i=1}^2\left(\begin{array}{c}n\\y_i\end{array}\right)
\theta^{y_i}(1-\theta)^{n-y_i} d\theta\\
& = & \left[2-I(y_{(1)}=y_{(2)})\right]
\left(\begin{array}{c}n\\y_1\end{array}\right)\left(\begin{array}{c}n\\y_2\end{array}\right)
B(y_1+y_2+1,2n-y_1-y_2+1),\\
p_3(s^3) & = & \int_0^1\left(\begin{array}{c}2n\\s^3\end{array}\right)
\theta^{s^3}(1-\theta)^{2n-s^3} d\theta\\
& = & \left(\begin{array}{c}2n\\s^3\end{array}\right)
B(s^3+1,2n-s^3+1)\\
& = & 1/(2n+1),
\end{eqnarray*}
where $\mbox{B}(a,b)=\int_0^1z^{a-1}(1-z)^{b-1}dz$ is the beta function. Here, $p_i(s^i)$ is the probability of generating the vector $s^i$ under an ABC rejection sampling algorithm with sampling distribution given by the prior, $g(\theta)=\pi(\theta)$. That is, $p_i(s^i)$ is the acceptance probability of the algorithm if we only accept those sufficient statistics that exactly match the observed sufficient statistics.
Suppose that we observe $y_{obs}=(y_{obs,1},y_{obs,2})^\top=(1,2)^\top$ from $n=5$ experiments. From the above, we have algorithm acceptance rates of:
\[
p_1(s^1_{obs})=\frac{5}{132}\approx 0.038, \quad p_{2}(s^2_{obs})=\frac{5}{66}\approx 0.076 \quad\mbox{and}\quad p_3(s^3_{obs})=\frac{1}{11}\approx 0.091,
\]
where $s^i_{obs}$ denotes the statistic $s^i$ derived from $y_{obs}$.
The probability $p_1(s^1)$ is the probability of generating first $y_1=1$ and then $y_2=2$. As a result, $p_1(s^1)$ will decrease rapidly as the length of the observed dataset $y_{obs}$ increases. The probability $p_2(s^2)$ corresponds to the probability of generating either $y=(1,2)^\top$ or $y=(2,1)^\top$, which are equivalent under the binomial model. Hence, $s^2$ has twice the probability of $s^1$ of occurring. Finally, $p_3(s^3)$ is the probability of generating $y=(1,2)^\top, (2,1)^\top, (0,3)^\top$ or $(3,0)^\top$. Each of these cases is indistinguishable under the assumed model, and so the event $s^3$ occurs with the largest probability of all.
Quite clearly, while still producing samples from the true target distribution, $\pi(\theta|y_{obs})$, the impact on the efficiency of the sampler of the choice of sufficient statistics is considerable, even for an analysis with only two observations, $y_1$ and $y_2$. The most efficient choice is the minimal sufficient statistic. The differences in the acceptance rates of the samplers would become even greater for larger numbers of observations, $n$.
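The three acceptance rates above can be confirmed by direct Monte Carlo simulation; a small sketch (the sample size and seed are arbitrary choices of ours):

```python
import random

random.seed(0)
n, trials = 5, 200_000
hits = [0, 0, 0]
for _ in range(trials):
    theta = random.random()                               # theta ~ U(0, 1)
    y1 = sum(random.random() < theta for _ in range(n))   # y1 ~ Binomial(n, theta)
    y2 = sum(random.random() < theta for _ in range(n))   # y2 ~ Binomial(n, theta)
    hits[0] += (y1, y2) == (1, 2)                         # s1: full dataset
    hits[1] += tuple(sorted((y1, y2))) == (1, 2)          # s2: order statistics
    hits[2] += (y1 + y2) == 3                             # s3: sum
rates = [hit / trials for hit in hits]
# Expected values: 5/132 ~ 0.038, 5/66 ~ 0.076, 1/11 ~ 0.091.
```

The estimated rates reproduce the ordering $p_1(s^1_{obs})<p_2(s^2_{obs})<p_3(s^3_{obs})$ derived analytically.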
\\
While the optimally informative choice of statistic for an ABC analysis is a minimal sufficient statistic, this may still be non-viable in practice. For example, if the minimal sufficient statistic is the full dataset $y_{obs}$, sampling from $\pi_{ABC}(\theta|y_{obs})$ will be highly inefficient even for moderately sized datasets. Similarly, in a scenario where the likelihood function may not be known beyond a data generation procedure, identification of any low-dimensional sufficient statistics (beyond, trivially, the full dataset $y_{obs}$) may be impossible. Further, low dimensional sufficient statistics may not even exist, depending on the model.
In general, a typical ABC analysis will involve specification of a vector of summary statistics $s=S(y)$, where $\dim(s)\ll\dim(y)$. The rejection sampling algorithm will then contrast $s$ with $s_{obs}=S(y_{obs})$, rather than $y$ with $y_{obs}$. As a result, this procedure will produce samples from the distribution $\pi_{ABC}(\theta|s_{obs})$ as follows:
\begin{table}
\caption{\bf ABC Rejection Sampling Algorithm}
\noindent {\it Inputs:}
\begin{itemize}
\item A target posterior density $\pi(\theta|y_{obs})\propto p(y_{obs}|\theta)\pi(\theta)$, consisting of a prior distribution $\pi(\theta)$ and a procedure for generating data under the model $p(y_{obs}|\theta)$.
\item A proposal density $g(\theta)$, with $g(\theta)>0$ if $\pi(\theta|y_{obs})>0$.
\item An integer $N>0$.
\item A kernel function $K_h(u)$ and scale parameter $h>0$.
\item A low dimensional vector of summary statistics $s=S(y)$.
\\
\end{itemize}
\noindent {\it Sampling:}\\
\noindent For $i=1, \ldots, N$:
\begin{enumerate}
\item \label{chapter3:alg:ABC-rejectionSS:step1} Generate $\theta^{(i)}\sim g(\theta)$ from sampling density $g$.
\item Generate $y\sim p(y|\theta^{(i)})$ from the likelihood.
\item Compute summary statistic $s=S(y)$.
\item Accept $\theta^{(i)}$ with probability $\frac{K_h(\|s-s_{obs}\|)\pi(\theta^{(i)})}{Kg(\theta^{(i)})}$\\
where $K\geq K_h(0)\max_\theta\frac{\pi(\theta)}{g(\theta)}$. Else go to \ref{chapter3:alg:ABC-rejectionSS:step1}.
\\
\end{enumerate}
\noindent {\it Output:}\\
A set of parameter vectors $\theta^{(1)},\ldots,\theta^{(N)}$ $\sim$ $\pi_{ABC}(\theta|s_{obs})$.
\end{table}
Similar to the discussion in Section \ref{chapter3:section:TheApproximatePosteriorDistribution}, it can be seen that the ABC posterior approximation now has the form
\begin{equation}
\label{ABCpostApproxSS}
\pi_{ABC}(\theta|s_{obs}) \propto \int K_h(\|s-s_{obs}\|)p(s|\theta)\pi(\theta)ds,
\end{equation}
where $p(s|\theta)$ denotes the likelihood function of the summary statistic $s=S(y)$ implied by $p(y|\theta)$.
(That is, $p(s|\theta) = \int_\mathcal{Y}\delta_{s}(S(y))p(y|\theta) dy$.)
If we let $h\rightarrow 0$, so that only those samples, $\theta$, that generate data for which $s=s_{obs}$ are retained, then
\begin{eqnarray*}
\lim_{h\rightarrow 0} \pi_{ABC}(\theta|s_{obs}) & \propto & \int \lim_{h\rightarrow 0}K_h(\|s-s_{obs}\|)p(s|\theta)\pi(\theta)ds\\
& = & \int \delta_{s_{obs}}(s)p(s|\theta)\pi(\theta)ds\\
& = & p(s_{obs}|\theta)\pi(\theta).
\end{eqnarray*}
Hence, samples from the distribution $\pi(\theta|s_{obs})$ are obtained as $h\rightarrow 0$. If the vector of summary statistics, $s=S(y)$, is sufficient for the model parameters, then $\pi(\theta|s_{obs})\equiv\pi(\theta|y_{obs})$, and so samples are produced from the true posterior distribution. However, if $S(y)$ is not sufficient -- and this is typically the case in practice -- then the ABC posterior approximation is given by (\ref{ABCpostApproxSS}), where in the best scenario (i.e. as $h\rightarrow 0$) the approximation is given by $\pi(\theta|s_{obs})$.
The following example illustrates the effect of using a non-sufficient summary statistic.
\\
\noindent{\bf Example 4:}\\
Consider again the univariate Gaussian model in Example 2. Suppose that we modify this example \cite{drovandi12}, so that the model still assumes that the observed data $y_{obs}=(y_{obs,1},\ldots,y_{obs,n})^\top$ are random draws from a univariate $N(\theta,\sigma^2_0)$ distribution, but where we now specify an insufficient summary statistic, $s=\bar{y}_{1:n'}=\frac{1}{n'}\sum_{i=1}^{n'}y_i$ with $n'< n$.
Writing $s_{obs}=S(y_{obs})$, the resulting ABC approximation to the likelihood function becomes
\begin{eqnarray*}
p_{ABC}(s_{obs}|\theta)
& = & \int K_h(s-s_{obs})p(s|\theta)ds\\
&\propto & \int_{-\infty}^\infty\frac{1}{\sqrt{2\pi}h}\exp\left\{-\frac{(s-s_{obs})^2}{2h^2}\right\} \frac{\sqrt{n'}}{\sqrt{2\pi}\sigma_0}\exp\left\{-\frac{n'(s-\theta)^2}{2\sigma^2_0}\right\}ds\\
& \propto & \exp\left\{ -\frac{(\theta-s_{obs})^2}{2(\sigma^2_0/\omega n+h^2)}\right\},
\end{eqnarray*}
where $\omega=n'/n$ is the proportion of the $n$ observations used in the vector of summary statistics.
That is, $s_{obs}\sim N(\theta,\sigma^2_0/\omega n + h^2)$. When $\omega=1$, then $s_{obs}=\bar{y}_{obs}$ is sufficient for $\theta$, and so $s_{obs}\sim N(\theta,\sigma^2_0/n + h^2)$ recovers the same result as Example 2.
When $n'<n$, so that $s$ is no longer sufficient for $\theta$,
the mean of the Gaussian likelihood function is centred on the mean $\bar{y}_{obs,1:n'}$ rather than $\bar{y}_{obs,1:n}$, but more critically
the variance of the Gaussian likelihood is $\sigma^2_0/\omega n + h^2$. It is evident that there are now two sources of error, both of which inflate the variance of the likelihood. The first, $h^2$, arises through the matching of the simulated and observed data through the Gaussian kernel. The second source of error comes from the $0<\omega<1$ term, which can be interpreted as the degree of inefficiency of replacing $y$ by $s=S(y)$. That is, the use of non-sufficient statistics reduces the precision of the likelihood (and in turn, the posterior distribution) in this case.
From Example 2, it follows that when $n$ is large and the posterior is asymptotically Gaussian, the ABC posterior approximation, $\pi_{ABC}(\theta|s_{obs})$, can be improved by rescaling to remove $h^2$ from the posterior variance. However, correcting for the lack of sufficiency in the summary statistic, $s$, would require knowledge of the relative inefficiency of $s$ over $y$, which may be difficult to obtain in practice.
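The inflated-variance form of this ABC likelihood has a simple generative reading: $s_{obs}$ is distributed as $s+h\epsilon$, where $s\sim N(\theta,\sigma_0^2/n')$ and $\epsilon\sim N(0,1)$ independently. A quick Monte Carlo sketch (all numerical settings below are our own illustrative choices):

```python
import math
import random

random.seed(3)
theta, sigma0, n, n_prime, h = 0.0, 1.0, 50, 20, 0.25
omega = n_prime / n                                # proportion of observations used
target_var = sigma0 ** 2 / (omega * n) + h ** 2    # = 1/20 + 0.0625 = 0.1125

# s ~ N(theta, sigma0^2 / n'), then perturbed by the N(0, h^2) smoothing kernel.
draws = [random.gauss(theta, sigma0 / math.sqrt(n_prime)) + random.gauss(0.0, h)
         for _ in range(200_000)]
```

The empirical variance of `draws` matches $\sigma_0^2/\omega n + h^2$, separating the two error sources discussed above.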
\\
The choice of summary statistics for an ABC analysis is a critical decision that directly affects the quality of the posterior approximation. Many approaches for determining these statistics are available, and these are reviewed in \shortciteN{blum+nps13} and \citeN{prangle17}, this volume. These methods seek to trade off two aspects of the ABC posterior approximation that directly result from the choice of summary statistics. The first is that $\pi(\theta|y_{obs})$ is approximated by $\pi(\theta|s_{obs})$. As this represents an irrevocable potential information loss, the information content in $s_{obs}$ should be high. The second aspect of the ABC posterior approximation is that the simulated and observed summary statistics are compared within a smoothing kernel $K_h(\|s-s_{obs}\|)$ as part of the form of $\pi_{ABC}(\theta|s_{obs})$ (\ref{ABCpostApproxSS}). As stochastically matching $s$ and $s_{obs}$ becomes increasingly difficult as the dimension of the summary statistics increases, the dimension of $s$ should be low.
As such, the dimension of the summary statistic should be large enough so that it contains as much information about the observed data as possible, but also low enough so that the curse-of-dimensionality of matching $s$ and $s_{obs}$ is avoided. For illustration, in Example 3, the optimum choice of summary statistic is a minimal sufficient statistic. However, for other models it may be the case that the dimension of the minimal sufficient statistic is equal to that of the original dataset. As this will cause curse-of-dimensionality problems in matching $s$ with $s_{obs}$, it is likely that a more accurate ABC posterior approximation can be achieved by using a lower-dimensional non-sufficient statistic, rather than remaining within the class of sufficient statistics. This was indeed the case in the $g$-and-$k$ distribution analysis in Section \ref{section:gandk}.
\subsection{Some practical issues with summary statistics}
Even with the above principles in mind, summary statistic choice remains one of the most challenging aspects of implementing ABC in practice. For instance, it is not always viable to continue to add summary statistics to $s$ until the resulting ABC posterior approximation does not change for the worse, as is illustrated by the following example.
\\
\noindent {\bf Example 5:}\\
Suppose that $y=(y_1,\ldots,y_n)^\top$ with $y_i\sim${\em Poisson}$(\lambda)$. Combined with conjugate prior beliefs $\lambda\sim${\em Gamma}$(\alpha,\beta)$ this gives $\lambda|y\sim${\em Gamma}$(\alpha+n\bar{y},\beta+n)$. For this model we know that the sample mean $\bar{y}$ is a sufficient statistic. However, we also know that the mean and variance of a {\em Poisson}$(\lambda)$ model are both equal to $\lambda$, and so we might expect the sample variance $v^2$ also to be informative for $\lambda$, although it is not sufficient. Suppose that we observe $y_{obs}=(0,0,0,0,5)^\top$, which gives $(\bar{y}_{obs},v^2_{obs})=(1,5)$. Here, as the sample mean and variance are quite different from each other, we might expect that the Poisson model is not appropriate for these data.
\begin{figure}[tb]
\centering
\includegraphics[width=12cm]{bias.pdf}
\caption{\small
Various ABC posterior approximations (histograms) for a {\em Gamma}$(\alpha+n\bar{y},\beta+n)$ target distribution (solid line) with a {\em Gamma}$(\alpha,\beta)$ prior (dashed lines).
Columns illustrate posterior estimates based on (left) sample mean $s=\bar{y}$, (centre) standard deviation $s=v$ and (right) $s=(\bar{y},v)^\top$ as summary statistics. Top row shows results with $h=0$ and the bottom row with $h=0.3$.}
\label{chapter3:bias}
\end{figure}
Figure \ref{chapter3:bias} illustrates various ABC posterior approximations to the true target distribution (solid lines)
based on a prior with $\alpha=\beta=1$ (dashed lines), with
$K_h(u)$ specified as a uniform kernel over $[-h,h]$ and $\|u\|$ representing Euclidean distance.
The top row illustrates the resulting posterior approximations, $\pi(\lambda|s_{obs})$, when the summary statistics $s$ are given as the sample mean $\bar{y}$ (left panel), the sample standard deviation $v$ (centre), or both (right) when the kernel scale parameter is $h=0$. Using $s=\bar{y}$ recovers the true posterior exactly, which is no surprise as $\bar{y}$ is a sufficient statistic. Using $s=v$ produces an informed ABC approximation, but one which is based on a variance that is consistent with a larger mean under the Poisson model. When $s=(\bar{y},v)^\top$ then we again obtain the true posterior distribution as $\pi(\lambda|\bar{y}_{obs},v_{obs})\equiv \pi(\lambda|\bar{y}_{obs})$ through sufficiency, and the additional information that $v$ brings about the sample $y$ has no effect on the ABC estimated posterior.
The bottom row in Figure \ref{chapter3:bias} shows the same information as the top row, except that the kernel scale parameter is now non-zero ($h=0.3$). The posterior approximations based on $s=\bar{y}$ and $s=v$ are minor deviations away from those in the top row when $h=0$. This occurs as the values of $\lambda$ that are able to reproduce the observed summary statistics within a non-zero tolerance $h=0.3$ are slightly different to those that can reproduce the summary statistics exactly. However, the third panel with $s=(\bar{y},v)^\top$ is clearly biased to the right, with the resulting ABC posterior approximation visually appearing to be a loose average of those
distributions with $s=\bar{y}$ and $s=v$.
This behaviour is different from when $h=0$. In that case, when adding more information in the vector of summary statistics in going from $s=\bar{y}$ to $s=(\bar{y},v)^\top$, the posterior approximation does not change as the summary statistic $s=\bar{y}$ is sufficient and it is being matched exactly. However, when $h>0$, because the ABC algorithm allows an imperfect matching of the sufficient statistic $\bar{y}$, it additionally allows the extra information in the sample standard deviation $v$ to also contribute to the approximation. In this case, because the observed summary statistics $\bar{y}_{obs}$ and $v_{obs}$ are inconsistent with respect to the model, this results in a strongly biased fit when moving from $s=\bar{y}$ to $s=(\bar{y},v)^\top$.
As such, while it may be tempting to include progressively more summary statistics into $s_{obs}$ until the ABC posterior approximation does not change appreciably, the assumption that this will provide the most accurate posterior approximation is clearly incorrect. Even if $s_{obs}$ contains sufficient statistics for the model, the inclusion of further statistics can still bias the posterior approximation, particularly in the case where the observed data are inconsistent with the model.
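This bias is straightforward to reproduce by simulation. The following sketch (an illustration in Python, using the uniform kernel, Euclidean distance and $h=0.3$ as above; the number of prior draws $N$ is an arbitrary choice) runs ABC rejection sampling for the Poisson example under both choices of summary statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = np.array([0, 0, 0, 0, 5])
s_obs = np.array([y_obs.mean(), y_obs.std(ddof=1)])   # (ybar, v) = (1, sqrt(5))
n, h, N = len(y_obs), 0.3, 300_000

lam = rng.gamma(1.0, 1.0, N)                  # prior: Gamma(alpha=1, beta=1)
y = rng.poisson(lam[:, None], (N, n))         # one simulated dataset per prior draw
s = np.column_stack([y.mean(axis=1), y.std(axis=1, ddof=1)])

# uniform kernel over [-h, h]: accept iff ||s - s_obs|| <= h
keep_mean = np.abs(s[:, 0] - s_obs[0]) <= h              # s = ybar only
keep_both = np.linalg.norm(s - s_obs, axis=1) <= h       # s = (ybar, v)

print(lam[keep_mean].mean())   # close to the true posterior mean E[lambda|y_obs] = 1
print(lam[keep_both].mean())   # noticeably larger: biased to the right
```

With $s=\bar{y}$ the accepted draws concentrate around the true posterior mean of 1, while adding $v$ shifts the accepted draws to the right, as in Figure \ref{chapter3:bias}.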
\\
The identification of suitable summary statistics is clearly a critical part of any analysis. Accordingly many techniques have been developed for this purpose -- see e.g. \shortciteN{blum+nps13} and \citeN{prangle17} (this volume) for a detailed review and comparison of these methods. While the choice of summary statistics is itself of primary importance, it is less appreciated that the distance measure $\|\cdot\|$ can also have a substantial impact on ABC algorithm efficiency, and therefore the quality of the posterior approximation.
Consider the distance measure $\|s-s_{obs}\|=(s-s_{obs})^\top\Sigma^{-1}(s-s_{obs})$. Here we can specify the covariance matrix $\Sigma$ as the identity matrix to produce Euclidean distance, or as a diagonal matrix of non-zero weights to give weighted Euclidean distance (e.g. \shortciteNP{hamilton+crhbe05,luciani+sjft09}) or as a full covariance matrix to produce Mahalanobis distance (e.g \shortciteNP{peters+fs12,erhardt+s16}).
To see why standard and weighted Euclidean distance can be a poor choice, consider the setting in Figure \ref{image:type12}, where candidate parameter values, $\theta$, generating continuous bivariate statistics, $s|\theta$, $s=(s_1,s_2)^\top$, are accepted as draws from $\pi_{ABC}(\theta|s_{obs})$ if $s$ lies within a ball of radius $h$, centered on $s_{obs}$. That is, $K_h$ is the uniform kernel on $[-h,h]$, and $\|\cdot\|$ denotes Euclidean distance.
\begin{figure}[tb]
\centering
\includegraphics[width=7cm]{type12.pdf}
\caption{{\protect\small The concept of type I and II errors for accept/reject decisions in ABC samplers under a uniform kernel, $K_h(u)$, over $[-h,h]$ and Euclidean distance, $\|\cdot\|$. The circle represents the acceptance region for a simulated summary statistic $s=(s_1,s_2)^\top$, centred on $s_{obs}$. The ellipse represents the possible dependence between $s_1$ and $s_2$.
}}
\label{image:type12}
\end{figure}
If we reasonably suppose that the elements of $s$ may be dependent and on different scales, their true distribution under the model may be better represented by an ellipse (grey lines). As such, an efficient ABC algorithm should accept candidate draws from $\pi_{ABC}(\theta|s_{obs})$ if $s|\theta$ lies within this ellipse.
Consequently, implementing a circular acceptance region (implying independence and identical scales) induces both type I (i.e. candidate samples are rejected when they should be accepted) and type II (i.e. candidate samples are accepted when they should be rejected) errors.
Work linking the ABC posterior with non-parametric density estimation methods (\citeNP{blum10}; see Section \ref{section:interpretations}) provides support for this argument.
Here, for a multivariate kernel $K_H(u)=\det(H)^{-1}K(H^{-1}u)$, where $K$ is a symmetric multivariate density function with zero mean and finite variance, a general rule of thumb is to specify the bandwidth matrix as
$H\propto\Sigma^{1/2}$ where $\Sigma$ is the covariance matrix of the data (e.g. \citeNP{scott92,wand+j95}). In the ABC context, this is equivalent to defining $\|\cdot\|$ as Mahalanobis distance where $\Sigma$ is the covariance matrix of $s$ (or $s|\theta$).
Note that the above argument assumes that the summaries $s_1$ and $s_2$ are both informative for the model parameter $\theta$. For example, in the case where $s_1+s_2$ is uninformative, but $s_1-s_2$ is informative, then it is credible that the circular acceptance region could result in a more accurate ABC posterior approximation than that resulting from the elliptical region. In general, the best acceptance region is tied up with the choice of the summary statistics in a more complicated way than that presented here (see e.g. \citeNP{prangle17b} for a discussion).
The following example illustrates the effect that different covariance matrices $\Sigma$ can have on the ABC posterior approximation.
\\
\noindent {\bf Example 6:}\\
Suppose that the model is specified as $y_1,\ldots,y_{50}\sim N(\theta,1)$, with a uniform prior $\theta\sim U(-5,5)$. Various sufficient statistics are available for this model. We consider two alternatives: $s^1=(\bar{y}_{1:40},\bar{y}_{41:50})^\top$ and $s^2=(\bar{y}_{1:25}-\bar{y}_{26:50}, \bar{y}_{26:50})^\top$ where $\bar{y}_{a:b}=(b-a+1)^{-1}\sum_{i=a}^by_i$. In each case, given the observed sufficient statistics $s_{obs}=(0,0)^\top$, the exact posterior distribution $\pi(\theta|y_{obs})$ is $N(0,1/50)$ truncated to $(-5,5)$. However, the covariance matrices of $s^1$ and $s^2$ for fixed $\theta$ are quite different
(though they do not depend on the exact value of $\theta$),
namely
\begin{equation}
\label{eqn:13}
\mbox{Cov}(s^1|\theta)=\left(\begin{array}{cc}1/40&0\\0&1/10\end{array}\right),
\quad
\mbox{Cov}(s^2|\theta)=\left(\begin{array}{cc}\phantom{-}2/25&-1/25\\-1/25&\phantom{-}1/25\end{array}\right),
\end{equation}
with a negative correlation between the elements of $s^2$ of $-1/\sqrt{2}\approx-0.71$.
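These covariance matrices are easily checked empirically. The sketch below (an illustrative Monte Carlo check, taking $\theta=0$, although as noted above any value of $\theta$ gives the same covariances) estimates both matrices from repeated datasets.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n_reps = 0.0, 100_000
y = rng.normal(theta, 1.0, (n_reps, 50))      # n_reps datasets of 50 observations

s1 = np.column_stack([y[:, :40].mean(axis=1), y[:, 40:].mean(axis=1)])
s2 = np.column_stack([y[:, :25].mean(axis=1) - y[:, 25:].mean(axis=1),
                      y[:, 25:].mean(axis=1)])

print(np.cov(s1, rowvar=False))   # approx [[1/40, 0], [0, 1/10]]
print(np.cov(s2, rowvar=False))   # approx [[2/25, -1/25], [-1/25, 1/25]]
```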
We implement ABC using the distance measure $\|s-s_{obs}\|=(s-s_{obs})^\top\Sigma^{-1}(s-s_{obs})$ and consider the impact of the choice of $\Sigma$.
We use a version of the ABC rejection sampling algorithm (see box) that maintains a sample $\theta^{(1)}, \ldots, \theta^{(N)}$ of size $N$ from the ABC posterior approximation, which progressively lowers the kernel scale parameter $h$ until a stopping rule is satisfied. On algorithm termination, the samples are identical to those samples that would have been obtained under the standard ABC rejection sampling algorithm if it was implemented with the lowest value of $h$ achieved under the stopping rule. This allows us to implement a rejection sampling algorithm that will terminate when a pre-specified degree of accuracy has been achieved. The (random) number of iterations obtained before algorithm termination will accordingly be an indicator of the efficiency of the model specification -- in this case, the effect of different covariance matrices $\Sigma$.
\begin{table}
\caption{\bf ABC Rejection Sampling Algorithm (with Stopping Rule)}
\noindent {\it Initialise:}\\
For each particle $i=1,\ldots,N$:
\begin{itemize}
\item Generate $\theta^{(i)}\sim\pi(\theta)$ from the prior, $y^{(i)}\sim p(y|\theta^{(i)})$ from the likelihood.
\item Compute summary statistics $s^{(i)}=S(y^{(i)})$, and distance $\rho^{(i)}=\|s^{(i)}-s_{obs}\|$.
\item Generate $u^{(i)}\sim\mbox{U}(0,1)$ that determines whether to accept the particle. \\(i.e. accept if $u^{(i)}\leq K_h(\rho^{(i)})/K_h(0)$.)
\item Determine the {\it smallest} $h$ that results in the acceptance of all $N$ particles. E.g.
\[
h=\sqrt{\max_i\{-[\rho^{(i)}]^2/(2\log(u^{(i)}))\}}\qquad\mbox{or}\qquad
h=\max_i\{\rho^{(i)}\}
\]
if (respectively)
\[
K_h(\rho)\propto\exp\{-\rho^2/(2h^2)\}\qquad\mbox{or}\qquad
K_h(\rho)\propto1\:\:\mbox{on } [-h,h].
\]
\item Calculate the acceptance probabilities $W^{(i)}=K_h(\rho^{(i)})/K_h(0)$, $i=1,\ldots,N$.
\end{itemize}
\noindent {\it Simulation:}\\
\noindent While the stopping rule is not satisfied, repeat:
\begin{enumerate}
\item Identify the index of the particle that will first be rejected if $h$ is reduced: $r=\arg\min_i\{W^{(i)}-u^{(i)}\}$.
\item Set the new value of $h$ to be the lowest value which would result in the acceptance of all particles, except particle $r$.
\item Recompute acceptance probabilities $W^{(i)}$ given the new value of $h$.
\item Replace particle $r$ by repeating:
\begin{enumerate}
\item Generate $\theta^{(r)}\sim\pi(\theta)$, $y^{(r)}\sim p(y|\theta^{(r)})$, $u^{(r)}\sim U(0,1)$.
\item Compute $s^{(r)}=S(y^{(r)})$, $\rho^{(r)}=\|s^{(r)}-s_{obs}\|$,\\ $W^{(r)}=K_h(\rho^{(r)})/K_h(0)$
\end{enumerate}
Until $u^{(r)}\leq W^{(r)}$.
\end{enumerate}
\noindent {\it Output:}\\
A set of parameter vectors $\theta^{(1)},\ldots,\theta^{(N)}$ $\sim$ $\pi_{ABC}(\theta|s_{obs})$, with $h$ determined as the largest achieved value that satisfies the stopping rule.
\end{table}
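For the uniform kernel the algorithm in the box simplifies considerably: $W^{(i)}=1$ for every retained particle, the particle first rejected as $h$ shrinks is the one with the largest distance, and replacing it amounts to resimulating until the distance falls below the new $h$. The following Python sketch implements this special case, applied for concreteness to Example 6 with $s=s^1$ and $\Sigma$ set to the true covariance; a fixed iteration budget stands in for the stopping rule.

```python
import numpy as np

rng = np.random.default_rng(2)
Sigma_inv = np.diag([40.0, 10.0])   # inverse of Cov(s^1|theta) = diag(1/40, 1/10)
s_obs = np.zeros(2)

def draw(rng):
    """One prior draw and its (squared) Mahalanobis distance to s_obs."""
    theta = rng.uniform(-5, 5)
    y = rng.normal(theta, 1.0, 50)
    s = np.array([y[:40].mean(), y[40:].mean()])
    d = s - s_obs
    return theta, float(d @ Sigma_inv @ d)

N, n_iters = 500, 2000
theta, rho = map(np.array, zip(*[draw(rng) for _ in range(N)]))
for _ in range(n_iters):
    r = int(np.argmax(rho))              # particle first rejected as h shrinks
    h = np.partition(rho, -2)[-2]        # smallest h accepting all other particles
    while True:                          # replace particle r
        t, d = draw(rng)
        if d <= h:
            theta[r], rho[r] = t, d
            break
```

On termination `theta` holds $N$ draws from $\pi_{ABC}(\theta|s_{obs})$ with $h$ given by the largest retained distance; running longer lowers $h$ further at increasing computational cost.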
Table \ref{table:stoppingrule} displays the average number of data generation steps (i.e. generating $y\sim p(y|\theta)$) in each algorithm implementation, per final accepted particle, as a function of smoothing kernel type and the form of $\Sigma$, based on 100 replicate simulations of $N=500$ samples. The stopping rule continued algorithm execution until an estimate of the absolute difference between empirical ($F_N(\theta)$) and true ($F(\theta)$) model cumulative distribution functions was below a given level. Specifically
when $\sum_{i=1}^N|F_N(\theta^{(i)})-F(\theta^{(i)})|<0.01825$.
In Table \ref{table:stoppingrule}, the true form of $\Sigma$ is given by $\mbox{Cov}(s^1|\theta)$ and $\mbox{Cov}(s^2|\theta)$ (\ref{eqn:13}), and the diagonal form refers to the matrix constructed from the diagonal elements of $\mbox{Cov}(s^2|\theta)$.
\begin{table}[tb]
\centering
\begin{tabular}{cl|cc|cc|cc}
Summary &&\multicolumn{6}{c}{Form of $\Sigma$}\\
Statistic & Kernel & \multicolumn{2}{c}{Identity} & \multicolumn{2}{|c|}{Diagonal} & \multicolumn{2}{c}{True}\\
\hline
& Uniform & 134.7 & \phantom{1}(5.8) &&& \phantom{1}84.5 & (2.4) \\
$s=s^1$ & Epanechnikov & 171.6 & \phantom{1}(4.7) &&& 111.1 & (3.8) \\
& Triangle & 232.3 & \phantom{1}(7.1) &&& 153.0 & (5.1) \\
& Gaussian & 242.4& \phantom{1}(6.5) &&& 153.6 & (4.9) \\
\hline
& Uniform & 182.5 & \phantom{1}(5.6) & 161.0 & (4.1) & \phantom{1}84.4 & (2.4) \\
$s=s^2$ & Epanechnikov & 245.5 & \phantom{1}(6.6) & 209.2 & (7.2) & 111.1 & (3.8)\\
& Triangle & 336.3 & \phantom{1}(8.9) & 277.2 & (6.9) & 144.2 & (3.8)\\
& Gaussian & 368.2 & (12.6) & 289.7 & (9.7) &157.7 & (4.3) \\
\end{tabular}
\caption{{\small Mean number of summary statistic generations per final accepted particle (with standard errors in parentheses), as a function of the form of covariance matrix, $\Sigma$, and smoothing kernel $K_h$, and for two different sets of sufficient statistics $s^1=(\bar{y}_{1:40},\bar{y}_{41:50})^\top$ and $s^2=(\bar{y}_{1:25}-\bar{y}_{26:50}, \bar{y}_{26:50})^\top$. Results are based on 100 replicates of posterior samples of size $N=500$.
}}
\label{table:stoppingrule}
\end{table}
The summary statistics for $s=s^1$ are independent, but are on different scales. Accordingly, when this difference of scale is accounted for ($\Sigma =$ true), algorithm efficiency, and therefore ABC posterior approximation accuracy, is greatly improved compared to when the difference in scale is ignored ($\Sigma =$ identity). The summary statistics $s^2$ are both negatively correlated and on different scales. As for $s^1$, when summary statistic scale is taken into consideration ($\Sigma =$ diagonal) an improvement in algorithm efficiency and ABC posterior approximation accuracy is achieved compared to when it is ignored. However in this case, further improvements are made when the correlation between the summary statistics is also accounted for ($\Sigma=$ true). These results are consistent regardless of the form of the smoothing kernel $K_h$. Note that the uniform kernel produces the most efficient algorithm and most accurate ABC posterior approximation, and that this steadily worsens as the form of the kernel deviates away from the uniform density, with the worst performance obtained under the Gaussian kernel.
This approach has been implemented in practice by e.g. \shortciteN{luciani+sjft09} and \citeN{erhardt+s16}, who identify some value of $\theta=\theta^*$ in a high posterior density region via a pilot analysis, and then estimate $\mbox{Cov}(s|\theta^*)$ based on repeated draws from $p(s|\theta^*)$.
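This two-stage recipe can be sketched as follows. The function is generic: `simulate_summaries` stands for whatever simulator maps a parameter value to a vector of summary statistics, and the function name and pilot size are arbitrary choices for illustration. It returns the (squared) Mahalanobis distance $(s-s_{obs})^\top\hat\Sigma^{-1}(s-s_{obs})$, with $\hat\Sigma$ estimated from repeated draws at $\theta^*$.

```python
import numpy as np

def mahalanobis_distance_fn(simulate_summaries, theta_star, n_pilot=1000, rng=None):
    """Estimate Cov(s | theta_star) from pilot simulations and return a distance
    function computing (s - s_obs)^T Sigma^{-1} (s - s_obs)."""
    rng = np.random.default_rng() if rng is None else rng
    pilot = np.array([simulate_summaries(theta_star, rng) for _ in range(n_pilot)])
    Sigma = np.cov(pilot, rowvar=False)
    Sigma_inv = np.linalg.inv(Sigma)

    def dist(s, s_obs):
        d = np.asarray(s, dtype=float) - np.asarray(s_obs, dtype=float)
        return float(d @ Sigma_inv @ d)

    return dist
```

The returned function can then be used in place of Euclidean distance within any of the ABC samplers discussed above.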
\section{An ABC analysis in population genetics}
To illustrate some of the points concerning summary statistics we consider here a population genetic example, very similar to that considered in the paper by \shortciteN{pritch99}, a key paper in the development of ABC methods. In population genetics we are often confronted with sequence data (as illustrated in Table \ref{table:pg-data1}), and we wish to infer demographic parameters that may be associated with such data. The standard modelling framework that is used is Kingman's coalescent \shortcite{hein04}, which describes the genealogical relationship of DNA sequences in a sample. The general likelihood problem that we wish to solve can then be represented as
$$
p(y_{obs} | \phi) = \int_H p(y_{obs} | H)p(H | \phi) dH
$$
where $y_{obs}$ represents the observed set of sequences in a sample, $\phi$ is an unobserved vector of parameters, and $H$ represents the unobserved genealogy history, including mutations. A common mutation model, used here, is the infinite-sites model, in which every mutation that occurs in a genealogy is unique. Typically $H$ is high dimensional, represented as a variable-length vector of times of events in the genealogical history, and the types of events. Although the likelihood can be computed exactly for simple demographic models and small data sets \shortcite{hein04} it is generally more flexible to resort to Monte Carlo methods \cite{marj06}.
One approach is through importance sampling. Here, an instrumental distribution $q_{\phi,y}(H)$ is available that describes the distribution of all genealogical histories $H$ that are consistent with the data $y$, as a function of the model parameters $\phi$. The distribution $q_{\phi,y}(H)$ is easy to simulate from and has a known functional form that can be directly evaluated. It also has the property that $p(y|H')=1$ for $H'\sim q_{\phi,y}(H)$.
Hence, $p(y_{obs}|\phi)$ can be estimated by
$$
\hat{p}(y_{obs} | \phi) = \frac{1}{N}\sum_{i=1}^{N}\frac{p(H^{(i)}|\phi)}{q_{\phi,y_{obs}}(H^{(i)})}
$$
where $H^{(i)}\sim q_{\phi,y_{obs}}(H)$ for $i=1,\ldots,N$.
In this analysis we compare an ABC approach to the above importance sampling method that targets the true likelihood. The aim is to investigate the performance of different summary statistics on ABC inferences, using the importance sampling-based inferences as a (noisy) ground-truth. The demographic model that generates the data is one of smooth exponential expansion. In this model the current population size $N_0$ contracts backwards in time as $N_0(t) = N_0\exp(-\beta t)$ where time $t$ is expressed in units of $2N_0$ and $\beta = 2N_0b$ is the growth rate in this scaled time. An additional parameter in the model is the scaled mutation rate $\theta_0 = 4N_0\mu$.
\begin{center}
\begin{table}[tb]
\centering
{\small \tt
1 : 000000000000000000000001000100000000000000\\
1 : 000000000000000000001010001000000000101001\\
1 : 000000000000000100000010001000010000101001\\
5 : 000000100000100000000000000000000000000000\\
1 : 000000100000100000000000000000001000000000\\
2 : 000000100000100000000000000001000000000000\\
1 : 000000100000100000000000000010000000000000\\
2 : 000000100000100001000000000000000000000000\\
2 : 000000100000100010000000000000000000000000\\
1 : 000000100001100001000000000000000000000000\\
1 : 000000100100100000100000000000000001000000\\
1 : 000000100100100000110000000000000000000000\\
1 : 000000101000100000000100100000000000000110\\
2 : 000001100010010000000000000000000000000000\\
2 : 000010010000000000000001010000000000010000\\
2 : 000100000000000000000000001000100000001000\\
1 : 001000000000001000000000001000000110101000\\
1 : 010000100000100000000000000000000000000000\\
2 : 100000000000000000000010001000000000101001\\
}
\caption{{\small Infinite sites data simulated with \emph{ms} in a format suitable for the \emph{Genetree} program. The left hand column gives the number of times the sequence on the right is observed in the sample (of size 30 in this case). The ancestral type is denoted by 0 and the mutant (derived) type is denoted by 1. The length of the sequence is equal to the number of segregating sites $S$ and is equal to the number of mutations that occurred in the genealogy. All sequences that share a mutation at a given position are descendant (and possibly further mutated) copies of the sequence in which that mutation first occurred. The sequences are ordered lexicographically.}}
\label{table:pg-data1}
\end{table}
\end{center}
In the ABC analysis, simulations are carried out using the \emph{ms} program of \citeN{hudson02}. A technical complication that needs to be accounted for when using \emph{ms} is that time in this program is scaled in units of $4N_0$ rather than $2N_0$ that appears standardly in most treatments (\emph{e.g.} \shortciteNP{hein04}), and, more importantly, in the \emph{Genetree} importance sampling program \cite{griff94} that is used for the ground-truth. The data in Table \ref{table:pg-data1} were generated using the \emph{ms} command:
\begin{verbatim}
ms 20 1 -t 50 -G 30
\end{verbatim}
which simulates one instance of 20 sequences with $\theta=50$ and $\alpha = 30$, where $\alpha = \beta/2$ (because of the different scaling of time, noted above). Assuming independent uniform priors $U(0,200)$ for each parameter $\phi=(\theta_0,\alpha)^\top$, it is straightforward to generate particles by sampling parameter values from the prior and then compute an importance weight for each particle using an algorithm suggested by \citeN{stephens00}. The implementation here (described in \citeNP{maciuca12}) is a modification of the \emph{Genetree} program
to include the Stephens and Donnelly algorithm, following \citeN{deiorio04}. Although the particles could be used directly for weighted density estimation, it is computationally easier to first resample them in proportion to their weights $w^{(i)}$, because the distribution of weights is typically very skewed (they have high variability). For the data in Table \ref{table:pg-data1}, $N=10^8$ generated particles yielded an effective sample size
(estimated by $(\sum_i w^{(i)})^2/\sum_i w^{(i)2}$) of around $300$. The following analyses are based on resampling 1000 particles.
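The effective sample size estimate and the resampling step just described are generic, and can be sketched as follows (the weights themselves would of course come from the genealogical importance sampler):

```python
import numpy as np

def ess(w):
    """Effective sample size (sum w)^2 / sum(w^2) of a set of importance weights."""
    w = np.asarray(w, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())

def resample(particles, w, n_out, rng=None):
    """Draw n_out particles with replacement, proportionally to their weights."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(w, dtype=float)
    idx = rng.choice(len(p), size=n_out, p=p / p.sum())
    return np.asarray(particles)[idx]
```

For equal weights `ess` returns the number of particles, while a few dominant weights drive it towards 1, which is why $10^8$ particles can yield an effective sample size of only around 300.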
For the ABC analysis, parameter values $\phi=(\theta_0,\alpha)^\top$ are simulated from the prior, data sets are simulated using {\em ms}, and summary statistics computed. The four summary statistics examined comprise the number of segregating sites, $S_0$, which corresponds to the number of mutations in the genealogy under the infinite sites mutation model, the average pairwise Hamming distance between all pairs of sequences in the sample, $\pi_0$, Tajima's $D$,
and Fay and Wu's $H_0$.
These latter two statistics express the difference in estimates of the scaled mutation parameter $\theta_0$, assuming a standard coalescent model (\emph{i.e.} with no population growth), based on two different unbiased estimators, one of which is $\pi_0$. The average pairwise distance, $\pi_0$, is directly an estimate of $\theta_0$ because in the standard constant size model the expected time to coalescence for a pair of sequences is $2N_0$, and therefore the expected number of mutations occurring down both branches since the common ancestor is $(2N_0 + 2N_0)\mu$. Other estimators have been developed, based on the number of segregating sites (Watterson's estimator, used in Tajima's $D$), or the number of segregating sites weighted by the number of times the mutant type occurs in the sample (Fu's estimator, used in Fay and Wu's $H_0$). Only under the standard constant size model will these estimators all have the same expectation, and therefore deviations between them can be used to identify departures from this model. Negative values of $D$ and positive values of $H_0$ are expected to be found in growing populations. The output of the \emph{ms} program can be piped to a program \verb"sample_stats", included with \emph{ms}, which computes these four summary statistics.
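The first two of these summaries are simple to compute directly from a 0/1 matrix of sequences such as Table \ref{table:pg-data1}, with rows as sequences and columns as sites (here each row is a single sequence, so repeated sequences must appear as repeated rows). The sketch below, an illustrative reimplementation rather than the \verb"sample_stats" program itself, also computes Watterson's estimator $S_0/a_1$, where $a_1=\sum_{i=1}^{n-1}1/i$, the second ingredient of Tajima's $D$.

```python
import numpy as np

def segregating_sites(seqs):
    """S: number of columns containing both the ancestral (0) and derived (1) type."""
    seqs = np.asarray(seqs)
    return int(((seqs.min(axis=0) == 0) & (seqs.max(axis=0) == 1)).sum())

def mean_pairwise_distance(seqs):
    """pi: average Hamming distance over all pairs of sequences."""
    seqs = np.asarray(seqs)
    n = len(seqs)
    total = sum(np.sum(seqs[i] != seqs[j]) for i in range(n) for j in range(i + 1, n))
    return 2.0 * total / (n * (n - 1))

def watterson(seqs):
    """Watterson's estimator S / a_1, with a_1 = sum_{i=1}^{n-1} 1/i."""
    n = len(seqs)
    a1 = sum(1.0 / i for i in range(1, n))
    return segregating_sites(seqs) / a1
```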
The observed summary statistics are:
\[
s_{obs}=(\pi_0,S_0,D,H_0)^\top= (5.90, 42,-1.64,3.67)^\top.
\]
ABC methods were implemented by first simulating $N=1,000,000$ parameter values from the $U(0,200)$ prior distributions, storing these in the file \texttt{params.txt}
(in the order indicated by the key-word \texttt{tbs}) and then running the \emph{ms} program with the command
\begin{verbatim}
ms 20 1 -t tbs -G tbs < params.txt
\end{verbatim}
The summary statistics corresponding to these simulated data were then computed, and $\|s-s_{obs}\|$ evaluated as Euclidean distance. The ABC posterior approximation was obtained by using a uniform kernel $K_h$ over $[-h,h]$ and determining the kernel scale parameter $h$ as the value retaining the 1000 samples for which $s^{(i)}$ is closest to $s_{obs}$.
The summary statistics are measured on different scales. A common practice is to centre and scale them using the standard deviation for each summary statistic sampled from the prior predictive distribution.
(However, some authors argue that the motivations for this are flawed as an arbitrary change in the prior can change the scaling of a summary statistic within the analysis. Instead, following a similar discussion to that in Example 6, the scaling should be based on $\mbox{Cov}(s|\theta^*)$ for some value of $\theta=\theta^*$ in the high posterior density region, rather than $\mbox{Cov}(s)$. See e.g. \citeNP{erhardt+s16}.)
For the present analysis, the prior predictive sample standard deviations for $\pi_0$, $S_0$, $D$ and $H_0$ are 14.3, 69.0, 0.50 and 7.3 respectively.
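Putting these steps together, the scaled accept/reject step reduces to a few lines of array code. In the sketch below, `params` and `sims` are hypothetical names for the stored prior draws and their simulated summary statistics; the function returns the retained draws together with the implied value of $h$.

```python
import numpy as np

def abc_nearest(params, sims, s_obs, n_keep=1000):
    """Scale summaries by their prior-predictive standard deviations, then keep
    the n_keep parameter draws whose summaries are closest to s_obs."""
    sims = np.asarray(sims, dtype=float)
    scale = sims.std(axis=0, ddof=1)                   # prior-predictive sds
    dist = np.linalg.norm((sims - s_obs) / scale, axis=1)
    idx = np.argsort(dist)[:n_keep]
    return np.asarray(params)[idx], dist[idx].max()    # accepted draws, implied h
```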
In Figure \ref{image:pg-simfig} the estimated posterior distributions using both scaled and unscaled summary statistics are shown.
\begin{figure}[tbh]
\centering
\includegraphics[width=10cm]{popgen_sims}
\caption{{\protect\small Various ABC posterior approximations using different summary statistics and scalings, compared to the `ground-truth' importance sampling based posterior (black lines).
The true parameter value is indicated by a $+$.
Estimates show the 95\%, 50\%, and 5\% highest posterior density contours.
ABC posteriors are based on (a) all four summary statistics; (b) $\pi_0$ and $S_0$ only; (c) $D$ and $H_0$ only; (d) $\pi_0$ (green dotted) and $S_0$ (blue dotted).
For panels (a)-(c) the ABC posterior is based on scaled summary statistics (blue dotted line), and unscaled summary statistics (green dotted line).
}}
\label{image:pg-simfig}
\end{figure}
Figure \ref{image:pg-simfig} compares the resulting ABC posterior approximation using (a) all four summary statistics, (b) $\pi_0$ and $S_0$ only, (c) $D$ and $H_0$ only, or (d) $\pi_0$ or $S_0$ alone.
The first point to note is that the data, although quite informative about $\theta_0$ and $\alpha$ jointly, do not allow us to make very detailed inference about either parameter individually, \emph{i.e.} they are only partially identifiable in the model -- at least for these data. This is the case both for the full-likelihood and ABC inferences, although the density for the full-likelihood method, as estimated by importance sampling, tends to be more localised towards the true parameter value (indicated by a $+$).
When all four summary statistics are used (panel a) the 95\% HPD envelope for ABC is quite similar to that for importance sampling (black line), but is shifted towards higher values of $\alpha$ and $\theta_0$. Scaled and unscaled summary statistics give similar results. The ABC posterior approximation for $\pi_0$ and $S_0$ together (panel b) is very similar to that for the full set of summary statistics. In this case the distances for scaled and unscaled summaries are the same because $S_0$ is discrete and matched exactly. This outcome perhaps indicates that one should be cautious of adding summaries such as Tajima's $D$, because it is simply a nonlinear function of $\pi_0$ and $S_0$. By contrast, $H_0$ includes additional information from the site frequency spectrum, and would be expected to be informative (positive $H_0$ indicates a deficit of high-frequency derived mutations compared with that expected under the standard model). Using $D$ and $H_0$ together (panel c) yields a less concentrated posterior approximation. Both statistics are based on the difference of two estimators of the mutation rate, and it is therefore unsurprising that $\theta_0$ is not well localised. The posteriors based on $\pi_0$ and $S_0$ individually (panel d) superficially look surprisingly similar to the full-likelihood posterior. However, there is much stronger support for larger values of $\theta_0$ and $\alpha$ than in the importance-sampling based posterior.
\begin{table}[tb]
\centering
{\small \tt
1 : 000000000000000000000000000000000010100001\\
1 : 000000000000000000000000001000000000000010\\
1 : 000000000000000000000001010100111001000100\\
4 : 000000000000000011010000000100000000000000\\
1 : 000000000000000111010010000100000100000000\\
4 : 000000000000000111010010000101000100000000\\
1 : 000000000000010000000000000000000000000000\\
1 : 000000000000100111010000000100000000000000\\
1 : 000000000001000000000000010100000000001100\\
1 : 000000000010000000000000010100000000000100\\
1 : 000000000100001000000100100000000010000000\\
1 : 000000010000000000000000000000000010100001\\
1 : 000100001000000000000000000100000000000100\\
1 : 001000000001000000101000010100000000010100\\
1 : 010001100000000000000000001000000000000000\\
1 : 100010000000000011010000000110000000000000\\
}
\caption{{\small Data from locus 9pMB8 surveyed in 11 Biaka pygmies \shortcite{hammer10}, using the same layout as for Table \ref{table:pg-data1}.}}
\label{table:pg-data2}
\end{table}
We conduct a similar analysis with sequence data published in \shortciteN{hammer10} from locus 9pMB8, surveyed in 11 Biaka pygmies (resulting in 22 sequences). The data are shown in Table \ref{table:pg-data2}. As for the simulated data above, there are 42 segregating sites within the Biaka sample, and the data are compatible with the infinite sites model. The ABC simulations were performed as previously, using all four summary statistics. The observed summary statistics for these data are
\[
s_{obs}=(\pi_0,S_0,D,H_0)^\top= (7.52,42,-1.35,4.0)^\top.
\]
The posterior computed using importance sampling was also computed as before, but required $12 \times 10^8$ particles to achieve a similar effective sample size to that for the previous data set.
\begin{figure}[tb]
\centering
\includegraphics[width=5cm]{popgen_real_data}
\caption{{\protect\small Comparison of ABC posterior approximations (dotted lines) and full-likelihood (black lines) posterior for the Biaka pygmy data in Table \ref{table:pg-data2}. ABC posterior approximations are based on all four summary statistics, which are scaled (blue dotted line) or unscaled (green dotted line).
}}
\label{image:pg-datafig}
\end{figure}
It is immediately apparent from Figure \ref{image:pg-datafig} that the ABC posterior approximation and ground-truth posterior are very similar, unlike the previous analysis. This differing behaviour is not due to Monte Carlo error.
The result illustrates the point that, outside the exponential family, there is no single, low-dimensional set of summary statistics $s$ that will be highly informative for $\theta$ for all observed datasets. Summary statistics that work well for one dataset may perform less well on another. In the case of the two datasets considered here, it may be argued that, despite the smaller sample size, there is a stronger signal of growth in the second dataset, which is more readily captured by the summary statistics. For the simulated data the signal is less strong, and information in other summary statistics, such as the site frequency spectrum, or higher moments of the distribution of pairwise Hamming distances, may be required for the ABC posterior to better match the true posterior.
From a computational perspective, the $10^6$ ABC simulations took about 3 minutes on a desktop computer, whereas $10^8$ importance sampling simulations took around 4 hours, i.e. the computational effort per iteration is broadly similar for both approaches. The algorithms used in each are `similar yet different', in that they both generate genealogical trees, but in one case the tree is constrained by the data, and in the other it is independent of the data. Naively, one might think that an importance sampling algorithm should be more efficient because it always generates a tree that is compatible with the data. However, it is typically very difficult to devise an algorithm that samples trees in proportion to their conditional distribution under the model, and therefore genealogical importance sampling tends to be inefficient, as illustrated here, where $10^8$ simulations only give an effective sample size of around 300. Of course, it is possible to use sequential methods, or a pseudo-marginal method to improve efficiency (\shortciteNP{andrieu+lv17,cornuet12,beaumont03}), but similar approaches are available for ABC as well.
\section{Levels of approximation in ABC}
The primary challenge in implementing an ABC analysis is to reduce the impact of the approximation, while restricting the required computation to acceptable levels. In effect this is the usual ``more computation for more accuracy'' tradeoff. It is therefore worthwhile to briefly summarise the quality and nature of the approximations involved in any ABC analysis.
While some of these approximations are common with standard Bayesian analyses, in particular points \ref{chapter3:approxList1} and \ref{chapter3:approxList5} below, within the ABC framework these have additional, more subtle implications.
In order, from model conception to implementation of the analysis, the ABC approximations are:
\begin{enumerate}
\item \label{chapter3:approxList1} {\em All models are approximations to the real data-generation process.}
While this is true for any statistical analysis, this approximation can produce an ABC-specific issue if the assumed model is not sufficiently flexible to be able to reproduce the observed summary statistics.
In this scenario
the kernel scale parameter $h$ will necessarily be large (as all simulated data are far from the observed data), and as a consequence the quality of the ABC approximation may be low. Further, if, for this inflexible model, the observed summary statistics contain conflicting information for a model parameter, this may cause additional bias in the posterior approximation for this parameter, as is illustrated in Example 5. In summary, this means that the more unlikely a model is to have generated the observed data, the worse the ABC approximation will be.
In general this is problematic, as it implies that routine inspection of the fitted ABC posterior may not in itself be enough to determine model adequacy, as the ABC posterior may be a poor estimate of the true posterior, and poor data generation models may appear more likely (with $h>0$) than they actually are (with $h=0$).
By extension, this also implies that posterior model probabilities of inadequate models (constructed from the normalising constant of the poorly estimated ABC posterior distribution) may also be affected, although this has yet to be fully explored in the literature. See \citeN{fearnhead18} for an exploration of related ABC asymptotic results to date, and \shortciteN{marin+per18} for particular methods for performing ABC model choice.
\item \label{chapter3:approxList2} {\em Use of summary statistics rather than full datasets.}
The full posterior distribution $\pi(\theta|y_{obs})\propto p(y_{obs}|\theta)\pi(\theta)$ is replaced by the partial posterior $\pi(\theta|s_{obs})\propto p(s_{obs}|\theta)\pi(\theta)$ where $s_{obs}=S(y_{obs})$ is a vector of summary statistics. If $S$ is sufficient for $\theta$, then there is no approximation at this stage. More commonly, for non-sufficient $S$, there is a loss of information.
\item \label{chapter3:approxList3} {\em Weighting of summary statistics within a region of the observed summary statistics.}
The partial posterior $\pi(\theta|s_{obs})$ is replaced by the ABC approximation to the partial posterior
\[
\pi_{ABC}(\theta|s_{obs})\propto \pi(\theta)\int K_h(\|s-s_{obs}\|)p(s|\theta)ds
\]
where $K_h$ is a standard smoothing kernel with scale parameter $h\geq 0$. If $h=0$, or in the limit as $h\rightarrow 0$, then there is no further approximation at this stage. In most cases, however, $h>0$ and so ABC makes use of a kernel density estimate as an approximation to the true likelihood function. This aspect of approximation can be a particular problem in ABC when the number of model parameters $\theta$ is large, as then the vector of summary statistics, $s$, must be equivalently large for parameter identifiability, and hence the comparison $\|s-s_{obs}\|$ will suffer from the curse of dimensionality.
\item \label{chapter3:approxList4} {\em Approximations due to other ABC techniques.}
There are a number of other ABC techniques not discussed in this Chapter that are optionally implemented in ABC analyses in order to improve some aspect of the approximations in points 1 and 2, or to achieve a greater computational performance. Many of these are discussed in later Chapters, but some common methods involve post-processing techniques such as regression and marginal adjustments (e.g. \shortciteNP{beaumont+zb02,blum+f10,blum+nps13,blum17,nott+ofs17}),
or develop alternative approximations to the intractable likelihood function, while remaining in the ABC framework, such as Expectation-Propagation ABC, synthetic likelihoods, and copula or regression-density estimation models (e.g. \shortciteNP{barthelme+c14,barthelme+cc17,wood10,price+dln17,drovandi+mr17,li+nfs15,fan+ns13,nott+ofs17}).
\item \label{chapter3:approxList5} {\em Monte Carlo error.}
In common with most Bayesian analyses, performing integrations using Monte Carlo methods introduces Monte Carlo error. Typically this error may be reduced by using larger numbers of samples from the posterior, or by reducing the variability of importance weights. The same is true for an ABC analysis, although with the additional point that more posterior samples effectively allows for a lower kernel scale parameter $h$ and consequently an improved ABC posterior approximation.
As a result,
for a fixed number of Monte Carlo samples, the choice of kernel scale parameter represents a typical bias-variance tradeoff: if $h$ is large, more posterior draws are available, reducing variance, but at the cost of a poorer ABC approximation; if $h$ is small, the ABC posterior approximation is improved, but Monte Carlo variance is increased.
\end{enumerate}
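The bias-variance role of the kernel scale $h$ in the final point above can be illustrated with a toy rejection sampler. The Gaussian model, the prior, and the two kernel scales are illustrative choices of ours, not taken from the text; the conjugate setting is used simply so that the true posterior is available in closed form for comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs, prior_sd, N = 3.0, 5.0, 200_000

# Conjugate truth: y | theta ~ N(theta, 1), theta ~ N(0, prior_sd^2)
post_var = 1.0 / (1.0 + prior_sd ** -2)
post_mean = post_var * y_obs

theta = rng.normal(0.0, prior_sd, N)   # draws from the prior
y = rng.normal(theta, 1.0)             # one simulated dataset per draw

kept = {}
for h in (5.0, 0.1):                   # large versus small kernel scale
    kept[h] = theta[np.abs(y - y_obs) <= h]   # uniform-kernel accept/reject
    print(f"h={h}: {kept[h].size} draws kept, "
          f"mean {kept[h].mean():.2f} (true {post_mean:.2f}), "
          f"var {kept[h].var():.2f} (true {post_var:.2f})")
```

With this seed, the large-$h$ run keeps over half of the prior draws but its accepted sample is pulled towards the prior, with a greatly inflated variance, while the small-$h$ run keeps only a few thousand draws yet closely matches the true posterior mean and variance.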
\section{Interpretations of ABC}
\label{section:interpretations}
There are a number of closely related ways in which ABC methods may be understood or interpreted.
The most common of these is conditional density estimation of the posterior (e.g. \shortciteNP{blum10,bonassi+yw11,nott+ofs17}) in the sense usually understood in a conventional Bayesian analysis. Before observing the data, the distribution $\pi(\theta,y)=p(y|\theta)\pi(\theta)$ describes prior beliefs about the model parameters and credible datasets under the model. When a dataset $y_{obs}$ is observed, interest is then in the conditional distribution of $\theta$ given that $y=y_{obs}$. In the ABC setting, $\pi(\theta,y)$ is represented by the joint sample $(\theta^{(i)},y^{(i)})\sim\pi(\theta,y)$, $i=1,\ldots, N$. Weighting the vectors $\theta^{(i)}$ based on the value of $\|y^{(i)}-y_{obs}\|$ (larger weights for smaller $\|y^{(i)}-y_{obs}\|$), then produces an empirical conditional density estimate of $\pi(\theta|y_{obs})$.
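As a miniature sketch of this weighting step (the Gaussian toy model and the Gaussian kernel are assumptions of ours; in this conjugate setting the exact posterior mean is $25/26\times 3\approx 2.88$, which the weighted estimate should approximately reproduce):

```python
import numpy as np

rng = np.random.default_rng(7)
N, y_obs, h = 100_000, 3.0, 0.2

# Joint sample (theta, y) ~ pi(theta, y): prior draws, then model simulations
theta = rng.normal(0.0, 5.0, N)
y = rng.normal(theta, 1.0)

# Gaussian kernel weights: larger weight for smaller |y - y_obs|
w = np.exp(-0.5 * ((y - y_obs) / h) ** 2)
w /= w.sum()

# Weighted estimate of the posterior mean E[theta | y_obs]
post_mean_hat = float(np.sum(w * theta))
print(f"weighted posterior-mean estimate: {post_mean_hat:.2f}")
```

No draw is discarded here: every pair contributes with weight proportional to $K_h(\|y^{(i)}-y_{obs}\|)$, which is exactly the weighting described above; simple rejection ABC corresponds to the special case of a uniform kernel.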
Similarly, we have already discussed that the ABC approximation to the true likelihood, $p_{ABC}(y_{obs}|\theta)$, is a kernel density estimate of $p(y|\theta)$, following (\ref{Chapter3:eqn:ABClikelihood}) and (\ref{eqn:conDenEst}). This allows ABC to be considered as a regular Bayesian analysis with an approximated likelihood function.
\citeN{fearnhead+p12} noted that the ABC approximation to the posterior can be considered as a continuous mixture of posterior distributions
\begin{eqnarray*}
\pi_{ABC}(\theta|y_{obs}) & \propto & \int K_h(\|y-y_{obs}\|)p(y|\theta)\pi(\theta) dy\\
& = & \int w(y)\pi(\theta|y) dy
\end{eqnarray*}
where $\pi(\theta|y)=p(y|\theta)\pi(\theta)/\pi(y)$, with weight function $w(y)\propto K_h(\|y-y_{obs}\|)\pi(y)$. This is the continuous equivalent of equation (\ref{eqn:discreteMixturePost}) obtained during the analysis of stereological extremes in Section \ref{sec:extremesAnalysis}.
While ABC is most often thought of as an approximate method, \citeN{wilkinson13} pointed out that ABC methods can be considered as exact if $e=y-y_{obs}$ (or $e=\|y-y_{obs}\|$) is considered as the error (either from observation error or model misspecification) obtained in fitting the model $p(y|\theta)$ to the observed data $y_{obs}$. From this perspective, the smoothing kernel $K_h$ is simply the density function of this error, so that $e\sim K_h$, and $h$ is a scale parameter to be estimated.
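Written out, this interpretation amounts to the following: if the observed data satisfy $y_{obs}=y+e$, where $y\sim p(y|\theta)$ is the model output and the error $e$ has density proportional to $K_h(\|e\|)$, then the exact posterior under this error model is

```latex
\pi(\theta \mid y_{obs}) \;\propto\; \pi(\theta) \int K_h(\|y - y_{obs}\|)\, p(y \mid \theta)\, dy
\;=\; \pi_{ABC}(\theta \mid y_{obs}),
```

so the ABC posterior is recovered without further approximation, with $h$ reinterpreted as the scale of the assumed error.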
Finally, while ABC methods are universally used for the analysis of models with computationally intractable likelihood functions, it is often overlooked that they also provide a useful inferential mechanism for tractable models.
As an illustration, consider a scenario where a standard Bayesian analysis is available for a complex, but incorrect model, given the observed dataset. Under this model, predictions of some particular quantity of interest, $T(y)$, could be precise, but completely implausible due to the limitations in the model. Consider now an ABC analysis of this model, based on matching summary statistics that include $T(y)$. ABC methods would identify those parameter values $\theta$ that are most likely to have produced these statistics under the model. This means that predictions of $T(y)$ under the ABC approximation now have some chance of being accurate (although they may be less precise), as the model may be able to predict the summary statistics, including $T(y)$, even if it cannot accurately predict the full dataset.
This allows ABC to be interpreted as a mechanism for fitting models based on summary statistics that may in fact be more useful than the exact inference with the full dataset. An explicit example of this in the robust model selection context was given by \shortciteN{li+nfs15}.
Related arguments allow ABC to be thought of as a natural method to fit models when the full dataset ($y_{obs}$) is only partially observed ($s_{obs}$) and has missing data (see e.g. \shortciteNP{rodrigues+fst18}). ABC methods have also been used to determine weakly informative prior distributions in a regular tractable Bayesian analysis, exploiting the mechanism of predictive data matching to identify a priori non-viable regions of the parameter space \shortcite{nott+dme17}.
\section{Further reading}
\label{section:FurtherReading}
ABC methods have been extensively and rapidly developed since their first modern appearance in \shortciteN{tavare+bgd97} and \shortciteN{pritch99}. Naturally, a number of review articles have been written for various discipline audiences, surveying the techniques available at the time. While such reviews can date rapidly, they often provide useful perspectives on ABC methods as viewed at the time of writing. See, for example, the reviews by
\citeN{beaumont10},
\shortciteN{bertorelle+bm10},
\shortciteN{blum+nps13},
\shortciteN{csillery+bgf10},
\citeN{sisson+f11},
\shortciteN{marin+prr12},
\citeN{turner+v12},
\citeN{robert16},
\citeN{erhardt+s16},
\shortciteN{lintusaari+gdkc16}
and
\citeN{drovandi17}.
Each of the chapters in this Handbook also makes for excellent reading and review material on focused aspects of ABC \shortcite{tavare17,blum17,fan+s18,prangle17,marin+per18,drovandi18,nott+ofs17,andrieu+lv17,fearnhead18,ratmann+chc18,drovandi+gkr18,wegmann18,barthelme+cc17}.
Because ABC methods are now recognised as a standard Bayesian tool, their scientific reach has effectively become as extensive as that of standard Bayesian methods. While it is accordingly futile to exhaustively describe all areas in which ABC has been applied, the selection below gives a flavour of the impact that ABC methods have had. Beyond the applications in this Handbook, ABC methods have been successfully applied in
$\alpha$-stable models \shortcite{peters+sf12},
archaeology \shortcite{wilkinson+t09},
cell biology \shortcite{johnston+smbr14,vo+dpp15,vo+dps15},
coalescent models \shortcite{fan+k11,tavare+bgd97},
ecology \cite{jabot+c09,wood10},
evolutionary history of mosquitos \shortcite{bennett+slmkdahihplw16},
filtering \shortcite{jasra+smm12},
extreme value theory \shortcite{erhardt+s12,erhardt+s16},
financial modelling \shortcite{peters+sf12},
host-parasite systems \shortcite{baudet+dscgms15},
HIV contact tracing \shortcite{blum+t10},
human evolution \shortcite{fagundes+rbnsbe07},
hydrological models \shortcite{nott+fms14},
infectious disease dynamics \shortcite{luciani+sjft09,aandahl+rst12},
infinite mixture models for biological signalling pathways \shortcite{koutroumpas+bvc16},
image analysis \shortcite{nott+fms14},
long range dependence in stationary processes \shortcite{andrade+r15},
operational risk \shortcite{peters+s06},
quantile distributions \shortcite{allingham+km09,drovandi+p11},
pathogen transmission \shortcite{tanaka+fls06},
phylogeography \shortcite{beaumont+al10},
protein networks \shortcite{ratmann+ahwr09,ratmann+jhsrw07},
population genetics \shortcite{beaumont+zb02},
psychology \cite{turner+v12},
single cell gene expression \shortcite{lenive+ks16},
spatial point processes \cite{shirota+g16},
species migration \shortcite{hamilton+crhbe05},
state space models \shortcite{vakilzadeh+hba17},
stochastic claims reserving \shortcite{peters+fs12},
susceptible-infected-removed (SIR) models \shortcite{toni+wsis09},
trait evolution \shortcite{slater+hwjra12}
and
wireless communications engineering \shortcite{peters+nsfy10}.
Within this Handbook novel analyses can be found in \shortciteN{peters18}, \shortciteN{rodrigues+fst18}, \shortciteN{stumpf18}, \shortciteN{estoup18}, \shortciteN{holden18}, \shortciteN{wood18} and \shortciteN{fan18}.
\section{Conclusions}
ABC methods are based on an inherently simple mechanism -- simulating data under the model of interest and comparing the output to the observed dataset. While more sophisticated ABC algorithms and techniques have subsequently been developed (and many of these are discussed in more detail in this Handbook), this core mechanism remains constant.
It is this methodological simplicity that has made ABC methods highly accessible to researchers across many disciplines. We anticipate that this will continue in the future.
\section*{Acknowledgments}
SAS is supported by the Australian Research Council under the Discovery Project scheme (DP160102544), and the Australian Centre of Excellence in Mathematical and Statistical Frontiers (CE140100049).
\bibliographystyle{chicago}
\section{Introduction}
\IEEEPARstart{M}{any} Smart City projects rely on information collected from public sensor networks monitoring, among others: traffic, parking availability, pollution, noise, etc. (see, for instance,~\cite{iot_santander}). Examples, such as the city of Barcelona~\cite{iot_barcelona}, show benefits of such knowledge in governing the city, optimizing operational expenditures, and improving citizens' quality of life. However, the initial costs of such initiatives are very high, with major contributing factors including the purchase and installation of sensors, infrastructure cost (e.g. high-throughput network), or development and integration of software. Moreover, introducing new ``data sources'' often requires deploying new sensor networks, or upgrading existing ones (both generating substantial costs). Finally, to achieve the expected benefits, the ecosystem must be maintained and adapted to follow changes in technology and development and growth of the city.
Some of those shortcomings can be addressed by taking advantage of the rapidly growing number of personal, and home-based, IoT devices, thereby reducing the costs of hardware infrastructure needed to gather data. Moreover, the variety of citizen-owned sensing devices is systematically increasing, generating new dimensions of useful knowledge. For instance, the popularity of fitness tracking solutions has led to a massive growth of health and lifestyle related data, which could be used to improve the medical and living conditions of the society.
Obviously, motivating the citizens to share their data ``with the city'' is a serious challenge, but it has been shown~\cite{klasnja2009exploring} that one of the key obstacles to achieving this is ``privacy management''. Specifically, how to facilitate adequate control over personal data, and thus convincingly assure the protection of privacy. Therefore, a successful solution, gathering personal sensor information for public use, should provide solid means of managing privacy preferences and fine-grained access control. Here, let us note that while there exist ways of anonymizing data to make it less sensitive, research on re-identification of anonymized users and data limits the relevance of such approaches~\cite{el2011systematic,porter2008constitutional}.
Furthermore, as discussed in~\cite{smart_city_privacy}, when dealing with health-related data and/or movement patterns extracted by fitness trackers, the actual challenge concerns \textit{relative perception of privacy}. Specifically, it involves not only \textit{which data is to be shared}, but also \textit{with whom} and \textit{for what purpose}.
Finally, let us consider access to personal data by government agencies, e.g. related to criminal investigations, or national security. Analysis by Nojeim et al.~\cite{nojeim2014_governmentaccess} shows that, from a legal and practical perspective, existing regulations and tools fail to reconcile public security with basic human rights and legal regulations (e.g. the GDPR). Therefore, when facing access requests, businesses storing the personal information rely on their own judgment and/or interests, while agencies resort to broad, uncontrolled, and often unnecessarily detailed surveillance.
The described problem is an instance of a more general topic of \textit{access control}. It requires defining rules of who is allowed to access data, the same way as a company defines who can access a specific area in a building. Access control is well studied, and many approaches to solving it have been suggested and implemented. Here, Access Control Lists, Role Based Access Control or Attribute Based Access Control mechanisms have been created to tackle generalization of user roles and resource groups, static and dynamic Separation of Duties, spatiotemporal authorization, etc. Acknowledging this, note that several specific aspects of privacy management in Smart Cities need to be addressed:
\begin{itemize}
\item Access requester is, likely, an organization. Moreover, the structure of the organization is, often, not known up-front. Hence, representation of (hierarchical) organizational structure is needed. Obviously, question of identity verification arises, but it is out of scope of this contribution.
\item Data request is completed on behalf of an external organization, e.g. local government, which should be allowed to access only some data. Hence, token-based authorization (such as OAuth) is not feasible.
\item Considered data is often a series/stream of observations that should be abstracted to types/categories, to avoid authorizing them individually. However, due to differences between devices/services, enforcement of a common observation vocabulary is not likely. Hence, use of access control mechanisms that depend on a fixed set of ``scopes'' (e.g. OAuth), may be challenging.
\item Potential (large) scale of social participation, coupled with heterogeneity of data gathering applications, brings interoperability challenges that also materialize in access authorization. Differences in data representation necessitate either (i) conversion of data to some common format, which may not be feasible from closed data ecosystems (e.g. commercial fitness trackers), or (ii) introduction of a mapping layer. This would also result in authorization decisions involving centralized rules and policies.
\item Time and location of issuing the request may not be essential, however certain spatiotemporal data related to the accessed information may be of use.
\item Legal access to data (e.g. governed by GDPR) must be allowed, while rigorously controlled. Corresponding policies should consider purpose of use, retention routines, type of information, etc.
\end{itemize}
In this context, in~\cite{jms} we have proposed a semantically-enriched authorization system for fine-grained control of data access. Here, we expand on the idea, focusing on \textit{if} and \textit{in what way} ontological modeling, and semantic reasoning, can help manage privacy preferences in participatory sensing, within Smart Cities. The proposed solution recognizes that different information may be perceived as more or less private, depending not only on the nature of data, purpose of collection, and requesting entity, but also on purely subjective criteria. This, coupled with semantic representation and processing of pertinent meta-data, enables individuals to precisely manage their data access permissions. Finally, we recognize that certain legal regulations should be enforced and prioritized over the individuals' personal preferences. In this context, let us describe the use case scenarios, which guides the remaining parts of the paper.
\section{Overview of the use case scenario} \label{intro_use_case}
As discussed in~\cite{fitness_smartcity}, fitness data, collected by users for health tracking, could be useful for Smart Cities' agencies (e.g. public health organizations). It may not encounter known problems in adopting participatory sensing (also known as crowdsensing~\cite{wiki_crowdsensing}), such as the need for incentives~\cite{participatory_incentives} and/or change of behavior. Therefore, the proposed use cases involve tracking of an individual's movement habits, when the data is generated either by a GPS, or a pedometer (e.g. in a smartphone, smartwatch, or smart-shoes).
The general use case is that of Sally, who uses a smartphone and a fitness application for tracking her running and cycling workouts. Thus far, she has collected data for personal benefits and shared it with friends (using some application). However, she is considering participation in a program analyzing sport activities in her home city. The primary use case (UC1) concerns the local Health Center, wishing to investigate the workout and training habits of the citizens. The second entity interested in her data (UC2) is the Police, investigating a crime in a certain area, searching for potential witnesses. For UC1, Sally would like to specify (independently) what information, and at what level of detail, she will share. UC2 illustrates how the proposed system handles legal obligations, while providing sufficient control over what data may be accessed under what conditions.
To deliver the needed functionality, we will build upon the semantically enriched Attribute Based Access Control system, introduced in~\cite{ict,jms,aciids}, and expand it with a more detailed model of privacy preferences, as well as means of handling legal access control policies.
Thus, in Section~\ref{state_of_art}, we give an overview of the state of the art in solutions for enforcing privacy in Smart Cities, as well as access control solutions making use of semantic technologies. Further, in Section~\ref{solution}, we briefly describe the SXACML access control system and discuss how to design an ontology that can be used to manage privacy and trust in a Smart City. Finally, in Section~\ref{use_case}, we revisit our guiding scenarios to show in detail how the proposed system enables citizens to manage access to their personal data.
\section{Related work} \label{state_of_art}
\subsection{Privacy and access control}
Let us first look into the methods of general access control, in which authorization policies and rules are used to validate if a \textit{Subject} is permitted to perform an \textit{Action} on a \textit{Resource} in a certain request \textit{Context}. We purposefully focus on the decision process and leave out consideration of ``orthogonal aspects'', such as identification, authentication, or action tracing.
Attribute Based Access Control (ABAC) provides the most flexible, and context aware, approach to authorization. Here, \textit{Subject}, \textit{Action}, \textit{Resource}, and \textit{Context} are described with sets of attribute values. Authorization decision is based on evaluation of policies, that specify conditions on the attributes. The most common implementation of ABAC is the eXtensible Access Control Markup Language (XACML\footnote{\url{http://docs.oasis-open.org/xacml/3.0/xacml-3.0-core-spec-os-en.html}};~\cite{hu2014guide}).
In Figure~\ref{fig:sxacml} we depict a typical sequence of actions undertaken during request evaluation, which follows the XACML standard.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\columnwidth]{images/sxacml_smaller.pdf}
\caption{Evaluation of request in XACML}
\label{fig:sxacml}
\end{center}
\end{figure}
When a system using XACML is configured, an administrator defines and manages policies within the Policy Administration Point (\textit{PAP}) and supplies them to the Policy Decision Point (\textit{PDP}). Once an access request is sent, by the \textit{Subject}, to the Policy Enforcement Point (\textit{PEP}), it is forwarded to the \textit{Context Handler} which, in turn, notifies the \textit{PDP}. Here, the \textit{PDP} verifies the values of all attributes used in the policy definitions. Such values may be found in the request itself, or need to be retrieved from one, or more, Policy Information Point (\textit{PIP}) components. Next, the \textit{PDP} evaluates policy rules, combines results (if multiple policies are involved) and builds the response context, which is returned to the \textit{Context Handler}. Results are then sent to the \textit{PEP}, which enforces the decision.
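This evaluation loop can be condensed into a short sketch. The attribute names, the store behind the \textit{PIP}, and the use of a single deny-overrides combining algorithm are simplifications of ours for illustration, not details of the XACML standard:

```python
# Minimal ABAC evaluation loop in the spirit of the XACML flow above.

def pip_lookup(attr):
    """Policy Information Point: resolves attributes absent from the request."""
    external_store = {"subject.role": "nurse"}   # assumed external attribute source
    return external_store.get(attr)

def evaluate(request, policies):
    """Policy Decision Point using a deny-overrides combining algorithm."""
    decision = "NotApplicable"
    for policy in policies:
        # Resolve each attribute the rule needs from the request, else the PIP
        attrs = {a: request.get(a, pip_lookup(a)) for a in policy["needs"]}
        if None in attrs.values():
            return "Indeterminate"               # a required attribute is missing
        if policy["rule"](attrs):
            if policy["effect"] == "Deny":
                return "Deny"                    # a deny overrides any permit
            decision = "Permit"
    return decision

policies = [
    {"needs": ["subject.role", "resource.type"], "effect": "Permit",
     "rule": lambda a: a["subject.role"] == "nurse" and a["resource.type"] == "steps"},
    {"needs": ["action.purpose"], "effect": "Deny",
     "rule": lambda a: a["action.purpose"] == "marketing"},
]

request = {"resource.type": "steps", "action.purpose": "public-health"}
print(evaluate(request, policies))  # -> Permit (the role is resolved via the PIP)
```

A full PDP would additionally separate rule targets from conditions and support the other standard combining algorithms (e.g. permit-overrides, first-applicable).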
ABAC, however, omits (a)~implicit structure of entities requesting access, (b)~nature of data, or (c)~relations between \textit{Subject(s)} and \textit{Resource(s)}. This leads to significant complexity in defining and managing the policies. Regardless of improvements, like ALFA (Abbreviated Language For Authorization\footnote{\url{http://docs.oasis-open.org/xacml/alfa-for-xacml/v1.0/alfa-for-xacml-v1.0.doc}}), maintaining policies storing organizational and resource hierarchies, or more sophisticated relationships between data, requires major effort.
Additionally, authoring and administration of policies requires understanding of the policy definition language and the attribute space of the system under control. Therefore, ``policy management'' requires skilled specialists. As a result, ABAC has been adopted mostly in large organizations, e.g. in military, government, healthcare, or finance, where potential negative implications of unauthorized access justify expenditures related to managing policies.
Note that our first use case assumes privacy control managed by the user who is not an access management expert. Therefore, it is crucial to provide tools and methods to easily express users' attitude towards data access.
Turning our attention back to personal sensing,~\cite{klasnja2009exploring} explores the objections to participating in such programs, based on studying a group of people who used different sensors. It was observed that the objections raised depend on what was recorded, in what circumstances, and on the value created. Moreover, giving users more knowledge and control over data increased the potential for adoption of crowdsensing-type approaches.
In this context, \cite{smart_city_privacy} proposes a framework for classifying citizen-related data, which also considers subjective \textit{feelings} about how personal the data is. For instance, location data could be seen as \textit{personal}, while ``generic traffic data'', not tied to an individual, would likely be \textit{impersonal}. Secondly, the \textit{purpose} of collecting data, ranging from ``service'' to ``surveillance'', is examined. For instance, use of location data for traffic management would, most probably, be seen as a ``service'', while using such information for predictive policing could be considered as ``surveillance''. When applying this framework to access control and privacy enforcement, two points are worth noticing. First, how ``personal'' a given ``piece of data'' appears to the user is rather complex. The data collected by any given sensor can be very personal if connected with personally identifiable information, or collected with high granularity (e.g. exact jogging location data). However, when such data is aggregated over time (e.g. distance run daily) and/or using strong anonymization (e.g. spatial k-anonymity;~\cite{spatial_k_anonymity}), it may be considered as not sensitive at all. Furthermore, different persons may care more, or less, about ``releasing'' personal data (consider, for instance, information shared within social networks). Overall, the ``level of sensitivity'' cannot be connected to a specific sensor. Instead, it is related to (a) the type of observation, (b) its aggregation, (c) anonymization, and (d) the ``personality'' of the user. Second, the computer that evaluates an access request has limited reasoning capacity. Therefore, it is important to let the user define permission rules based on (i) the specific organization requesting the information and/or (ii) the purpose of use, as specified in the request. For instance, users may assume that requests by the police department represent ``surveillance''.
However, a request for data to be used for traffic control shifts the assessment towards ``service'' (as long as the user is willing to trust the police).
In~\cite{fitness_smartcity} and~\cite{clarke_jasist}, the authors discuss the possibility of using personal health and fitness information. They also propose a privacy preserving architecture for data collection. The focus of their work is on aggregating and anonymizing the data, so that it becomes ``unbreakable'' by data mining. Here, let us note that numerous papers describe different solutions to data anonymization, especially for location data; see, for instance~\cite{privacy_tesselation},~\cite{spatial_k_anonymity},~\cite{LIU2019421} and~\cite{LI20171}. However, these papers solve an issue that is, in a way, orthogonal to our concerns. We are interested in designing a system that gives its users the best possible control over their information, regardless of data anonymization. Hence, we accept that, in some cases, sharing fine-grained, personal information may be necessary. Moreover, we assume that the user might not want certain parties to acquire even anonymized information. Finally, we recall that some users may be willing to share ``very personal data'' regardless of whether it is anonymized or not.
The authors of~\cite{privacy_policies} propose a framework for managing and enforcing privacy policies in the context of data collection, as well as consent regulations, such as GDPR. The information model used in policies includes devices, entities (agents or organizations), data items, and purposes of use. Semantics of the policy elements can be expressed using subsumption of organizations and a partial order of attributes, which represents data item composition (e.g. street name as part of an address attribute). Each policy is defined as a set of rules governing:
\begin{itemize}
\item what attribute can be accessed, by which entity, under what condition (Data Communication Rule),
\item for what purposes it can be used, and how long it can be retained (Data Usage Rule), and
\item what are the rules for transferring the data to other entities, each governed by separate Data Communication and Usage rules.
\end{itemize}
Conditions within Data Communication Rules are defined using a formal, logics-based language, including negation and conjunction of predicates. Its relative simplicity brings the possibility of formal verification and provability of policies.
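To illustrate how such rules might be evaluated, the following sketch encodes one hypothetical Data Communication Rule; all entity names, the attribute partial order, and the organizational hierarchy are invented for illustration:

```python
# Toy Data Communication Rule with attribute composition and organisation
# subsumption; every name and hierarchy below is invented for illustration.

PART_OF = {"street": "address", "address": "profile"}   # attribute partial order
SUB_ORG = {"city-clinic": "health-dept"}                # organisational hierarchy

def generalises(child, parent, table):
    """True if `child` equals `parent` or lies below it in the hierarchy."""
    while child is not None:
        if child == parent:
            return True
        child = table.get(child)
    return False

def communication_rule(entity, attribute, purpose):
    # Permit the health department (and its sub-organisations) to receive the
    # address attribute (and its parts), except for the purpose of marketing.
    return (generalises(entity, "health-dept", SUB_ORG)
            and generalises(attribute, "address", PART_OF)
            and purpose != "marketing")

print(communication_rule("city-clinic", "street", "research"))   # -> True
print(communication_rule("city-clinic", "street", "marketing"))  # -> False
```

Note that organisation subsumption and the partial order over attributes reduce to the same hierarchy walk, which is what lets the single rule cover \texttt{street} as a part of \texttt{address}, and \texttt{city-clinic} as a sub-organisation of the health department.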
The framework, however, offers limited possibilities to express semantics and relationships between the data attributes, which could lead to issues when working with more complex domains and policies. Additionally, given the scope of the framework, covering data collection, usage, and transfer, should the policies be fully enforced, the solution would need to be applied throughout the process of data collection, storage, and handling. This in turn may prove hard to implement in practice, given the wide variety of systems used by the data controllers and processors.
The Personalized Privacy Assistant Project\footnote{https://www.privacyassistant.org/}, by the Carnegie Mellon University, seeks to provide individuals with tools enabling them to control how their data is collected and processed. Related publications such as~\cite{privacy_assistant_mobile1} and~\cite{privacy_assistant_mobile2} describe a mobile solution monitoring the device permissions (e.g. location, camera access) granted to various applications installed on the user's smartphone. It uses machine learning algorithms for clustering and classification, to group these programs into categories based on their functionality profile and purpose of data collection, finally assisting the user in making decisions about their privacy preferences. Additionally, to make configuration of preferences easier, the tool includes a number of privacy profiles and attempts to semi-automatically assign a user to one of them, based on answers to a simple survey. Compared to our research goals, the solution limits the scope of data under control to only the permissions recognized by the Android operating system and, therefore, does not address scenarios where the number of data items, or resources, is large or ever-changing. In a similar way, the set of applications installed on a mobile device does not change as often as the potential consumers of user data do, in IoT and personal sensing scenarios.
Recent results of the project, described in~\cite{privacy_assistant_iot}, expand the solution to cover use cases dealing with a proliferation of IoT devices around the individuals. Here the authors address the problem that a typical person is continuously monitored by various devices, collecting personal information, and has little knowledge or control over the purpose and practices of data processing. They propose a distributed system, in which the personal assistant, residing on the user's smartphone, interacts with registries of surrounding IoT devices and uses its privacy preference policies to control what kind of consent it should give to device owners, i.e. data controllers. Finally, for certain types of more private data, external Policy Enforcement Points can be deployed that control what information can be provided to each data collector, depending on the privacy preferences of the data subject. The approach is sound, but tackles a somewhat different problem than the one we attempt to solve -- as in the case of~\cite{privacy_policies}, it deals primarily with user consent to collect and process data originating from devices owned and operated by third parties. In our context, we are more concerned with data generated by user-owned devices.
\subsection{Ontologies in Access Control} \label{onto_access_control}
In this context, let us note that knowledge representation and automatic reasoning, based on the structure and semantics of data, are dealt with by ontology engineering. Over the years, it has developed mature methods for formally representing concepts and their relationships. Here, an ontology is understood as a specification of a vocabulary for a domain, including classes of objects, relations, functions, and other concepts~\cite{GRUBER1993199}. Ontology-based models have been successfully applied in various areas, e.g. for genome modeling~\cite{gene_ontology}, in healthcare (the SAPPHIRE project;~\cite{semantic_web}), or in the Internet of Things~\cite{interiot_onto}.
Overall, as shown in~\cite{ict,priebe,ferrini2009supporting}, by introducing semantic extensions to an ABAC system, it is possible to:
\begin{itemize}
\item Define the structure of \textit{Subjects} and \textit{Resources} to closely model actual organizations and domains. Semantics also provides a consistent way of implementing RBAC and hierarchical resources.
\item Represent additional relationships and reason over the model, to uncover implicit knowledge, to be used for checking request consistency, applicability of rules, and making decisions.
\item Define mapping ontologies, making it straightforward to employ an ontological model of the domain that is reusable in other ways than just authorization.
\item Flexibly and efficiently deal with heterogeneity, by utilizing ontology alignment and mapping.
\item Define additional attributes that are automatically inferred from the information contained in ontologies.
\item Delegate permissions from one \textit{Subject} to others in a hierarchy, by utilizing property transitivity.
\item Infer conflicts between roles, by defining disjointness axioms in an ontology (thus, satisfying Separation of Duty). By defining rules, it is possible to tie the role assignment to dynamic conditions, such as time or the \textit{Action} being performed, to handle also Dynamic Separation of Duty.
\end{itemize}
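As a small illustration of the role-related points above, the sketch below infers implicit role membership by following subsumption links in a role hierarchy. The role names and the dictionary-encoded hierarchy are hypothetical stand-ins for an ontology-backed model with a reasoner.

```python
# Minimal sketch of inferring implicit role membership from a role
# hierarchy via transitive subsumption, one of the semantic-ABAC
# benefits listed above. All role names are illustrative assumptions.

SUPER_ROLE = {  # role -> its direct parent role
    "NightShiftNurse": "Nurse",
    "Nurse": "MedicalStaff",
    "Doctor": "MedicalStaff",
}

def ancestors(role):
    """All roles that (transitively) subsume the given role."""
    result = []
    while role in SUPER_ROLE:
        role = SUPER_ROLE[role]
        result.append(role)
    return result

def has_role(assigned, required):
    """True if the assigned role is, or is subsumed by, the required one."""
    return required == assigned or required in ancestors(assigned)

print(has_role("NightShiftNurse", "MedicalStaff"))  # True (inferred)
print(has_role("Doctor", "Nurse"))                  # False
```

In an OWL setting, the same inference falls out of subclass axioms; disjointness axioms between sibling roles would additionally let the reasoner detect Separation-of-Duty conflicts.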
Another example of combining semantics and access control is~\cite{alcalde2007towards}, where the authors propose SenTry -- a language and framework for personal privacy control. The solution is based on an OWL ontology modeling policies, and on Semantic Web Rule Language (SWRL\footnote{\url{http://www.w3.org/Submission/2004/SUBM-SWRL-20040521/}}) rules for context-specific predicates used in decision making. Specifically, a semantic reasoner evaluates applicable predicates, grouped into filter, static, and dynamic categories. This solution implements the ABAC model, but dismisses the de facto standard of access control -- XACML. By building the solution completely from the ground up, it misses the opportunity to benefit from the large number of existing (and, sometimes, very mature) tools built around XACML, dealing with rule conflict resolution, request processing, geospatial functions, etc.
Another solution combining XACML with semantics is reported in~\cite{CHINGHSU201333}. Here, the LAPAR engine uses XML transformations (implemented as XSLT templates) to convert XACML policies to SWRL rules, and further transforms OWL ontologies and SWRL rules into Jess\footnote{\url{https://www.jessrules.com/}} inference engine statements. The proposed system reasons over the combined knowledge and computes authorization decisions. While the presented results concern access to documents in a university, it is not clear (and somewhat doubtful) whether the solution is capable of transforming the entire grammars of XACML, OWL, and SWRL into Jess rules solely by processing their XML representations. In the absence of other use cases proving the concept for other domains, we are not convinced that the approach is applicable within the context of privacy control.
Summarizing, while the ABAC approach forms the best base for authorization systems in large-scale IoT applications, it lacks the flexibility and meta-data modeling capabilities necessary in dynamic, heterogeneous environments. Combining ABAC with semantic reasoning on ontological knowledge bases addresses these shortcomings. Finally, extending an established standard like XACML, instead of developing a purely semantic solution, brings numerous benefits, including the possibility to mix ontological and traditional ABAC-based rules in the same policy set. Let us now pursue this line of reasoning further.
\section{Ontologies for privacy management in Smart City} \label{solution}
\subsection{Semantic XACML}
In this context, in~\cite{ict,jms}, we have introduced a semantics-driven implementation of the \textit{PIP}, thus defining the Semantic XACML~(SXACML\footnote{\url{https://github.com/mdrozdo/SXACML}}) approach. The complete solution extends the XACML architecture in the following ways:
\begin{itemize}
\item In addition to managing XACML policies, the \textit{PAP} module has been complemented with means of administering the ontologies used in the system. It includes a graphical front-end allowing one to define class mappings, expressions, and instances that are then added to the ontology and used during policy processing.
\item The \textit{PDP} loads an additional resource finder module that handles multi-resource request scenarios, in which the access request does not specify a concrete resource, but rather a category that needs to be resolved to a set of individuals. The functionality of the semantic resource finder is depicted in Algorithm~\ref{alg:find_resources} and also handles resource class hierarchies, i.e. it can traverse an entire class-subclass structure defined in the ontology. Note that, in this scenario, the pure-XACML way of specifying the hierarchy relationships between resources, in policies, is very complex (see the XACML Hierarchical Resource Profile\footnote{\url{http://docs.oasis-open.org/xacml/3.0/rbac/v1.0/xacml-3.0-rbac-v1.0.html}}).
\item A semantic \textit{PIP} module has been implemented, to enrich the set of attributes provided in the request with new information retrieved from the ontology. Thanks to the use of a semantic reasoner, the attribute values need not be explicitly defined in the knowledge base, but can also be inferred from other known facts. Additionally, we have introduced special attributes denoting the class identifier of each XACML attribute category (subject, resource, action, environment) that can be used in the policies for rules involving the type of a resource, or the role of the subject. As in the case of other attribute values, such classification of request categories can be deduced by means of automatic reasoning. The procedure of retrieving attribute values (labeled in Figure~\ref{fig:sxacml} as ``Find attribute value'') is performed according to Algorithm~\ref{alg:find_attribute}.
\end{itemize}
\begin{algorithm}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{ URI of resource class $class$}
\Output{set of permission decisions}
\BlankLine
$O_{d}$ $\leftarrow$ load domain ontology\;
$O_{m}$ $\leftarrow$ load mapping ontology\;
$O_{r}$ $\leftarrow$ new ontology \tcp*{temp request ontology}
$O_{r}$ imports \{$O_{d}$, $O_{m}$\}\;
run semantic reasoning on $O_{r}$\;
\BlankLine
$results$ $\leftarrow$ empty set of permission decisions\;
$sol$ $\leftarrow$ query ontology for instances of $class$ \tcp*{takes into account entire subclass hierarchy}
\ForEach{resource individual $I_{r}$ in $sol$}{
$decision$ $\leftarrow$ evaluate policies for $I_{r}$\;
add $decision$ to $results$\;
}
\KwRet{$results$}\;
\BlankLine
\caption{Evaluation for multiple resource class instances}\label{alg:find_resources}
\end{algorithm}
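A runnable approximation of Algorithm~\ref{alg:find_resources} may be sketched as follows, with the ontology reduced to plain dictionaries (subclass links and class membership of individuals) and the per-resource policy evaluation stubbed out. A real SXACML deployment delegates both to an OWL reasoner and the \textit{PDP}; the class and individual names below are illustrative only.

```python
# Dictionary-based approximation of the multi-resource evaluation:
# SUBCLASS encodes the class hierarchy, INSTANCES the class of each
# individual; evaluate_policies() stands in for the PDP call.

SUBCLASS = {"Distance": "TrainingMetric", "StepCount": "TrainingMetric"}
INSTANCES = {"obs1": "Distance", "obs2": "StepCount", "obs3": "HeartRate"}

def instances_of(cls):
    """Individuals whose class is `cls` or any of its subclasses."""
    def is_sub(c):
        while c is not None:
            if c == cls:
                return True
            c = SUBCLASS.get(c)
        return False
    return [ind for ind, c in INSTANCES.items() if is_sub(c)]

def evaluate_policies(individual):
    return "Permit"  # placeholder for the per-resource PDP evaluation

def find_resources(cls):
    """One decision per matching resource individual."""
    return {ind: evaluate_policies(ind) for ind in instances_of(cls)}

print(find_resources("TrainingMetric"))  # {'obs1': 'Permit', 'obs2': 'Permit'}
```

Note how a request naming only the `TrainingMetric` category fans out into one decision per matching individual, which is exactly the behavior the semantic resource finder provides.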
\begin{algorithm}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{evaluation context $ctx$, id of attribute to find $attrId$}
\Output{bag of values of attribute}
\BlankLine
$O_{d}$ $\leftarrow$ load domain ontology\;
$O_{m}$ $\leftarrow$ load mapping ontology\;
$O_{r}$ $\leftarrow$ new ontology \tcp*{temp request ontology}
$O_{r}$ imports \{$O_{d}$, $O_{m}$\}\;
\ForEach{category from $ctx$}{
$I_{c}$ $\leftarrow$ new OWL individual\;
\ForEach{attribute in category}{
add property assertion to $I_{c}$\;
}
add $I_{c}$ to $O_{r}$\;
}
run semantic reasoning on $O_{r}$\;
\BlankLine
$result$ $\leftarrow$ empty bag of attribute values\;
$sol$ $\leftarrow$ query ontology for attribute value\;
\ForEach{$r$ in $sol$}{
$val$ $\leftarrow$ convert $r$ to an XACML attribute value\;
add $val$ to $result$\;
}
\KwRet{$result$}\;
\BlankLine
\caption{Finding attribute values}\label{alg:find_attribute}
\end{algorithm}
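Analogously, the attribute-finding procedure of Algorithm~\ref{alg:find_attribute} can be approximated in a few lines: when an attribute is absent from the request context, the \textit{PIP} derives it by chaining facts from the domain ontology. The `sensitivity` attribute and the axiom that geospatial data is highly sensitive are illustrative assumptions, not part of the actual ontologies.

```python
# When an attribute is missing from the request, it is inferred by
# walking the class hierarchy; a real deployment would run an OWL
# reasoner instead of this hand-rolled lookup.

CONTEXT = {"resource_class": "Route"}            # attributes in the request
SUBCLASS = {"Route": "GeospatialMeasurement"}    # domain-ontology hierarchy
SENSITIVITY = {"GeospatialMeasurement": "High"}  # inferred-attribute axiom

def find_attribute(ctx, attr_id):
    """Return a bag (list) of values for the requested attribute."""
    if attr_id in ctx:                  # explicitly asserted in the request
        return [ctx[attr_id]]
    if attr_id == "sensitivity":        # otherwise, infer from the class
        cls = ctx.get("resource_class")
        while cls is not None:
            if cls in SENSITIVITY:
                return [SENSITIVITY[cls]]
            cls = SUBCLASS.get(cls)
    return []                           # empty bag: nothing could be found

print(find_attribute(CONTEXT, "sensitivity"))  # ['High']
```

The request never states that a route is sensitive; the value materializes only because the hierarchy links `Route` to `GeospatialMeasurement`, mirroring how the reasoner uncovers implicit knowledge.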
The advantages of the SXACML approach include, but are not limited to:
\begin{enumerate}
\item Simplified policies -- information common to multiple policies can be ``extracted into the ontology'', resulting in the policies being represented in a ``more compact'' form.
\item Better support for RBAC -- role hierarchies can be modeled as ontology classes, user membership in a role can be inferred from attributes, and Separation of Duty can be verified by semantic reasoning.
\item More flexibility in defining relationships between concepts -- an attribute value may be inferred from a complex graph of linked data, utilizing properties of \textit{Subject}, \textit{Resource}, \textit{Action}, and \textit{Environment}.
\item Improved interoperability, by semantic mapping of disparate concepts in requests and policies -- allows decisions even in the case of different vocabularies.
\end{enumerate}
In prior work, we have used the semantic \textit{PIP} only as a provider of attribute values to policies (specified in XACML). The goal, in considered use cases, was to simplify XACML policies, and move domain models to OWL, assuming that the policy administrator has knowledge of XACML but not of OWL. We have also employed the OntoPlay\footnote{\url{https://github.com/mdrozdo/OntoPlay}} ontology editor~\cite{ontoplay, aciids} to assist the administrator in managing ontological concepts, such as \textit{Resource} categories, or \textit{Subject} roles. OntoPlay has proven to be a valuable tool enabling users not accustomed to semantic technologies to create complex class expressions and individual definitions, with applications not only in access control, but also in querying grid computational nodes at the University of Aizu, Japan~\cite{tsunami}. Furthermore, it has been utilized in student projects during the Semantic Technologies seminar at the Warsaw University of Technology, as well as in several master's theses defended at that institution, some of them resulting in publications, e.g. \cite{AIP_Chmiel, AIP_Szczekutek}.
However, in participatory sensing and personal privacy control, there is no system administrator -- the solution must be simple enough that the users are able to easily manage their own preferences and policies. We have, therefore, considered how to draw the boundary between OWL and XACML, taking into account that managing XACML policies manually is far beyond the capabilities of a casual user. Hence, we moved most responsibilities for the decision to the semantic part, by defining the \texttt{PermittedRequest} class as a subclass of \texttt{Request} and providing the user with an OntoPlay-based interface that lets them define the relevant class expression in a point\&click manner, without knowledge of the ontology, and remaining fully agnostic of the XACML back-end.
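The fragment below sketches, in plain Python, what such a user-defined class expression amounts to. The attribute names and the Health Center condition are illustrative assumptions standing in for an OntoPlay-generated OWL class expression evaluated by a reasoner.

```python
# A request individual is "classified" by testing it against the
# user-defined class expression; membership in PermittedRequest is
# what the XACML side ultimately checks.

def health_center_permission(req):
    """Class expression: requests by Health Center subjects for
    aggregated monthly distance metrics (illustrative)."""
    return (req.get("subject_class") == "HealthCenter"
            and req.get("resource_class") == "AggregatedDistance"
            and req.get("aggregation") == "monthly")

def classify(req):
    """Classes the reasoner would assert for the request individual."""
    classes = ["Request"]
    if health_center_permission(req):
        classes += ["HealthCenterPermission", "PermittedRequest"]
    return classes

req = {"subject_class": "HealthCenter",
       "resource_class": "AggregatedDistance",
       "aggregation": "monthly"}
print(classify(req))  # ['Request', 'HealthCenterPermission', 'PermittedRequest']
```

The user only ever edits the expression behind `health_center_permission`; the surrounding classification machinery, and the XACML rule that consumes its result, stay hidden from them.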
On the other hand, in the UC2 use case (i.e. the police investigation), access to the personal data should be rigorously controlled, considering legal conditions and obligations. Here, while it is possible to realize the ABAC model using semantics, it would introduce unnecessary complexity for the user. Moreover, implementing OWL concepts capturing policy sets, combining of multiple policies, obligations, etc., would require changing the policy processing engine. However, in comparison to subjective personal preferences, legal rules are likely to be relatively static, long-lived, and independent of the user. Therefore, legal policies can be implemented as standard XACML policies (by an access control expert).
In summary, we have separated (1) the subjective, dynamic personal privacy preferences, defined in OWL, and (2) the potentially complex, static legal rules, defined as XACML policies and policy sets. In the latter case, the seam between XACML and OWL follows our earlier research -- the semantic \textit{PIP} infers the values of certain attributes and provides them to the \textit{PDP}, while the remaining parts of policy processing are performed by the \textit{PDP} component. The final solution uses a policy combining algorithm to reconcile the privacy preferences with hard legal rules in the same policy set, giving higher priority to the regulatory requirements.
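The prioritization can be sketched with an ordered combining algorithm, loosely modeled on XACML's first-applicable: legal policies are evaluated first, so any decision they produce overrides the user's preference. Both policy bodies below are illustrative assumptions, not the actual deployed policies.

```python
# Legal rules take precedence simply by appearing earlier in the
# ordered policy list; preferences are consulted only if no legal
# rule applies.

def legal_policy(req):
    if req.get("purpose") == "police-investigation":
        return "Permit"            # legal obligation to disclose
    return "NotApplicable"         # defer to the next policy

def preference_policy(req):
    return "Permit" if req.get("subject") == "HealthCenter" else "Deny"

def first_applicable(req, policies):
    for policy in policies:
        decision = policy(req)
        if decision != "NotApplicable":
            return decision
    return "NotApplicable"

req = {"subject": "Police", "purpose": "police-investigation"}
print(first_applicable(req, [legal_policy, preference_policy]))  # Permit
```

Here a police request is permitted even though the user's preferences would deny it, which is exactly the intended precedence of regulatory requirements over subjective preferences.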
Note that we only consider and describe the context of the access request permission and, thus, focus on the \textit{PAP}, \textit{PDP}, and \textit{PIP}. Therefore, we purposefully ignore collecting and storing sensor/activity tracker data. We assume that, in a working system, another layer, responsible for data collection, would be instantiated. Examples of modules realizing such functionality can be found in~\cite{clarke_jasist, personal_data_vaults,fitness_integration}. Likewise, as mentioned earlier, we omit issues related to data anonymization. We assume that data requiring anonymization has already been processed, using techniques discussed in Section~\ref{state_of_art}. Nevertheless, these simplifications do not influence the way that the proposed approach works. Finally, we leave out the details of the \textit{PEP} implementation, which would need to be tightly related to the way of storing the data, as well as the means of requesting the information by third parties. In the prototype implementation, we have used the WSO2 Identity Server\footnote{\url{https://wso2.com/identity-and-access-management}} as the gateway and \textit{PEP}.
Let us now turn our attention to another aspect of adapting SXACML to the Smart City use cases. Obviously, the crucial aspect of applying the system to a new domain is the selection or design of ontologies. While the term ontology has many definitions, here we understand it as formal representation of pertinent (application-specific) aspects of knowledge about a domain. Moreover, we have decided to use the Web Ontology Language (OWL\footnote{\url{https://www.w3.org/TR/owl2-overview/}}; \cite{owl_primer}) to formally represent ontologies.
One of the key features of OWL ontologies is that they can be reused by other ontologies, composed, and adapted for more specific purposes. Hence, it is important to first search for existing resources in ontology catalogues, such as the Linked Open Vocabularies\footnote{\url{http://lov.okfn.org/dataset/lov}}. However, we were unable to find a complete ontology covering the privacy management of data acquired from sensing devices. Therefore, we have split the domain of interest into several parts, which are then combined into the final representation.
To this effect, we discuss the following components of the ontological structure, used in the proposed solution:
\begin{itemize}
\item Access Control ontology -- generic representation of ABAC concepts,
\item Internet of Things ontology and Fitness Tracking ontology -- jointly representing \textit{Resources},
\item Privacy ontology -- providing additional privacy-related concepts.
\end{itemize}
Note that, following principles of ontology engineering, we have been re-using existing ontologies whenever possible, while modifying them only when necessary.
\subsection{Access Control ontology}
Let us start from the ontology describing concepts related to ABAC and XACML. This ontology has to be generic and domain independent. In~\cite{jms,ict}, this purpose was fulfilled by a simplistic Request Ontology. Here, we introduce a more complete Access Control Ontology (ACO), which is an ontological representation of the XACML request elements, but also provides core concepts related to data access. Basic elements of ACO are taken directly from the ABAC model, and reflect the same attribute categories as in XACML:
\begin{itemize}
\item \texttt{Subject}
\item \texttt{Resource}
\item \texttt{Action}
\item \texttt{Environment}
\end{itemize}
\noindent
For the \textit{Subject} part of the ABAC model, we needed an ontology covering relationships between the different organizational entities that can be authorized to access personal data. Hence, we have decided to directly use the W3C Organization Ontology\footnote{\url{http://www.w3.org/TR/2014/REC-vocab-org-20140116/}}. The XACML \textit{Subject} has been mapped to the \texttt{foaf:Agent} class, which may represent a person, group, or organization. The ontology also contains concepts and relations needed to define complex organizational structures, and membership in them. Finally, we have reused the \texttt{org:Role} class, to be used in policies that make decisions based on the roles of the \textit{Subject} (in the RBAC approach). Moreover, we have defined classes related to the \textit{Resource}:
\begin{itemize}
\item \texttt{Sensitivity} -- capturing how personal the information is, and under what conditions it may be disclosed.
\item \texttt{Confidentiality} -- describing the level of legal restrictions associated with the information.
\item \texttt{Owner} -- specifying the entity (person or organization) owning the \textit{Resource} or being the main object described by the \textit{Resource}.
\end{itemize}
Another element is the \texttt{Trust} class, describing the level of confidence of the resource owner in a given \textit{Subject}. While trust modeling is an interesting topic on its own, here, it is only a class, with subclasses corresponding to different degrees of confidence. Obviously, if needed, this class can be replaced by a more comprehensive ontology (fragment).
Finally, the ontology includes the \texttt{PurposeOfUse} class, describing the reason for requesting the \textit{Action} (how the obtained \textit{Resource} will be used).
Figure~\ref{fig:access_control} summarizes the Access Control Ontology.
\begin{figure}
\includegraphics[width=1\columnwidth]{images/accesscontrol.pdf}
\caption{Access Control Ontology}\label{fig:access_control}
\end{figure}
\subsection{Domain ontologies} \label{sec:onto_iot}
First of all, considering that the guiding use cases deal with information collected using IoT devices, a ``sensor ontology'' is needed. This domain is well covered by the W3C Semantic Sensor Network Ontology (SSN\footnote{\url{https://www.w3.org/TR/2017/WD-vocab-ssn-20170105/}}), which contains vocabulary describing sensors, observations, and actuators, as well as observed properties, features of interest, etc.
In SSN, the \texttt{sosa:Observation} class represents a single act of measurement (of a \texttt{sosa:ObservableProperty}, e.g. heart rate) of a certain ``feature of interest'' (\path{sosa:FeatureOfInterest}, e.g. a specific person). It is described with (a) properties related to the sensor (\texttt{sosa:Sensor}, e.g. the heart rate monitor), (b) the feature/object that was measured, (c) the measurement procedure (\texttt{sosa:Procedure}, e.g. the method of measuring heart rate), and (d) detailed information about the result (\texttt{sosa:Result}). When accessing observations, there are no special restrictions on the kinds of actions that can be performed (create, read, update, or delete). We have extended the SSN ontology with an \texttt{AnonymizationProcedure} class (a subclass of \texttt{sosa:Procedure}) that describes the method used for removing personal information.
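The SOSA observation pattern described above can be made tangible with a small, illustrative encoding, in which each part of \texttt{sosa:Observation} becomes a field of a plain record; the field values are made-up sample data, not drawn from any real dataset.

```python
# Illustrative flattening of the SSN/SOSA observation pattern; a real
# knowledge base would store these as linked OWL individuals.
from dataclasses import dataclass

@dataclass
class Observation:
    observed_property: str    # sosa:ObservableProperty, e.g. heart rate
    feature_of_interest: str  # sosa:FeatureOfInterest, e.g. a person
    made_by_sensor: str       # sosa:Sensor that produced the result
    used_procedure: str       # sosa:Procedure (possibly an AnonymizationProcedure)
    result: float             # sosa:Result, simplified to a number

obs = Observation(observed_property="HeartRate",
                  feature_of_interest="Sally",
                  made_by_sensor="WristMonitor",
                  used_procedure="OpticalMeasurement",
                  result=72.0)
print(obs.observed_property, obs.result)  # HeartRate 72.0
```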
To represent privacy and access control in IoT scenarios, we needed to extend and adapt the SSN ontology to deal with accessing sensor generated observations, hence we have defined a mapping of \textit{Resource} and \textit{Action} from the ACO ontology to appropriate classes in SSN, as described in detail in Section~\ref{sec:mapping_onto}.
Second, to provide the needed vocabulary, we have investigated several ontologies representing training activities and fitness tracking data. Here, the authors of~\cite{dragoni2018semantic} propose ontologies and a rule-based reasoner for supporting people in following a healthy lifestyle.
The presented research focuses on eating habits and omits fitness activities, as well as data tracking, and thus is not a good fit for our needs.
In~\cite{dewabharata2013activity}, a framework for inferring a person's physical state and activity, based on contextual information obtained from the sensor network surrounding the user, is proposed. Here, the \textit{Context Modelling Ontology} transforms external information into knowledge about user activity. Unfortunately, the vocabulary is rather high-level and is of limited use for our needs.
The knowledge base in the Physical Activity, Health and Fitness Knowledge Model~\cite{icfoods_physicalactivity} contains comprehensive vocabulary of physical activities. Furthermore, it includes concepts such as \textit{activity frequency}, \textit{activity intensity}, \textit{activity duration} and \textit{activity condition} (in terms of natural, social and legal environment). While some elements of this ontology, especially sports related, or describing workout intensity, could be reused, it is much too broad for our current needs.
As a result of not finding an appropriate solution, we have decided to create a fitness tracking ontology, containing the most important elements relevant to collecting information about workouts and physical activities. To that effect, our Fitness Ontology imports the SSN ontology and extends it with the following elements (also depicted in Figures~\ref{fig:fitness_training} and~\ref{fig:fitness_location}).
\begin{itemize}
\item \texttt{Training} and its subclasses represent various workouts, e.g. running or cycling. This element is meant to be extended with a larger set of activities (perhaps using concepts from the Physical Activity Knowledge Base), depending on the application requirements.
\item Several subclasses of the \texttt{sosa:Observation} class, representing measurements related to training (\texttt{BloodPressure}, \texttt{HeartRate}) or general physical metrics (\texttt{Height}, \texttt{Weight}).
\item \texttt{TrainingMetric} representing workout attributes, such as calories burned, distance, or step count.
\item \texttt{GeospatialMeasurement}, with subclasses \texttt{Location} and \texttt{Route}, capturing the training location.
\end{itemize}
\begin{figure}[htpb]
\includegraphics[width=1\columnwidth]{images/fitnessTraining.pdf}
\caption{Fitness Tracking Ontology -- classes related to training types}\label{fig:fitness_training}
\end{figure}
\begin{figure}[htpb]
\includegraphics[width=1\columnwidth]{images/fitnessLocation.pdf}
\caption{Fitness Tracking Ontology -- classes related to routes and locations}\label{fig:fitness_location}
\end{figure}
The SSN and Fitness ontologies represent data that is to be subject to access control, i.e. the XACML \textit{Resource} category. The Access Control Ontology allows the \textit{Subject} category to be described as a person, machine agent, or an organization. Let us now consider how to describe the \textit{Action} category, taking into account attributes such as: purpose of use, retention policy, etc.
When it comes to privacy ontologies, the authors of~\cite{ppo} have introduced a lightweight ontology for privacy preferences, in the context of the Semantic Web and linked data. Moreover, the creators of the Semantic Cyber Information Modeling Initiative (SCIMI\footnote{\url{http://privacyontology.org}}) have proposed a Domain Specific Language for describing a privacy meta-model. However, since 2015, there has been no recognizable progress on this work.
The Platform for Privacy Preferences (P3P\footnote{\url{https://www.w3.org/TR/P3P/}}) is a specification, and an ontology, that allows web site authors to describe privacy practices in a machine-readable format, enabling browsers to make semi-autonomous privacy-related decisions. Even though the use case of P3P is different, the developed ontology contains concepts useful for modeling privacy preferences in participatory sensing, such as:
\begin{itemize}
\item Classes capturing information describing web site visitors: name, email address, IP address, etc.
\item Categories to classify information about a person: demographic, financial, health, location, etc.
\item Purpose of use categories, such as: administration, contact, telemarketing, etc.
\item Retention policies for the collected data.
\end{itemize}
The P3P specification was retired in 2018. However, it remains a good base for a privacy preference ontology.
In~\cite{hecker2009privacy}, an ontology describing various aspects of privacy and their interrelationships is presented. The main goal was to categorize data privacy in specific situations (e.g. medical data of a patient admitted to a hospital). The rating is based on aggregating atomic scores, such as: \textit{Data Quality}, \textit{Security}, \textit{Data Subject's Rights}, \textit{Legitimate Grounds of Processing}, \textit{Transparency}, \textit{Consent}, and \textit{Anonymity}. While this approach is quite interesting, it is not applicable to our use case, as it does not take into account the subjectivity of privacy. Specifically, even if policies and procedures are the same, some people may hesitate to expose personal data, while others do so without second thoughts.
The Privacy Preference Ontology (PPO), described in~\cite{sacco2011privacy}, is aimed at providing vocabulary and means of specifying privacy policies using RDF and SPARQL queries. The example uses FOAF to represent resources under control. Unfortunately, it captures only generic concepts (e.g. \texttt{PrivacyPreference}, \texttt{AccessSpace}, \texttt{Resource}, \texttt{Condition} etc.), which are already defined in our Access Control Ontology.
Finally, the authors of PrOnto~\cite{palmirani2018pronto} decided to model privacy and data protection concepts, in the context of GDPR, to enable legal reasoning and compliance verification. Therefore, it does not contain elements describing privacy preferences or data protection policies, but focuses on legal rules, rights, obligations, purpose of use, etc. Moreover, at the time of writing, no complete ontology could be found. Therefore, it was hard to fully evaluate its suitability for our needs.
Eventually, we have decided that the P3P ontology can best serve as a base; however, due to its size, for this report we have used only the elements relevant to our requirements, namely:
\begin{itemize}
\item The hierarchy of subclasses of the \texttt{Data} class, representing various data categories.
\item The \texttt{Purpose} class as a representation of purpose of collection or use.
\item The \texttt{Retention} class.
\end{itemize}
\subsection{Mapping ontology}\label{sec:mapping_onto}
Having described the ontological components representing the domain under consideration, let us consider in more detail how they relate to each other. In order for the \textit{PIP} to access the knowledge base, it must first be consolidated in what we call the Mapping Ontology. It is built of several predefined mapping axioms, complemented with class expressions and/or individuals defined by the user as part of configuring their privacy preferences.
\begin{figure}[htpb]
\includegraphics[width=1\columnwidth]{images/mapping.pdf}
\caption{Imports hierarchy of ontologies}\label{diag:mapping}
\end{figure}
Figure~\ref{diag:mapping} depicts the high-level import hierarchy of the ontologies introduced in the previous sections. Specifically, the predefined mappings state that the \texttt{aco:Resource} class is a superclass of: \texttt{fit:Training}, \texttt{fit:TrainingMetric}, and \texttt{fit:GeospatialMeasurement}, which reflects what information could be requested.
The mapping also joins the Fitness and Personal Privacy ontologies -- the \texttt{fit:HealthMeasurement} and \texttt{fit:GeospatialMeasurement} classes are marked as subclasses of \texttt{ppo:Health-data-category} and \texttt{ppo:Location-data-category} respectively. Moreover, we have added a custom category \texttt{ppo:FitnessData}, as a subclass of \texttt{ppo:OtherCategories}, that became a superclass for the \texttt{fit:Training} and \texttt{fit:TrainingMetric} classes. Finally, the \texttt{ppo:Purpose} class has been used as the range of the \texttt{ppo:hasPurposeOfUse} attribute, describing the \texttt{aco:Action} class (representation of the XACML \textit{Action} category). Analogously, we have added the \texttt{p3p:Retention} class as an attribute of \textit{aco:Action} (\texttt{ppo:hasRetentionPolicy}).
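The mapping axioms above can be collapsed into a single subclass relation (prefixes abbreviated) to illustrate the effect of the merge: after consolidation, a fit:Route individual is also classified under ppo:Location-data-category, so privacy preferences expressed in P3P categories apply to fitness data. The dictionary encoding below is, again, a simplification of what the reasoner computes.

```python
# The merged subclass relation defined by the Mapping Ontology,
# collapsed into a single dictionary (class -> direct superclass).

SUBCLASS = {
    "fit:Route": "fit:GeospatialMeasurement",
    "fit:GeospatialMeasurement": "ppo:Location-data-category",
    "fit:HealthMeasurement": "ppo:Health-data-category",
    "fit:Training": "ppo:FitnessData",
    "ppo:FitnessData": "ppo:OtherCategories",
}

def classified_under(cls, target):
    """True if `cls` equals `target` or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == target:
            return True
        cls = SUBCLASS.get(cls)
    return False

print(classified_under("fit:Route", "ppo:Location-data-category"))  # True
```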
\section{Experimental Evaluation} \label{use_case}
With all elements in place, let us now illustrate how the proposed approach can be used in our two use case scenarios, introduced in Section~\ref{intro_use_case}:
\begin{itemize}
\item UC1: Health Center requesting aggregated (monthly) information about training metrics.
\item UC2: Police department requesting Sally's locations during a specific time period.
\end{itemize}
\subsection{Health center}
We start with UC1, where the Health Center requests access to information about Sally's training metrics. First, Sally has to define her privacy preferences. Here, she specifies a permission stating that the requester, belonging to the Health Center organization, may access aggregated monthly distance observations. This policy can be easily created using OntoPlay, as depicted in Figure~\ref{fig:uc1_ontoplay}. The result is a class expression describing a subclass of \texttt{PermittedRequest} called \texttt{HealthCenterPermission} that is subsequently added to the ontology.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{images/uc1_ontoplay.eps}
\caption{OntoPlay interface for Health Center permission}
\label{fig:uc1_ontoplay}
\end{center}
\end{figure}
Figure~\ref{fig:uc1_req} presents the XACML request for the \texttt{Read} action (line 20), on resources of the \texttt{TrainingMetric} class (line 15), made by the Health Center (line 9). Here, the semantic \textit{PIP} component adds a new individual of type \texttt{Request} to the temporary ontology.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{images/uc1_request_new_3.eps}
\caption{XACML request for UC1}
\label{fig:uc1_req}
\end{center}
\end{figure}
The request does not refer to a specific resource, but rather to a resource category -- the \textit{Resource} is described only by the \texttt{TrainingMetric} class, since the organization would like to collect as much relevant data as possible. Therefore, the initial step is to retrieve the appropriate resource individuals from the ontology. Considering the hierarchy of metrics shown in Figure~\ref{fig:fitness_training}, there exist several types of metrics: \texttt{StepCount}, \texttt{Distance}, and \texttt{AggregateMetrics}. Applying our semantic implementation of the XACML Hierarchical Resource Profile, the system translates this request into multiple decisions, based on inferring which individuals in the ontology are instances of the \texttt{TrainingMetric} class or its subclasses. The following evaluation steps are subsequently repeated for each resource.
The \textit{Subject} is specified with the class \texttt{HealthCentre}. Hence, an individual of that class is added to the request ontology and linked to the request individual. The request individual is also connected to the \textit{Resource}. Next, the semantic reasoner can infer whether the request individual satisfies all constraints specified by Sally, and classify it both as \texttt{HealthCenterPermission} and as \texttt{PermittedRequest}. Note that the XACML policy includes a condition on the request class id attribute. Therefore, the \textit{PIP} returns a collection of classes describing the request individual: \texttt{Request}, \texttt{PermittedRequest} and \texttt{HealthCenterPermission}. The rule evaluates to \textit{true} and the request is permitted by the \textit{PDP}.
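For concreteness, the classification flow described above can be sketched in a few lines of plain Python. This is our simplification for illustration only: the hand-coded predicate stands in for the OWL reasoner, and only the class names are taken from the ontology.

```python
# Illustration of the UC1 flow: the PIP classifies the request individual,
# and the PDP permits the request if it is inferred to be a PermittedRequest.
# A real deployment delegates classification to an OWL reasoner; here the
# HealthCenterPermission class expression is hand-coded as a predicate.

def classify_request(request):
    """Return the set of classes describing the request individual."""
    classes = {"Request"}
    resource = request["resource"]
    # Sally's permission: a Health Center subject may Read aggregated
    # monthly Distance observations.
    if (request["subject"] == "HealthCentre"
            and request["action"] == "Read"
            and resource["type"] == "AggregateMetric"
            and resource["aggregation"] == "Monthly"
            and resource["metric"] == "Distance"):
        classes |= {"HealthCenterPermission", "PermittedRequest"}
    return classes

request = {
    "subject": "HealthCentre",
    "action": "Read",
    "resource": {"type": "AggregateMetric",
                 "aggregation": "Monthly",
                 "metric": "Distance"},
}
decision = "Permit" if "PermittedRequest" in classify_request(request) else "Deny"
```

The returned class collection corresponds to what the semantic \textit{PIP} hands back to the \textit{PDP}; the rule's condition on the class id attribute then evaluates to \textit{true}.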
\subsection{Police department}
Next, let us consider the case of the Police Department, requesting location records from the night a crime took place (case UC2). Here, it is not up to the individual to define the rules of data access, as they represent an existing legal framework. In our, somewhat artificial, example we assume that the policy permits the Police to access information about the location of an individual as related to a committed crime. In the real world, such a request would need to be accompanied by a warrant, i.e., one specifying the event location and time. Such a warrant would need to be verified and digitally signed before being included in the data access request. Here, we omit the details of how a warrant issuer can secure the request and protect it from tampering. Nevertheless, let us stress that existing specifications, such as the XACML XML Digital Signature Profile\footnote{\url{http://docs.oasis-open.org/xacml/3.0/dsig/v1.0/xacml-3.0-dsig-v1.0.html}}, provide appropriate solutions. The example request, depicted in Figure~\ref{fig:uc2_req}, contains the following attribute values:
\begin{itemize}
\item The \texttt{Subject} is Police Department (line 9).
\item The \texttt{Resource} to be accessed is \texttt{Location} -- which should be understood not as a single, specific, position, but rather as all permitted locations (line 14).
\item The \texttt{Action} is Read (line 18).
\item \texttt{Environment} encloses attributes related to the crime event location and time (lines 22-29).
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{images/uc2_request_new_2.eps}
\caption{XACML request for UC2}
\label{fig:uc2_req}
\end{center}
\end{figure}
The associated policy is presented in Figure~\ref{fig:uc2_policy} (encoded in ALFA for brevity). It contains a number of conditions on different attributes. The \textit{Subject} is limited to the Police Department, and only \texttt{Read} actions are permitted. Moreover, the request is only permitted if the \textit{Resource} location attribute is within a one kilometer radius from the event location, and the position record time is in the period of one hour before and after the event. To apply geospatial comparison, the policy makes use of an extension to the XACML standard -- the Geospatial eXtensible Access Control Markup Language (GeoXACML\footnote{\url{https://www.opengeospatial.org/standards/geoxacml}}). Such spatiotemporal conditions would be hard to define in a typical OWL reasoner and would require specific geospatial extensions to the ontology language, as well as to the inference engine. The policy could also be expanded to feature conditions related to the purpose of use of the information, the warrant chain, etc. Additionally, although not illustrated in the listing, the policy is also part of a Policy Set together with other policies (e.g., the one used in UC1), configured with a combining algorithm which ensures that law enforcement regulations override any user-defined preferences.
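The spatiotemporal condition of this policy can be sketched in plain Python. This is our illustration only: the haversine radius test stands in for the GeoXACML geometry functions, and the coordinates and timestamps below are made up.

```python
import math
from datetime import datetime, timedelta

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def condition_holds(record_point, record_time, event_point, event_time,
                    radius_m=1000.0, window=timedelta(hours=1)):
    """Permit a record only if it lies within radius_m of the event
    and within +/- window of the event time (the policy's condition)."""
    close_enough = haversine_m(*record_point, *event_point) <= radius_m
    in_window = abs(record_time - event_time) <= window
    return close_enough and in_window

# Hypothetical data: a crime scene and two of Sally's location records.
event = ((52.2297, 21.0122), datetime(2019, 5, 17, 23, 30))
near = ((52.2310, 21.0150), datetime(2019, 5, 17, 23, 5))   # ~250 m, 25 min before
far = ((52.4064, 16.9252), datetime(2019, 5, 17, 23, 5))    # a different city
```

The \textit{PDP} would evaluate such a predicate once per \texttt{Location} individual, producing one entry of the multi-resource decision for each.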
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\columnwidth]{images/uc3_policy.eps}
\caption{Policy for UC2}
\label{fig:uc2_policy}
\end{center}
\end{figure}
In this case, the request defines the resource as an instance of the \texttt{Location} class and therefore the \textit{PDP}, again, needs to resolve it to a number of individual resources. The semantic reasoner is used by the resource finder to fetch concrete instances of the \texttt{Location} class from the ontology. While evaluating the policy for each of the locations, the \textit{PDP} does not encounter the \texttt{locationTime} and \texttt{locationPoint} attributes from the \texttt{Resource} category and therefore requests their values from the (semantic) \textit{PIP}. Having acquired the attribute values, the \textit{PDP} evaluates the policy condition by means of date-time XACML functions and the GeoXACML engine. The result is a multi-resource decision consisting of a number of individual responses, one for each location contained in the ontology.
\section{Concluding remarks}
In this paper we have considered how a semantically enriched Attribute Based Access Control system can be applied to (self-)management of user data privacy in Smart Cities. We have shown that practical application of semantic technologies brings important advantages for the development of flexible yet robust privacy-controlling environments. In this context, our main contributions are as follows.
\begin{itemize}
\item We have reviewed existing ontologies, covering different aspects of the domain of interest, including sensors, fitness tracking and personal privacy, finally composing well-established vocabularies into a complete ontology. This outlines the path that should be followed when systems similar to ours are developed for other domains.
\item On the basis of our earlier work, we have presented a more refined approach to combining XACML policies with semantic reasoning. Here, among others, we have taken into account the observation that certain rules may need to be more rigid than others. The proposed approach allows ``mixing and matching'' (depending on the specific circumstances of the developed system) XACML rules with attributes resulting from semantic reasoning. In other words, the boundary between XACML and semantics can be instantiated as needed.
\item We managed to separate the (fixed) ``rules of the system'', which are to be formulated by specialists, from user preferences. Here, each user can (``dynamically'') formulate her/his rules, representing a personal attitude towards privacy. User preferences are captured within the system, without the need to change the rules.
\item Expression of personal preferences does not require knowledge of semantic technologies. Rather, it is realized using OntoPlay, a novel interface to ontology-driven systems.
\item Moreover, the use of OntoPlay allows easy modification of the system ontology. After the ontology is modified, the changes automatically materialize in the user interface, without the need to change the system code.
\end{itemize}
One area that we have not included in this research, but intend to investigate in the future, is the specification of obligations, i.e., additional actions that should be performed by the PEP following the enforcement of the decision. The considered solution will fully support the default XACML obligation specification format, but taking advantage of the rich body of knowledge regarding semantic web services could improve the possibilities of describing mandatory data storage and processing regulations.
\bibliographystyle{IEEEtran}
\section*{Acknowledgements}
\vspace{-0.3cm}
We thank Zeyuan Allen-Zhu for valuable discussions and comments, and the Microsoft Research Technology Engineering team for setting up GPU machines.
Research was sponsored in part by DARPA No. W911NF-17-C-0099 and FA8750-19-2-1004, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026.
\begin{figure}[t]
\begin{minipage}{0.38\textwidth}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{fig/variance_approx.pdf}
\vspace{-0.6cm}
\caption{The value of Equation~\ref{eqn:analytic-sqrt-var}, Equation~\ref{eqn:variance_appro} and their difference (absolute difference). The x-axis is $\rho$ and the y-axis is the variance (log scale).}
\label{fig:var_approx}
\end{figure}
\end{minipage}
\;
\begin{minipage}{0.61\textwidth}
\begin{figure}[H]
\centering
\begin{tabular}[H]{ ccc }
\includegraphics[width=0.3\textwidth]{fig/simu_var/0_1_var.pdf} &
\includegraphics[width=0.3\textwidth]{fig/simu_var/0001_1_var.pdf} &
\includegraphics[width=0.3\textwidth]{fig/simu_var/001_1_var.pdf} \\
$\mu=0$ & $\mu=0.001$ & $\mu=0.01$ \\
\includegraphics[width=0.3\textwidth]{fig/simu_var/01_1_var.pdf} &
\includegraphics[width=0.3\textwidth]{fig/simu_var/1_1_var.pdf}&
\includegraphics[width=0.3\textwidth]{fig/simu_var/10_1_var.pdf} \\
$\mu=0.1$ & $\mu=1$ & $\mu=10$ \\
\end{tabular}
\vspace{-0.2cm}
\caption{The simulation of $\mathrm{Var}[\frac{1}{v_t}]$ and $\mathrm{Var}[\frac{c_t}{v_t}]$. The x-axis is iteration \# (from 5), the y-axis is the variance (log scale).}
\label{fig:vt_var_simulation}
\end{figure}
\end{minipage}
\vspace{-0.4cm}
\end{figure}
\section{Introduction}
\vspace{-0.1cm}
\begin{wrapfigure}{r}{0.35\textwidth}
\centering
\vspace{-0.8cm}
\includegraphics[width=0.35\textwidth]{fig/ppl_loss_legend_small.pdf}
\vspace{-0.6cm}
\caption{Training loss v.s. \# of iterations of Transformers on the De-En IWSLT'14 dataset. }
\vspace{-0.4cm}
\label{fig:eps2k}
\end{wrapfigure}
Fast and stable optimization algorithms are what generations of researchers have been pursuing~\citep{gauss1823theoria,cauchy1847methode}.
Remarkably, stochastic gradient-based optimization, such as stochastic gradient descent (SGD), has witnessed tremendous success in many fields of science and engineering despite its simplicity.
Recently, many efforts have been made
to accelerate optimization by applying an \textit{adaptive learning rate}.
In particular, Adagrad~\citep{duchi2011adaptive} and its variants, {\textit{e.g.}}, RMSprop~\citep{tieleman2012lecture}, Adam~\citep{kingma2014adam}, Adadelta~\citep{zeiler2012adadelta} and Nadam~\citep{dozat2016incorporating},
stand out due to their fast convergence, and have been considered as the optimizer of choice in many applications.
However, it has been observed that these optimization methods may converge to bad/suspicious local optima,
and have to resort to a warmup heuristic -- using a small learning rate in the first few epochs of training to mitigate this problem~\citep{vaswani2017attention, popel2018training}.
For example, when training typical Transformer-based neural machine translation models
on the
De-En IWSLT'14 dataset,
removing the warmup stage
increases the training loss
from 3 to around 10, as shown in Figure~\ref{fig:eps2k}.
Similar phenomena are observed in other scenarios like BERT (a bidirectional transformer language model) pre-training~\citep{devlin2018bert}.
Due to the lack of theoretical underpinnings,
there is neither guarantee that warmup would
bring consistent improvements for various machine learning settings
nor guidance on how we should conduct warmup.
Thus, researchers typically use different settings in different applications and have to take a trial-and-error approach,
which can be tedious and time-consuming.
In this paper, we conduct both empirical and theoretical analysis of the convergence issue to identify its origin.
We show that its root cause is: the adaptive learning rate has undesirably large variance in the early stage of model training, due to the limited amount of training samples being used.
Thus, to reduce such variance, it is better to use smaller learning rates in the first few epochs of training, which justifies the warmup heuristic.
Inspired by our analysis results, we propose a new variant of Adam, called Rectified Adam (RAdam), which explicitly rectifies the variance of the adaptive learning rate based on derivations.
We conduct extensive experiments on language modeling, image classification, and neural machine translation.
RAdam brings consistent improvement over the vanilla Adam, which verifies the variance issue generally exists on various tasks across different network architectures.
In summary, our main contributions are two-fold:
\begin{itemize}[leftmargin=*]
\item
\vspace{-0.1in}
We identify the variance issue of the adaptive learning rate and present a theoretical justification for the warmup heuristic.
We show that the convergence issue is due to the undesirably large variance of the adaptive learning rate in the early stage of model training.
\item
We propose a new variant of Adam ({\textit{i.e.}}, RAdam), which not only explicitly rectifies the variance and is theoretically sound, but also compares favorably with the heuristic warmup.
\end{itemize}
\section{Preliminaries and Motivations}
\noindent
\textbf{Generic adaptive methods.} Algorithm~\ref{algo:adaptive} is a generic framework (all operations are element-wise).
It describes various popular stochastic gradient descent algorithms~\citep{reddi2019convergence}.
Specifically, different optimization algorithms can be specified by different choices of $\phi(.)$ and $\psi(.)$, where
$\phi(.)$ specifies how the momentum at time step $t$ is calculated, and $\psi(.)$ how the adaptive learning rate at $t$ is calculated.
For example,
in the Adam algorithm, we have:
\begin{align}
\phi(g_1, \cdots, g_t) = \frac{(1 - \beta_1)\sum_{i = 1}^t \beta_1^{t-i}g_i}{1 - \beta_1^t}
\quad\mbox{and}\quad
\psi(g_1, \cdots, g_t) = \sqrt{\frac{1-\beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}}.
\label{eqn:adam}
\end{align}
For numerical stability, the function $\psi(.)$ in Equation~\ref{eqn:adam} is usually calculated as $\hat{\psi}(g_1, \cdots, g_t) = \frac{\sqrt{1-\beta_2^t}}{\epsilon + \sqrt{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}}$,
where $\epsilon$ is a relatively small / negligible value ({\textit{e.g.}}, $1\times 10^{-8}$).
\begin{algorithm}[!ht]
\DontPrintSemicolon
\KwIn{$\{\alpha_t\}_{t = 1}^T$: step size, $\{\phi_t, \psi_t\}_{t = 1}^T$: function to calculate momentum and adaptive rate, \newline
$\theta_0$: initial parameter, $f(\theta)$: stochastic objective function.}
\KwOut{$\theta_T$: resulting parameters}
\While{$t = 1$ to $T$}{
$g_t \gets \nabla_{\theta} f_t(\theta_{t - 1})$ (Calculate gradients w.r.t. stochastic objective at timestep t)\;
$m_t \gets \phi_t (g_1, \cdots, g_t)$ (Calculate momentum)\;
$l_t \gets \psi_t (g_1, \cdots, g_t)$ (Calculate adaptive learning rate)\;
$\theta_t \gets \theta_{t-1} - \alpha_t m_t l_t$ (Update parameters)\;
}
\Return{$\theta_T$}
\caption{Generic adaptive optimization method setup. All operations are element-wise. }
\label{algo:adaptive}
\end{algorithm}
\begin{figure}[t]
\centering
\vspace{-0.3in}
\includegraphics[width=\textwidth]{fig/histogram_4.pdf}
\vspace{-0.6cm}
\caption{The absolute gradient histogram of the Transformers on the De-En IWSLT'14 dataset during training (stacked along the y-axis). The x-axis is the absolute value on a log scale and the height is the frequency. Without warmup, the gradient distribution is distorted in the first 10 steps. }
\vspace{-0.3cm}
\label{fig:histogram_2}
\end{figure}
\noindent
\textbf{Learning rate warmup.} Instead of setting the learning rate $\alpha_t$ as a constant or in a decreasing order,
a learning rate warmup strategy sets $\alpha_t$ as smaller values in the first few steps, thus not satisfying $\forall t\, \alpha_{t+1} \leq \alpha_{t}$.
For example, linear warmup sets $\alpha_t = t \,\alpha_0$ when $t < T_w$.
Warmup has been demonstrated to be beneficial in many deep learning applications.
For example, in the NMT experiments in Figure~\ref{fig:eps2k}, the training loss converges to around 10 when warmup is not applied (Adam-vanilla), and it surprisingly decreases to below 3 after applying warmup (Adam-warmup).
To further analyze this phenomenon, we visualize the histogram of the absolute value of gradients on a log scale in Figure~\ref{fig:histogram_2}.
We observe that, without applying warmup, the gradient distribution is distorted to have a mass center in relatively small values within 10 updates. Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few updates.
Warmup essentially reduces the impact of these problematic updates to avoid the convergence problem.
In the following sections, we focus our analysis on learning rate warmup for the Adam algorithm, while it can be applied to other algorithms that use similar adaptive learning rate ($\psi(.)$) designs, {\textit{e.g.}}, RMSprop~\citep{tieleman2012lecture} and Nadam~\citep{dozat2016incorporating}.
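In code, such a linear warmup schedule is a one-liner (a sketch; the choice to hold the peak rate after $T_w$, and the constants, are our placeholders -- in practice a decay schedule usually follows):

```python
def warmup_lr(t, alpha_0=5e-7, t_warmup=2000):
    """Linear warmup: alpha_t = t * alpha_0 while t < t_warmup.
    Holding the peak rate afterwards is a placeholder choice."""
    return alpha_0 * min(t, t_warmup)

schedule = [warmup_lr(t) for t in range(1, 4001)]
```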
\section{Variance of the Adaptive Learning Rate}
\label{sec:ana}
\vspace{-0.1cm}
In this section, we first introduce empirical evidence, then analyze the variance of the adaptive learning rate to support our hypothesis
-- \emph{Due to the lack of samples in the early stage, the adaptive learning rate has an undesirably large variance, which leads to suspicious/bad local optima}.
To convey our intuition, we begin with a special case.
When $t = 1$, we have $\psi(g_1) = \sqrt{1/g_1^2}$.
We view $\{g_1, \cdots, g_t\}$ as i.i.d. Gaussian random variables following $\cN(0, \sigma^2)$\footnote{The mean zero normal assumption is valid at the beginning of the training, since weights are sampled from normal distributions with mean zero \citep{balduzzi2017shattered}, further analysis is conducted in Section~\ref{subsec:sma-ema}.}.
Therefore, $1/g_1^2$ is subject to the scaled inverse chi-squared distribution, $\mbox{Scale-inv-}\mathcal{X}^2(1, 1/\sigma^2)$, and $\mathrm{Var}[\sqrt{1/g_1^2}]$ is divergent.
It means that the adaptive ratio can be undesirably large in the first stage of learning.
Meanwhile, setting a small learning rate at the early stage can reduce the variance ($\mathrm{Var}[\alpha x] = \alpha^2\mathrm{Var}[x]$), thus alleviating this problem.
Therefore, we suggest it is the unbounded variance of the adaptive learning rate in the early stage
that causes the problematic updates.
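This heavy-tailed behavior is easy to reproduce numerically. Under the i.i.d. $\mathcal{N}(0, 1)$ assumption, the sample spread of $\sqrt{t / \sum_{i} g_i^2}$ shrinks sharply as $t$ grows (pure-Python simulation; the sample sizes and seed are our own choices):

```python
import random
import statistics

def psi_samples(t, n_runs=5000, seed=0):
    """Draw realizations of sqrt(t / sum_{i<=t} g_i^2) with g_i ~ N(0, 1)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        s = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(t))
        samples.append((t / s) ** 0.5)
    return samples

std_early = statistics.pstdev(psi_samples(5))    # few gradients: wide spread
std_late = statistics.pstdev(psi_samples(500))   # many gradients: concentrated
```

In our runs the spread at $t = 5$ is an order of magnitude larger than at $t = 500$, mirroring the variance argument above.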
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig/histogram_3.pdf}
\vspace{-0.3cm}
\caption{The histogram of the absolute value of gradients (on a log scale) during the training of Transformers on the De-En IWSLT'14 dataset, using Adam-2k, RAdam and Adam-eps. }
\vspace{-0.4cm}
\label{fig:histogram_3}
\end{figure}
\vspace{-0.1cm}
\subsection{Warmup as Variance Reduction}
\vspace{-0.1cm}
In this section, we design a set of controlled experiments to verify our hypothesis.
Particularly, we design two variants of Adam that reduce the variance of the adaptive learning rate: \textit{Adam-2k} and \textit{Adam-eps}. We compare them to vanilla Adam with and without warmup on the IWSLT'14 German to English translation dataset~\citep{cettolo2014report}.
In order to reduce the variance of the adaptive learning rate ($\psi(.)$), Adam-2k only updates $\psi(.)$ in the first two thousand iterations, while the momentum ($\phi(.)$) and parameters ($\theta$) are fixed\footnote{Different from \cite{gotmare2018a}, all parameters and first moments are frozen in the first 2000 iterations.}; other than this, it follows the original Adam algorithm.
To allow comparison with other methods, its iterations are indexed from $-1999$ instead of $1$.
In Figure~\ref{fig:eps2k}, we observe that, after getting these additional two thousand samples for estimating the adaptive learning rate, Adam-2k avoids the convergence problem of the vanilla-Adam.
Also, comparing Figure~\ref{fig:histogram_2} and Figure~\ref{fig:histogram_3}, getting large enough samples prevents the gradient distribution from being distorted.
These observations verify our hypothesis that the lack of sufficient data samples in the early stage is the root cause of the convergence issue.
Another straightforward way to reduce the variance is to increase the value of $\epsilon$ in $\hat{\psi}(g_1, \cdots, g_t) = \frac{\sqrt{1-\beta_2^t}}{\epsilon + \sqrt{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}}$.
Actually, if we assume $\hat{\psi}(.)$ is subject to the uniform distribution, its variance equals $\frac{1}{12 \epsilon^2}$.
Therefore, we design Adam-eps, which uses a non-negligibly large $\epsilon = 10^{-4}$, while $\epsilon = 10^{-8}$ for vanilla Adam.
Its performance is summarized in Figure~\ref{fig:eps2k}.
We observe that it does not suffer from the serious convergence problem of vanilla-Adam. This further demonstrates that the convergence problem can be alleviated by reducing the variance of the adaptive learning rate, and also explains why tuning $\epsilon$ is important in practice \citep{liu2019roberta}.
Besides, similar to Adam-2k, it prevents the gradient distribution from being distorted (as shown in Figure~\ref{fig:histogram_3}).
However, as shown in Figure~\ref{fig:eps2k}, it produces much worse performance compared to Adam-2k and Adam-warmup.
We conjecture that this is because large $\epsilon$ induces a large bias into the adaptive learning rate and slows down the optimization process.
Thus, we need a more principled and rigorous way to control the variance of the adaptive learning rate.
In the next subsection, we will present a theoretical analysis of the variance of the adaptive learning rate.
\vspace{-0.1cm}
\subsection{Analysis of Adaptive Learning Rate Variance}
\vspace{-0.1cm}
As mentioned before, Adam uses the exponential moving average to calculate the adaptive learning rate.
For gradients $\{g_1, \cdots, g_t\}$, their exponential moving average has a larger variance than their simple average.
Also, in the early stage ($t$ is small), the difference of the exponential weights of $\{g_1, \cdots, g_t\}$ is relatively small (up to $1 - \beta_2^{t-1}$).
Therefore, for ease of analysis, we approximate the distribution of the exponential moving average as the distribution of the simple average~\citep{nau2014forecasting}, {\textit{i.e.}}, $p(\psi(.)) = p(\sqrt{\frac{1-\beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}}) \approx p(\sqrt{\frac{t}{\sum_{i=1}^t g_i^2}})$. Since $g_i \sim {\mathcal{N}}(0, \sigma^2)$, we have $\frac{t}{\sum_{i=1}^t g_i^2} \sim \mbox{Scale-inv-}\mathcal{X}^2(t, \frac{1}{\sigma^2})$.
Therefore, we assume $\frac{1-\beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}$ also follows a scaled inverse chi-squared distribution with $\rho$ degrees of freedom (further analysis on this approximation is conducted in Section~\ref{subsec:sma-ema}).
Based on this assumption, we can calculate $\mathrm{Var}[\psi^2(.)]$ and the PDF of $\psi^2(.)$.
Now, we proceed to the analysis of its square-root variance, {\textit{i.e.}}, $\mathrm{Var}[\psi(.)]$, and show how the variance changes with $\rho$ (which corresponds to the number of training samples used).
\vspace{-0.1cm}
\begin{theorem}
If $\psi^2(.) \sim \mbox{Scale-inv-}\mathcal{X}^2(\rho, \frac{1}{\sigma^2})$, $\mathrm{Var}[\psi(.)]$ monotonically decreases as $\rho$ increases.
\label{theorem: variance_mono}
\end{theorem}
\vspace{-0.5cm}
\begin{proof}
For $\forall\, \rho > 4$, we have:
\begin{align}
\mathrm{Var}[\psi(.)] =
\mathbb{E}[\psi^2(.)] - \mathbb{E}[\psi(.)]^2
= \tau^2 (\frac{\rho}{\rho-2} - \frac{\rho \,2^{2\rho - 5}}{\pi}{\mathcal{B}}(\frac{\rho-1}{2}, \frac{\rho-1}{2})^2),
\label{eqn:analytic-sqrt-var}
\end{align}
where ${\mathcal{B}}(.)$ is the beta function.
By analyzing the derivative of $\mathrm{Var}[\psi(.)]$, we know it monotonically decreases as $\rho$ increases.
The detailed derivation is elaborated in the Appendix~\ref{app:proof_mono}.
\end{proof}
\vspace{-0.3cm}
Theorem~\ref{theorem: variance_mono} gives a qualitative analysis of the variance of the adaptive learning rate.
It shows that, due to the lack of training samples used in the early stage, $\mathrm{Var}[\psi(.)]$ is larger in the early stage than in the late stage (Figure~\ref{fig:var_approx}).
To rigorously constrain the variance, we perform a quantified analysis of $\mathrm{Var}[\psi(.)]$ by estimating the degrees of freedom $\rho$.
\section{Rectified Adaptive Learning Rate}
\vspace{-0.1cm}
\label{sec:fix}
In the previous section, Equation~\ref{eqn:analytic-sqrt-var} gives the analytic form of $\mathrm{Var}[\psi(.)]$, where $\rho$ is the number of degrees of freedom.
Here, we first give an estimation of $\rho$ based on $t$ to conduct a quantified analysis for $\mathrm{Var}[\psi(g_1, \cdots, g_t)]$, then we describe the design of the learning rate rectification, and compare it to the heuristic warmup strategies.
\vspace{-0.1cm}
\subsection{Estimation of $\rho$}
\vspace{-0.1cm}
The exponential moving average (EMA) can be interpreted as an approximation to the simple moving average (SMA) in real application~\citep{nau2014forecasting}, {\textit{i.e.}},
\begin{align}
\vspace{-0.2in}
p\left(\frac{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}{1-\beta_2^t} \right) \approx p\left(\frac{\sum_{i = 1}^{f(t, \beta_2)} g_{t+1-i}^2}{f(t, \beta_2)}\right).
\label{eqn:sma_ema}
\vspace{-0.2in}
\end{align}
where $f(t, \beta_2)$ is the length of the SMA which allows the SMA to have the same ``center of mass'' as the EMA.
In other words, $f(t, \beta_2)$ satisfies:
\begin{align}
\vspace{-0.7in}
\frac{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i} \cdot i }{1-\beta_2^t} = \frac{\sum_{i = 1}^{f(t, \beta_2)} (t+1-i)}{f(t, \beta_2)}.
\label{eqn:sma_eql_ema}
\vspace{-0.7in}
\end{align}
By solving Equation~\ref{eqn:sma_eql_ema}, we have:
$f(t, \beta_2) = \frac{2}{1 - \beta_2} - 1 - \frac{2 t \beta_2^t}{1 - \beta_2^t}$.
In the previous section, we assume: $\frac{1-\beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2} \sim \mbox{Scale-inv-}\mathcal{X}^2(\rho, \frac{1}{\sigma^2})$.
Here, since $g_i \sim {\mathcal{N}}(0, \sigma^2)$, we have $\frac{\sum_{i = 1}^{f(t, \beta_2)} g_{t+1-i}^2}{f(t, \beta_2)} \sim \mbox{Scale-inv-}\mathcal{X}^2(f(t, \beta_2), \frac{1}{\sigma^2})$.
Thus, Equation~\ref{eqn:sma_ema} views $\mbox{Scale-inv-}\mathcal{X}^2(f(t, \beta_2), \frac{1}{\sigma^2})$ as an approximation to $\mbox{Scale-inv-}\mathcal{X}^2(\rho, \frac{1}{\sigma^2})$.
Therefore, we treat $f(t, \beta_2)$ as an estimation of $\rho$.
For ease of notation, we mark $f(t, \beta_2)$ as $\rho_t$. Also, we refer $\frac{2}{1 - \beta_2} - 1$ as $\rho_\infty$ (maximum length of the approximated SMA), due to the inequality $f(t, \beta_2) \leq \lim_{t \to \infty} f(t, \beta_2) = \frac{2}{1 - \beta_2} - 1$.
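The estimate $\rho_t = f(t, \beta_2)$ is cheap to evaluate numerically (a sketch with Adam's default $\beta_2 = 0.999$; the probe values of $t$ are our own):

```python
def rho_t(t, beta2=0.999):
    """Length of the SMA approximating the EMA at step t, i.e., f(t, beta2)."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    return rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

rho_inf = 2.0 / (1.0 - 0.999) - 1.0              # maximum SMA length (1999)
values = [rho_t(t) for t in (1, 2, 5, 100, 100000)]
```

For $\beta_2 = 0.999$, $\rho_t \approx t$ in the first steps, so $\rho_t$ exceeds $4$ only from the fifth update on, and $\rho_t$ approaches $\rho_\infty$ as $t$ grows.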
\vspace{-0.2cm}
\begin{algorithm}[t]
\DontPrintSemicolon
\KwIn{$\{\alpha_t\}_{t = 1}^T$: step size,
$\{\beta_1, \beta_2\}$: decay rate to calculate moving average and moving 2nd moment,
$\theta_0$: initial parameter, $f_t(\theta)$: stochastic objective function.}
\KwOut{$\theta_t$: resulting parameters}
$m_0, v_0 \gets 0, 0$ (Initialize moving 1st and 2nd moment)\;
$\rho_\infty \gets 2/(1 - \beta_2) - 1$ (Compute the maximum length of the approximated SMA)\;
\While{$t = \{1, \cdots, T\}$}{
$g_t \gets \nabla_{\theta} f_t(\theta_{t - 1})$ (Calculate gradients w.r.t. stochastic objective at timestep t)\;
$v_t \gets \beta_2 v_{t - 1} + (1 - \beta_2) g_t^2$ (Update exponential moving 2nd moment)\;
$m_t \gets \beta_1 m_{t - 1} + (1 - \beta_1) g_t$ (Update exponential moving 1st moment)\;
$\widehat{m_t} \gets m_t / (1 - \beta_1^t)$ (Compute bias-corrected moving average)\;
$\rho_t \gets \rho_\infty - 2t\beta_2^t/(1 - \beta_2^t)$(Compute the length of the approximated SMA)\;
\uIf{the variance is tractable, {\textit{i.e.}}, $\rho_t > 4$}{
$l_t \gets \sqrt{(1 - \beta_2^{t}) / v_t}$ (Compute adaptive learning rate)\;
$r_t \gets \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\rho_t}}$ (Compute the variance rectification term)\;
$\theta_t \gets \theta_{t-1} - \alpha_t r_t \widehat{m_t} l_t$ (Update parameters with adaptive momentum)\;
}
\Else{
$\theta_t \gets \theta_{t-1} - \alpha_t \widehat{m_t}$ (Update parameters with un-adapted momentum)\;
}
}
\Return{$\theta_T$}
\caption{Rectified Adam. All operations are element-wise. }
\label{algo:cadam}
\end{algorithm}
\subsection{Variance Estimation and Rectification}
\vspace{-0.1cm}
Based on previous estimations, we have $\mathrm{Var}[\psi(.)] = \tau^2 (\frac{\rho_t}{\rho_t-2} - \frac{\rho_t \,2^{2\rho_t - 5}}{\pi}{\mathcal{B}}(\frac{\rho_t-1}{2}, \frac{\rho_t-1}{2})^2)$.
The value of this function in the early stage is significantly larger than in the late stage (as analyzed later, it decays roughly at the speed of $O(\frac{1}{\rho_t})$).
For example,
the variance at $\rho_t = 5$ is over $100$ times larger than the variance at $\rho_t = 500$.
Additionally, based on Theorem~\ref{theorem: variance_mono}, we know $\min_{\rho_t} \mathrm{Var}[\psi(.)] = \mathrm{Var}[\psi(.)]|_{\rho_t = \rho_\infty}$ and mark this minimal value as $C_{\mbox{var}}$.
In order to ensure that the adaptive learning rate ($\psi(.)$) has a consistent variance, we rectify the variance at the $t$-th time step as below,
\begin{align*}
\mathrm{Var}[r_t \,\psi(g_1, \cdots, g_t)] = C_{\mbox{var}}
\quad
\mbox{where}
\quad
r_t = \sqrt{{C_{\mbox{var}}}/{\mathrm{Var}[\psi(g_1, \cdots, g_t)]}}.
\end{align*}
Although we have the analytic form of $\mathrm{Var}[\psi(.)]$ ({\textit{i.e.}}, Equation~\ref{eqn:analytic-sqrt-var}), it is not numerically stable.
Therefore, we use the first-order approximation to calculate the rectification term.
Specifically, by approximating $\sqrt{\psi^2(.)}$ to the first order~\citep{wolter2007taylor},
\begin{align*}
\sqrt{\psi^2(.)} \approx \sqrt{\mathbb{E}[\psi^2(.)]} + \frac{1}{2\sqrt{\mathbb{E}[\psi^2(.)]}}(\psi^2(.) - \mathbb{E}[\psi^2(.)])
\quad\mbox{and}\quad
\mathrm{Var}[\psi(.)] \approx \frac{\mathrm{Var}[\psi^2(.)]}{4\mathbb{E}[\psi^2(.)]}.
\end{align*}
Since $\psi^2(.) \sim \mbox{Scale-inv-}\mathcal{X}^2(\rho_t, \frac{1}{\sigma^2})$, we have:
\begin{align}
\mathrm{Var}[\psi(.)] \approx {\rho_t}/[{2(\rho_t - 2) (\rho_t - 4) \sigma^2}].
\label{eqn:variance_appro}
\end{align}
In Section~\ref{subsec:approx}, we conduct simulation experiments to examine Equation~\ref{eqn:variance_appro} and find that it is a reliable approximation.
Based on Equation~\ref{eqn:variance_appro}, we know that $\mathrm{Var}[\psi(.)]$ decreases approximately at the speed of $O(\frac{1}{\rho_t})$.
With this approximation, we can calculate the rectification term as:
\begin{align*}
r_t = \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\rho_t}}.
\end{align*}
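As an additional sanity check (ours, not part of the derivation), the approximation of Equation~\ref{eqn:variance_appro} can be compared against a Monte Carlo estimate of $\mathrm{Var}[\psi(.)]$ under the simple-average view, with $\sigma = 1$ and $\rho_t = t$:

```python
import random
import statistics

def var_psi_approx(rho, sigma2=1.0):
    """First-order approximation: Var[psi] ~ rho / (2 (rho - 2)(rho - 4) sigma^2)."""
    return rho / (2.0 * (rho - 2.0) * (rho - 4.0) * sigma2)

def var_psi_empirical(t, n_runs=50000, seed=1):
    """Monte Carlo estimate of Var[sqrt(t / sum_i g_i^2)] with g_i ~ N(0, 1)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        s = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(t))
        samples.append((t / s) ** 0.5)
    return statistics.pvariance(samples)

approx = var_psi_approx(20)       # first-order value at rho_t = 20
empirical = var_psi_empirical(20)
```

In our runs the two values agree to within roughly 15\% at $\rho_t = 20$, consistent with the simulation study referenced above.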
Applying our rectification term to Adam, we come up with a new variant of Adam, Rectified Adam (RAdam), as summarized in Algorithm~\ref{algo:cadam}.
Specifically, when the length of the approximated SMA is less than or equal to 4, the variance of the adaptive learning rate is intractable and the adaptive learning rate is deactivated.
Otherwise, we calculate the variance rectification term and update parameters with the adaptive learning rate.
It is worth mentioning that, if $\beta_2 \leq 0.6$, we have $\rho_\infty \leq 4$ and RAdam degenerates to SGD with momentum.
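As a concrete illustration, below is a minimal single-parameter sketch of this update rule in pure Python (the function and variable names are ours, not taken from any released implementation):

```python
# Minimal sketch of one RAdam update for a scalar parameter.
# Variable names follow the text: rho_inf, rho_t, and the rectification r_t.
import math

def radam_step(theta, grad, m, v, t, lr=1e-3,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply the t-th RAdam update; returns (theta, m, v)."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0            # SMA length as t -> inf
    m = beta1 * m + (1.0 - beta1) * grad           # first moment (momentum)
    v = beta2 * v + (1.0 - beta2) * grad * grad    # second moment
    m_hat = m / (1.0 - beta1 ** t)                 # bias-corrected momentum
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    if rho_t > 4.0:
        # variance of the adaptive learning rate is tractable:
        # rectify it and take an adaptive step
        v_hat = math.sqrt(v / (1.0 - beta2 ** t))
        r_t = math.sqrt((rho_t - 4.0) * (rho_t - 2.0) * rho_inf
                        / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        theta -= lr * r_t * m_hat / (v_hat + eps)
    else:
        # variance is divergent: fall back to SGD with momentum
        theta -= lr * m_hat
    return theta, m, v
```

For the default $\beta_2 = 0.999$, $\rho_t \leq 4$ holds exactly for the first four updates, so the SGD-with-momentum branch is taken there.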
\begin{figure}[t]
\begin{minipage}{0.69\textwidth}
\includegraphics[width=\textwidth]{fig/one_billion.pdf}
\vspace{-0.7cm}
\captionof{figure}{Language modeling (LSTMs) on the One Billion Word dataset.}
\label{fig:1bw}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\captionof{table}{Image Classification}
\vspace{-0.2cm}
\begin{tabular}{c|c|c}
& Method & Acc.\\
\hline
& & \\[-7pt]
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\small \;CIFAR10}}}
& SGD & 91.51 \\[2pt]
& Adam & 90.54 \\[2pt]
& RAdam & 91.38 \\[2pt]
\hline
& & \\[-7pt]
\parbox[t]{2mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{\small \;ImageNet}}}
& SGD & 69.86 \\[2pt]
& Adam & 66.54 \\[2pt]
& RAdam & 67.62
\end{tabular}
\end{minipage}
\end{figure}
\vspace{-0.4cm}
\begin{figure}[t]
\centering
\vspace{-0.2cm}
\includegraphics[width=\textwidth]{fig/cifa_imagenet.pdf}
\vspace{-0.6cm}
\caption{Training of ResNet-18 on the ImageNet and ResNet-20 on the CIFAR10 dataset.}
\vspace{-0.2cm}
\label{fig:cifa10}
\end{figure}
\subsection{In Comparison with Warmup and Other Stabilization Techniques}
Different from the analysis in this paper, warmup is originally proposed to handle training with very large batches for SGD~\citep{goyal2017accurate,gotmare2018a,bernstein2018signsgd,Xiao2017DSCOVRRP}.
We notice that $r_t$ has a similar form to the heuristic linear warmup, which can be viewed as setting the rectification term to $\frac{\min(t, T_w)}{T_w}$.
It verifies our intuition that warmup works as a variance reduction technique.
RAdam deactivates the adaptive learning rate when its variance is divergent, thus avoiding undesired instability in the first few updates.
Besides, our method does not require an additional hyperparameter ({\textit{i.e.}}, $T_w$)
and can automatically adapt to different moving average rules.
Here, we identify and address an underlying issue of adaptive optimization methods independent of (neural) model architectures.
Thus, the proposed rectification term is orthogonal to other training stabilization techniques such as gradient clipping~\citep{bengio2013advances}, smoothing the adaptive learning rate ({\textit{i.e.}}, increasing $\epsilon$, applying geometric mean filter~\citep{chen2018closing}, or adding range constraints~\citep{luo2019adaptive}), initialization~\citep{balduzzi2017shattered,zhang2019fixup} and normalization~\citep{ba2016layer,ioffe2015batch}.
Indeed, these techniques can be combined with the proposed variance rectification method.
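To make the analogy concrete, the following sketch (the helper names and the grid of steps are our choices) evaluates $r_t$ next to the linear-warmup schedule $\min(t, T_w)/T_w$; both ramp from near zero toward one, but $r_t$ is determined by $\beta_2$ alone:

```python
# Compare the variance rectification term r_t with heuristic linear warmup.
import math

def rho_t(t, beta2):
    """Length of the SMA approximated at step t."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    return rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

def rectification(t, beta2=0.999):
    """r_t as derived in the text; only defined when rho_t > 4."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho = rho_t(t, beta2)
    return math.sqrt((rho - 4.0) * (rho - 2.0) * rho_inf
                     / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho))

def linear_warmup(t, t_w=4000):
    """Heuristic warmup factor min(t, T_w) / T_w."""
    return min(t, t_w) / t_w
```

Unlike the heuristic schedule, $r_t$ requires no tuned $T_w$: it increases monotonically and approaches one as $\rho_t \to \rho_\infty$.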
\section{Experiments}
\vspace{-0.1cm}
We evaluate RAdam on several benchmarks:
One Billion Word for language modeling; CIFAR10 and ImageNet for image classification; IWSLT'14 De-En/En-De and WMT'16 En-De for neural machine translation.
Following~\cite{loshchilov2017fixing}, we decouple weight decays in the vanilla Adam, Adam with warmup and RAdam
in our experiments. Details are in Appendix~\ref{app:implement}.
\vspace{-0.1cm}
\subsection{Comparing to Vanilla Adam}
\vspace{-0.1cm}
As analyzed before, the adaptive learning rate has undesirably large variance in the early stage of training and leads to suspicious/bad local optima on NMT.
One question we are interested in is whether such an issue exists widely in other similar tasks and applications.
Thus, we conduct a set of experiments with two classical tasks of NLP and CV, {\textit{i.e.}}, language modeling and image classification.
RAdam not only results in consistent improvements over the vanilla Adam, but also demonstrates its robustness to the change of learning rates.
It verifies that the variance issue exists in various machine learning applications, and has a big impact on the model behavior.
\noindent
\textbf{Performance Comparison.}
The performances on language modeling ({\textit{i.e.}}, One Billion Word~
\citep{Chelba2013OneBW}) and image classification ({\textit{i.e.}}, CIFAR10~\citep{krizhevsky2009learning} and ImageNet~\citep{deng2009imagenet}) are
presented in Figures~\ref{fig:1bw} and \ref{fig:cifa10}.
The results show that RAdam outperforms Adam on all three datasets.
As shown in Figure~\ref{fig:1bw}, although the rectification term makes RAdam slower than the vanilla Adam in the first few epochs, it allows RAdam to converge faster after that.
In other words, by reducing the variance of the adaptive learning rate in the early stage, RAdam achieves both faster convergence and better performance, which verifies the impact of the variance issue.
We also observe that RAdam obtains consistent improvements over Adam on image classification.
It is worth noting that, on both ImageNet and CIFAR10, although RAdam fails to outperform SGD in terms of test accuracy, it results in better training performance ({\textit{e.g.}}, the training accuracies of SGD, Adam, and RAdam on ImageNet are $69.57$, $69.12$ and $70.30$, respectively).
\noindent
\textbf{Robustness to Learning Rate Change.}
Besides performance improvements, RAdam also improves the robustness of model training.
We use different initial learning rates, conduct experiments with ResNet-20 on the CIFAR10 dataset, and summarize the results in Figure~\ref{fig:cifa10_lr}.
For learning rates within a broad range ({\textit{i.e.}}, $\{0.1, 0.03, 0.01, 0.003\}$), RAdam achieves consistent performance (the test accuracy curves highly overlap with each other), while Adam and SGD are more sensitive to the learning rate.
This observation suggests that, by rectifying the variance of the adaptive learning rate, RAdam improves the robustness of model training and can adapt to learning rates over a broader range.
\begin{figure}[t]
\centering
\vspace{0.2cm}
\includegraphics[width=0.94\textwidth]{fig/cifa10_lr.pdf}
\vspace{-0.5cm}
\caption{Performance of RAdam, Adam and SGD with different learning rates on CIFAR10.}
\label{fig:cifa10_lr}
\end{figure}
\begin{figure}[t]
\vspace{-0.2cm}
\centering
\includegraphics[width=\textwidth]{fig/cifa10_wu.pdf}
\vspace{-0.7cm}
\caption{Performance of RAdam, Adam with warmup on CIFAR10 with different learning rates.}
\vspace{-0.5cm}
\label{fig:cifa10_wu}
\end{figure}
\vspace{-0.1cm}
\subsection{Comparing to Heuristic Warmup}
\vspace{-0.1cm}
To examine the effectiveness of RAdam, we first conduct comparisons on neural machine translation, on which the state-of-the-art employs Adam with the linear warmup.
Specifically, we conduct experiments on three datasets, i.e., IWSLT'14 De-En, IWSLT'14 En-De, and WMT'16 En-De.
Due to the limited size of the IWSLT'14 dataset, we conduct experiments using 5 different random seeds and report their mean and standard deviation.
As discussed before, the vanilla Adam algorithm leads to suspicious/bad local optima (i.e., converges to a training perplexity around 500), and needs a learning rate warmup stage to stabilize the training.
We summarize the performance obtained with the heuristic warmup and our proposed rectification term in Table~\ref{tab:nmt} and visualize the training curve of IWSLT De-En in Figure~\ref{fig:eps2k}.
With a consistent adaptive learning rate variance, our proposed method achieves similar performance to that of previous state-of-the-art warmup heuristics.
It verifies our intuition that the problematic updates of Adam are indeed caused by the undesirably large variance in the early stage.
\begin{table}[t]
\centering
\vspace{-0.2cm}
\caption{BLEU score on Neural Machine Translation.
}
\vspace{-0.2cm}
\label{tab:nmt}
\begin{tabular}{c|c|c|c}
Method & IWSLT'14 DE-EN & IWSLT'14 EN-DE & WMT'16 EN-DE \\
\hline
Adam with warmup & $34.66 \pm 0.014$ & $28.56 \pm 0.067$ & $27.03$\\
RAdam & $34.76 \pm 0.003$ & $28.48 \pm 0.054$ & $27.27$\\
\end{tabular}
\vspace{-0.6cm}
\end{table}
Moreover, we apply Adam with warmup on the CIFAR10 dataset.
Its best accuracy on the test set is $91.29$, which is similar to RAdam ($91.38$).
However, we found that RAdam requires less hyperparameter tuning.
Specifically, we visualize their learning curves in Figure~\ref{fig:cifa10_wu}.
For certain numbers of warmup steps, Adam with warmup is relatively sensitive to the choice of the learning rate.
RAdam, at the same time, is not only more robust, but also can automatically control the warmup behavior ({\textit{i.e.}}, without requiring the length of warmup).
For example, when setting the learning rate as $0.1$, Adam with 100 steps of warmup fails to get satisfying performance and only results in an accuracy of $90.13$; RAdam successfully gets an accuracy of $91.06$, with the original setting of the moving average calculation ({\textit{i.e.}}, $\beta_1 = 0.9, \beta_2 = 0.999$).
We conjecture that this is because RAdam, which is based on a rigorous variance analysis, explicitly avoids the extreme situation where the variance is divergent, and rectifies the variance to be consistent in other situations.
\section{Simulated Verification}
In Sections~\ref{sec:ana} and \ref{sec:fix}, we approximate $\mathrm{Var}[\sqrt{t/\sum_{i=1}^t g_i^2}]$ to the first order, and assume $\psi^2(.) = \frac{1-\beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t \beta_2^{t-i}g_i^2}$ follows a scaled inverse chi-square distribution (this assumption covers the approximation from the EMA to the SMA).
Here, we examine these two approximations using simulations.
\noindent\textbf{First Order Approximation of $\mathrm{Var}[\sqrt{t/\sum_{i=1}^t g_i^2}]$.}
\label{subsec:approx}
To compare Equations~\ref{eqn:variance_appro} and \ref{eqn:analytic-sqrt-var}, we assume $\tau = 1$ and plot their values and difference for $\nu = \{5, \cdots, 500\}$ in Figure~\ref{fig:var_approx}.
The curve of the analytic form and the first-order approximation highly overlap, and their difference is much smaller than their value.
This result verifies that our first-order approximation is very accurate.
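This simulation is easy to reproduce. The sketch below (the sample size, seed, and helper names are our choices) draws $\psi^2(.) \sim \mbox{Scale-inv-}\mathcal{X}^2(t, 1)$ through its $\chi^2$ representation and compares the empirical $\mathrm{Var}[\psi(.)]$ with Equation~\ref{eqn:variance_appro}:

```python
# Monte-Carlo check of Var[psi] ~ rho / (2 (rho-2)(rho-4) sigma^2)
# for psi = sqrt(t / sum_i g_i^2) with g_i ~ N(0, 1), i.e. sigma = 1, rho = t.
import math
import random

def approx_var(rho, sigma=1.0):
    """First-order approximation of Var[psi] from the text."""
    return rho / (2.0 * (rho - 2.0) * (rho - 4.0) * sigma ** 2)

def mc_var(t, n_samples=20000, seed=0):
    """Empirical Var[psi] from n_samples independent draws."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_samples):
        chi2 = rng.gammavariate(t / 2.0, 2.0)  # sum of t squared N(0,1)'s
        vals.append(math.sqrt(t / chi2))       # one draw of psi(g_1,...,g_t)
    mean = sum(vals) / n_samples
    return sum((x - mean) ** 2 for x in vals) / (n_samples - 1)
```

For moderate $t$ (e.g.\ $t = 50$) the empirical variance lands within a few percent of the first-order value.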
\noindent\textbf{Scaled Inverse Chi-Square Distribution Assumption.}
\label{subsec:sma-ema}
In this paper, we assume $g_i$ follows a normal distribution with zero mean.
We also assume $\psi^2(.)$ follows the scaled inverse chi-square distribution to derive $\mathrm{Var}[\psi(.)]$,
based on the similarity between the exponential moving average and the simple moving average.
Here, we empirically verify this assumption.
Specifically, since $g_i$ in the optimization problem may not be zero-mean, we assume its expectation is $\mu$ and sample $g_i$ from $\cN(\mu, 1)$.
Then, based on these samples, we calculate the variance of the original adaptive learning rate and the proposed rectified adaptive learning rate, {\textit{i.e.}}, $\mathrm{Var}[\frac{1}{\sqrt{\widehat{v_t}}}]$ and $\mathrm{Var}[\frac{r_t}{\sqrt{\widehat{v_t}}}]$ respectively.
We set $\beta_2$ to $0.999$, the number of sampled trajectories to $5000$, the number of iterations to $6000$, and summarize the simulation results in Figure~\ref{fig:vt_var_simulation}.
Across all six settings with different $\mu$, the adaptive learning rate has a larger variance in the early stage, while the rectified adaptive learning rate has a relatively consistent variance.
This verifies the reliability of our assumption.
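A lightweight version of this simulation can be sketched as follows (the trajectory count, horizon, checkpoints, and names are our choices): it estimates $\mathrm{Var}[1/\sqrt{\widehat{v_t}}]$ across trajectories at an early and a late checkpoint and checks that multiplying by $r_t$ makes the two variances comparable:

```python
# Check that the rectified adaptive learning rate has near-constant variance.
import math
import random

def ema_lr_variance(mu=0.0, beta2=0.999, n_traj=1000,
                    checkpoints=(10, 600), seed=0):
    """Var of 1/sqrt(v_hat_t) across trajectories at the given checkpoints."""
    rng = random.Random(seed)
    samples = {t: [] for t in checkpoints}
    horizon = max(checkpoints)
    for _ in range(n_traj):
        v = 0.0
        for t in range(1, horizon + 1):
            g = rng.gauss(mu, 1.0)
            v = beta2 * v + (1.0 - beta2) * g * g    # EMA of squared gradients
            if t in samples:
                v_hat = v / (1.0 - beta2 ** t)       # bias-corrected EMA
                samples[t].append(1.0 / math.sqrt(v_hat))
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return {t: var(xs) for t, xs in samples.items()}

def rect(t, beta2=0.999):
    """Rectification term r_t from the text (requires rho_t > 4)."""
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    return math.sqrt((rho - 4.0) * (rho - 2.0) * rho_inf
                     / ((rho_inf - 4.0) * (rho_inf - 2.0) * rho))
```

The unrectified variance drops by well over an order of magnitude between the two checkpoints, whereas the rectified variance stays of the same order.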
\section{Related Work}
Two lines of work are related to the topic here: training stabilization and adaptive gradient methods.
\subsection{Stabilization}
Many efforts have been made to stabilize the training procedure of deep neural networks, including gradient clipping~\citep{bengio2013advances}, better initialization~\citep{balduzzi2017shattered,zhang2019fixup}, and normalization~\citep{ba2016layer,ioffe2015batch}.
Learning rate warmup \citep{vaswani2017attention,goyal2017accurate} also serves as an important heuristic for stabilizing training. Despite the lack of rigorous analysis, it has been empirically found that a warmup stage stabilizes the initial phase of training under high learning rates and advanced network architectures such as ResNet and the Transformer.
Recent work \citep{gotmare2018a} showed that the warmup stage prevents the deeper layers from creating training instability.
Different from previous work, we dissect the problem from the perspective of the variance of the gradient statistics (i.e., the adaptive learning rate), which is too large in the early stage of training and thus induces instability. Relatedly, the variance of the gradients of residual networks keeps increasing as the network becomes deeper, and this large variance makes training difficult~\citep{balduzzi2017shattered}. In this light, warmup and RAdam mainly stabilize the deeper layers, which is consistent with~\citep{gotmare2018a}.
\section{Conclusion}
In this paper, we explore the underlying principle of the effectiveness of the warmup heuristic used for adaptive optimization algorithms.
Specifically, we identify that, due to the limited amount of samples in the early stage of model training, the adaptive learning rate has an undesirably large variance and can cause the model to converge to suspicious/bad local optima.
We provide both empirical and theoretical evidence to support our hypothesis, and further propose a new variant of Adam, whose adaptive learning rate is rectified so as to have a consistent variance.
Empirical results demonstrate the effectiveness of our proposed method.
In future work, we plan to replace the rectification strategy by sharing the second moment estimation across similar parameters.
\section{Proof of Theorem~\ref{theorem: variance_mono}}\label{app:proof_mono}
For ease of notation, we refer to $\psi^2(.)$ as $x$ and to $\frac{1}{\sigma^2}$ as $\tau^2$.
Thus, $x \sim \mbox{Scale-inv-}\mathcal{X}^2(\rho, \tau^2)$ and:
\begin{align}
\vspace{-0.3in}
p(x) = \frac{(\tau^2\rho/2)^{\rho/2}}{\Gamma(\rho/2)}\frac{\exp[\frac{-\rho\tau^2}{2x}]}{x^{1 + \rho/2}}
\quad\mbox{and}\quad
\mathbb{E}[x] = \frac{\rho}{(\rho - 2) \sigma^2} \;(\forall\, \rho > 2)
\label{eqn:inv-chi-sq-pdf-mean}
\vspace{-0.3in}
\end{align}
where $\Gamma(.)$ is the gamma function. Therefore, we have:
\begin{align}
\vspace{-0.3in}
\mathbb{E}[\sqrt{x}] = \int_{0}^{\infty} \sqrt{x}\, p(x)\,dx = \frac{\tau \sqrt{\rho} \,\Gamma( (\rho - 1)/2)}{\sqrt{2} \,\Gamma(\rho/2)}\; (\forall\, \rho > 4).
\label{eqn:expect-sqrt-x}
\vspace{-0.3in}
\end{align}
Based on Equation~\ref{eqn:inv-chi-sq-pdf-mean} and \ref{eqn:expect-sqrt-x}, for $\forall\, \rho > 4$, we have:
\begin{align}
\vspace{-0.5in}
\mathrm{Var}[\psi(.)] = \mathrm{Var}[\sqrt{x}] = \mathbb{E}[x] - \mathbb{E}[\sqrt{x}]^2 = \tau^2 (\frac{\rho}{\rho-2} - \frac{\rho \,2^{2\rho - 5}}{\pi}{\mathcal{B}}(\frac{\rho-1}{2}, \frac{\rho-1}{2})^2),
\vspace{-0.5in}
\end{align}
where ${\mathcal{B}}(.)$ is the beta function. To prove the monotonic property of $\mathrm{Var}[\psi(.)]$, we need to show:
\begin{lemma}
For $t \geq 4$, $\frac{\partial }{\partial t} (\frac{t}{t-2} - \frac{t \,2^{2t - 5}}{\pi}{\mathcal{B}}(\frac{t-1}{2}, \frac{t-1}{2})^2) < 0$.
\end{lemma}
\begin{proof}
The target inequality can be rewritten as
\begin{align*}
& \frac{\partial }{\partial t} (\frac{t}{t-2} - \frac{t \,2^{2t - 5}}{\pi}{\mathcal{B}}(\frac{t-1}{2}, \frac{t-1}{2})^2) \\
&= \frac{-2}{(t-2)^2}
- \frac{2^{2t - 5}}{\pi}{\mathcal{B}}(\frac{t-1}{2},
\frac{t-1}{2})^2
- \frac{t \,2^{2t - 5} \ln 4}{\pi}{\mathcal{B}}(\frac{t-1}{2}, \frac{t-1}{2})^2
\\
& - \frac{2t \,2^{2t - 5}}{\pi}{\mathcal{B}}(\frac{t-1}{2}, \frac{t-1}{2})^2 (\Psi(\frac{t-1}{2})-\Psi(t-1)), \quad \left( \Psi(x) = \frac{\Gamma'(x)}{\Gamma(x)} \right)
\\ & < 0.
\end{align*}
This inequality is equivalent to:
\begin{align*}
\frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} + 1 + t \ln 4+2t\Psi(\frac{t-1}{2}) \\
> 2t\Psi(t-1) \stackrel{(i)}{=} t[\Psi(\frac{t-1}{2}) + \Psi(\frac{t}{2})+ \ln 4],
\end{align*}
where $(i)$ is derived from Legendre duplication formula.
Simplifying the above inequality, we get:
\begin{align*}
\frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} + 1 + t \Psi(\frac{t-1}{2}) - t\Psi(\frac{t}{2}) > 0.
\end{align*}
We only need to show
\begin{align*}
&\frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} + 1 + t \Psi(\frac{t-1}{2}) - t\Psi(\frac{t}{2})
\\
&\geq \frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} + 2 + t ( \ln(t/2)-1/(t/2-0.5) ) - t\ln (t/2)
\\
&= \frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} - \frac{2}{t-1}
\\
&> \frac{64 \pi }{(t-2)^2 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2} - \frac{2}{t-2} \geq 0 ,
\end{align*}
where the first inequality is from $\ln(x)-1/(2x)>\Psi(x) > \ln(x+0.5)-1/x$.
Therefore, we only need to show
\begin{align*}
32 \pi \geq (t-2) 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2,
\end{align*}
which is equivalent to
\begin{align*}
&(t-2) 4^t \cB(\frac{t-1}{2},\frac{t-1}{2})^2
= (t-2) 4^t \frac{\Gamma(\frac{t-1}{2})^4}{\Gamma(t-1)^2}
\\ &\stackrel{(i)}{=} (t-2) 4^t \frac{\Gamma(\frac{t-1}{2})^2}{\Gamma(t/2)^2} 4^{2-t} \pi = 16 \pi (t-2) \frac{\Gamma(\frac{t-1}{2})^2}{\Gamma(t/2)^2} \leq 32 \pi,
\end{align*}
where $(i)$ is from Legendre duplication formula.
So we only need to show
\begin{align}
(t-2) \frac{\Gamma(\frac{t-1}{2})^2}{\Gamma(t/2)^2} \leq 2
\end{align}
Using Gautschi's inequality ($x^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)}$ for $s \in (0,1)$), applied with $x = \frac{t-2}{2}$ and $s = \frac{1}{2}$, we have
\begin{align}
(t-2) \frac{\Gamma(\frac{t-1}{2})^2}{\Gamma(t/2)^2} < (t-2) \Big(\frac{t-2}{2}\Big)^{-1} = 2.
\end{align}
\end{proof}
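As a numerical sanity check on the lemma (the grid of points is our choice), both the final inequality and the monotonicity of $\mathrm{Var}[\psi(.)]$ can be evaluated in log space with the gamma function:

```python
# Spot-check: (t-2) Gamma((t-1)/2)^2 / Gamma(t/2)^2 < 2 for t > 4,
# and Var[psi] (with tau = 1) is strictly decreasing in rho.
import math

def gautschi_lhs(t):
    """(t-2) * Gamma((t-1)/2)^2 / Gamma(t/2)^2, evaluated via lgamma."""
    return (t - 2.0) * math.exp(2.0 * (math.lgamma((t - 1.0) / 2.0)
                                       - math.lgamma(t / 2.0)))

def var_psi(rho, tau=1.0):
    """tau^2 (rho/(rho-2) - rho 2^(2 rho-5)/pi B((rho-1)/2,(rho-1)/2)^2)."""
    # log of the beta function B(a, a) with a = (rho-1)/2
    log_beta = 2.0 * math.lgamma((rho - 1.0) / 2.0) - math.lgamma(rho - 1.0)
    # second term computed entirely in log space to avoid overflow
    log_term = (math.log(rho) + (2.0 * rho - 5.0) * math.log(2.0)
                - math.log(math.pi) + 2.0 * log_beta)
    return tau ** 2 * (rho / (rho - 2.0) - math.exp(log_term))
```

On a grid of $\rho > 4$ the left-hand side stays below 2 and the variance is positive and strictly decreasing, consistent with Theorem~\ref{theorem: variance_mono}.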
\section{Implementation Details} \label{app:implement}
\subsection{Language Modeling}
Our implementation is based on the previous work~\citep{liu2018efficient}.
Specifically, we use two-layer LSTMs with 2048 hidden states with adaptive softmax to conduct experiments on the one billion words dataset.
Word embedding (random initialized) of 300 dimensions is used as the input and the adaptive softmax is incorporated with a default setting (cut-offs are set to $[4000,40000,200000]$).
Additionally, as pre-processing, we replace all tokens occurring three times or fewer with UNK, which shrinks the dictionary from 7.9M to 6.4M.
Dropout is applied to each layer with a ratio of $0.1$, gradients are clipped at 5.0.
We use the default hyper-parameters to update moving averages, {\textit{i.e.}} $\beta_1=0.9$ and $\beta_2=0.999$.
The learning rate starts from 0.001 and is decayed at the start of the 10th epoch.
LSTMs are unrolled for 20 steps without resetting the LSTM states and the batch size is set to 128.
All models are trained on one NVIDIA Tesla V100 GPU.
\subsection{Image Classification}
We use the default ResNet architectures~\citep{he2016deep} in a public pytorch re-implementation\footnote{\url{https://github.com/bearpaw/pytorch-classification}}.
Specifically, we use $20$-layer ResNet ($9$ Basic Blocks) for CIFAR-10 and 18-layer ResNet ($8$ Basic Blocks) for ImageNet.
Batch size is $128$ for CIFAR-10 and $256$ for ImageNet. On CIFAR-10, the model is trained for $186$ epochs and the learning rate decays by $0.1$ at the $81$st and $122$nd epochs; on ImageNet, the model is trained for $90$ epochs and the learning rate decays by $0.1$ at the $31$st and $61$st epochs.
For Adam and RAdam, we set $\beta_1=0.9, \beta_2=0.999$. For SGD, we set the momentum factor as $0.9$. The weight decay rate is $10^{-4}$. Random cropping and random horizontal flipping are applied to training data.
\subsection{Neural Machine Translation}
Our experiments are based on the default Transformers~\citep{vaswani2017attention} implementation from the fairseq package~\citep{ott2019fairseq}.
Specifically, we use word embeddings with 512 dimensions and a 6-layer encoder/decoder with 4 heads and 1024 feedforward dimensions on the IWSLT'14 dataset, and word embeddings with 512 dimensions and a 6-layer encoder/decoder with 8 heads and 2048 feedforward dimensions on the WMT'16 dataset.
Label-smoothed cross entropy is used as the objective function with a smoothing parameter of $0.1$~\citep{szegedy2016rethinking}.
We use linear learning rate decay starting from $3\times10^{-4}$, and the checkpoints of the last $20$ epochs are averaged before evaluation.
As for the warmup strategy, we use a linear warmup for Adam in the first $4000$ updates, and set $\beta_2$ to satisfy $\nu=4000$ ($\beta_2 = 0.9995$).
In the IWSLT'14 dataset, we conduct training on one NVIDIA Tesla V100 GPU, set the maximum batch size to $4000$, apply dropout with a ratio of $0.3$, use weight decay of $0.0001$, and clip the gradient norm at $25$.
In the WMT'16 dataset, we conduct training on four NVIDIA Quadro R8000 GPUs and set the maximum batch size to $8196$.
\section{Downgrading to SGDM} \label{app:foursteps}
As a byproduct of the mathematical derivation, RAdam degenerates to SGD with momentum in the first several updates.
Although this stage only contains several gradient updates, these updates could be quite damaging (e.g., in our Figure~\ref{fig:histogram_2}, the gradient distribution is distorted within 10 gradient updates).
Intuitively, updates with divergent adaptive learning rate variance could be more damaging than the ones with converged variance, as divergent variance implies more instability.
As a case study, we performed experiments on the CIFAR10 dataset.
Five-run average results are summarized in Table~\ref{tab:foursteps}.
The optimizer fails to produce an equally reliable model when the first 4 updates are changed to Adam, yet the influence of switching is less deleterious when we change updates 5--8 instead.
This result verifies our intuition and is in agreement with our theory: the first few updates can be more damaging than later updates.
By saying that, we still want to emphasize that this part (downgrading to SGDM) is only a minor part of our algorithm design whereas our main focus is on the mechanism of warmup and the derivation of the rectification term.
\begin{table}[h]
\caption{Performance on CIFAR10 (lr = 0.1).}
\begin{center}
\begin{tabularx}{\columnwidth}{l l l *{3}{Y}}
\toprule
1-4 steps & 5-8 steps & 8+ steps & test acc & train loss & train error\\
\midrule
RAdam & RAdam & RAdam & 91.08 & 0.021 & 0.74 \\
\midrule
Adam (w. divergent var.) & RAdam & RAdam & 89.98 & 0.060 & 2.12 \\
\midrule
SGD & Adam (w. convergent var.) & RAdam & 90.29 & 0.038 & 1.23\\
\bottomrule
\end{tabularx}
\end{center}
\label{tab:foursteps}
\end{table}
\section{Introduction}
Since spin-transfer torque (STT) lies at the heart of the data storage industry, it attracts great interest\cite{Chappert20099,Parkin 2008,Pinarbasi,Mamura2022JMMM,Tsymbal2009Book,Cai2021Sci,Yuasa2018}. The STT is caused by the mutual interaction between the spins of charge carriers and the magnetic order. This torque originates from the transfer of spin angular momentum
from the spin current to the magnetization, or vice versa\cite{Bazaliy1998PRB,Tsoi19998PRL,Stiles2002PRB}. This effect is exploited in electronic devices such as oscillator circuits and magnetic random-access memories\cite{Zutic2011,Cui2022Spintronics}. Heat dissipation due to electric resistance is one of the most critical issues in spintronics. The dissipationless current of superconductors, in combination with ferromagnets, enables new types of devices for manipulating spin and charge currents \cite{Linder2015NatPhys,Shomali,Moen2018PRB,Bobkova,Haltermann}.
At the interface of a superconductor, an incoming electron from the non-superconducting side can be reflected as a backscattered hole while a Cooper pair enters the superconductor\cite{Andreev1964JETP}. This process, known as Andreev reflection, is dominant at voltages below the superconducting gap and converts the dissipative current into a dissipationless one\cite{BTK}.
The reflected hole is created in the conduction band when its excitation energy is smaller than the chemical potential. This hole moves back along the path of the incident electron in real space, which is known as retro-reflection. For an incident electron with energy larger than the chemical potential, the corresponding hole is located in the valence band during the Andreev process. This hole, similar to an optical ray in front of a mirror, is reflected specularly\cite{Beenakker2006PRL,Zhang2008PRL,Schelter2012PRL}.
On the other hand, topological insulators (TIs) are a class of materials with non-trivial properties, first proposed theoretically and then confirmed experimentally\cite{Kane2005PRL,Kane2005PRL-2,Fu2007PRB,Bernevig2006Sci,Fu2007PRL,Teo2008PRB,Hsieh2008Nau,Hsieh2009Nat,Hsieh2009Sci,Zhang2009NP,Kuroda2010PRL,Hazzan2010RMP,Jackiw1976PRD}. Due to the bulk-edge correspondence, they host gapless states on their edges or surfaces\cite{Jackiw1976PRD}. In contrast to graphene\cite{Novoselov2004Sci,Novoselov2005Nature}, these states are fully spin-orbit coupled and protected against local perturbations\cite{Hazzan2010RMP}. A Dirac-like Hamiltonian governs the carriers in the low-energy approximation \cite{Dirac1928PRSL}.
In the presence of spin-orbit interaction, spin is not a well-defined operator for metallic materials\cite{Rashba2003PRB,Soori,Soori2}, whereas chirality is well defined for TIs because of the strong spin-momentum locking\cite{Beiranvand2021JOP}.
Also, magnetization and superconductivity can be induced in these states\cite{Tikhonov2016PRL,Cheklesky2012NP,Cheklesky2014NP,Qi2009Sci}, making them an exciting platform for exploring new phenomena\cite{Yokoyama2010PRB,Yokoyama2009PRL,Linder2010PRL,Linder2010PRB,McIver2011NN,Salehi2011}.
In our previous work, unlike early attempts to explore the physics of STT in TIs, we focused on the low-energy excitation regime to reveal the Dirac physics. We showed that a current-induced transfer torque is imposed on the ferromagnet/normal-metal (F/N) junction of three-dimensional topological insulators (3D TIs), which can be detected via a Hall voltage\cite{Beiranvand2021JOP}.
So far, the STT of superconductor-based devices has been considered without focusing on the role of Andreev reflection.
\begin{figure}
\includegraphics*[scale=0.4]{Dispersion.pdf}
\caption{(color online) The dispersion relations of electron-like (blue cone) and hole-like (red cone) quasi-particles in $k$-space. In the presence of in-plane magnetization, the two cones are separated in $k$-space by $2 \sqrt{m_x^2+m_y^2}$. Here, we set $m_y=\mu=0$. The upper part of the blue cone is empty for electron-like excitations, whereas the upper part of the red cone is filled for hole-like excitations.}
\label{Fig.Dispersion}
\end{figure}
To upgrade our theory and explore the importance of Andreev reflection, we consider a ferromagnet/superconductor (F/S) junction on the surface of a 3D TI, where the proximity effect induces magnetization and superconductivity. We assume that the superconductivity has an s-wave character and that the propagation of Dirac fermions occurs in the ballistic regime. To describe this theoretically, we use the Bogoliubov-de Gennes (BdG) equation\cite{deGennes1999Book},
\begin{equation}
H_{BdG}=\left(
\begin{array}{cc}
H_D(k)-\mu & \Delta \\
\Delta^{*}& \mu-\mathcal{T}H_D(k)\mathcal{T}^{-1}\\
\end{array}\right).
\label{Eq.HBdG}
\end{equation}
Here, $H_D(k)$ is the effective Hamiltonian that governs on the Dirac fermions of the surface of 3D TI in the presence of magnetization. It can be written as,
\begin{equation}
H_D(\textbf{k})=\hbar v_F (\boldsymbol{\sigma}\times \textbf{k}).\hat{e}_z-( \textbf{m}. \boldsymbol{\sigma})
\label{Eq.Hamiltoina},
\end{equation}
where $\boldsymbol{\sigma}$ and $\textbf{k}$ are the Pauli spin vector and the wave vector, respectively. Also, $\textbf{m}=\textbf{m}_0 \Theta(-x)$ is the effective magnetization coupled to the spin degrees of freedom and $v_F$ is the Fermi velocity. To avoid complexity, we set $\hbar v_F=1$ in the remainder of the paper. Moreover, $\mathcal{T}=i\sigma_y K$ is the time-reversal operator, where $K$ denotes complex conjugation. Furthermore, $\Delta=\Theta(x)\Delta_0 \sigma_0$ is the complex superconducting order parameter and $\sigma_0$ is the $2\times 2$ unit matrix in spin space. Here, $\Theta(x)$ is the Heaviside step function, and $\mu$ stands for the chemical potential, which can be tuned by an external gate. In the absence of superconductivity, the excitation energies of electron-like and hole-like quasi-particles according to Eq.(\ref{Eq.HBdG}) are,
\begin{eqnarray}
\epsilon_e=\pm \sqrt{(k_x+m_y)^2+(k_y-m_x)^2+m_z^2}-\mu,
\label{Eq.Dispersion1} \\
\epsilon_h=\pm \sqrt{(k_x-m_y)^2+(k_y+m_x)^2+m_z^2}+\mu.
\label{Eq.Dispersion2}
\end{eqnarray}
Here, the indices $\{x,y,z\}$ label the spatial components of the magnetization and the wave vector. Eqs.(\ref{Eq.Dispersion1}) and (\ref{Eq.Dispersion2}) describe the cones of electron-like and hole-like quasi-particles in $k$-space, respectively. In the absence of magnetization, both cones are located at the center of the Brillouin zone. As shown in Fig.(\ref{Fig.Dispersion}), an in-plane magnetization, $\{m_x \neq 0, m_y\neq 0, m_z=0\}$, shifts the locations of the Dirac cones and separates them from each other by $2\sqrt{m_x^2+m_y^2}$\cite{Yokoyama2009PRL}. Also, an out-of-plane magnetization induces a direct gap for both cones\cite{Linder2010PRB}.
The blue cone shows the electron-like dispersion, whereas the red cone shows the hole-like one. At zero temperature, an electron fills an empty state above the chemical potential. This electron moves in real space along its group velocity, $\langle V_i \rangle_g=\langle \partial \epsilon / \partial k_i\rangle$. During the Andreev reflection, a hole is created in the empty part of the hole cone. Since the parallel component of the wave vector and the energy are conserved during the scattering processes, the separation of the cones leads to angle-dependent Andreev reflection: the probability of Andreev reflection depends on the propagation direction of the incoming particle.
This effect imposes a torque on the junction, which we call the \textit{Andreev transfer torque} (ATT). Only the $z$-component of the ATT is non-zero. Because of the strong spin-orbit coupling of the Dirac fermions of 3D TIs, a transverse current flows parallel to the interface of the F/S junction. This current can be detected via a four-terminal setup as an experimental signature.
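The positions of the Dirac points implied by Eqs.~(\ref{Eq.Dispersion1}) and (\ref{Eq.Dispersion2}) can be spot-checked numerically (in units with $\hbar v_F = 1$; the helper names are ours):

```python
# With in-plane magnetization, the electron-like and hole-like Dirac points
# sit at k = (-m_y, m_x) and k = (m_y, -m_x), a separation of
# 2*sqrt(m_x^2 + m_y^2) in k-space (dispersion relations with m_z = mu = 0).
import math

def eps_e(kx, ky, mx, my, mz=0.0, mu=0.0):
    """Electron-like excitation energy (upper branch)."""
    return math.sqrt((kx + my) ** 2 + (ky - mx) ** 2 + mz ** 2) - mu

def eps_h(kx, ky, mx, my, mz=0.0, mu=0.0):
    """Hole-like excitation energy (upper branch)."""
    return math.sqrt((kx - my) ** 2 + (ky + mx) ** 2 + mz ** 2) + mu

def cone_separation(mx, my):
    """Distance between the two Dirac points in k-space."""
    return math.dist((-my, mx), (my, -mx))
```

For $\textbf{m}=(-m_0,0,0)$, as assumed below, the two cones are displaced along $k_y$ only, which is what makes the Andreev reflection angle-dependent.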
This paper is organized as follows. In Sec.~\ref{Sec.Theory}, we
demonstrate the physics of angle-dependent Andreev reflection due to the separation of the Dirac cones in $k$-space. The transport properties and the
continuity equation of the spin density are also calculated.
In the steady-state approximation, we show
that only the $z$-component of the ATT is non-zero. In Sec.~\ref{Sec.Re1}, we illustrate the creation of an indirect gap in the transport probabilities. Since the Majorana bound states are located at zero energy at the interface of the F/S junction, this gap removes their signatures from the transport properties. Also, we show that the transport probabilities depend on the propagation direction of the incoming particles in real space. In Sec.~\ref{Sec.Re2}, we show that a transverse current flows parallel to the interface. This effect is related to the direction of the in-plane magnetization. In Sec.~\ref{Sec.Re3}, the $z$-component of the ATT is calculated. Finally, the conclusion is given in Sec.~IV.
\begin{figure}
\includegraphics[scale=0.26]{Fig1.pdf}
\caption{(a) Schematic illustration of the four-terminal 3D TI-based ferromagnet/superconductor junction that can be used in experiments to detect the ATT and the transverse conductance. (b) The Andreev reflection processes in the $k$-space configuration in the presence of in-plane magnetization. (c) The corresponding real-space Andreev reflection processes, both for specular and retro reflection.}
\label{Fig.1}
\end{figure}
\section{Theory and formalism}
\label{Sec.Theory}
As illustrated in part (a) of Fig.(\ref{Fig.1}), we consider a 3D TI-based four-terminal F/S junction. The in-plane magnetization is induced by the proximity effect in the $x < 0$ part of the junction.
In the ballistic regime, the energy and the component of the wave vector parallel to the junction are conserved during the scattering processes. As shown in Fig.(\ref{Fig.Dispersion}), the $x$-component of the magnetization moves the Dirac point in the $k_y$-direction, whereas its $y$-component moves the Dirac point in the $k_x$-direction of $k$-space. Since the $x$-component causes the relative motion of the reflected holes in the Andreev processes, we assume the magnetization is applied perpendicular to the junction, $\{m_x=-m_0, m_y=0, m_z=0\}$, where $m_0$ is the magnitude of the magnetization.
A top view of the TI-based F/S junction in $k$-space is shown in Fig.~\ref{Fig.1}(b). The blue circle corresponds to the empty states of the electron-like cone at energy $\epsilon$ above the chemical potential $\mu$; the radius of this electron-like circle is $\mu+\epsilon$. The black dot, labeled ($a_I$), stands for an incoming fermion, and the black arrow centered at the origin of the blue circle indicates the propagation direction of the incoming fermion in real space via its group velocity. Since energy is conserved, the radius of the circle of empty states of the hole-like cone (the red one) is $|\mu-\epsilon|$. Moreover, $k_y$ is conserved during the scattering processes. Under these conditions, an incoming fermion with energy $\epsilon \leq \Delta_0$ encounters two scenarios:
\begin{itemize}
\item In the Andreev zone, where the electron-like and hole-like circles overlap, the incoming fermion can find an empty state on the hole-like circle with probability $R_A$. If the hole state lies in the valence band, the hole is reflected specularly; this is shown by the white dot labeled ($a_s$) in Fig.~\ref{Fig.1}(b). On the other hand, a reflected hole in the conduction band undergoes retro-reflection, illustrated by the white dot labeled ($a_r$) in Fig.~\ref{Fig.1}(b). There is also a probability $R_N$ for the incoming fermion to be reflected as an electron, which is called normal reflection; it is shown by ($a_n$) in Fig.~\ref{Fig.1}(b). Probability conservation ensures $R_N+R_A=1$. The black arrows centered at the origins of the electron-like and hole-like circles indicate the propagation directions of the incoming and back-scattered states in real space. This is shown schematically in the upper panel of Fig.~\ref{Fig.1}(c).
\item In the reflection zone, where the electron-like and hole-like circles do not overlap, there is no state on the hole-like circle for the incoming fermion.
This means that perfect normal reflection occurs, $R_N=1$. This is indicated by the labels ($b_I$) for the incoming fermion and ($b_n$) for the reflected one in Fig.~\ref{Fig.1}(b), and is depicted schematically in real space in the lower panel of Fig.~\ref{Fig.1}(c).
\end{itemize}
Incoming fermions with energy $\epsilon > \Delta_0$ have another possibility: they can be transmitted across the junction into the states above the superconducting gap.
These different scenarios show that the junction is sensitive to the propagation direction of the incoming particles, due to the displacement of the Dirac cones in $k$-space. Incoming fermions located at the upper part of the electron-like circle can be reflected with probability $R_A$ in the Andreev processes, whereas those at the lower part must be reflected normally. This leads to an anisotropic, angle-dependent Andreev reflection. Since an incoming fermion and its reflected hole move in different directions, the net current they carry differs, creating a transverse current that flows parallel to the interface and can be detected via a four-terminal setup. Due to the strong spin-momentum locking on the surface of 3D TIs, any change in the propagation direction of the carriers changes the spin configuration. This effect manifests itself as a torque imposed on the junction.
We define the basis of wave functions of Eq.(\ref{Eq.HBdG}), as
$\psi=(\phi_\uparrow, \phi_\downarrow, \phi^*_\downarrow, -\phi^*_\uparrow)^T$, where $\uparrow (\downarrow)$ stands for the up (down) spin direction.
The corresponding wave functions of the electron-like excitations, Eq.(\ref{Eq.Dispersion1}), are
\begin{equation}
\psi^\pm_e(r)=\left(\begin{array}{c}
i \\
\pm e^{\pm i \theta_e}\\
0\\
0\\
\end{array}\right)e^{\pm i \textbf{k}_e.\textbf{r}},
\label{Eq.Psie}
\end{equation}
where the $\pm$ sign refers to the right or left propagation direction and $\textbf{k}_e=(k^e_x,k_y)$ is the two-dimensional wave vector of the carrier. The eigenvalues of the hole-like excitations are determined by Eq.(\ref{Eq.Dispersion2}) and their wave functions are
\begin{equation}
\psi^\pm_h(r)=\left(\begin{array}{c}
0 \\
0\\
\pm \mathcal{S} e^{\pm i \alpha}\\
i\\
\end{array}\right)e^{\pm i \textbf{k}_h.\textbf{r}}.
\label{Eq.psih}
\end{equation}
Here, $\mathcal{S}=\mathrm{sign}(\epsilon-\mu)$ determines whether the hole-like excitation lies in the valence or the conduction band, and $\alpha$ refers to its propagation direction.
Its wave vector is $\textbf{k}_h=(k^h_x,k_y)$. Since $k_y$ is conserved during the scattering process, we obtain from Eq.(\ref{Eq.Dispersion2}),
\begin{equation}
k^h_{x}=\pm \sqrt{(\epsilon-\mu)^2-(k_y+m_x)^2}+m_y
\label{Eq.Kh}.
\end{equation}
This relation indicates that the reflected holes must satisfy the condition $ |\epsilon-\mu|\geq |k_y+m_x| $ to find a stable state in the hole-like cone.
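The propagating-versus-evanescent condition above can be checked numerically. The following Python sketch (with illustrative parameter values chosen for this example, energies in units of $\Delta_0$) evaluates Eq.(\ref{Eq.Kh}) and reports whether the reflected hole is a propagating state:

```python
import math

def hole_kx(eps, mu, ky, mx, my):
    """Return k_x of the reflected hole from Eq.(Kh), or None if the
    state is evanescent (no propagating solution on the hole-like cone)."""
    arg = (eps - mu)**2 - (ky + mx)**2
    if arg < 0:                      # |eps - mu| < |ky + mx|: evanescent
        return None
    return math.sqrt(arg) + my       # + branch: right-moving hole

# Illustrative values: mu = 0, magnetization m_x = -m0 with m0 = 0.2
eps, mu, m0 = 0.5, 0.0, 0.2
print(hole_kx(eps, mu, ky=0.3, mx=-m0, my=0.0))   # propagating hole
print(hole_kx(eps, mu, ky=0.9, mx=-m0, my=0.0))   # evanescent: None
```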
In the presence of superconductivity, $\Delta_0 \neq 0$, the wave functions of electron-like and hole-like excitations can be derived in a similar way. The group velocity operator is
\begin{equation}
\hat{V}= -\eta_0\sigma_y \hat{i}+\eta_0\sigma_x \hat{j},
\end{equation}
where $\eta_0$ is the $2 \times 2$ unit matrix in the Nambu space. One can derive the propagation direction of the electron-like excitation in the F region as
\begin{equation}
\begin{array}{cc}
\langle \hat{V}_{e,x}\rangle=\cos \theta, & \langle \hat{V}_{e,y}\rangle=\sin\theta.
\end{array}
\end{equation}
The angles $\{\theta, \alpha \}$ are depicted in Fig.~\ref{Fig.1}. The probability current density is $\textbf{J}=\langle \psi | \hat{V} | \psi\rangle $.
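These expectation values follow directly from the spinor structure of Eq.(\ref{Eq.Psie}); the short sketch below evaluates them explicitly for the (right-moving) electron spinor and reproduces $(\cos\theta, \sin\theta)$:

```python
import cmath

def velocity_expectation(theta):
    """<V_x>, <V_y> for the electron spinor (i, e^{i*theta})^T of Eq.(Psie),
    with V_x = -sigma_y and V_y = +sigma_x in the electron block."""
    a, b = 1j, cmath.exp(1j * theta)
    norm = abs(a)**2 + abs(b)**2
    # Pauli expectation values: <sx> = 2 Re(a* b)/N, <sy> = 2 Im(a* b)/N
    sigma_x = 2 * (a.conjugate() * b).real / norm
    sigma_y = 2 * (a.conjugate() * b).imag / norm
    return -sigma_y, sigma_x

theta = 0.4
vx, vy = velocity_expectation(theta)
print(vx, vy)   # equals (cos(theta), sin(theta))
```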
The reflection and transport probabilities of the junction can be obtained when an incoming fermion hits the interface from the F side. The wave function on the left side of the junction $(x \leq 0)$ is,
\begin{equation}
\psi_L(\textbf{r})=\psi^{+}_{e}(\textbf{r})+r_n\psi^{-}_{e}(\textbf{r})+r_A\psi^{-}_h(\textbf{r}),
\label{Eq.LeftWaveFunction}
\end{equation}
where $r_n$ and $r_A$ are the amplitudes of normal and Andreev reflection, respectively. The wave function on the superconducting side of the junction $(x\geq 0)$ is
\begin{equation}
\psi_R(\textbf{r})=t_e \psi_{e}^{S,+}(\textbf{r})+t_h \psi_{h}^{S,+}(\textbf{r})
\label{Eq.RightWaveFunction}.
\end{equation}
Here, $t_e$ and $t_h$ are the transmission amplitudes of the electron-like and hole-like states on the superconducting side, and $\psi^{S,+}_e(\textbf{r})$ and $\psi^{S,+}_h(\textbf{r})$ are the corresponding wave functions of the electron-like and hole-like excitations in the S region. We use the heavily doped approximation, $\mu_S \rightarrow \infty$, in the S region to provide the density of states necessary for the induced superconductivity. The boundary condition that matches the wave functions
of two sides of the junction is necessary to calculate the reflection and
transmission amplitudes\cite{Mondal2010PRL,Yokoyama2009PRL,Yokoyama2010PRB,Linder2010PRB,Linder2010PRL},
\begin{equation}
\psi_L(x=0)=\psi_R(x=0).
\label{Eq.BoundaryCondition}
\end{equation}
The amplitude of Andreev reflection is,
\begin{equation}
r_A=\frac{ie^{i \frac{\theta-\alpha}{2}} \sqrt{\cos\theta \cos\alpha}}{\cos\beta\cos(\frac{\theta-\alpha}{2})+i\sin\beta\cos\frac{\theta+\alpha}{2}}.
\label{Eq.rA}
\end{equation}
Also, the amplitude of normal reflection is,
\begin{equation}
r_n=\frac{e^{i\theta}\left(-\sin\beta\sin(\frac{\theta-\alpha}{2})+i \cos\beta \sin(\frac{\theta+\alpha}{2})\right)}{\cos\beta\cos(\frac{\theta-\alpha}{2})+i\sin\beta\cos\frac{\theta+\alpha}{2}}.
\label{Eq.re}
\end{equation}
The probability of Andreev reflection is $R_A=|r_A|^2$; the other probabilities are obtained in the same way, by multiplying each amplitude by its complex conjugate, e.g. $R_N=|r_n|^2$. These probabilities are used below to calculate the transport properties of the junction.
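As a numerical consistency check, for subgap energies, where the parameter $\beta$ entering Eqs.(\ref{Eq.rA}) and (\ref{Eq.re}) is real, the two amplitudes satisfy probability conservation, $R_A+R_N=1$, for any propagation angles. A short Python sketch (the sampled angle and $\beta$ values are arbitrary illustrations):

```python
import cmath, math, random

def amplitudes(theta, alpha, beta):
    """Andreev (r_A) and normal (r_n) reflection amplitudes, Eqs.(rA), (re)."""
    den = (math.cos(beta) * math.cos((theta - alpha) / 2)
           + 1j * math.sin(beta) * math.cos((theta + alpha) / 2))
    r_A = (1j * cmath.exp(1j * (theta - alpha) / 2)
           * math.sqrt(math.cos(theta) * math.cos(alpha))) / den
    r_n = (cmath.exp(1j * theta)
           * (-math.sin(beta) * math.sin((theta - alpha) / 2)
              + 1j * math.cos(beta) * math.sin((theta + alpha) / 2))) / den
    return r_A, r_n

random.seed(1)
for _ in range(100):
    theta = random.uniform(-1.4, 1.4)   # propagation angles in (-pi/2, pi/2)
    alpha = random.uniform(-1.4, 1.4)
    beta = random.uniform(0.0, math.pi)  # real beta: subgap regime
    r_A, r_n = amplitudes(theta, alpha, beta)
    assert abs(abs(r_A)**2 + abs(r_n)**2 - 1) < 1e-10
print("R_A + R_N = 1 holds for all sampled angles")
```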
We define $\Psi(\textbf{r})$ and $\Psi^\dagger(\textbf{r})$ as the annihilation and creation field operators in the Nambu space for Eq.(\ref{Eq.HBdG}). We have $\Psi(\textbf{r})=\left(\Phi(\textbf{r}), \Xi(\textbf{r})\right)^T$, where $\Phi(\textbf{r})=\left(\phi_\uparrow(\textbf{r}), \phi_\downarrow(\textbf{r}) \right)$ is its electron part. The hole part of the field operator is defined as $\Xi(\textbf{r})=\mathcal{T}\Phi(\textbf{r})$ to satisfy the $s$-wave character of the superconductivity.
Using these operators, the $\alpha$-component of the spin density on the surface of the 3D TI is $S_\alpha=\Psi^\dagger \eta_0\sigma_\alpha \Psi$. We rewrite the BdG Hamiltonian, Eq.(\ref{Eq.HBdG}), in real space as:
\begin{equation}
\mathcal{H}_{BdG}=\int d^2r \Psi^\dagger(\textbf{r})H_{BdG} \Psi(\textbf{r}).
\label{Eq.rHBdG}
\end{equation}
We need the commutators of the field operators with the Hamiltonian of Eq.(\ref{Eq.rHBdG}) to obtain the dynamics of the spin density,
\begin{equation}
\begin{array}{l}
\left[\mathcal{H}_{BDG}, \Phi(\textbf{r})\right]=-H_D(\textbf{k})\Phi(\textbf{r})-\sigma_0\Delta_0 \Xi(\textbf{r}),\\
\\
\left[\mathcal{H}_{BdG}, \Phi^\dagger(\textbf{r})\right]=\Phi^\dagger(\textbf{r})H_D(-\textbf{k})-\Xi^\dagger(\textbf{r})\sigma_0\Delta_0^*.
\end{array}
\label{Eq.CommutatorHPsi}
\end{equation}
The commutators of $\Xi(\textbf{r})$ can be calculated in a similar way. The Heisenberg equation of motion for any time-independent operator $\hat{A}$ reads
\begin{equation}
\partial_t\hat{A}=i \left[H , \hat{A} \right].
\label{Eq.Heisenberg}
\end{equation}
This can be used to calculate the dynamics of $S_z$:
\begin{equation}
\begin{array}{rl}
\partial_t S_z =&\left(\partial_t \Psi^\dagger(\textbf{r})\right)\eta_0 \sigma_z \Psi(\textbf{r})+\Psi^\dagger(\textbf{r})\eta_0 \sigma_z \left(\partial_t \Psi(\textbf{r})\right)\\
& \\
=&\left(\partial_t \Phi^\dagger(\textbf{r})\right)\sigma_z \Phi(\textbf{r})+\Phi^\dagger(\textbf{r})\sigma_z\left(\partial_t \Phi(\textbf{r})\right)\\
& \\
+ & \left(\partial_t \Xi^\dagger(\textbf{r})\right)\sigma_z \Xi(\textbf{r})+\Xi^\dagger(\textbf{r})\sigma_z\left(\partial_t \Xi(\textbf{r})\right).
\end{array}
\label{Eq.SD1}
\end{equation}
Using the commutation relations of Eq.(\ref{Eq.CommutatorHPsi}), we obtain
\begin{equation}
\begin{array}{rl}
\partial_t S_z= & i \left[\mathcal{H}_{BdG}, \Phi^\dagger(\textbf{r})\right] \sigma_z \Phi(\mathbf{r}) + \Phi^\dagger(\mathbf{r}) \sigma_z i \left[\mathcal{H}_{BdG}, \Phi(\textbf{r})\right]\\
& \\
+ & i \left[\mathcal{H}_{BdG}, \Xi^\dagger(\textbf{r})\right] \sigma_z \Xi(\mathbf{r}) + \Xi^\dagger(\mathbf{r}) \sigma_z i \left[\mathcal{H}_{BdG}, \Xi(\textbf{r})\right].
\end{array}
\label{Eq.SD2}
\end{equation}
Straightforward algebra then leads to the dynamics of $S_z$:
\begin{equation}
\begin{array}{ll}
\partial_t S_z+ \boldsymbol{\nabla}. \textbf{J}^s_z = dT_z.
\end{array}
\label{Eq.SD3}
\end{equation}
where $\textbf{J}^s_z=i \hat{e}_z\times\textbf{J} $ is the $z$-component of the spin current and $dT_z=2 \Psi^\dagger\eta_z (\textbf{m}\times \boldsymbol{\sigma}).\hat{e}_z\Psi$ is the density of the ATT.
In the steady-state approximation the spin density is independent of time, $\partial_t S_z=0$. Thus, integrating Eq.(\ref{Eq.SD3}) gives the $z$-component of the ATT,
\begin{equation}
T_z= \int dT_z\, d^2 r= \int \boldsymbol{\nabla}.\textbf{J}^s_z \,d^2 r.
\label{Eq.CTT1}
\end{equation}
One can use the divergence theorem to convert this integral into a boundary one:
\begin{equation}
T_z=\oint ( \hat{\textbf{e}}_z \times \textbf{J}).d\textbf{l}
\label{Eq.CTT2}.
\end{equation}
The line element $d\textbf{l}$ is taken over a closed loop. Since the current is conserved, Eq.(\ref{Eq.CTT2}) demonstrates that the ATT is related to current bending: a reduction of the current in one direction leads to an increase in the other. Hence, the ATT is related to the transverse current that flows parallel to the interface. Due to the absence of $\sigma_z$ in Eq.(\ref{Eq.Hamiltoina}) and the 2D nature of the junction, the other components of the ATT vanish.
\begin{figure}
\includegraphics*[scale=0.31]{Transmission.pdf}
\caption{(color online) The anisotropic angle-dependent Andreev probability vs. excitation energy $\epsilon$, normalized by the magnitude of the superconducting gap, $\Delta_0$. Here, $\theta$ is the propagation angle of the incoming fermion. We set the magnetization perpendicular to the interface, $m_x=-0.2 \Delta_0$. In the range $\epsilon \leq m_0$, the Andreev probability is zero; this creates an indirect gap induced by the magnetization in the junction. Above the indirect gap, the difference between the probabilities of upward- and downward-propagating fermions creates the transverse conductance.}
\label{Fig.Transmission}
\end{figure}
\section{Results and discussion}
\label{Sec.Results}
\subsection{Anisotropic angle-dependent Andreev reflection}
\label{Sec.Re1}
We set the ferromagnetic exchange field in the $x$-direction, $\textbf{m}=(-m_0,0,0)$, to obtain a negative transverse current in the Andreev-dominant regime.
Also, we set $\mu=0$ and normalize all energy values by $\Delta_0$.
The incoming fermions belonging to the half of the electron-like circle with $k_x \geq 0$ hit the interface from the F side of the junction. The hole-like cone is located at $(0,m_0)$ in $k$-space. Incoming fermions with $\epsilon \leq \Delta_0$ cannot be transmitted into the states above the superconducting gap. Since the radius of each energy circle is $\epsilon$, the minimum energy at which an incoming fermion can find a state on the hole-like circle is $\epsilon=m_0$. This means an indirect gap is induced in the junction in the range $0 \leq \epsilon \leq m_0$. As illustrated in Fig.(\ref{Fig.Transmission}), all incoming fermions are reflected, and $R_A=0$ creates a no-current area in the Andreev-reflection probability map. The electron-hole duality of the BdG wave functions makes them suitable hosts for Majorana bound states, fermionic states that are simultaneously their own anti-particles\cite{Fu2010PRL,Yokoyama2009PRL,Linder2010PRL}.
The TI-based superconducting region hosts the Majorana bound states at $\epsilon=0$. Perfect Andreev reflection, $R_A=1$, and the robust conductance peak at $G(\epsilon=0)=2G_0$ are their experimental signatures\cite{Salehi2017SciRep}. As shown in Fig.(\ref{Fig.Transmission}) and Fig.(\ref{Fig.GL}), the in-plane magnetization pushes the Majorana bound states into the indirect gap, and their signatures disappear. Since the magnetization in our device is induced by the proximity effect, this mechanism can act as an on/off switch for Majorana bound states in future technology.
The two energy circles overlap for energies above the indirect-gap threshold, $\epsilon \geq m_0$. The incoming fermion can then find a state on the hole-like circle during the Andreev process, and the propagation direction of the reflected hole is determined by its location on that circle. The probability of Andreev reflection thus becomes sensitive to the propagation angle of the incoming fermion, $\theta$. As shown in Fig.(\ref{Fig.Transmission}), incoming fermions with the same energy have different probabilities depending on their propagation angle. These probabilities grow with energy, and perpendicularly incident fermions have a greater chance than others. This phenomenon leads to an imbalance between upward- and downward-moving fermions and creates a transverse current that flows parallel to the interface. This current can be detected via two extra leads located at the sides of the junction. The electron-like and hole-like quasi-particles are the two types of carriers in our device, and the sign of the transverse current determines which carrier is dominant. We take $m_x=-m_0 \leq 0$ to obtain a negative transverse conductance in the hole-dominant regime and a positive one in the electron-dominant regime.
For incoming fermions with $\epsilon \geq \Delta_0$, the states above the superconducting gap are accessible, so the Andreev probabilities are strongly reduced. As a result, the transverse conductance approaches zero for $\epsilon \gg \Delta_0$.
\subsection{Transverse Conductances}
\label{Sec.Re2}
\begin{figure}
\includegraphics*[scale=0.35]{Longitudinal.pdf}
\caption{(color online) The longitudinal conductance vs. excitation energy. It reaches its maximum at $\epsilon \simeq \Delta_0$, where Andreev reflection is dominant. $G_L(\epsilon)$ grows exponentially in the energy range above the indirect gap and approaches its ballistic value in the high-energy regime, where the effect of the superconducting gap can be ignored.}
\label{Fig.GL}
\end{figure}
Using the transport probabilities obtained from Eq.(\ref{Eq.BoundaryCondition}), one can calculate the transverse and longitudinal conductances \cite{BTK} in the $y$- and $x$-directions, respectively:
\begin{equation}
G_T(\epsilon)=G_0 \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \left(R_N(\theta)\sin\theta-\mathcal{S} R_A(\theta)\sin\alpha\right) d\theta,
\label{Eq.HallConductance}
\end{equation}
\begin{equation}
G_L(\epsilon)=G_0 \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \left((1-R_N(\theta))\cos\theta+R_A(\theta) \cos\theta\right)d\theta,
\label{Eq.LongitudinalConductance}
\end{equation}
where $G_0=(e^2/h)N(\epsilon)$ is the ballistic conductance and $N(\epsilon)=(|\epsilon+\mu|W)/\hbar v_F$ is the density of states. Also, $W$ is the width of the junction.
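To illustrate how the angular asymmetry in Eq.(\ref{Eq.HallConductance}) generates a transverse response, the sketch below evaluates the angular integral numerically for a purely hypothetical, asymmetric Andreev profile $R_A(\theta)$ (not the actual solution of the scattering problem), taking $\mathcal{S}=1$, retro-reflection $\alpha=\theta$, and $R_N=1-R_A$ (valid below the gap, where transmission vanishes); only the asymmetric part of $R_A$ survives the integration:

```python
import math

def transverse_conductance(R_A, alpha_of_theta, S=1, n=20001):
    """Riemann-sum evaluation of G_T/G_0 from Eq.(HallConductance)."""
    total = 0.0
    d = math.pi / (n - 1)
    for i in range(n):
        theta = -math.pi / 2 + i * d
        rA = R_A(theta)
        rN = 1.0 - rA                     # probability conservation below the gap
        alpha = alpha_of_theta(theta)
        total += (rN * math.sin(theta) - S * rA * math.sin(alpha)) * d
    return total

# Toy asymmetric profile: upward-moving fermions (theta > 0) are Andreev
# reflected more often than downward-moving ones.
toy_RA = lambda th: 0.5 * (1 + 0.3 * math.sin(th))
G_T = transverse_conductance(toy_RA, alpha_of_theta=lambda th: th)
print(G_T)   # negative: Andreev-dominant (hole-like) regime
```

A symmetric $R_A(\theta)$ would give $G_T=0$; it is the $\sin\theta$-odd part of the reflection probabilities that produces the transverse flow.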
In contrast to graphene\cite{Beenakker2006PRL,Sengupta2006PRL,Linder2007PRL} and other relativistic materials such as TIs \cite{Yokoyama2010PRB,Linder2010PRB} or Weyl semimetals, where the zero-bias conductance peak and the Majorana bound states are responsible for the subgap conductance, here the no-current domain originates from the indirect gap. As shown in Fig.(\ref{Fig.GL}), the longitudinal conductance rises exponentially above $\epsilon=m_0$ and tends to its maximum value at the edge of the superconducting gap, where the Andreev probability is dominant. In the high-energy regime, the effects of the Dirac-cone displacement and of the superconducting gap can be ignored, and $G_L$ approaches its ballistic value. As shown in Fig.~\ref{Fig.1}(a), the longitudinal current passes through the battery and can be measured by an ammeter.
The transverse conductance is shown in Fig.(\ref{Fig.GH}). Based on Eq.(\ref{Eq.HallConductance}), propagation directions with perfect normal reflection, $\{R_N=1, R_A=0\}$, do not contribute to $G_T$. Because of the $s$-wave character of the superconductivity, the states above the superconducting gap on the S side of the junction also have no effect on $G_T$. Each integral can be split into two parts: the downward section with $-\pi/2 \leq \theta \leq 0$ and the upward one with $ 0 \leq \theta \leq \pi/2$. Normal reflection contributes to $G_T$ positively from the upward part and negatively from the downward part. This bookkeeping is more complicated for the reflected hole, because it has two reflection types, retro and specular; we use the factor $\mathcal{S}$ to take this into account. In the absence of a chemical potential, $\mu=0$, we have $\mathcal{S}=1$. These parts compete to determine the sign of $G_T$. $G_T$ departs from zero at $\epsilon=m_0$, where the incoming fermion can first find a state on the hole-like cone during the Andreev process. Since Andreev reflection is dominant in the range $m_0 \leq \epsilon \leq \Delta_0$, the transverse conductance is negative there. $G_T$ reaches its minimum around $\epsilon=2 m_0$, where normal reflection starts to overcome the Andreev one. The damping of Andreev reflection at the edge of the superconducting gap leads to a positive $G_T$; this feature can be used in a real experiment to determine the value of the superconducting gap. Since the physics of $G_T$ originates from the relative location of the electron-like and hole-like cones, it tends to zero in the high-energy regime, $\epsilon \gg \Delta_0$, where this effect can be ignored. Moreover, the sign of $G_T$ depends on the location of the Dirac cones. The ferromagnetism is induced on the surface of the 3D TI by the proximity effect, e.g. by a ferromagnetic lead such as $MnO_2$ or $EuO$.
Rotating the ferromagnetic lead therefore rotates the magnetization direction and changes the sign of $G_T$, which can be detected in a Hall-voltage experiment.
\subsection{Andreev Transfer Torque}
\label{Sec.Re3}
\begin{figure}
\includegraphics*[scale=0.43]{Hall.pdf}
\caption{(color online) The transverse conductance vs. excitation energy for different values of the magnetization direction. This type of transverse conductance originates from the scattering at the interface.}
\label{Fig.GH}
\end{figure}
To obtain the $z$-component of the ATT, we consider a square of side length $A$ on the junction. According to Eq.(\ref{Eq.CTT2}), an incoming fermion enters the square with propagation direction $\theta$. During the Andreev process it is reflected as a hole quasi-particle and leaves the square, with probability $R_A$, in a different propagation direction $\alpha$. We want to obtain the net current which passes through the mutually parallel sides of the square. From Eq.(\ref{Eq.CTT2}), we have
\begin{equation}
T_z=\oint (J_y dx-J_x dy)=A ( \delta G_y-\delta G_x)
\label{Eq.CTT3}
\end{equation}
where $\delta G_y \sim G_T$ is the difference between the conductances in the presence and in the absence of the in-plane magnetization. Probability conservation dictates that the current density entering the square equals the outgoing one, so $\delta G_x=-\delta G_y$ and the final result is
\begin{equation}
T_z \sim 2 G_T A.
\label{Eq.CTTFinal}
\end{equation}
Here, $T_z$ is directly related to the transverse conductance. The absence of a $\sigma_z$ term in the spin-orbit coupling of Eq.(\ref{Eq.Hamiltoina}) and the two-dimensional nature of the TI-based junction force the other components of the ATT, $\{T_x,T_y\}$, to vanish. The ATT imposed on the junction is positive or negative according to the sign of the transverse conductance, and it reaches its maximum in the energy range where the transverse conductance is maximal. This behaviour originates in the bending of the propagation direction. These effects are important and experimentally detectable in the energy range $m_0 \leq \epsilon \leq \Delta_0$. Moreover, the ATT approaches zero in the high-energy limit, where the displacement of the Dirac cones can be ignored.
\section{Conclusion}
\label{Sec.Concl}
Let us discuss the quantitative predictions of our model. The typical value of the ferromagnetic exchange field induced by the proximity effect on the surface of a 3D TI is $5 \sim 50\ meV$. This is achieved by depositing a ferromagnetic electrode such as $EuO$ or $MnO_2$ on the surface of $Bi_2 Se_3$ \cite{Haugen2008PRB}. The induced superconducting gap is of the same order as the ferromagnetic field. The high-quality topological insulators that can be fabricated at present have a sufficiently large coherence length, up to $\sim 370\ nm$. This means that boundary details and localization effects are negligible, and the ballistic limit is a good approximation for exploring the junction \cite{Yokoyama2010PRB}. We normalized our results with respect to the superconducting gap, so the approximate value of the indirect gap would be $2.5 \sim 25\ meV$. Also, the typical width of the junction in a real device is of the order of the coherence length. The estimated transverse conductance is then $\sim 10^{-4}$ siemens, which is measurable with available technology. Finally, the magnitude of the ATT is $\sim 10^{-4}$ siemens $(\mu m)^2$.
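The conductance scale quoted above can be reproduced with a back-of-the-envelope calculation. The sketch below assumes an excitation scale of $10\ meV$, a junction width equal to the coherence length $W \simeq 370\ nm$, and a surface-state Fermi velocity $v_F \simeq 5\times10^{5}\ m/s$ (a representative value for $Bi_2 Se_3$, not quoted in the text):

```python
# Rough estimate of the ballistic conductance scale G_0 = (e^2/h) N(eps),
# with N(eps) = eps * W / (hbar * v_F) for mu = 0.
import math

e = 1.602e-19          # C
h = 6.626e-34          # J s
hbar = h / (2 * math.pi)
eV = 1.602e-19         # J

eps = 10e-3 * eV       # assumed excitation scale ~ Delta_0 ~ 10 meV
W = 370e-9             # m, junction width ~ coherence length
v_F = 5e5              # m/s, assumed Fermi velocity of the TI surface state

N = eps * W / (hbar * v_F)      # number of contributing transverse modes
G0 = (e**2 / h) * N             # siemens
print(f"N ~ {N:.1f} modes, G0 ~ {G0:.1e} S")
```

The result is of order $10^{-4}$ S, consistent with the estimate in the text.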
In this paper, we considered the effect of angle-dependent Andreev reflection in an F/S junction on the surface of a 3D TI. We showed that whenever the in-plane magnetization has a component perpendicular to the interface, the electron-like and hole-like cones separate in $k$-space. This induces an indirect gap in the junction and creates a no-current area in the longitudinal conductance. Moreover, the Andreev reflection is angle-dependent, which leads to a transverse conductance. Further, we showed that the sign of the transverse conductance is related to the magnetization direction, and we designed an experimental setup to reveal it. Last but not least, we illustrated that the transverse current imposes a torque on the junction that is important in the low-energy limit near the Dirac point. The $z$-component of this torque, the ATT, is nonzero, and its sign and magnitude are related to those of the transverse conductance. Since the STT is very important for data-storage technology, we believe its extension to the ATT can be very useful in designing and fabricating new devices for further applications.
\section{Introduction}
\label{sec:intro}
Recently, Bern, Dixon and Smirnov have proposed an ansatz~\cite{Bern:2005iz}
for the colour-stripped $l$-loop $n$-gluon scattering amplitude in the maximally supersymmetric
$\begin{cal}N\end{cal}=4$ Yang-Mills theory (MSYM), with the maximally-helicity violating (MHV) configuration
for arbitrary $l$ and $n$.
They checked that the ansatz agrees analytically with the evaluation
of the three-loop four-gluon amplitude.
The ansatz has been proven to be correct also for the two-loop five-gluon amplitude,
which has been computed numerically~\cite{Bern:2006vw,Cachazo:2008vp}.
The ansatz implies a tower of iteration formul\ae, which allow one to determine the $n$-gluon
amplitude at a given number of loops in terms of amplitudes with fewer loops.
For example, the iteration formula for the colour-stripped two-loop MHV amplitude $m_n^{(2)}(\epsilon)$ is
\begin{equation}
m_n^{(2)}(\epsilon) = \frac{1}{2} \left[m_n^{(1)}(\epsilon)\right]^2
+ f^{(2)}(\epsilon)\, m_n^{(1)}(2\epsilon) + Const^{(2)} + {\cal O} (\epsilon)\, ,\label{eq:ite2bds}
\end{equation}
thus the two-loop amplitude is determined in terms of a constant, $Const^{(2)}$,
a known function, $f^{(2)}(\epsilon)$, of the
dimensional-regularisation parameter $\epsilon$ (which is related to the
cusp~\cite{Korchemsky:1987wg,Beisert:2006ez} and
collinear~\cite{Magnea:1990zb,Sterman:2002qn} anomalous dimensions)
and the one-loop MHV amplitude $m_n^{(1)}(\epsilon)$ evaluated to ${\cal O} (\epsilon^2)$.
The BDS ansatz was first predicted to fail by Alday and Maldacena~\cite{Alday:2007he,Alday:2007hr},
for amplitudes with a large number of gluons in the strong-coupling limit.
They claimed that the finite pieces of the two-loop amplitudes with six or more gluons
would be incorrectly determined. One can characterise this statement by the quantity $R_n^{(2)}$
\begin{equation}
R_n^{(2)} = m_n^{(2)}(\epsilon) - \frac{1}{2} \left[m_n^{(1)}(\epsilon)\right]^2
- f^{(2)}(\epsilon)\, m_n^{(1)}(2\epsilon) - Const^{(2)}\, ,\label{eq:discr}
\end{equation}
where $R_n^{(2)}$ may be a function of the kinematical parameters of the $n$-gluon amplitude,
but a constant with respect to $\epsilon$. Then the claim was that $R_n^{(2)}\ne 0$ for $n\ge 6$.
This prediction was backed up by Drummond et al.~\cite{Drummond:2007bm}, who
considered the finite contribution to the
hexagonal light-like Wilson loop at two loops. The conclusion was that either the BDS ansatz
was wrong, or the equivalence between Wilson loops and scattering amplitudes did not work at
two loops. The question was settled in Ref.~\cite{Bern:2008ap,Cachazo:2008hp}
by the numerical calculation of $m_6^{(1)}(\epsilon)$ to ${\cal O} (\epsilon^2)$ and
of $m_6^{(2)}(\epsilon)$, which allowed for the
numerical evaluation of $R_6^{(2)}$ and showed that it was different from zero.
This result also confirmed the equivalence between the scattering amplitude and the finite part of
the light-like hexagon Wilson loop~\cite{Drummond:2008aq}.
The question remains: how can one determine the function $R_n^{(2)}$? A direct analytical evaluation in general
kinematics is currently beyond our capability: it would require the computation of the one-loop hexagon to
${\cal O} (\epsilon^2)$, as well as the two-loop hexagon through to ${\cal O} (\epsilon^0)$. Another approach is to try to
constrain $R_n^{(2)}$ using some simplified kinematics, where one knows that the amplitude has certain
factorisation properties. Examples include the limit where one or more of the gluons
are soft or where two or more of the
gluons are collinear. In this paper, we consider another limit where the kinematics is simplified - the high
energy limit (HEL). For a multiparticle process there are several high energy limits that one can take,
corresponding to two or more of the gluon rapidities being strongly ordered, together with constraints on the
transverse momenta of the gluons. By relaxing the restriction on the gluon rapidities and transverse momenta, one
can systematically return to the general kinematics.
The simplest kinematics corresponds
to the multi-Regge kinematics~\cite{Kuraev:1976ge}, where all of the produced gluons are
strongly ordered in rapidity and have comparable transverse momenta.
We shall start then with the simplest possible kinematics and we will show that, in the Euclidean region,
$R_n^{(2)}$ does not contribute for any $n$.
Then we shall consider various quasi-multi-Regge kinematics, which gradually approach the more general kinematics, with a
view to determining where the function $R_n^{(2)}$ might not vanish and could therefore be constrained by the HEL.
Our paper is organised as follows. In Section~\ref{sec:mrkforall}, we review the multi-Regge kinematics and
discuss the Regge factorisation that tree-level (colour stripped) amplitudes obey. In Section~\ref{sec:npthel},
we extend the Regge factorisation beyond leading order and provide a conjecture for the factorised form for the
colour stripped $n$-gluon amplitude to all orders, both in the Euclidean region, where all invariants are
space-like, and in the physical region, where the $s$-type invariants are time-like and the $t$-type invariants are
space-like\footnote{For the one-loop six-gluon amplitude,
in the Minkowski region where the centre-of-mass energy squared $s$ and the energy squared $s_2$
of the two gluons emitted along the ladder are time-like while all other invariants stay space-like, the factorised form
conjectured in Section~\ref{sec:npthel} is not valid~\cite{Bartels:2008ce, Schabinger:2009bb}.}.
The high-energy limits of the four-, five- and six-gluon MHV amplitudes are
developed in section~\ref{sec:456pthel}, including explicit expressions for the Regge trajectory (up to three
loops), the coefficient functions (up to three loops) and the Lipatov vertex in MSYM. In
section~\ref{sec:bdsmrk} we consider the BDS ansatz in the multi-Regge kinematics. By considering the four- and
five-point amplitudes, we show that both the coefficient function and the Lipatov vertex satisfy an iterative
structure very similar to the BDS ansatz
itself\footnote{It is well known that the $l$-loop Regge trajectory is
directly related to $f^{(l)}(\epsilon)$.}. This iterative structure ensures that the six-point amplitude is completely
determined by known functions, and, in the multi-Regge kinematics is guaranteed to satisfy the BDS ansatz in the Euclidean and in the
physical regions. In other words, in those regions the remainder function $R_6^{(2)}$ vanishes in the multi-Regge kinematics.
We derive exponentiated forms for the
coefficient functions and Lipatov vertex in section~\ref{sec:proof} and prove that we recover
the BDS ansatz in the multi-Regge kinematics for any number of loops.
We consider other quasi-multi-Regge kinematics in
section~\ref{sec:quasi}. In particular, we consider the slightly more general kinematics where all but two of the
gluons (the two gluons with either the largest or smallest rapidities) are strongly ordered in rapidity. This
quasi-multi-Regge kinematics first occurs in the five gluon amplitude and introduces a new coefficient function
with two final state gluons which also satisfies an iterative structure similar to the BDS ansatz. Once again,
$R_6^{(2)}$ does not contribute in this limit and we note that the conformal kinematic ratios also take a
particularly simple form in this quasi-multi-Regge kinematics. Finally, in \sec{sec:outlook} we consider
more general kinematics, with
three gluons having similar rapidities, or where the two central gluons have similar rapidities. These
configurations first appear with four gluons in the final state. The new vertices and coefficient functions
associated with these kinematics cannot be determined using the five-gluon amplitude, and require explicit
knowledge of the six-gluon amplitude. We therefore cannot say anything about the sensitivity of the HEL to
$R_6^{(2)}$, but note that in each of these cases, the three
conformal kinematic ratios relevant for six-gluon scattering do not simplify, and take general
values. We enclose appendices detailing the multi-Regge and quasi-multi-Regge kinematics.
\section{Multi-Regge kinematics}
\label{sec:mrkforall}
Because we make repeated use of the multi-Regge kinematics in this work,
we give a short pedagogical introduction to it here. We consider an
$n$-gluon amplitude, $g_1\,g_2\to g_3\,g_4\,\cdots g_n$, with
all the momenta taken as outgoing,
and label the gluons cyclically clockwise. In the
multi-Regge kinematics~\cite{Kuraev:1976ge}, the produced gluons are
strongly ordered in rapidity and have comparable transverse momenta,
\begin{equation}
y_3 \gg y_4\gg \cdots\gg y_n;\qquad |p_{3\perp}| \simeq |p_{4\perp}| \simeq \cdots \simeq|p_{n\perp}|\,
.\label{mrknpt}
\end{equation}
Accordingly, we can write the Mandelstam invariants in the approximate
form\footnote{In Appendices~\ref{sec:mpk} and \ref{sec:mrk}, we write the invariants (\ref{invb})
and the spinor products (\ref{ypro}), in terms of light-cone coordinates. Although the light-cone
formulation is more convenient for performing calculations, we prefer to give here
those quantities in terms of rapidities because it is physically more intuitive.
The translation between light-cone coordinates and rapidities is straightforward
(please see Appendix~\ref{sec:mpk}).}
\begin{eqnarray}
s_{12} &\simeq& |p_{3\perp}| |p_{n\perp}| e^{y_3-y_n}\, ,\nonumber \\
s_{2i} &\simeq& - |p_{3\perp}| |p_{i\perp}| e^{y_3-y_i}\, ,\label{invb}\\
s_{1i} &\simeq& - |p_{i\perp}| |p_{n\perp}| e^{y_i-y_n}\, ,\nonumber\\
s_{ij} &\simeq& |p_{i\perp}| |p_{j\perp}| e^{|y_i-y_j|}\, ,\nonumber
\end{eqnarray}
with $i,j = 3,\ldots ,n$. We label the momenta transferred in the $t$-channel as
\begin{eqnarray}
q_1 &=& p_1+p_n \nonumber\\
q_2 &=& q_1+p_{n-1} = q_3 - p_{n-2} \nonumber\\
&\vdots& \label{eq:mome}\\
q_{n-4} &=& q_{n-5} + p_5 = q_{n-3} - p_4 \nonumber\\
q_{n-3} &=& -p_2-p_3\, ,\nonumber
\end{eqnarray}
with virtualities $t_i = q_i^2$.
Then it is easy to see that in the multi-Regge kinematics the transverse
components of the momenta $q_i$ dominate over the longitudinal components,
$q_i^2 \simeq - |q_{i\perp}|^2$. In addition, $t_1=s_{1n}$ and
$t_{n-3}=s_{23}$, and we label $s = s_{12}$, $s_1=s_{n-1,n}$,
$s_2=s_{n-2,n-1}$, \ldots, $s_{n-3}=s_{34}$ for $n > 4$.
Thus, the multi-Regge kinematics (\ref{mrknpt}) become
\begin{equation}
s \gg s_{1},\ s_{2}, \ldots, s_{n-3} \gg -t_1,\ -t_2,\ldots, -t_{n-3}\, ,\label{eq:mrknpt2}
\end{equation}
with the special case $s \gg -t$ for $n = 4$.
Labelling the transverse momenta of the gluons emitted along the ladder as
$\kappa_1=|p_{n-1\perp}|^2$, $\kappa_2=|p_{n-2\perp}|^2$,
\ldots, $\kappa_{n-4}=|p_{4\perp}|^2$, and using \eqn{invb}, we can write
\begin{equation}
\kappa_1 = \frac{s_{1}\, s_{2}}{s_{n-2,n-1,n}} \qquad
\kappa_2 = \frac{s_{2}\, s_{3}}{s_{n-3,n-2,n-1}} \qquad \cdots \qquad
\kappa_{n-4} = \frac{s_{n-4}\, s_{n-3}}{s_{345}}
,\label{massnpt}
\end{equation}
for $n > 4$, which are known as the mass-shell conditions (\ref{eq:masshell})
for the gluons along the ladder. \eqn{invb} also implies a relation amongst the mass-shell
conditions,
\begin{equation}
s\, \kappa_1 \cdots \kappa_{n-4} = s_1\, s_2 \cdots s_{n-3}\, .\label{eq:condit}
\end{equation}
In the multi-Regge kinematics the spinor products are given by \eqn{mrpro}
\begin{eqnarray}
\langle 2 1\rangle &\simeq& -\sqrt{|p_{3\perp}|
|p_{n\perp}|} \exp\left(\frac{y_3-y_n}{2}\right)\, ,\nonumber\\
\langle 2 i\rangle &\simeq& -i \sqrt{\frac{|p_{3\perp}|}{ |p_{i\perp}|}}\,
p_{i\perp} \exp\left(\frac{y_3-y_i}{ 2}\right)\, ,\label{ypro}\\
\langle i 1\rangle &\simeq& i \sqrt{|p_{i\perp}||p_{n\perp}|}\, \exp\left(\frac{y_i-y_n}{ 2}\right)\, ,\nonumber\\
\langle i j\rangle &\simeq& -\sqrt{\frac{|p_{i\perp}|}{ |p_{j\perp}|}}\,
p_{j\perp} \exp\left(\frac{y_i-y_j}{2}\right)\, ,\qquad {\rm for}\ y_i>y_j\, .\nonumber
\end{eqnarray}
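The spinor products (\ref{ypro}) are consistent with the invariants (\ref{invb}) through $|\langle i j\rangle|^2 = |s_{ij}|$. A small numerical illustration (not part of the paper), with arbitrary strongly ordered rapidities and complex $p_\perp = p_x + i p_y$, for $n=6$:

```python
import cmath, math

# Check that the moduli of the MRK spinor products (ypro) reproduce the
# invariants (invb) via |<ij>|^2 = |s_ij|, for a few representative products.
y  = {3: 7.0, 4: 2.0, 5: -3.0, 6: -7.0}
pp = {3: 1.1 + 0.4j, 4: -0.7 + 0.9j, 5: 0.3 - 1.2j, 6: -0.8 - 0.5j}
mod = {i: abs(pp[i]) for i in pp}

ang21 = -math.sqrt(mod[3] * mod[6]) * math.exp((y[3] - y[6]) / 2)                # <21>
ang24 = -1j * cmath.sqrt(mod[3] / mod[4]) * pp[4] * math.exp((y[3] - y[4]) / 2)  # <24>
ang41 = 1j * math.sqrt(mod[4] * mod[6]) * math.exp((y[4] - y[6]) / 2)            # <41>
ang45 = -cmath.sqrt(mod[4] / mod[5]) * pp[5] * math.exp((y[4] - y[5]) / 2)       # <45>, y4 > y5

assert math.isclose(abs(ang21)**2, mod[3] * mod[6] * math.exp(y[3] - y[6]))  # |s_12|
assert math.isclose(abs(ang24)**2, mod[3] * mod[4] * math.exp(y[3] - y[4]))  # |s_24|
assert math.isclose(abs(ang41)**2, mod[4] * mod[6] * math.exp(y[4] - y[6]))  # |s_14|
assert math.isclose(abs(ang45)**2, mod[4] * mod[5] * math.exp(y[4] - y[5]))  # |s_45|
```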
\subsection{MHV amplitudes in multi-Regge kinematics}
\label{sec:mhvmrk}
The colour decomposition of the tree-level $n$-gluon amplitude
is~\cite{Mangano:1990by}
\begin{equation}
{\cal M}_n^{(0)} = 2^{n/2}\, g^{n-2}\, \sum_{S_n/Z_n} {\rm tr}(T^{d_1}
\cdots
T^{d_n}) \, m_n^{(0)}(1,\ldots, n)\, ,\label{one}
\end{equation}
where $d_i$ is the colour of a gluon of momentum $p_i$ and helicity $\nu_i$.
The $T$'s are the colour
matrices\footnote{We use the normalization
${\rm tr}(T^c T^d) = \delta^{cd}/2$,
although it is immaterial in what follows.} in the
fundamental representation of SU($N$) and the sum is over the noncyclic
permutations $S_n/Z_n$ of the set $[1, \ldots ,n]$. We consider the MHV
configurations $(-,-,+, \ldots ,+)$ for which the tree-level gauge-invariant
colour-stripped amplitudes
assume the form
\begin{equation}
m_n^{(0)}(1,2, \ldots ,n) = \frac{\langle p_i p_j\rangle^4}
{\langle p_1 p_2\rangle \cdots\langle p_{n-1} p_n\rangle
\langle p_n p_1\rangle}\, ,\label{two}
\end{equation}
where $i$ and $j$ are the gluons of negative helicity. The colour structure
of \eqn{one} in multi-Regge kinematics is
known~\cite{DelDuca:1993pp,DelDuca:1995zy,DelDuca:1999rs}
and will not be considered further.
Here we shall concentrate on the behaviour of the colour-stripped
amplitudes (\ref{two}), which in multi-Regge kinematics
have the factorised form~\cite{DelDuca:1995zy}
\begin{eqnarray}
m_n^{(0)}(1,2, \ldots ,n) &=&
s \left[g\, C^{(0)}(p_2,p_3) \right]\,
\frac{1}{t_{n-3}}\, \left[g\,V^{(0)}(q_{n-3},q_{n-4},\kappa_{n-4})\right]
\label{treenpt}\\ & &\quad \cdots \times\ \frac{1}{t_2}\,
\left[g\,V^{(0)}(q_2,q_1,\kappa_1)\right]\, \frac{1}{ t_1}\,
\left[g\, C^{(0)}(p_1,p_n) \right]\, .\nonumber
\end{eqnarray}
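The factorisation (\ref{treenpt}) can be verified numerically from the MRK spinor products (\ref{ypro}). The following Python sketch (an illustrative cross-check, not part of the paper, with $g=1$) evaluates the tree-level five-gluon MHV amplitude (\ref{two}) for helicities $(1^-,2^-,3^+,4^+,5^+)$ and compares it with the factorised form built from (\ref{centrc}) and (\ref{lipeq}); the ratio of the two sides is a kinematics-independent constant fixed by the normalisation conventions:

```python
import cmath, math

def ratio(y3, y4, y5, p3, p4):
    """Ratio of the MHV amplitude (two) to the factorised form (treenpt), n = 5."""
    p5 = -(p3 + p4)                    # transverse momentum conservation
    mod = {3: abs(p3), 4: abs(p4), 5: abs(p5)}
    # spinor products from (ypro), with n = 5
    a12 = math.sqrt(mod[3] * mod[5]) * math.exp((y3 - y5) / 2)   # <12> = -<21>
    a23 = -1j * p3                                               # <23>
    a34 = -cmath.sqrt(mod[3] / mod[4]) * p4 * math.exp((y3 - y4) / 2)
    a45 = -cmath.sqrt(mod[4] / mod[5]) * p5 * math.exp((y4 - y5) / 2)
    a51 = 1j * mod[5]                                            # <51> from <i1>
    m5_tree = a12 ** 3 / (a23 * a34 * a45 * a51)                 # eq. (two)
    # factorised form (treenpt), with g = 1 and C(p2,p3) = 1
    s = mod[3] * mod[5] * math.exp(y3 - y5)
    q2, q1 = -p3, p5                   # t-channel transverse momenta; p1, p2 longitudinal
    t2, t1 = -abs(q2) ** 2, -abs(q1) ** 2
    lipatov = math.sqrt(2) * q2.conjugate() * q1 / p4            # eq. (lipeq)
    c15 = p5.conjugate() / p5                                    # eq. (centrc)
    return m5_tree / (s * (1 / t2) * lipatov * (1 / t1) * c15)

r1 = ratio(8.0, 0.0, -8.0, 1.0 + 0.3j, -0.6 + 0.8j)
r2 = ratio(6.0, 1.0, -7.0, -0.9 + 0.2j, 0.5 - 0.7j)
assert cmath.isclose(r1, r2, rel_tol=1e-12)        # same constant at any MRK point
assert math.isclose(abs(r1), 1 / math.sqrt(2))     # overall normalisation convention
```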
\begin{figure}[!t]
\begin{center}
\begin{fmffile}{mr}
\begin{fmfgraph*}(200,250)
\fmfstraight
\fmfleft{p1,pdn1,pdn2,pd5,pd4,p2}
\fmfright{x1,x2,x3,x5,x6,x7}
\fmf{phantom}{p1,u1,v1n,u2,pn,u3,x1}
\fmf{phantom}{p2,o1,v23,o2,p3,o3,x7}
\fmffreeze
\fmf{phantom}{pn,pn1,pn2,p5,p4,p3}
\fmffreeze
\fmf{gluon,label=$p_2$,label.side=left,l.d=0.03w}{p2,v23}
\fmf{gluon,label=$p_3$,label.side=left,l.d=0.03w}{v23,p3}
\fmf{phantom}{v23,v4,v5,vn2,vn1,v1n}
\fmffreeze
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v23}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v4}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v5}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn2}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn1}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v1n}
\fmf{zigzag,label=$q_{n-3}$,label.side=right,l.d=0.055w}{v23,v4}
\fmf{zigzag,label=$q_{n-4}$,label.side=right,l.d=0.055w}{v4,v5}
\fmf{zigzag,label=$q_{2}$,label.side=right,l.d=0.055w}{vn2,vn1}
\fmf{zigzag,label=$q_{1}$,label.side=right,l.d=0.055w}{vn1,v1n}
\fmf{gluon,label=$p_1$,label.side=left,l.d=0.03w}{p1,v1n}
\fmf{gluon,label=$p_{n}$,label.side=left,l.d=0.03w}{v1n,pn}
\fmffreeze
\fmf{gluon,label=$p_4$,label.side=left,l.d=0.03w}{v4,p4}
\fmf{gluon,label=$p_5$,label.side=left,l.d=0.03w}{v5,p5}
\fmf{gluon,label=$p_{n-2}$,label.side=left,l.d=0.03w}{vn2,pn2}
\fmf{gluon,label=$p_{n-1}$,label.side=left,l.d=0.03w}{vn1,pn1}
\fmffreeze
\fmf{phantom}{u3,ou4,ou5,oun2,oun1,o3}
\fmffreeze
\fmf{phantom}{o3,ox1,ox2,ox3,ox4,ox5,ox6,ox7,ox8,ox9,ox10,oun1}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-3}$}{ox1,ox10}
\fmffreeze
\fmf{phantom}{oun1,oun1x1,oun1x2,oun1x3,oun1x4,oun1x5,oun1x6,oun1x7,oun1x8,oun1x9,oun1x10,oun2}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-4}$}{oun1x1,oun1x10}
\fmffreeze
\fmf{phantom}{ou5,ou5x1,ou5x2,ou5x3,ou5x4,ou5x5,ou5x6,ou5x7,ou5x8,ou5x9,ou5x10,ou4}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_2$}{ou5x1,ou5x10}
\fmffreeze
\fmf{phantom}{ou4,ou4x1,ou4x2,ou4x3,ou4x4,ou4x5,ou4x6,ou4x7,ou4x8,ou4x9,ou4x10,u3}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_1$}{ou4x1,ou4x10}
\fmffreeze
\fmfv{label=$\kappa_{n-4}$,l.a=-180,l.d=-0.14w}{p4}
\fmfv{label=$\kappa_{n-5}$,l.a=-180,l.d=-0.14w}{p5}
\fmfv{label=$\kappa_{2}$,l.a=-180,l.d=-0.08w}{pn2}
\fmfv{label=$\kappa_{1}$,l.a=-180,l.d=-0.08w}{pn1}
\fmf{phantom}{v5,v67,vn2}
\fmfv{label=$\vdots$,l.d=-1mm}{v67}
\end{fmfgraph*}
\end{fmffile}
\end{center}
\caption{\label{fig:MR}Amplitude in the multi-Regge kinematics.
The green blobs indicate the coefficient functions (impact factors) and the Lipatov vertices describing the emission of gluons along the ladder.}
\end{figure}
This factorisation is shown schematically in~\fig{fig:MR}.
The gluon coefficient functions $C^{(0)}$, which yield the LO gluon impact factors,
are given in Ref.~\cite{Kuraev:1976ge} in terms of their spin structure
and in Refs.~\cite{DelDuca:1995zy,DelDuca:1996km} at fixed
helicities of the external gluons,
\begin{equation}
C^{(0)}(p_2^-,p_3^+) = 1 \qquad C^{(0)}(p_1^-,p_n^+) = \frac{p_{n\perp}^*}
{p_{n\perp}}\, ,\label{centrc}
\end{equation}
with $p_{\perp}=p_x+ip_y$ the complex transverse momentum.
The vertex for the emission of a gluon along the ladder
is the Lipatov vertex~\cite{DelDuca:1995zy,Lipatov:1976zz,Lipatov:1991nf}
\begin{equation}
V^{(0)}(q_{j+1},q_{j},\kappa_j) = \sqrt{2}\, \frac{q^*_{{j+1}\perp} q_{{j}\perp}}{p_{n-j\perp}}\,
,\label{lipeq}
\end{equation}
with $p_{n-j} = q_{j+1} - q_j$.
\section{The high-energy limit of the $n$-gluon amplitude}
\label{sec:npthel}
The virtual radiative corrections to Eq.~(\ref{treenpt}) in the
leading logarithmic (LL) approximation are obtained, to all orders
in $\as$, by replacing the propagator of the $t$-channel gluon by its
reggeised form~\cite{Kuraev:1976ge}. That is, by making the replacement
\begin{equation}
\frac{1}{ t_i} \to \frac{1}{t_i}
\left(\frac{s_i}{ \tau}\right)^{\alpha(t_i)}\, ,\label{sud}
\end{equation}
in Eq.~(\ref{treenpt}), where $\alpha(t_i)$ can be written in
dimensional regularization in $d=4-2\epsilon$ dimensions as
\begin{equation}
\alpha(t_i) = \gs^2\, c_{\Gamma}\,
\left(\frac{\mu^2}{ -t_i}\right)^{\epsilon} \, N\, \frac{2}{\epsilon}
,\label{alph}
\end{equation}
with $N$ colours, and
\begin{equation}
c_{\Gamma} = \frac{1}{(4\pi)^{2-\epsilon}}\, \frac{\Gamma(1+\epsilon)\,
\Gamma^2(1-\epsilon)}{ \Gamma(1-2\epsilon)}\, .\label{cgam}
\end{equation}
$\alpha(t_i)$ is the Regge trajectory and accounts for the
higher-order corrections to gluon exchange in the $t_i$ channel. In \eqn{sud},
the reggeisation scale $\tau$ is introduced to separate the contributions of
the reggeised propagator, the coefficient
function and the Lipatov vertex. It is much smaller than any of the $s$-type
invariants, and of the order of the $t$-type invariants.
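The $\epsilon$-expansion of the $\Gamma$-function ratio entering $c_\Gamma$, \eqn{cgam}, is standard; as a quick numerical cross-check (not part of the paper, using the mpmath library):

```python
import mpmath as mp

# Check the standard epsilon-expansion of the Gamma-function ratio in (cgam):
#   Gamma(1+eps) Gamma(1-eps)^2 / Gamma(1-2 eps)
#     = exp(-gamma_E eps - zeta_2 eps^2 / 2 - 7 zeta_3 eps^3 / 3 + O(eps^4)).
mp.mp.dps = 30
eps = mp.mpf('1e-3')
ratio = mp.gamma(1 + eps) * mp.gamma(1 - eps) ** 2 / mp.gamma(1 - 2 * eps)
log_series = (-mp.euler * eps
              - mp.zeta(2) * eps ** 2 / 2
              - 7 * mp.zeta(3) * eps ** 3 / 3)
# residual is O(eps^4), well below the tolerance at eps = 1e-3
assert abs(mp.log(ratio) - log_series) < mp.mpf('1e-10')
```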
In order to go beyond the LL approximation and to compute the higher-order
corrections to the Lipatov vertex (\ref{lipeq}), we need a high-energy
prescription~\cite{Fadin:1993wh} that
disentangles the virtual corrections to the Lipatov vertex
from those to the coefficient functions (\ref{centrc})
and from those that reggeize the gluon (\ref{sud}).
The high-energy prescription of Ref.~\cite{Fadin:1993wh} is given
at the colour-dressed amplitude level in QCD, where it holds
to the next-to-leading-logarithmic (NLL) accuracy. However, it has been
shown to break down in the imaginary part of the QCD one-loop four-parton
amplitude~\cite{DelDuca:1998kx}, in the imaginary part of the QCD
one-loop five-gluon amplitude~\cite{DelDuca:1998cx},
and in the two-loop four-point amplitude in MSYM~\cite{DelDuca:2008pj}.
This is because the mismatches between the colour orderings and the
multi-Regge kinematics become apparent at NLL. When the colour ordering
is correctly aligned with the multi-Regge limit, the factorisation applies to NLL
and beyond. In Ref.~\cite{DelDuca:2008pj}, we showed that the
high-energy prescription, applied to the colour-stripped four-point
amplitude, is valid up to three loops.
Thus, we conjecture that in the multi-Regge kinematics in the Euclidean region
a generic colour-stripped $n$-gluon amplitude has the factorised form,
\begin{eqnarray}
\lefteqn{ m_n(1,2, \ldots ,n) =
s \left[g\, C(p_2,p_3) \right]\,
\frac{1}{t_{n-3}}\, \left(\frac{-s_{n-3}}{ \tau}\right)^{\alpha(t_{n-3})}
\left[g\,V(q_{n-3},q_{n-4},\kappa_{n-4})\right] }
\nonumber\\ &&\qquad\qquad \cdots \times\
{1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)}
\left[g\,V(q_2,q_1,\kappa_1)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g\, C(p_1,p_n) \right]\, , \label{loopnpt}
\end{eqnarray}
where we suppressed the dependence of the
coefficient function and of the Lipatov vertex on the reggeisation scale $\tau$,
and on the dimensional regularisation parameters $\mu^2$ and $\epsilon$.
In the Euclidean region, where the invariants are all negative,
\begin{equation}
s, s_1, s_2, \ldots, s_{n-3}, t_1, t_2, \ldots, t_{n-3} < 0\, ,\label{eq:unphys}
\end{equation}
the colour-stripped amplitude $m_n$, \eqn{loopnpt}, is real.
Then the multi-Regge kinematics~(\ref{eq:mrknpt2}) are
\begin{equation}
-s \gg -s_{1}, -s_{2}, \ldots, -s_{n-3}\gg -t_1, -t_2,\ldots, -t_{n-3}\, ,\label{eq:mrkneg}
\end{equation}
and the on-shell condition (\ref{massnpt}) is
\begin{equation}
-\kappa_1 = {(-s_{1})\, (-s_{2})\over -s_{n-2,n-1,n}}\, ,\quad
-\kappa_2 = {(-s_{2})\, (-s_{3})\over -s_{n-3,n-2,n-1}}\, ,\quad \cdots \quad
-\kappa_{n-4} = {(-s_{n-4})\, (-s_{n-3})\over -s_{345}}
.\label{nmassnpt}
\end{equation}
In \eqn{loopnpt}, the Regge trajectory has the perturbative expansion,
\begin{equation}
\alpha(t_i) = \bar\gs^{2} \bar\alpha^{(1)}(t_i) + \bar\gs^4 \bar\alpha^{(2)}(t_i) +
\bar\gs^6 \bar\alpha^{(3)}(t_i) + {\cal O} (\bar\gs^8)\, ,\label{alphb}
\end{equation}
with $i=1,\ldots,n-3$, and with the rescaled coupling
\begin{equation}
\bar\gs^2 = \gs^2 c_\Gamma N\, .\label{rescal}
\end{equation}
In \eqn{loopnpt}, the coefficient functions $C$ and the Lipatov vertex $V$ are also
expanded in the rescaled coupling,
\begin{eqnarray}
C(p_i,p_j,\tau) &=& C^{(0)}(p_i,p_j)\left(1 + \sum_{r=1}^{s-1} \bar\gs^{2r} \bar C^{(r)}(t_k,\tau)
+ {\cal O} (\bar\gs^{2s}) \right)\,
,\label{fullv} \\
V(q_{j+1},q_j,\kappa_j,\tau) &=& V^{(0)}(q_{j+1},q_j,\kappa_j)\left(1 + \sum_{r=1}^{s-1} \bar\gs^{2r}
\bar V^{(r)}(t_{j+1},t_j,\kappa_j,\tau)
+ {\cal O} (\bar\gs^{2s}) \right)\, ,\nonumber
\end{eqnarray}
with $(p_i+p_j)^2=t_k$. The functions $C$ and $V$ are real,
up to overall complex phases in $C^{(0)}$, \eqn{centrc}, and $V^{(0)}$,
\eqn{lipeq}, induced by the complex-valued helicity bases.
Note that because several transverse scales
occur, we prefer to keep the dependence on $\mu^2$ of the trajectory, coefficient
function and Lipatov vertex within the loop coefficient rather than in the
rescaled coupling,
\begin{eqnarray}
&& \bar\alpha^{(n)}(t_i) = \left({\mu^2\over -t_i}\right)^{n\epsilon} \alpha^{(n)}\,
,\quad \bar C^{(n)}(t_k,\tau) = \left({\mu^2\over -t_k}\right)^{n\epsilon}
C^{(n)}(t_k,\tau)\, ,\nonumber\\
&& \bar V^{(n)}(t_{j+1},t_j,\kappa_j,\tau) = \left({\mu^2\over -\kappa_j}\right)^{n\epsilon}
V^{(n)}(t_{j+1},t_j,\kappa_j,\tau)\, .\label{eq:coeffrescal}
\end{eqnarray}
The expansion of \eqn{loopnpt} can be written as
\begin{equation}
m_n = m_n^{(0)} \left( 1 + \bar\gs^2\ m_n^{(1)} + \bar\gs^4 m_n^{(2)} + \bar\gs^6 m_n^{(3)}
+ {\cal O} (\bar\gs^8) \right)\, .\label{elasexpand}
\end{equation}
\subsection{Analytic continuation of the $n$-gluon amplitude to the physical region}
\label{sec:analytic}
We analytically continue the high-energy prescription for the
colour-stripped amplitude (\ref{loopnpt}) to the physical region\footnote{Care must be
exercised in analytically continuing \eqn{loopnpt}: in Ref.~\cite{Bartels:2008ce} it has
been shown that in the Minkowski region where $s, s_2$ are positive while all other
invariants stay negative, the one-loop six-gluon amplitude cannot be cast in the form
of \eqn{loopnpt}.}, where
\begin{equation}
s, s_1, s_2, \ldots, s_{n-3} > 0\, ,\qquad t_1, t_2, \ldots, t_{n-3} < 0\, ,\label{eq:phys}
\end{equation}
through the usual prescription $\ln(-s_j) = \ln(s_j) - i\pi$, for $s_j > 0$.
Then the multi-Regge kinematics are given by \eqn{eq:mrknpt2}
and the mass-shell condition by \eqn{massnpt}. We still use the
expansions of Eqs.~(\ref{alphb}--\ref{eq:coeffrescal}), but because of
the analytic continuation on $\kappa_1,\ldots,\kappa_{n-4}$ (which follows directly from \eqn{nmassnpt} once the analytic continuation of the $s$-type invariants is established), in going
from \eqn{nmassnpt} to \eqn{massnpt}, the Lipatov vertices become complex,
\begin{equation}
\bar V^{(n)}(t_{j+1},t_j,\kappa_j,\tau) =
\left(\frac{\mu^2}{\kappa_j}\right)^{n\epsilon}
V^{(n)}_{\rm phys}(t_{j+1},t_j,\kappa_j,\tau)\, ,\label{eq:posrescal}
\end{equation}
with
\begin{equation}
V^{(n)}_{\rm phys}(t_{j+1},t_j,\kappa_j,\tau) = e^{i\pi n\epsilon}\, V^{(n)}(t_{j+1},t_j,\kappa_j,\tau)\, .
\label{eq:vnlip}
\end{equation}
\section{The high-energy limit of the four-, five- and six-point MHV amplitudes}
\label{sec:456pthel}
\subsection{The four-point amplitude in multi-Regge kinematics}
\label{sec:4pthel}
For the four-point amplitude, $g_1\,g_2\to g_3\,g_4$, the high-energy
prescription~(\ref{loopnpt}) becomes
\begin{equation}
m_4(1, 2, 3, 4) = s \left[\gs\, C(p_2,p_3,\tau) \right]
{1\over t} \left({-s\over \tau}\right)^{\alpha(t)}
\left[\gs\, C(p_1,p_4,\tau) \right]\, .\label{elasuchan}
\end{equation}
In order for the colour-stripped amplitude $m_4$ to be real, we take it in the
unphysical region where $s$ is negative. Then the Regge kinematics are,
\begin{equation}
-s \gg -t\, .\label{neghe}
\end{equation}
Using the loop expansions of the Regge trajectory (\ref{alphb}) and of the
coefficient function (\ref{fullv}), \eqn{elasuchan} can be written as \eqn{elasexpand}
for $n = 4$. Then the knowledge of the $l$-loop coefficient $m_4^{(l)}$ allows one
to derive the $l$-loop trajectory $\alpha^{(l)}$ and coefficient function $C^{(l)}(t,\tau)$.
For example, the one-loop coefficient is given by
\begin{equation}
m_4^{(1)} = \bar\alpha^{(1)}(t) L +\ 2 \bar C^{(1)}(t,\tau)\, ,\label{4pt1l}
\end{equation}
with $L=\ln(-s/\tau)$, and $\bar\alpha$ and $\bar C$ rescaled as in \eqn{eq:coeffrescal}.
The one-loop trajectory is given by \eqn{alph},
\begin{equation}
\alpha^{(1)} = \frac{2}{\epsilon}\, ,\label{alpha1}
\end{equation}
and it is the same in QCD and in MSYM. The one-loop coefficient
function, $C^{(1)}$, has been computed in
Refs.~\cite{Fadin:1993wh,DelDuca:1998kx,Bern:1998sc,Fadin:1992zt,Fadin:1993qb}.
In MSYM it is, to all orders in $\epsilon$
\begin{eqnarray}
C^{(1)}(t,\tau) &=&
{\psi(1+\epsilon) - 2\psi(-\epsilon) + \psi(1)\over\epsilon}
- \frac{1}{\epsilon} \ln\frac{-t}{\tau} \nonumber\\
&=& \frac{1}{\epsilon^2} \left( -2 - \epsilon\, \ln\frac{-t}{\tau}
+ 3\sum_{n=1}^{\infty} \zeta_{2n}\, \epsilon^{2n}
+ \sum_{n=1}^{\infty} \zeta_{2n+1}\, \epsilon^{2n+1} \right)
\, .\label{eq:ifonel}
\end{eqnarray}
In fact, in the formul\ae\ that follow we shall need $C^{(1)}(t,\tau)$ through ${\cal O} (\epsilon^4)$.
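As a cross-check of \eqn{eq:ifonel} (an illustrative numerical sketch, not part of the paper), the closed form in terms of digamma functions can be compared with the series expansion, dropping the $\ln(-t/\tau)$ term common to both:

```python
import mpmath as mp

# Check that the closed form of C^(1) in (eq:ifonel) reproduces its series
# expansion through the terms kept below (zeta_5 eps^3); the residual is
# 3 zeta_6 eps^4 + ..., negligible at eps = 1e-2.
mp.mp.dps = 30
eps = mp.mpf('1e-2')
closed = (mp.digamma(1 + eps) - 2 * mp.digamma(-eps) + mp.digamma(1)) / eps
series = (1 / eps ** 2) * (-2
                           + 3 * (mp.zeta(2) * eps ** 2 + mp.zeta(4) * eps ** 4)
                           + mp.zeta(3) * eps ** 3 + mp.zeta(5) * eps ** 5)
assert abs(closed - series) < mp.mpf('1e-6')
```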
The two-loop coefficient of \eqn{elasexpand} with $n = 4$ is
\begin{eqnarray}
m_4^{(2)} &=& {1\over 2} \left(\bar\alpha^{(1)}(t)\right)^2 L^2
+ \left( \bar\alpha^{(2)}(t) + 2\, \bar C^{(1)}(t,\tau) \bar\alpha^{(1)}(t) \right)\, L \nonumber\\
&+& 2\, \bar C^{(2)}(t,\tau) + \left(\bar C^{(1)}(t,\tau) \right)^2 \nonumber\\
&=& {1\over 2} \left( m_4^{(1)} \right)^2 + \bar\alpha^{(2)}(t) L +
2\, \bar C^{(2)}(t,\tau) - \left(\bar C^{(1)}(t,\tau) \right)^2\, ,\label{exp2loopu}
\end{eqnarray}
where in the second equality we factor out the square of the
one-loop amplitude, in order to facilitate the later comparison with
the BDS ansatz. In \eqn{exp2loopu}, $m_4^{(1)}$ must be known to ${\cal O} (\epsilon^2)$.
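The equality of the two forms of \eqn{exp2loopu} is a purely algebraic statement, which can be verified symbolically, for instance with sympy (an illustrative consistency check, not new input to the paper):

```python
import sympy as sp

# Verify that the two forms of the two-loop coefficient m_4^(2) in
# eq. (exp2loopu) agree, treating alpha^(1,2), C^(1,2) and L = ln(-s/tau)
# as independent symbols.
a1, a2, C1, C2, L = sp.symbols('a1 a2 C1 C2 L')
m4_1 = a1 * L + 2 * C1                                   # eq. (4pt1l)
first  = sp.Rational(1, 2) * a1**2 * L**2 + (a2 + 2 * C1 * a1) * L + 2 * C2 + C1**2
second = sp.Rational(1, 2) * m4_1**2 + a2 * L + 2 * C2 - C1**2
assert sp.expand(first - second) == 0
```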
The two-loop trajectory, $\alpha^{(2)}$, is known in full
QCD~\cite{Fadin:1995xg,Fadin:1995km,Fadin:1996tb,Blumlein:1998ib,DelDuca:2001gu}.
In MSYM, it has been computed through ${\cal O} (\epsilon^0)$
directly~\cite{Kotikov:2000pm} and using the maximal
transcendentality principle~\cite{Kotikov:2002ab}, and through
${\cal O} (\epsilon^2)$ directly~\cite{DelDuca:2008pj},
\begin{equation}
\alpha^{(2)} = - {2\zeta_2\over\epsilon} - 2\zeta_3 - 8\zeta_4\epsilon
+ (36\zeta_2\zeta_3 + 82\zeta_5)\epsilon^2 + {\cal O} (\epsilon^3)\, .
\label{eq:tworegge}
\end{equation}
The MSYM two-loop coefficient function has been computed
through ${\cal O} (\epsilon^2)$~\cite{DelDuca:2008pj},
\begin{eqnarray}
C^{(2)}(t,\tau) &=& \frac{2}{\epsilon^4} + \frac{2}{\epsilon^3}\ln\frac{-t}{\tau}
- \left(5\zeta_2 - \frac{1}{2}\ln^2\frac{-t}{\tau}\right)\frac{1}{\epsilon^2}
- \left(\zeta_3+ 2\zeta_2\ln\frac{-t}{\tau}\right)\frac{1}{\epsilon}
\nonumber\\ &-& {55\over 4}\zeta_4 +
\left( \zeta_2\zeta_3 - 41\zeta_5 + \zeta_4\ln\frac{-t}{\tau}
\right) \epsilon \nonumber\\ &-&\left( {95\over 2}\zeta_3^2 + {1695\over 8}\zeta_6
+ (18\zeta_2\zeta_3 + 42\zeta_5) \ln\frac{-t}{\tau} \right)
\epsilon^2 + {\cal O} (\epsilon^3) \label{eq:2loopif}\\
&=& \frac{1}{2} \left[ C^{(1)}(t,\tau) \right]^2 + \frac{\zeta_2}{\epsilon^2}
+ \left(\zeta_3 + \zeta_2\ln\frac{-t}{\tau}\right)\frac{1}{\epsilon} \nonumber\\
&+& \left( \zeta_3\ln\frac{-t}{\tau} - 19\zeta_4\right)
+ \left( 4\zeta_4\ln\frac{-t}{\tau} - 2\zeta_2\zeta_3 - 39\zeta_5 \right) \epsilon \nonumber\\
&-&\left( 48 \zeta_3^2 + {1773\over 8}\zeta_6
+ (18\zeta_2\zeta_3 + 41\zeta_5) \ln\frac{-t}{\tau} \right)
\epsilon^2 + {\cal O} (\epsilon^3)\, .\nonumber
\end{eqnarray}
The three-loop coefficient is given by
\begin{eqnarray}
m_4^{(3)} &=&
{1\over 3!} \left(\bar\alpha^{(1)}(t)\right)^3 L^3
+ \bar\alpha^{(1)}(t) \left( \bar\alpha^{(2)}(t) + \bar C^{(1)}(t,\tau)\, \bar\alpha^{(1)}(t) \right) L^2
\label{exp3loopu}\\
&+& \left[ \bar\alpha^{(3)}(t) + 2\, \bar\alpha^{(2)}(t)\, \bar C^{(1)}(t,\tau)
+ \bar\alpha^{(1)}(t) \left( 2\, \bar C^{(2)}(t,\tau) + \left(\bar C^{(1)}(t,\tau)\right)^2 \right) \right] L \nonumber\\
&+& 2\, \bar C^{(3)}(t,\tau) + 2\, \bar C^{(2)}(t,\tau)\, \bar C^{(1)}(t,\tau) \nonumber\\
&=& m_4^{(2)} m_4^{(1)} - \frac{1}{3} \left(m_4^{(1)} \right)^3 \nonumber\\
&+& \bar\alpha^{(3)}(t) L
+ 2\, \bar C^{(3)}(t,\tau) - 2\, \bar C^{(2)}(t,\tau)\, \bar C^{(1)}(t,\tau)
+ \frac{2}{3} \left(\bar C^{(1)}(t,\tau)\right)^3\, .\nonumber
\end{eqnarray}
In MSYM, the three-loop trajectory, $\alpha^{(3)}$, has been evaluated in
Refs.~\cite{DelDuca:2008pj,Bartels:2008ce,Drummond:2007aua,Naculich:2007ub}
through ${\cal O} (\epsilon^0)$,
\begin{equation}
\alpha^{(3)} = {44\zeta_4\over 3\epsilon} + {40\over 3}\zeta_2\zeta_3 +
16\zeta_5 + {\cal O} (\epsilon) \, .\label{eq:treregge}
\end{equation}
The three-loop coefficient function has been evaluated in Ref.~\cite{DelDuca:2008pj}
through ${\cal O} (\epsilon^0)$ using knowledge of $m_4^{(1)}$ to ${\cal O} (\epsilon^4)$, and $m_4^{(2)}$
to ${\cal O} (\epsilon^2)$,
\begin{eqnarray}
C^{(3)}(t,\tau) &=& -\frac{4}{3\epsilon^6} - \frac{2}{\epsilon^5} \ln\frac{-t}{\tau}
+ \left(4\zeta_2 - \ln^2\frac{-t}{\tau}\right)\frac{1}{\epsilon^4} \label{eq:3loopif}\\
&+& \left( 3\zeta_2 \ln\frac{-t}{\tau} - \frac{1}{6} \ln^3\frac{-t}{\tau} \right) \frac{1}{\epsilon^3}
+ \left( {217\zeta_4\over 9} + \frac{\zeta_2}{2} \ln^2\frac{-t}{\tau} - \zeta_3 \ln\frac{-t}{\tau} \right)
\frac{1}{\epsilon^2} \nonumber\\
&+& \left( - {22\over 9}\zeta_2\zeta_3 + {224\over 3}\zeta_5
-\frac{\zeta_3}{2} \ln^2\frac{-t}{\tau} +\frac{71}{12}\zeta_4 \ln\frac{-t}{\tau} \right) \frac{1}{\epsilon}
\nonumber\\ &+& {796\over 9}\zeta_3^2 + {211861\over 432}\zeta_6
-\frac{5}{2} \zeta_4 \ln^2\frac{-t}{\tau} + \left(115 \zeta_5
+ \frac{97}{3} \zeta_2 \zeta_3\right) \ln\frac{-t}{\tau} + {\cal O} (\epsilon) \nonumber\\
&=& C^{(2)}(t,\tau)\, C^{(1)}(t,\tau) - {1\over 3} \left[C^{(1)}(t,\tau)\right]^3 \nonumber\\
&-& {44\over 9} \frac{\zeta_4}{\epsilon^2}
- \left( {40\over 9}\zeta_2\zeta_3 + {16\over 3}\zeta_5
+ \frac{22}{3}\zeta_4 \ln\frac{-t}{\tau} \right) \frac{1}{\epsilon} \nonumber\\
&+& \frac{3982}{27}\zeta_6 - {68\over 9}\zeta_3^2
- \left( 8\zeta_5 + \frac{20}{3} \zeta_2 \zeta_3\right) \ln\frac{-t}{\tau} + {\cal O} (\epsilon)\, . \nonumber
\end{eqnarray}
It is straightforward to obtain the four-point amplitude in the physical region, $s \gg -t$,
by continuing Eqs.~(\ref{4pt1l}), (\ref{exp2loopu}) and (\ref{exp3loopu})
through the prescription $\ln(-s) = \ln(s) - i\pi$, for $s > 0$.
\subsection{The five-point amplitude in multi-Regge kinematics}
\label{sec:5pthel}
For the five-point amplitude, $g_1\,g_2\to g_3\,g_4\,g_5$, the high-energy
prescription (\ref{loopnpt}) becomes
\begin{equation}
m_5 = s \left[g\, C(p_2,p_3,\tau) \right]\,
{1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)}
\left[g\,V(q_2,q_1,\kappa,\tau)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g\, C(p_1,p_5,\tau) \right]\, , \label{loop5pt}
\end{equation}
where $p_4=q_2-q_1$, and
with the invariants labelled as in \sec{sec:mrkforall}, {\it i.e.}
$t_1=s_{51}$, $t_2=s_{23}$, $s_1=s_{45}$ and $s_2=s_{34}$.
In order for the amplitude $m_5$ to be real,
\eqn{loop5pt} is taken in the region where all the invariants are negative.
Thus, the multi-Regge kinematics (\ref{eq:mrkneg}) become,
\begin{equation}
-s \gg -s_{1}, -s_{2} \gg -t_1, -t_2\, .\label{eq:mrk2}
\end{equation}
Then the mass-shell condition (\ref{nmassnpt}) for the intermediate gluon 4 is
\begin{equation}
- \kappa = {(-s_{1})\, (-s_{2})\over -s}\, ,\label{mass}
\end{equation}
where $\kappa= - |p_{4\perp}|^2$.
In the expansion of \eqn{elasexpand} for $n=5$, the knowledge of
the $l$-loop five-point amplitude in the multi-Regge kinematics (\ref{eq:mrk2}),
together with the $l$-loop trajectory $\alpha^{(l)}$ and coefficient function $C^{(l)}$,
allows one to derive the Lipatov vertex to the same accuracy.
The one-loop coefficient is
\begin{equation}
m_5^{(1)} = \bar\alpha^{(1)}(t_1) L_1 + \bar\alpha^{(1)}(t_2) L_2
+ \bar C^{(1)}(t_1,\tau) + \bar C^{(1)}(t_2,\tau) + \bar V^{(1)}(t_1,t_2,\kappa,\tau)\,
,\label{exp1loop}
\end{equation}
where $L_i=\ln(-s_i/\tau)$ and $i=1,2$.
Then subtracting
the one-loop trajectory (\ref{alpha1}) and coefficient function (\ref{eq:ifonel})
from the one-loop five-point amplitude, we can derive the one-loop
Lipatov vertex. This will be done explicitly in a forthcoming publication.
In the expansion of \eqn{elasexpand} for $n=5$, the two-loop coefficient is
\begin{eqnarray}
m_5^{(2)} &=& \frac{1}{2} \left( m_5^{(1)} \right)^2
+ \bar\alpha^{(2)}(t_1) L_1 + \bar\alpha^{(2)}(t_2) L_2 \label{exp2loop}\\
&+& \bar C^{(2)}(t_1,\tau) +
\bar V^{(2)}(t_1,t_2,\kappa,\tau) + \bar C^{(2)}(t_2,\tau)\nonumber\\
&-& \frac{1}{2} \left( \bar C^{(1)}(t_1,\tau) \right)^2
- \frac{1}{2} \left( \bar V^{(1)}(t_1,t_2,\kappa,\tau) \right)^2
- \frac{1}{2} \left( \bar C^{(1)}(t_2,\tau) \right)^2
\, ,\nonumber
\end{eqnarray}
where $m_5^{(1)}$, $\bar C^{(1)}(t,\tau)$ and $ \bar V^{(1)}(t_1,t_2,\kappa,\tau)$ must be known to ${\cal O} (\epsilon^2)$.
Similarly, the three-loop coefficient is
\begin{eqnarray}
m_5^{(3)} &=&
m_5^{(2)} m_5^{(1)} - \frac{1}{3} \left(m_5^{(1)} \right)^3
+ \bar\alpha^{(3)}(t_1) L_1+\bar\alpha^{(3)}(t_2) L_2\nonumber \\
&+& \bar C^{(3)}(t_1,\tau)
+\bar V^{(3)}(t_1,t_2,\kappa,\tau)
+\bar C^{(3)}(t_2,\tau)\nonumber\\
&-& \bar C^{(2)}(t_1,\tau)\, \bar C^{(1)}(t_1,\tau)
- \bar V^{(2)}(t_1,t_2,\kappa,\tau) \bar V^{(1)}(t_1,t_2,\kappa,\tau)
-\bar C^{(2)}(t_2,\tau)\, \bar C^{(1)}(t_2,\tau) \nonumber\\
&+& \frac{1}{3} \left(\bar C^{(1)}(t_1,\tau)\right)^3
+\frac{1}{3} \left(\bar V^{(1)}(t_1,t_2,\kappa,\tau) \right)^3
+\frac{1}{3} \left(\bar C^{(1)}(t_2,\tau)\right)^3\, .\label{m53ite}
\end{eqnarray}
Here, to find $m_5^{(3)}$ to ${\cal O} (\epsilon^0)$, $m_5^{(1)}$, $\bar C^{(1)}(t,\tau)$ and $ \bar V^{(1)}(t_1,t_2,\kappa,\tau)$ must be
known to ${\cal O} (\epsilon^4)$ while
$m_5^{(2)}$, $\bar C^{(2)}(t,\tau)$ and $ \bar V^{(2)}(t_1,t_2,\kappa,\tau)$
must be known to ${\cal O} (\epsilon^2)$.
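The iterative forms (\ref{exp2loop}) and (\ref{m53ite}) follow from expanding the factorised amplitude (\ref{loop5pt}) in the rescaled coupling; this can be verified symbolically (an illustrative sketch, not part of the paper, with all trajectories, coefficient functions and Lipatov vertices kept as independent symbols):

```python
import sympy as sp

# Expand the factorised five-point amplitude (loop5pt) in g^2 and check that
# the two- and three-loop coefficients reproduce (exp2loop) and (m53ite).
g, L1, L2 = sp.symbols('g L1 L2')
a = {(i, r): sp.Symbol(f'a{r}_{i}') for i in (1, 2) for r in (1, 2, 3)}
X = {(k, r): sp.Symbol(f'{k}{r}') for k in ('Ca', 'V', 'Cb') for r in (1, 2, 3)}

regge = sp.exp(sum(a[i, r] * g**(2 * r) * (L1 if i == 1 else L2)
                   for i in (1, 2) for r in (1, 2, 3)))
blocks = sp.prod(1 + sum(X[k, r] * g**(2 * r) for r in (1, 2, 3))
                 for k in ('Ca', 'V', 'Cb'))
series = sp.expand(sp.series(regge * blocks, g, 0, 8).removeO())
m1, m2, m3 = (series.coeff(g, 2 * r) for r in (1, 2, 3))

ones = [X['Ca', 1], X['V', 1], X['Cb', 1]]
twos = [X['Ca', 2], X['V', 2], X['Cb', 2]]
thrs = [X['Ca', 3], X['V', 3], X['Cb', 3]]
m2_iter = (sp.Rational(1, 2) * m1**2 + a[1, 2] * L1 + a[2, 2] * L2
           + sum(twos) - sum(x**2 for x in ones) / 2)            # eq. (exp2loop)
m3_iter = (m2 * m1 - sp.Rational(1, 3) * m1**3 + a[1, 3] * L1 + a[2, 3] * L2
           + sum(thrs) - sum(x2 * x1 for x2, x1 in zip(twos, ones))
           + sum(x**3 for x in ones) / 3)                        # eq. (m53ite)
assert sp.expand(m2 - m2_iter) == 0
assert sp.expand(m3 - m3_iter) == 0
```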
It is straightforward to obtain the amplitudes in the physical region where
$s, s_1, s_2$ are positive and $t_1, t_2$ are negative, and where
the multi-Regge kinematics are
\begin{equation}
s \gg s_{1},\ s_{2} \gg -t_1,\ -t_2\, ,\label{eq:posmrk5pt}
\end{equation}
and the mass-shell condition is
\begin{equation}
\kappa = {s_{1}\, s_{2}\over s}\, ,\label{mass5pt}
\end{equation}
by continuing Eqs.~(\ref{exp1loop}), (\ref{exp2loop}) and (\ref{m53ite})
through the prescriptions $\ln(-s_j) = \ln(s_j) - i\pi$, for $s_j > 0$ and $j=1, 2$
and $\ln(-\kappa) = \ln(\kappa) - i\pi$, for $\kappa > 0$, which implies
\eqn{eq:posrescal} for the Lipatov vertex.
\subsection{The six-point amplitude in multi-Regge kinematics}
\label{sec:6pthel}
For the six-gluon amplitude, $g_1\,g_2\to g_3\,g_4\,g_5\,g_6$,
the high-energy prescription (\ref{loopnpt}) in the Euclidean region becomes
\begin{eqnarray}
\lefteqn{ m_6 = s \left[g\, C(p_2,p_3,\tau) \right]\,
{1\over t_3}\, \left({-s_3\over \tau}\right)^{\alpha(t_3)}\,
\left[g\, V(q_3,q_2,\kappa_2,\tau)\right] } \nonumber\\
&\times& {1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)}\,
\left[g\, V(q_2,q_1,\kappa_1,\tau)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g\, C(p_1,p_6,\tau) \right]\, ,\label{eq:hepresc6pt}
\end{eqnarray}
with $t_1=s_{61}$, $t_2=s_{234}$ and $t_3=s_{23}$, $s_1=s_{56}$, $s_2=s_{45}$
and $s_3=s_{34}$. In order for $m_6$ to be real, we take
\eqn{eq:hepresc6pt} in the unphysical region where the invariants
$s, s_1, s_2, s_3, t_1, t_2, t_3$ are all negative,
where the multi-Regge kinematics are,
\begin{equation}
-s \gg -s_{1}, -s_{2}, -s_{3} \gg -t_1, -t_2, -t_3\, ,\label{eq:mrkneg6pt}
\end{equation}
and the on-shell conditions (\ref{nmassnpt}) are,
\begin{equation}
- \kappa_1 = {(-s_1)\, (-s_2)\over (-s_{456}) }\, , \qquad
- \kappa_2 = {(-s_2)\, (-s_3)\over (-s_{345}) } \,
,\label{negmass6pt}
\end{equation}
with $\kappa_1= - |p_{5\perp}|^2$ and $\kappa_2= - |p_{4\perp}|^2$.
Because in \eqn{eq:hepresc6pt} no new vertex or coefficient function occurs
with respect to \eqn{loop5pt},
in the expansion of \eqn{elasexpand} for $n=6$, the knowledge of
the $l$-loop trajectory $\alpha^{(l)}$, the coefficient function $C^{(l)}$,
and the Lipatov vertex $V^{(l)}$ allow one to derive
the $l$-loop six-point amplitude in the multi-Regge kinematics. The one-loop coefficient is
\begin{eqnarray}
m_6^{(1)} &=& \bar\alpha^{(1)}(t_1) L_1 + \bar\alpha^{(1)}(t_2) L_2
+ \bar\alpha^{(1)}(t_3) L_3 \nonumber\\
&+& \bar C^{(1)}(t_1,\tau) + \bar C^{(1)}(t_3,\tau)
+ \bar V^{(1)}(t_1,t_2,\kappa_1,\tau) + \bar V^{(1)}(t_2,t_3,\kappa_2,\tau)
,\label{exp1loop6pt}
\end{eqnarray}
with $L_i=\ln(-s_i/\tau)$ and $i=1,2,3$. The two-loop coefficient is
\begin{eqnarray}
m_6^{(2)} &=& \frac{1}{2} \left( m_6^{(1)} \right)^2
+ \bar\alpha^{(2)}(t_1) L_1 + \bar\alpha^{(2)}(t_2) L_2 + \bar\alpha^{(2)}(t_3) L_3
\label{exp2loop6pt}\\
&+& \bar C^{(2)}(t_1,\tau) + \bar C^{(2)}(t_3,\tau) +
\bar V^{(2)}(t_1,t_2,\kappa_1,\tau)
+ \bar V^{(2)}(t_2,t_3,\kappa_2,\tau) \nonumber\\
&-& \frac{1}{2} \left( \bar C^{(1)}(t_1,\tau) \right)^2
- \frac{1}{2} \left( \bar C^{(1)}(t_3,\tau) \right)^2\nonumber \\
&-& \frac{1}{2} \left( \bar V^{(1)}(t_1,t_2,\kappa_1,\tau) \right)^2
- \frac{1}{2} \left( \bar V^{(1)}(t_2,t_3,\kappa_2,\tau) \right)^2\, ,\nonumber
\end{eqnarray}
where $m_6^{(1)}$, $\bar C^{(1)}(t,\tau)$ and $ \bar V^{(1)}(t_1,t_2,\kappa,\tau)$ must be known to ${\cal O} (\epsilon^2)$.
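As a cross-check of \eqn{exp2loop6pt}, the factorised form (\ref{eq:hepresc6pt}) can be expanded to ${\cal O}(\bar\gs^4)$ with the building blocks treated as abstract symbols. The following sketch (in Python with sympy; the symbol names are ours, and the $\epsilon$ dependence is left implicit) confirms the pattern of subtracted squares above:

```python
# Symbolic check: expand C * C * V * V * (three Regge factors) to O(g2^2),
# where g2 stands for the rescaled coupling squared, and compare with the
# claimed two-loop coefficient m6^(2). All building blocks are abstract symbols.
import sympy as sp

g2 = sp.symbols('g2')
C1a, C1b, C2a, C2b = sp.symbols('C1a C1b C2a C2b')   # Cbar^(1,2) at the two ends
V1a, V1b, V2a, V2b = sp.symbols('V1a V1b V2a V2b')   # Vbar^(1,2) for the two vertices
a11, a12, a13 = sp.symbols('a11 a12 a13')            # alphabar^(1)(t_i)
a21, a22, a23 = sp.symbols('a21 a22 a23')            # alphabar^(2)(t_i)
L1, L2, L3 = sp.symbols('L1 L2 L3')                  # L_i = ln(-s_i/tau)

blocks = ((1 + g2*C1a + g2**2*C2a) * (1 + g2*C1b + g2**2*C2b)
        * (1 + g2*V1a + g2**2*V2a) * (1 + g2*V1b + g2**2*V2b))
regge = sp.exp((g2*a11 + g2**2*a21)*L1 + (g2*a12 + g2**2*a22)*L2
             + (g2*a13 + g2**2*a23)*L3)
m6 = sp.expand(sp.series(blocks*regge, g2, 0, 3).removeO())
m61 = m6.coeff(g2, 1)
m62 = m6.coeff(g2, 2)

claimed = (sp.Rational(1, 2)*m61**2 + a21*L1 + a22*L2 + a23*L3
           + C2a + C2b + V2a + V2b
           - sp.Rational(1, 2)*(C1a**2 + C1b**2 + V1a**2 + V1b**2))
assert sp.expand(m62 - claimed) == 0
```

The same expansion carried to ${\cal O}(\bar\gs^6)$ reproduces the three-loop coefficient (\ref{eq:m63exp}).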
Similarly, the three-loop coefficient is
\begin{eqnarray}
m_6^{(3)} &=&
m_6^{(2)} m_6^{(1)} - \frac{1}{3} \left(m_6^{(1)} \right)^3
+ \bar\alpha^{(3)}(t_1) L_1+\bar\alpha^{(3)}(t_2) L_2+\bar\alpha^{(3)}(t_3) L_3\nonumber \\
&+& \bar C^{(3)}(t_1,\tau)
+\bar V^{(3)}(t_1,t_2,\kappa_1,\tau)
+\bar V^{(3)}(t_2,t_3,\kappa_2,\tau)
+\bar C^{(3)}(t_3,\tau)\nonumber\\
&-& \bar C^{(2)}(t_1,\tau)\, \bar C^{(1)}(t_1,\tau)
- \bar V^{(2)}(t_1,t_2,\kappa_1,\tau) \bar V^{(1)}(t_1,t_2,\kappa_1,\tau)\nonumber\\
&-& \bar V^{(2)}(t_2,t_3,\kappa_2,\tau) \bar V^{(1)}(t_2,t_3,\kappa_2,\tau)
-\bar C^{(2)}(t_3,\tau)\, \bar C^{(1)}(t_3,\tau) \nonumber\\
&+& \frac{1}{3} \left(\bar C^{(1)}(t_1,\tau)\right)^3
+\frac{1}{3} \left(\bar C^{(1)}(t_3,\tau)\right)^3\nonumber\\
&+&\frac{1}{3} \left(\bar V^{(1)}(t_1,t_2,\kappa_1,\tau) \right)^3
+\frac{1}{3} \left(\bar V^{(1)}(t_2,t_3,\kappa_2,\tau) \right)^3
\, .\label{eq:m63exp}
\end{eqnarray}
Here, $m_6^{(1)}$, $\bar C^{(1)}(t,\tau)$ and $ \bar V^{(1)}(t_1,t_2,\kappa,\tau)$ are needed
to ${\cal O} (\epsilon^4)$ while
$m_6^{(2)}$, $\bar C^{(2)}(t,\tau)$ and $ \bar V^{(2)}(t_1,t_2,\kappa,\tau)$
must be known to ${\cal O} (\epsilon^2)$.
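The subtraction pattern in \eqns{exp2loop6pt}{eq:m63exp} is just the order-by-order inversion of an exponential. A minimal sympy sketch makes this explicit, with $X_l$ a stand-in of ours for the sum of the genuinely $l$-loop pieces in the exponent (trajectory, coefficient-function and vertex corrections):

```python
# If m = exp(g2*X1 + g2^2*X2 + g2^3*X3), its loop coefficients obey
# m2 = m1^2/2 + X2 and m3 = m2*m1 - m1^3/3 + X3, which is exactly the
# structure of the two- and three-loop expansions above.
import sympy as sp

g2, X1, X2, X3 = sp.symbols('g2 X1 X2 X3')
m = sp.expand(sp.series(sp.exp(g2*X1 + g2**2*X2 + g2**3*X3), g2, 0, 4).removeO())
m1, m2, m3 = (m.coeff(g2, k) for k in (1, 2, 3))

assert sp.expand(m2 - (sp.Rational(1, 2)*m1**2 + X2)) == 0
assert sp.expand(m3 - (m2*m1 - sp.Rational(1, 3)*m1**3 + X3)) == 0
```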
It is straightforward to obtain the amplitudes in the physical region where
$s, s_1, s_2, s_3$ are positive and $t_1, t_2, t_3$ are negative, where
the multi-Regge kinematics are
\begin{equation}
s \gg s_1, s_2, s_3 \gg -t_1, -t_2, -t_3\, ,\label{eq:mrk6pt}
\end{equation}
and the mass-shell conditions for gluons 4 and 5, emitted along
the $t$ channel, are
\begin{equation}
\kappa_1 = {s_{1}\, s_{2}\over s_{456}} \, ,\qquad \kappa_2 = {s_{2}\, s_{3}\over s_{345}} \,
,\label{mass6pt}
\end{equation}
by analytically continuing \eqns{exp1loop6pt}{exp2loop6pt}
through the prescriptions $\ln(-s_j) = \ln(s_j) - i\pi$, for $s_j > 0$ with $j=1, 2,3$,
and $\ln(-\kappa_i) = \ln(\kappa_i) - i\pi$, for $\kappa_i > 0$, with $i=1,2$.
\section{The Bern-Dixon-Smirnov ansatz in multi-Regge kinematics}
\label{sec:bdsmrk}
The BDS ansatz prescribes that the $n$-gluon MHV amplitude be written as,
\begin{eqnarray}
m_n &=& m_n^{(0)} \left[ 1 + \sum_{L=1}^\infty a^L M_n^{(L)}(\epsilon) \right]
\nonumber\\ &=& m_n^{(0)}
\exp\left[ \sum_{l=1}^\infty a^l \left( f^{(l)}(\epsilon)
M_n^{(1)}(l\epsilon) + Const^{(l)} + E_n^{(l)}(\epsilon)\right)\right]\,
,\label{eq:bds1}
\end{eqnarray}
where
\begin{equation}
a = {2\gs^2 N\over (4\pi)^{2-\epsilon}} e^{-\gamma\epsilon}
\end{equation}
is the 't Hooft gauge coupling, and with
\begin{equation}
f^{(l)}(\epsilon) = f^{(l)}_0 + \epsilon f^{(l)}_1 + \epsilon^2 f^{(l)}_2\,
,\label{eq:flfunct}
\end{equation}
where $f^{(1)}(\epsilon)=1$, and
$f^{(l)}_0$ is proportional to the $l$-loop cusp anomalous
dimension~\cite{Korchemsky:1987wg},
$\hat{\gamma}_K^{(l)} = 4f^{(l)}_0$, which has been conjectured to all orders of
$a$~\cite{Beisert:2006ez} and computed to
${\cal O} (a^4)$~\cite{Bern:2006ew,Cachazo:2006az}, and $f^{(l)}_1$
is related to the soft anomalous
dimension~\cite{Magnea:1990zb,Sterman:2002qn},
${\cal G}_0^{(l)} = 2f^{(l)}_1/l$, and is known to
${\cal O} (a^3)$~\cite{Bern:2005iz}. In \eqn{eq:bds1},
$Const^{(l)}$ are constants, and $E_n^{(l)}(\epsilon)$ are ${\cal O} (\epsilon)$
contributions, with $Const^{(1)}=0$ and $E_n^{(1)}(\epsilon)=0$,
and $M_n^{(L)}(\epsilon)$ is the $L$-loop colour-stripped
amplitude rescaled by the tree amplitude. In the convention and notation
of \eqn{elasexpand}, the rescaled coupling (\ref{rescal}) is related to $a$ by,
\begin{equation}
a = 2 G(\epsilon)\bar\gs^2
\end{equation}
with
\begin{equation}
G(\epsilon) = {e^{-\gamma\epsilon}\ \Gamma(1-2\epsilon)\over
\Gamma(1+\epsilon)\, \Gamma^2(1-\epsilon)} = 1 + {\cal O} (\epsilon^2)\, .
\end{equation}
Thus, the $n$-gluon amplitude is given by,
\begin{equation}
a^L M_n^{(L)}(\epsilon) = \left( \frac{a}{2G(\epsilon)}\right)^L
m_n^{(L)}(\epsilon)\, ,\label{eq:ourm}
\end{equation}
and the BDS ansatz (\ref{eq:bds1}) becomes
\begin{eqnarray}
m_n &=& m_n^{(0)} \left[ 1 + \sum_{L=1}^\infty {\bar\gs}^{2L}(t)
m_n^{(L)}(\epsilon) \right] \nonumber\\ &=& m_n^{(0)}
\exp\left[ \sum_{l=1}^\infty {\bar\gs}^{2l}(t) \left( 2G(\epsilon)\right)^l
\left( f^{(l)}(\epsilon) {m_n^{(1)}(l\epsilon)\over 2G(l\epsilon)}
+ Const^{(l)} +E_n^{(l)}(\epsilon) \right)\right]\, .\label{eq:bdsddg}
\end{eqnarray}
\subsection{Amplitudes with four or five gluons}
Substituting the one-loop four-point amplitude (\ref{4pt1l}) in
\eqn{eq:bdsddg} and comparing
with the expansion (\ref{elasexpand}) for $n=4$ of the high-energy
prescription (\ref{elasuchan}), we determine the Regge trajectory from
the coefficient of the single logarithm~\cite{DelDuca:2008pj},
\begin{eqnarray}
\alpha^{(2)}(\epsilon) &=& 2\, f^{(2)}(\epsilon)\, \alpha^{(1)}(2\epsilon) + {\cal O} (\epsilon)\, , \label{eq:alphabds}\\
\alpha^{(3)}(\epsilon) &=& 4\, f^{(3)}(\epsilon)\, \alpha^{(1)}(3\epsilon) + {\cal O} (\epsilon)\, ,\nonumber
\end{eqnarray}
with $\alpha^{(1)}$ given in \eqn{alpha1}, and in general
\begin{equation}
\alpha^{(l)}(\epsilon) = 2^{l-1}\, f^{(l)}(\epsilon)\, \alpha^{(1)}(l\epsilon) + {\cal O} (\epsilon)\, .\label{eq:alphagen}
\end{equation}
From \eqn{eq:alphagen}, we see that to ${\cal O} (\epsilon^0)$
only the first two terms of the $f^{(l)}(\epsilon)$ function (\ref{eq:flfunct})
enter the evaluation of the Regge trajectory.
Using the $f^{(2)}$ and $f^{(3)}$ functions~\cite{Bern:2005iz},
\begin{eqnarray}
f^{(2)}(\epsilon) &=& - \zeta_2 - \zeta_3\epsilon - \zeta_4\epsilon^2\, ,\nonumber\\
f^{(3)}(\epsilon) &=& {11\over 2} \zeta_4 + (6\zeta_5 + 5\zeta_2\zeta_3)\epsilon
+ (c_1\zeta_6 + c_2\zeta_3^2)\epsilon^2\, ,\label{eq:ffunct}
\end{eqnarray}
we see that \eqn{eq:alphabds} agrees with \eqns{eq:tworegge}{eq:treregge}
to ${\cal O} (\epsilon^0)$. The constants $c_1, c_2$ are known only
numerically~\cite{Spradlin:2008uu}, but they
do not enter the evaluation of the Regge trajectory.
\eqn{eq:bdsddg} implies the iterative structure of the two-loop $n$-gluon
amplitude given in \eqn{eq:ite2bds}, which we report here in our
convention~(\ref{eq:ourm}) for the coupling,
\begin{equation}
m_n^{(2)}(\epsilon) = {1\over 2} \left[m_n^{(1)}(\epsilon)\right]^2
+ {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, m_n^{(1)}(2\epsilon)
+ 4\, Const^{(2)} + {\cal O} (\epsilon)\, ,\label{eq:ite2}
\end{equation}
with $Const^{(2)}= -\zeta_2^2/2$, and where the one-loop
amplitude, $m_n^{(1)}(\epsilon)$, must be
known to ${\cal O} (\epsilon^2)$. \eqn{eq:ite2} has been shown to be correct for
$n= 4$~\cite{Anastasiou:2003kj} and $n=5$~\cite{Bern:2006vw,Cachazo:2008vp} for
general kinematics.
Using the iterative structure (\ref{eq:ite2}) for the four-point amplitude,
it is possible to express the two-loop coefficient function in terms of
the one-loop coefficient function. In fact, comparing
\eqn{eq:ite2} with $n= 4$ to the two-loop factorization of the
four-point amplitude in the multi-Regge kinematics~(\ref{exp2loopu}),
we find the following iterative structure
\begin{equation}
C^{(2)}(t,\tau,\epsilon) = {1\over 2} \left[ C^{(1)}(t,\tau,\epsilon)\right]^2
+ {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, C^{(1)}(t,\tau,2\epsilon)
+ 2\, Const^{(2)}
+ {\cal O} (\epsilon)\, ,\label{eq:ifite2}
\end{equation}
where, to compute the two-loop coefficient function $C^{(2)}(t,\tau,\epsilon)$ to ${\cal O} (\epsilon^0)$, the one-loop coefficient function, $C^{(1)}(t,\tau,\epsilon)$,
is needed to ${\cal O} (\epsilon^2)$. \eqn{eq:ifite2} agrees with
\eqn{eq:2loopif} to ${\cal O} (\epsilon^0)$.
Similarly, the iterative structure (\ref{eq:ite2}) for the five-point amplitude
means that we can also
express the two-loop Lipatov vertex in terms of the one-loop Lipatov vertex.
Comparing \eqn{eq:ite2} with $n= 5$ to the two-loop factorization of the
five-point amplitude~(\ref{exp2loop}), and using \eqns{eq:alphabds}{eq:ifite2},
we obtain
\begin{equation}
V^{(2)}(t_1,t_2,\kappa,\tau,\epsilon) = {1\over 2} \left[ V^{(1)}(t_1,t_2,\kappa,\tau,\epsilon)\right]^2
+ {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, V^{(1)}(t_1,t_2,\kappa,\tau,2\epsilon)
+ {\cal O} (\epsilon)\, ,\label{eq:2llipver}
\end{equation}
where, to compute $V^{(2)}(t_1,t_2,\kappa,\tau,\epsilon)$ to ${\cal O} (\epsilon^0)$, $V^{(1)}(t_1,t_2,\kappa,\tau,\epsilon)$ must be known
through ${\cal O} (\epsilon^2)$.
Of course, \eqn{eq:ite2} with $n= 5$ requires the knowledge of
the one-loop five-point amplitude, $m_5^{(1)}(\epsilon)$, through
${\cal O} (\epsilon^2)$\footnote{We shall provide the details of $m_5^{(1)}(\epsilon)$ to that accuracy,
in fact to all orders in $\epsilon$, in a forthcoming publication~\cite{us}.}, but once $V^{(1)}$ is known
through ${\cal O} (\epsilon^2)$, the two-loop Lipatov vertex can be determined by \eqn{eq:2llipver}
without knowing explicitly the two-loop five-point amplitude. In fact, once evaluated,
$V^{(2)}$ can be used, together with $C^{(2)}$ and $\alpha^{(2)}$, in \eqn{exp2loop}
to determine the two-loop five-point amplitude in the multi-Regge kinematics.
The iterative structure of the three-loop $n$-gluon amplitude is,
\begin{equation}
m_n^{(3)}(\epsilon) = m_n^{(2)}(\epsilon)\, m_n^{(1)}(\epsilon)
- {1\over 3} \left[m_n^{(1)}(\epsilon)\right]^3
+ {4\,G^3(\epsilon)\over G(3\epsilon)} f^{(3)}(\epsilon)\, m_n^{(1)}(3\epsilon)
+ 8\, Const^{(3)} + {\cal O} (\epsilon)\, ,\label{eq:ite3}
\end{equation}
where $m_n^{(1)}(\epsilon)$ and $m_n^{(2)}(\epsilon)$ must be known to
${\cal O} (\epsilon^4)$ and ${\cal O} (\epsilon^2)$, respectively, and with
\begin{equation}
Const^{(3)} = \left( {341\over 216} + {2\over 9} c_1\right) \zeta_6
+ \left( -{17\over 9} + {2\over 9} c_2\right) \zeta_3^2\, .\label{eq:cost3}
\end{equation}
\eqn{eq:ite3} has been shown to be correct for $n=4$~\cite{Bern:2005iz}.
Comparing \eqn{eq:ite3} with $n= 4$ to the three-loop factorisation of the
four-point amplitude in the multi-Regge kinematics~(\ref{exp3loopu}),
we obtain the three-loop iteration of the coefficient function,
\begin{eqnarray}
C^{(3)}(t,\tau,\epsilon) &=& C^{(2)}(t,\tau,\epsilon)\, C^{(1)}(t,\tau,\epsilon)
- {1\over 3} \left[C^{(1)}(t,\tau,\epsilon)\right]^3 \nonumber\\
&+& {4\,G^3(\epsilon)\over G(3\epsilon)} f^{(3)}(\epsilon)\, C^{(1)}(t,\tau,3\epsilon)
+ 4\, Const^{(3)} + {\cal O} (\epsilon)\, .\label{eq:ifite3}
\end{eqnarray}
The constants $c_1, c_2$ cancel when \eqns{eq:ffunct}{eq:cost3} are used
in \eqns{eq:ite3}{eq:ifite3}.
Using the two-loop coefficient function to ${\cal O} (\epsilon^2)$ (\ref{eq:2loopif}),
and the one-loop coefficient function to ${\cal O} (\epsilon^4)$ (\ref{eq:ifonel}),
we see that \eqn{eq:ifite3} is in agreement with \eqn{eq:3loopif} to
${\cal O} (\epsilon^0)$.
Comparing \eqn{eq:ite3} with $n= 5$ to the three-loop factorisation of the
five-point amplitude (\ref{m53ite}), we obtain the three-loop iteration of the
Lipatov vertex,
\begin{eqnarray}
V^{(3)}(t_1,t_2,\kappa,\tau,\epsilon) &=&
V^{(2)}(t_1,t_2,\kappa,\tau,\epsilon) V^{(1)}(t_1,t_2,\kappa,\tau,\epsilon)
- {1\over 3} \left[ V^{(1)}(t_1,t_2,\kappa,\tau,\epsilon)\right]^3 \nonumber\\
&+& {4\,G^3(\epsilon)\over G(3\epsilon)} f^{(3)}(\epsilon)\, V^{(1)}(t_1,t_2,\kappa,\tau,3\epsilon)
+ {\cal O} (\epsilon)\, .\label{eq:3llipver}
\end{eqnarray}
\subsection{Amplitudes with six or more gluons}
In the two-loop expansion of the six-point amplitude~(\ref{exp2loop6pt}),
no new vertices or coefficient functions occur. Thus, using the explicit expressions
of $V^{(2)}$, $C^{(2)}$ and $\alpha^{(2)}$ in \eqn{exp2loop6pt},
one can assemble the two-loop six-point amplitude in the multi-Regge kinematics.
However, even without knowing the explicit expression of the two-loop Lipatov
vertex~(\ref{eq:2llipver}), it is easy to see by substitution that
the iterative structure of Eqs.~(\ref{eq:alphabds}),
(\ref{eq:ifite2}) and (\ref{eq:2llipver}) ensures that the six-point amplitude~(\ref{exp2loop6pt})
fulfils the two-loop iterative formula (\ref{eq:ite2}) for $n=6$. Furthermore, the expression
has the correct analytic properties in the physical region where
$s, s_1, s_2, s_3$ are positive and $t_1, t_2, t_3$ are negative.
Because no new vertices or coefficient functions occur in the two-loop expansion
of \eqn{loopnpt} even for $n=7$ or higher, we conclude that the two-loop expansion
of \eqn{loopnpt} fulfils the two-loop iterative formula (\ref{eq:ite2}), and thus the BDS
ansatz, for any $n$. Thus, the multi-Regge kinematics
are not able to resolve the BDS-ansatz discrepancy, {\it i.e.} the quantity $R_n^{(2)}$
(\ref{eq:discr}) vanishes in the multi-Regge kinematics, for any $n$.
The same arguments can be repeated for the three-loop case: in the three-loop expansion of
the six-point amplitude (\ref{eq:m63exp}) no new vertices or coefficient functions occur.
Thus, using the explicit expressions
of $V^{(3)}$, $C^{(3)}$ and $\alpha^{(3)}$ in \eqn{eq:m63exp}
one can assemble the three-loop six-point amplitude in the multi-Regge kinematics.
However, even without knowing the explicit expression of the three-loop Lipatov
vertex~(\ref{eq:3llipver}), it is easy to see by substitution that
the iterative structure of Eqs.~(\ref{eq:alphabds}),
(\ref{eq:ifite3}) and (\ref{eq:3llipver}) ensures that the six-point amplitude~(\ref{eq:m63exp})
fulfils the three-loop iterative formula (\ref{eq:ite3}) for $n=6$.
Because no new vertices or coefficient functions occur in the three-loop expansion
of \eqn{loopnpt} for $n=7$ or higher, the three-loop expansion
of \eqn{loopnpt} fulfils the three-loop iterative formula (\ref{eq:ite3}), and thus the BDS
ansatz, for any $n$. Thus, also the quantity $R_n^{(3)}$
(\ref{eq:discr}) vanishes in the multi-Regge kinematics, for any $n$.
Clearly, the same occurs with the iterative structure of the $l$-loop
$n$-gluon amplitude for $l\ge 4$. We conclude that $R_n^{(l)}$
vanishes in the multi-Regge kinematics for any $l$ and $n$.
The $l$-loop $n$-gluon amplitudes in the multi-Regge kinematics are in complete
agreement with the BDS ansatz; therefore, they cannot resolve the violations of
the ansatz for $n\ge 6$.
In Refs.~\cite{Drummond:2007au, Bern:2008ap} it was argued that the remainder function~(\ref{eq:discr}) for $n=6$ is a function of the three conformal cross-ratios
\begin{equation}
u_1 = \frac{s_{12}\, s_{45}}{s_{345}\, s_{456}}\, ,\qquad
u_2 = \frac{s_{23}\, s_{56}}{s_{234}\, s_{456}}\, ,\qquad
u_3 = \frac{s_{34}\, s_{61}}{s_{234}\, s_{345}}\, .\label{thrinvar}
\end{equation}
Using the notation of \sec{sec:mrkforall} and the
results of \sec{sec:6ptmrkinvar}, we note that
in the multi-Regge kinematics (\ref{eq:mrk6pt})
the conformal invariants (\ref{thrinvar}) become~\cite{Brower:2008nm, Brower:2008ia}
\begin{equation}
u_1 \simeq 1\, ,\qquad
u_2 = \frac{t_3 \kappa_1}{t_2 s_2} \simeq {\cal O} \left(\frac{t}{s}\right)\, ,\qquad
u_3 = \frac{t_1 \kappa_2}{t_2 s_2} \simeq {\cal O} \left(\frac{t}{s}\right)\, ,\label{thrinvarmrk}
\end{equation}
thus $u_1$ is close to 1, while $u_2$ and $u_3$ are very small and are in fact sub-leading
in the multi-Regge kinematics.
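The suppression of $u_2$ and $u_3$ is easy to check numerically. The sketch below builds an exact $2\to 4$ massless phase-space point with strongly ordered rapidities (a hypothetical kinematic configuration of our own choosing, with common $|p_\perp|=1$) and evaluates the cross-ratios (\ref{thrinvar}) in the all-outgoing convention; at this point $u_1$ comes out within a few percent of 1, while $u_2$ and $u_3$ are of order $10^{-3}$:

```python
# Numeric sanity check at one hypothetical MRK point (our own choice of
# kinematics, not taken from the literature): four outgoing gluons with
# common |p_perp| = 1 and strongly ordered rapidities.
import math

ys   = [8.0, 2.5, -2.5, -8.0]                    # rapidities of gluons 3..6
phis = [0.0, math.pi/2, math.pi, 3*math.pi/2]    # azimuths; the p_perp's sum to zero

def mom(y, phi, kT=1.0):
    """Massless four-momentum (E, px, py, pz) from rapidity and azimuth."""
    return (kT*math.cosh(y), kT*math.cos(phi), kT*math.sin(phi), kT*math.sinh(y))

out = [mom(y, phi) for y, phi in zip(ys, phis)]
# symmetric rapidities => equal sums of + and - light-cone components,
# so the incoming energies are fixed exactly by momentum conservation
E  = sum(math.exp(y) for y in ys) / 2.0
pA = (E, 0.0, 0.0,  E)   # physical momentum of incoming gluon 2 (forward)
pB = (E, 0.0, 0.0, -E)   # physical momentum of incoming gluon 1 (backward)

# all-outgoing convention: incoming momenta enter with a minus sign
p = {1: tuple(-c for c in pB), 2: tuple(-c for c in pA),
     3: out[0], 4: out[1], 5: out[2], 6: out[3]}

def s(*idx):
    q = [sum(p[i][mu] for i in idx) for mu in range(4)]
    return q[0]**2 - q[1]**2 - q[2]**2 - q[3]**2

u1 = s(1, 2)*s(4, 5)/(s(3, 4, 5)*s(4, 5, 6))
u2 = s(2, 3)*s(5, 6)/(s(2, 3, 4)*s(4, 5, 6))
u3 = s(3, 4)*s(6, 1)/(s(2, 3, 4)*s(3, 4, 5))
```

Pushing the rapidity gaps further apart drives $u_1\to 1$ and $u_2, u_3\to 0$, as expected from (\ref{thrinvarmrk}).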
\section{Proof of BDS ansatz in multi-Regge kinematics}
\label{sec:proof}
In the previous section, we derived iterative relations for the three building blocks that occur in the
multi-Regge factorisation of gluonic amplitudes: the Regge trajectory, the coefficient functions and the
Lipatov vertex. We argued that the high-energy prescription implies that the six-gluon amplitude also
satisfies the BDS ansatz (in the restricted kinematics where the high-energy prescription is valid). In
this section, we prove that in the Euclidean region the BDS ansatz is fully consistent
with multi-Regge factorisation (the proof for the physical region is similar).
In particular, we show that, if BDS holds true for four- and five-point amplitudes, then
it also holds true for any $n$-gluon amplitude (in multi-Regge kinematics).
We start by deriving exponentiated forms for the coefficient functions and the
Lipatov vertex.
If the BDS ansatz holds true for the four-point amplitude, then we can immediately
insert the tree- and one-loop four-gluon
amplitudes in multi-Regge kinematics
\begin{equation}\begin{split}\label{eq:m41}
m_4^{(0)}=&g^2 C^{(0)}(p_2,p_3)\,\frac{s}{t}\,C^{(0)}(p_1,p_4),\\
m_4^{(1)}(l\epsilon)=&2\bar C^{(1)}(t,\tau,l\epsilon) + \bar\alpha^{(1)}(t,l\epsilon)\ln\left(\frac{-s}{\tau}\right),
\end{split}
\end{equation}
into \eqn{eq:bdsddg}, such that
\begin{eqnarray}\label{eq:m4exp}
m_4
&=&\,g^2 C^{(0)}(p_2,p_3)\,\frac{s}{t}\,C^{(0)}(p_1,p_4) \left(\frac{-s}{\tau}\right)^{\sum_{l=1}^{\infty}\,\bar\gs^{2l}\,
2^{l-1}\frac{G^l(\epsilon)}{G(l\epsilon)}\,f^{(l)}(\epsilon)\,\bar\alpha^{(1)}(t,l\epsilon)}\nonumber \\
&&\,\times\exp\,2\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l-1}G^l(\epsilon)\left(\frac{f^{(l)}(\epsilon)}{G(l\epsilon)}\,\bar C^{(1)}(t,\tau,l\epsilon)+Const^{(l)}+E_4^{(l)}(\epsilon)\right).
\end{eqnarray}
Comparing \eqn{eq:m4exp} to the general form of the high energy prescription of \eqn{elasuchan},
we can easily identify the all-orders forms of the Regge trajectory
\begin{equation}\label{eq:aExp}
\alpha(t,\epsilon)=\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l-1}\frac{G^l(\epsilon)}{G(l\epsilon)}\,f^{(l)}(\epsilon)\,\bar\alpha^{(1)}(t,l\epsilon),
\end{equation}
and the coefficient function,
\begin{equation}\begin{split}\label{eq:CExp}
C(p_i,p_j,&\tau,\epsilon)=\\
&\,C^{(0)}(p_i,p_j)
\,\exp\,\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l-1}
G^l(\epsilon)\left(\frac{f^{(l)}(\epsilon)}{G(l\epsilon)}\,\bar C^{(1)}(t,\tau,l\epsilon)+Const^{(l)}+E_4^{(l)}(\epsilon)\right),
\end{split}
\end{equation}
where in the last equation $t=(p_i+p_j)^2$. Note that expanding \eqn{eq:aExp} and \eqn{eq:CExp} in the rescaled coupling reproduces the
explicit forms of the two- and three-loop
iterative expressions given in \eqn{eq:alphabds} and \eqns{eq:ifite2}{eq:ifite3} respectively. Furthermore, \eqn{eq:aExp} agrees up to ${\cal O} (\epsilon)$ with \eqn{eq:alphagen}, which expresses the $l$-loop Regge trajectory in terms of the function $f^{(l)}$ appearing in the BDS ansatz.
We can now repeat the argument for $m_5$ and, by reusing \eqn{eq:aExp} and \eqn{eq:CExp}, extract the corresponding formula for the Lipatov vertex,
\begin{eqnarray}\label{eq:m5exp}
m_5&=&\,g^2\, C(p_2,p_3,\tau,\epsilon)\,\frac{s}{t_1\,t_2}\,V^{(0)}(q_2,q_1)\,C(p_1,p_5,\tau,\epsilon)\nonumber \\
&&\,\times
\left(\frac{-s_1}{\tau}\right)^{\alpha(t_1,\epsilon)}\,
\left(\frac{-s_2}{\tau}\right)^{\alpha(t_2,\epsilon)}\nonumber\\
&&\,\times\exp\,\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l}G^l(\epsilon)\left(\frac{f^{(l)}(\epsilon)}{2G(l\epsilon)}
\,\bar V^{(1)}(t_2,t_1,\kappa_1,\tau,l\epsilon) +
E_5^{(l)}(\epsilon) - E_4^{(l)}(\epsilon)\right).
\end{eqnarray}
Comparing with \eqn{loop5pt}, we find
\begin{eqnarray}
\label{eq:VExp}
V(q_2,q_1,\kappa,\epsilon)&=&V^{(0)}(q_2,q_1)\,\nonumber\\
&\times&\exp\,\sum_{l=1}^{\infty}\,\bar\gs^{2l}\,
2^{l}G^l(\epsilon)\left(\frac{f^{(l)}(\epsilon)}{2G(l\epsilon)}\,\bar V^{(1)}(t_2,t_1,\kappa_1,\tau,l\epsilon) + E_5^{(l)}(\epsilon) -
E_4^{(l)}(\epsilon)\right).\nonumber \\
\end{eqnarray}
As before, expanding \eqn{eq:VExp} in the rescaled coupling reproduces the
explicit forms of the two- and three-loop
iterative expressions given in \eqns{eq:2llipver}{eq:3llipver}.
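At two loops this can be made explicit with a few lines of sympy: treating $G(\epsilon)$, $G(2\epsilon)$ and $\bar V^{(1)}$ evaluated at $\epsilon$ and $2\epsilon$ as independent symbols, using $f^{(1)}(\epsilon)=1$ and $E_n^{(1)}(\epsilon)=0$, and dropping the ${\cal O}(\epsilon)$ pieces, the expansion of \eqn{eq:VExp} reproduces \eqn{eq:2llipver}:

```python
# Two-loop check of the exponentiated Lipatov vertex: expand the exponent of
# eq. (VExp) truncated at two loops and compare with eq. (2llipver).
import sympy as sp

g2, f2 = sp.symbols('g2 f2')            # rescaled coupling squared, f^(2)
Ge, G2e = sp.symbols('Ge G2e')          # G(eps), G(2 eps)
V1e, V12e = sp.symbols('V1e V12e')      # Vbar^(1)(eps), Vbar^(1)(2 eps)

# exponent of eq. (VExp): l=1 term 2 G(eps) * Vbar^(1)(eps)/(2 G(eps)),
# l=2 term 4 G(eps)^2 * f^(2) * Vbar^(1)(2 eps)/(2 G(2 eps))
expo = g2*2*Ge*(sp.Rational(1, 2)/Ge)*V1e + g2**2*4*Ge**2*(f2/(2*G2e))*V12e
V = sp.expand(sp.series(sp.exp(expo), g2, 0, 3).removeO())

V1 = V.coeff(g2, 1)
V2 = V.coeff(g2, 2)
assert sp.simplify(V1 - V1e) == 0
assert sp.simplify(V2 - (sp.Rational(1, 2)*V1e**2 + 2*Ge**2/G2e*f2*V12e)) == 0
```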
We now turn to the generic case. Consider an $n$-gluon amplitude in multi-Regge kinematics which satisfies \eqn{loopnqmr}.
Inserting the exponentiated expressions for the Regge trajectory \eqn{eq:aExp}, the coefficient functions \eqn{eq:CExp} and the Lipatov vertex \eqn{eq:VExp},
we find
\begin{eqnarray}
m_n&=&m_n^{(0)}\,\exp\,\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l}G^l(\epsilon)\Bigg[\frac{f^{(l)}(\epsilon)}{2G(l\epsilon)}\,\Bigg(\bar C^{(1)}(t_1,\tau,l\epsilon)+\bar C^{(1)}(t_{n-3},\tau,l\epsilon) \nonumber \\
&&\qquad + \sum_{k=1}^{n-3}\bar\alpha^{(1)}(t_k,l\epsilon)\ln\left(\frac{-s_k}{\tau}\right)\nonumber \\
&& \qquad+\sum_{k=1}^{n-4}\bar V^{(1)}(t_{k+1},t_k,\kappa_k,\tau,l\epsilon)\Bigg)\nonumber \\
&&\qquad+Const^{(l)}+E_4^{(l)}(\epsilon) + (n-4)\big(E_5^{(l)}(\epsilon)-E_4^{(l)}(\epsilon)\big) \Bigg].
\end{eqnarray}
The expression inside the brackets can now be easily identified as the one-loop amplitude in multi-Regge kinematics,
\begin{eqnarray}
m_n^{(1)}(l\epsilon)&=&\,\bar C^{(1)}(t_1,\tau,l\epsilon)+\bar C^{(1)}(t_{n-3},\tau,l\epsilon)
+ \sum_{k=1}^{n-3}\bar\alpha^{(1)}(t_k,l\epsilon)\ln\left(\frac{-s_k}{\tau}\right)\nonumber\\
&&\, +\sum_{k=1}^{n-4}\bar V^{(1)}(t_{k+1},t_k,\kappa_k,\tau,l\epsilon),
\end{eqnarray}
and so we recover
\begin{equation}
m_n=m_n^{(0)}\,\exp\,\sum_{l=1}^{\infty}\,\bar\gs^{2l}\, 2^{l}G^l(\epsilon)\Bigg(\frac{f^{(l)}(\epsilon)}{2G(l\epsilon)}\,m_n^{(1)}(l\epsilon)
+Const^{(l)}+{\cal O} (\epsilon)\Bigg),
\end{equation}
\emph{i.e.} $m_n$ satisfies the BDS ansatz up to ${\cal O} (\epsilon)$.
\section{Quasi-multi-Regge kinematics}
\label{sec:quasi}
\subsection{Amplitudes in the quasi-multi-Regge kinematics with a pair at either end of the ladder}
\label{sec:ampnqmr}
It is possible to define a high-energy prescription
for more general, {\it i.e.} less restrictive, multi-Regge kinematics, such as the
quasi-multi-Regge kinematics where all gluons are strongly
ordered in rapidity, except for a pair of gluons, either at the top or at the bottom of the ladder
as shown schematically in Fig.~\ref{fig:quasiMR}(a).
For example,
\begin{equation}
y_3 \simeq y_4\gg \cdots\gg y_n;\qquad |p_{3\perp}| \simeq |p_{4\perp}| \simeq \cdots \simeq |p_{n\perp}|\, ,
\label{qmrknpt}
\end{equation}
for which the Mandelstam invariants are given in \sec{sec:nptampqmr}.
We conjecture that in the quasi-multi-Regge kinematics of \eqn{qmrknpt}
a generic colour-stripped $l$-loop $n$-gluon amplitude will have the factorised form,
\begin{eqnarray}
\lefteqn{ m_n(1,2, \ldots ,n) =
s \left[g^2\, A(p_2,p_3,p_4) \right]\,
{1\over t_{n-4}}\, \left({-s_{n-4}\over \tau}\right)^{\alpha(t_{n-4})}
\left[g\,V(q_{n-4},q_{n-5},\kappa_{n-5})\right] }
\nonumber\\ &&\qquad\qquad \cdots \times\
{1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)}
\left[g\,V(q_2,q_1,\kappa_1)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g\, C(p_1,p_n) \right]\, ,\label{loopnqmr}
\end{eqnarray}
where we suppressed the dependence of the coefficient functions and the Lipatov vertices
on the reggeisation scale $\tau$. $s_{n-4}$ can be chosen to be
either $s_{35}$ or $s_{45}$, the difference between the two being of the order of $s_{34}$,
thus sub-leading with respect to $s$.
In order for $m_n$ to be real, one can take the invariants $s$, $s_1, \ldots ,s_{n-4}$,
$t_1, \ldots, t_{n-4}$, defined as in \sec{sec:mrkforall}, and $s_{34}$ all negative.
Then the kinematics imply
\begin{equation}
- s \gg -s_1, -s_2, \ldots, -s_{n-4} \gg -s_{34}, -t_1, -t_2, \dots, -t_{n-4}\, .\label{eq:negqmrnpt}
\end{equation}
The limit of multi-Regge kinematics (\ref{eq:mrkneg}) is where $s_{34}$ becomes
as large as any $s_i$-type invariant.
In \eqn{loopnqmr}, the coefficient function $C$ and Lipatov vertex $V$
are exactly the same as in \eqn{loopnpt}. However a new coefficient function,
$A(p_2,p_3,p_4)$, is needed to describe the production of two gluons at one end of the ladder.
The tree approximation, $A^{(0)}(p_2,p_3,p_4)$, was
computed in Refs.~\cite{DelDuca:1995ki,Fadin:1996nw}.
$A$ can be expanded in the rescaled
coupling, just as in \eqns{fullv}{eq:coeffrescal},
\begin{equation}
A(p_2,p_3,p_4,\tau) = A^{(0)}(p_2,p_3,p_4)\left(1 + \bar\gs^{2} {\bar A}^{(1)}(t,s_{34},\tau)
+ \bar\gs^4 {\bar A}^{(2)}(t,s_{34},\tau) + {\cal O} (\bar\gs^{6}) \right)\, .\label{avert}
\end{equation}
For $n=5$, \eqn{loopnqmr} reduces to
\begin{equation}
m_5(1,2,3,4,5) = s \left[g^2\, A(p_2,p_3,p_4,\tau) \right]\,
{1\over t}\, \left({-s_1\over \tau}\right)^{\alpha(t)} \left[g\, C(p_1,p_5,\tau) \right]\, ,\label{loop5qmr}
\end{equation}
with $q=p_1+p_5=-(p_2+p_3+p_4)$, $t=q^2$ and $s=s_{12}$.
Expanding \eqn{loop5qmr} as in \eqn{elasexpand}, we obtain,
at one-, two- and three-loop accuracy,
\begin{eqnarray}
m_5^{(1)} &=& \bar\alpha^{(1)}(t) L + \bar C^{(1)}(t,\tau) + {\bar A}^{(1)}(t,s_{34},\tau)\, ,\label{5pt1lqmr}\\
m_5^{(2)} &=& {1\over 2} \left( m_5^{(1)} \right)^2 + \bar\alpha^{(2)}(t) L \nonumber\\
&+& \bar C^{(2)}(t,\tau) + {\bar A}^{(2)}(t,s_{34},\tau)
- {1\over 2} \left( \bar C^{(1)}(t,\tau) \right)^2
- {1\over 2} \left( {\bar A}^{(1)}(t,s_{34},\tau) \right)^2\, ,\label{5pt2lqmr}\\
m_5^{(3)} &=& m_5^{(2)}\,m_5^{(1)}-{1\over 3}\left(m_5^{(1)}\right)^3 + \bar\alpha^{(3)}(t) L \nonumber\\
&+&\bar C^{(3)}(t,\tau) +{\bar A}^{(3)}(t,s_{34},\tau)
-\bar C^{(2)}(t,\tau)\bar C^{(1)}(t,\tau)-{\bar A}^{(2)}(t,s_{34},\tau){\bar A}^{(1)}(t,s_{34},\tau)\nonumber\\
&+&{1\over 3}\left(\bar C^{(1)}(t,\tau)\right)^3+{1\over 3}\left({\bar A}^{(1)}(t,s_{34},\tau)\right)^3\, ,\label{5pt3lqmr}
\end{eqnarray}
with $L=\ln(-s_1/\tau)$, and where $m_5^{(1)}$ is needed to ${\cal O} (\epsilon^2)$ in Eq.~(\ref{5pt2lqmr}), and $m_5^{(1)}$ and $m_5^{(2)}$ to ${\cal O} (\epsilon^4)$ and ${\cal O} (\epsilon^2)$ respectively in Eq.~(\ref{5pt3lqmr}).
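As for the six-point case, the expansions (\ref{5pt1lqmr})--(\ref{5pt3lqmr}) simply invert the factorised form (\ref{loop5qmr}) order by order. A sympy sketch with abstract symbols of our own for the building blocks verifies the two- and three-loop coefficients:

```python
# Expand A * (Regge factor) * C to O(g2^3), with g2 the rescaled coupling
# squared, and compare with the claimed loop coefficients of the five-point
# amplitude in the quasi-multi-Regge kinematics.
import sympy as sp

g2, L = sp.symbols('g2 L')
A1, A2, A3 = sp.symbols('A1 A2 A3')      # Abar^(1,2,3)
C1, C2, C3 = sp.symbols('C1 C2 C3')      # Cbar^(1,2,3)
a1, a2, a3 = sp.symbols('a1 a2 a3')      # alphabar^(1,2,3)(t)

A = 1 + g2*A1 + g2**2*A2 + g2**3*A3
C = 1 + g2*C1 + g2**2*C2 + g2**3*C3
regge = sp.exp((g2*a1 + g2**2*a2 + g2**3*a3)*L)
m5 = sp.expand(sp.series(A*regge*C, g2, 0, 4).removeO())
m1, m2, m3 = (m5.coeff(g2, k) for k in (1, 2, 3))

assert sp.expand(m1 - (a1*L + C1 + A1)) == 0
assert sp.expand(m2 - (sp.Rational(1, 2)*m1**2 + a2*L + C2 + A2
                       - sp.Rational(1, 2)*C1**2 - sp.Rational(1, 2)*A1**2)) == 0
assert sp.expand(m3 - (m2*m1 - sp.Rational(1, 3)*m1**3 + a3*L + C3 + A3
                       - C2*C1 - A2*A1
                       + sp.Rational(1, 3)*C1**3 + sp.Rational(1, 3)*A1**3)) == 0
```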
The coefficient functions $\bar C$ were already evaluated
in \sec{sec:4pthel}. Therefore, knowledge of the five-point amplitude at a given loop accuracy in the
quasi-multi-Regge kinematics (\ref{qmrknpt}) allows one to find the
coefficient function $A$ at the same loop accuracy. Furthermore, combining the iterative
formula (\ref{eq:ite2})
for $n=5$ with the high-energy prescription (\ref{loopnqmr}), one obtains an iterative
formula for the coefficient function $A$,
\begin{equation}
A^{(2)}(t,s_{34},\tau,\epsilon) = {1\over 2} \left[ A^{(1)}(t,s_{34},\tau,\epsilon)\right]^2
+ {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, A^{(1)}(t,s_{34},\tau,2\epsilon)
+ 2\, Const^{(2)}
+ {\cal O} (\epsilon)\, ,\label{eq:avertif}
\end{equation}
where the one-loop coefficient function, $A^{(1)}(\epsilon)$, is needed to ${\cal O} (\epsilon^2)$. Similarly, it is straightforward to derive from Eq.~(\ref{5pt3lqmr}) an iterative formula for the three-loop coefficient function,
\begin{eqnarray}
A^{(3)}(t,s_{34},\tau,\epsilon) &=& A^{(2)}(t,s_{34},\tau,\epsilon)A^{(1)}(t,s_{34},\tau,\epsilon) -{1\over 3} \left[ A^{(1)}(t,s_{34},\tau,\epsilon)\right]^3\nonumber \\
&+& {4\,G^3(\epsilon)\over G(3\epsilon)} f^{(3)}(\epsilon)\, A^{(1)}(t,s_{34},\tau,3\epsilon)
+ 4\, Const^{(3)}
+ {\cal O} (\epsilon)\, ,\label{eq:avertif3l}
\end{eqnarray}
where the one- and two-loop coefficient functions $A^{(1)}(\epsilon)$ and $A^{(2)}(\epsilon)$ are needed to ${\cal O} (\epsilon^4)$ and ${\cal O} (\epsilon^2)$, respectively.
\begin{figure}[!t]
\begin{fmffile}{quasimr}
(a) \parbox{70mm}{ \begin{fmfgraph*}(150,250)
\fmfstraight
\fmfleft{p1,pdn1,pdn2,pd5,pd4,p2}
\fmfright{x1,x2,x3,x5,x6,x7}
\fmf{phantom}{p1,u1,v1n,u2,pn,u3,x1}
\fmf{phantom}{p2,o1,v23,o2,p3,o3,x7}
\fmffreeze
\fmf{phantom}{pn,pn1,pn2,p5,p4,p3}
\fmffreeze
\fmf{gluon,label=$p_2$,label.side=left,l.d=0.03w}{p2,v23}
\fmf{gluon,label=$p_3$,label.side=left,l.d=0.03w}{v23,p3}
\fmf{phantom}{v23,v4,v5,vn2,vn1,v1n}
\fmffreeze
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=0.19w,fore=green}{v23}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v4}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v5}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn2}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn1}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v1n}
\fmf{zigzag,label=$q_{n-4}$,label.side=right,l.d=0.055w}{v23,v4}
\fmf{zigzag,label=$q_{n-5}$,label.side=right,l.d=0.055w}{v4,v5}
\fmf{zigzag,label=$q_{2}$,label.side=right,l.d=0.055w}{vn2,vn1}
\fmf{zigzag,label=$q_{1}$,label.side=right,l.d=0.055w}{vn1,v1n}
\fmf{gluon,label=$p_1$,label.side=right,l.d=0.055w}{p1,v1n}
\fmf{gluon,label=$p_{n}$,label.side=right,l.d=0.055w}{v1n,pn}
\fmffreeze
\fmf{gluon,label=$p_5$,label.side=left,l.d=0.03w}{v4,p4}
\fmf{gluon,label=$p_6$,label.side=left,l.d=0.03w}{v5,p5}
\fmf{gluon,label=$p_{n-2}$,label.side=left,l.d=0.03w}{vn2,pn2}
\fmf{gluon,label=$p_{n-1}$,label.side=left,l.d=0.03w}{vn1,pn1}
\fmffreeze
\fmf{phantom}{u3,ou4,ou5,oun2,oun1,o3}
\fmffreeze
\fmf{phantom}{o3,ox1,ox2,ox3,ox4,ox5,ox6,ox7,ox8,ox9,ox10,oun1}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-4}$}{ox4,ox10}
\fmffreeze
\fmf{phantom}{oun1,oun1x1,oun1x2,oun1x3,oun1x4,oun1x5,oun1x6,oun1x7,oun1x8,oun1x9,oun1x10,oun2}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-5}$}{oun1x1,oun1x10}
\fmffreeze
\fmf{phantom}{ou5,ou5x1,ou5x2,ou5x3,ou5x4,ou5x5,ou5x6,ou5x7,ou5x8,ou5x9,ou5x10,ou4}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_2$}{ou5x1,ou5x10}
\fmffreeze
\fmf{phantom}{ou4,ou4x1,ou4x2,ou4x3,ou4x4,ou4x5,ou4x6,ou4x7,ou4x8,ou4x9,ou4x10,u3}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_1$}{ou4x1,ou4x10}
\fmffreeze
\fmf{phantom}{v23,v23x1,v23x2,v23x3,v23x4,v23x5,v23x6,v4}
\fmf{phantom}{p3,p3x1,p3x2,p3x3,p3x4,p3x5,p3x6,p4}
\fmffreeze
\fmf{gluon,label=$p_4$,l.s=left}{p3x3,v23x1}
\fmfv{label=$\kappa_{n-5}$,l.a=-180,l.d=-0.17w}{p4}
\fmfv{label=$\kappa_{n-6}$,l.a=-180,l.d=-0.17w}{p5}
\fmfv{label=$\kappa_{2}$,l.a=-180,l.d=-0.09w}{pn2}
\fmfv{label=$\kappa_{1}$,l.a=-180,l.d=-0.09w}{pn1}
\fmf{phantom}{v5,v67,vn2}
\fmfv{label=$\vdots$,l.d=-1mm}{v67}
\end{fmfgraph*}}
(b) \parbox{10mm}{ \begin{fmfgraph*}(150,250)
\fmfstraight
\fmfleft{p1,pdn1,pdn2,pd5,pd4,p2}
\fmfright{x1,x2,x3,x5,x6,x7}
\fmf{phantom}{p1,u1,v1n,u2,pn,u3,x1}
\fmf{phantom}{p2,o1,v23,o2,p3,o3,x7}
\fmffreeze
\fmf{phantom}{pn,pn1,pn2,p5,p4,p3}
\fmffreeze
\fmf{gluon,label=$p_2$,label.side=left,l.d=0.03w}{p2,v23}
\fmf{gluon,label=$p_3$,label.side=left,l.d=0.03w}{v23,p3}
\fmf{phantom}{v23,v4,v5,vn2,vn1,v1n}
\fmffreeze
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=0.19w,fore=green}{v23}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v4}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v5}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn2}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{vn1}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.19w,fore=green}{v1n}
\fmf{zigzag,label=$q_{n-5}$,label.side=right,l.d=0.055w}{v23,v4}
\fmf{zigzag,label=$q_{n-6}$,label.side=right,l.d=0.055w}{v4,v5}
\fmf{zigzag,label=$q_{2}$,label.side=right,l.d=0.055w}{vn2,vn1}
\fmf{zigzag,label=$q_{1}$,label.side=right,l.d=0.055w}{vn1,v1n}
\fmf{gluon,label=$p_1$,label.side=right,l.d=0.055w}{p1,v1n}
\fmf{gluon,label=$p_{n}$,label.side=right,l.d=0.055w}{v1n,pn}
\fmffreeze
\fmf{gluon,label=$p_5$,label.side=left,l.d=0.03w}{v4,p4}
\fmf{gluon,label=$p_6$,label.side=left,l.d=0.03w}{v5,p5}
\fmf{gluon,label=$p_{n-3}$,label.side=left,l.d=0.03w}{vn2,pn2}
\fmf{gluon,label=$p_{n-2}$,label.side=left,l.d=0.03w}{vn1,pn1}
\fmffreeze
\fmf{phantom}{u3,ou4,ou5,oun2,oun1,o3}
\fmffreeze
\fmf{phantom}{o3,ox1,ox2,ox3,ox4,ox5,ox6,ox7,ox8,ox9,ox10,oun1}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-5}$}{ox4,ox10}
\fmffreeze
\fmf{phantom}{oun1,oun1x1,oun1x2,oun1x3,oun1x4,oun1x5,oun1x6,oun1x7,oun1x8,oun1x9,oun1x10,oun2}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_{n-6}$}{oun1x1,oun1x10}
\fmffreeze
\fmf{phantom}{ou5,ou5x1,ou5x2,ou5x3,ou5x4,ou5x5,ou5x6,ou5x7,ou5x8,ou5x9,ou5x10,ou4}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_2$}{ou5x1,ou5x10}
\fmffreeze
\fmf{phantom}{ou4,ou4x1,ou4x2,ou4x3,ou4x4,ou4x5,ou4x6,ou4x7,ou4x8,ou4x9,ou4x10,u3}
\fmffreeze
\fmf{plain,tension=0.2,left=0.3,label=$s_1$}{ou4x1,ou4x7}
\fmffreeze
\fmf{phantom}{v23,v23x1,v23x2,v23x3,v23x4,v23x5,v23x6,v4}
\fmf{phantom}{p3,p3x1,p3x2,p3x3,p3x4,p3x5,p3x6,p4}
\fmf{phantom}{v1n,v1nx1,v1nx2,v1nx3,v1nx4,v1nx5,v1nx6,vn1}
\fmf{phantom}{pn,pnx1,pnx2,pnx3,pnx4,pnx5,pnx6,pn1}
\fmffreeze
\fmf{gluon,label=$p_4$,l.s=left}{p3x3,v23x1}
\fmf{gluon,label=$p_{n-1}$,l.s=right,l.d=0.07w}{pnx2,v1n}
\fmfv{label=$\kappa_{n-6}$,l.a=-180,l.d=-0.17w}{p4}
\fmfv{label=$\kappa_{n-7}$,l.a=-180,l.d=-0.17w}{p5}
\fmfv{label=$\kappa_{2}$,l.a=-180,l.d=-0.09w}{pn2}
\fmfv{label=$\kappa_{1}$,l.a=-180,l.d=-0.09w}{pn1}
\fmf{phantom}{v5,v67,vn2}
\fmfv{label=$\vdots$,l.d=-1mm}{v67}
\end{fmfgraph*}}
\vskip 1cm
\end{fmffile}
\caption{\label{fig:quasiMR}Amplitudes in the quasi-multi-Regge kinematics of (a) a pair at either end of the ladder and (b) two pairs,
one at each end of the ladder.}
\end{figure}
\subsection{Amplitudes in the quasi-multi-Regge kinematics with two pairs,
one at each end of the ladder}
\label{sec:ampnqmrsq}
One can also consider the
quasi-multi-Regge kinematics where all gluons are strongly
ordered in rapidity, except for two pairs of gluons, one at each end of the ladder,
\begin{equation}
y_3 \simeq y_4\gg \cdots\ \gg y_{n-1} \simeq y_n;\qquad |p_{3\perp}| \simeq |p_{4\perp}| \simeq \cdots \simeq|p_{n\perp}|\, ,\label{qmrksqnpt}
\end{equation}
for which the Mandelstam invariants are given in \sec{sec:nptampqmrsq} and illustrated in Fig.~\ref{fig:quasiMR}(b).
The high-energy prescription is
\begin{eqnarray}
\lefteqn{ m_n(1,2, \ldots ,n) =
s \left[g^2\, A(p_2,p_3,p_4) \right]\,
{1\over t_{n-5}}\, \left({-s_{n-5}\over \tau}\right)^{\alpha(t_{n-5})}
\left[g\,V(q_{n-5},q_{n-6},\kappa_{n-6})\right] }
\nonumber\\ &&\quad \cdots \times\
{1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)}
\left[g\,V(q_2,q_1,\kappa_1)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g^2\, A(p_1,p_n, p_{n-1}) \right]\, ,\label{loopnqmrsq}
\end{eqnarray}
where we again suppressed the dependence of the coefficient functions and the Lipatov vertices
on the reggeisation scale.
In order for $m_n$ to be real, one can take all the $s$- and $t$-type invariants to be
negative. Then the kinematics imply
\begin{equation}
- s \gg -s_1, -s_2, \ldots, -s_{n-5} \gg -s_{34}, -s_{n-1,n}, -t_1, -t_2, \ldots, -t_{n-5}\, .\label{eq:negqmrsqnpt}
\end{equation}
The limit of multi-Regge kinematics (\ref{eq:mrkneg}) is where $s_{34}$ and
$s_{n-1,n}$ become as large as any $s_i$-type invariant.
For $n=6$, \eqn{loopnqmrsq} reduces to two coefficient functions $A$
linked by a $t$-channel reggeised gluon propagator,
\begin{equation}
m_6(1,2,3,4,5,6) =
s \left[g^2\, A(p_2,p_3,p_4,\tau) \right]\,
{1\over t}\, \left({-s_1\over \tau}\right)^{\alpha(t)}
\left[g^2\, A(p_1,p_6,p_5,\tau) \right]\, ,\label{loop6qmrsq}
\end{equation}
with $q=p_1+p_5 + p_6=-(p_2+p_3+p_4)$, $t=q^2$ and $s=s_{12}$. $s_1$ can take
any value among $s_{45}$, $s_{46}$, $s_{35}$ and $s_{36}$, the differences between them
being of the order of $s_{34}$ or $s_{56}$, and thus sub-leading with respect to $s$.
The quasi-multi-Regge kinematics (\ref{eq:negqmrsqnpt}) become
\begin{equation}
- s \gg -s_1 \gg -s_{34}, -s_{56}, -t\, .\label{eq:negqmr6pt}
\end{equation}
Expanding \eqn{loop6qmrsq} as in \eqn{elasexpand}, at one-, two- and three-loop accuracy,
we obtain
\begin{eqnarray}
m_6^{(1)} &=& \bar\alpha^{(1)}(t) L + {\bar A}^{(1)}(t,s_{34},\tau) + {\bar A}^{(1)}(t,s_{56},\tau)\, ,\label{6pt1lqmr}\\
m_6^{(2)} &=& {1\over 2} \left( m_6^{(1)} \right)^2 + \bar\alpha^{(2)}(t) L \label{6pt2lqmr}\\
&+& {\bar A}^{(2)}(t,s_{34},\tau) + {\bar A}^{(2)}(t,s_{56},\tau)
- {1\over 2} \left( {\bar A}^{(1)}(t,s_{34},\tau) \right)^2
- {1\over 2} \left( {\bar A}^{(1)}(t,s_{56},\tau) \right)^2
\, ,\nonumber\\
m_6^{(3)} &=& m_6^{(2)}\,m_6^{(1)}-{1\over 3}\left(m_6^{(1)}\right)^3 + \bar\alpha^{(3)}(t) L\label{6pt3lqmr}\\
&+&{\bar A}^{(3)}(t,s_{34},\tau) +{\bar A}^{(3)}(t,s_{56},\tau)\nonumber\\
&-&{\bar A}^{(2)}(t,s_{34},\tau){\bar A}^{(1)}(t,s_{34},\tau)-{\bar A}^{(2)}(t,s_{56},\tau){\bar A}^{(1)}(t,s_{56},\tau)\nonumber\\
&+&{1\over 3}\left({\bar A}^{(1)}(t,s_{34},\tau)\right)^3+{1\over 3}\left({\bar A}^{(1)}(t,s_{56},\tau)\right)^3,\,
\end{eqnarray}
with $L=\ln(-s_1/\tau)$.
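The expansions (\ref{6pt1lqmr})--(\ref{6pt3lqmr}) follow from ordinary truncated-series algebra, i.e. from expanding the product of the two coefficient functions with the Regge exponential $(-s_1/\tau)^{\alpha(t)}$. This can be checked numerically with random placeholder values standing in for the coefficients $\bar A^{(k)}$ and $\alpha^{(k)}$ (an illustrative sketch, not actual amplitude data):

```python
import math
import random

def mul(p, q, order=3):
    # product of two power series in the coupling x, truncated at x^order
    r = [0.0] * (order + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= order:
                r[i + j] += a * b
    return r

def exp_series(p, order=3):
    # exp of a series with vanishing constant term, truncated at x^order
    r = [1.0] + [0.0] * order
    term = [1.0] + [0.0] * order
    for k in range(1, order + 1):
        term = mul(term, p, order)
        r = [c + t / math.factorial(k) for c, t in zip(r, term)]
    return r

random.seed(1)
# placeholder coefficients standing in for Abar^(k)(t,s_34,tau),
# Abar^(k)(t,s_56,tau) and alpha^(k)(t) -- NOT their true values
A34 = [1.0] + [random.uniform(-1, 1) for _ in range(3)]
A56 = [1.0] + [random.uniform(-1, 1) for _ in range(3)]
alph = [0.0] + [random.uniform(-1, 1) for _ in range(3)]
L = 2.0  # stands for ln(-s_1/tau)

# m_6 = A(s_34) * A(s_56) * exp(alpha(t) L), order by order in x
m = mul(mul(A34, A56), exp_series([c * L for c in alph]))
m1, m2, m3 = m[1], m[2], m[3]
x1, x2, x3 = A34[1:]
y1, y2, y3 = A56[1:]
a1, a2, a3 = alph[1:]

assert abs(m1 - (a1 * L + x1 + y1)) < 1e-12
assert abs(m2 - (0.5 * m1**2 + a2 * L + x2 + y2
                 - 0.5 * x1**2 - 0.5 * y1**2)) < 1e-12
assert abs(m3 - (m2 * m1 - m1**3 / 3 + a3 * L + x3 + y3
                 - x2 * x1 - y2 * y1 + x1**3 / 3 + y1**3 / 3)) < 1e-12
```

The check only tests the combinatorics of the exponentiation, which is why arbitrary placeholder values suffice.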
In the two- and three-loop expansion of the six-point amplitude,~(\ref{6pt2lqmr}) and~(\ref{6pt3lqmr}),
no new vertices or coefficient functions occur. Thus, using the explicit expressions
of $A^{(k)}$ and $\alpha^{(k)}$, $k=1,2,3$, in \eqn{6pt2lqmr} and in \eqn{6pt3lqmr},
one can assemble the two- and three-loop six-point amplitude in the quasi-multi-Regge kinematics
(\ref{eq:negqmr6pt}).
However, even without knowing the explicit expression of $A^{(1)}$ and $A^{(2)}$,
it is easy to see by substitution that
the iterative structure of Eqs.~(\ref{eq:alphabds})
and (\ref{eq:avertif}) ensures that the six-point amplitude~(\ref{6pt2lqmr})
fulfils the two-loop iterative formula (\ref{eq:ite2}) for $n=6$.
Similarly, using \eqn{eq:avertif3l}, one can easily show that the six-point amplitude~(\ref{6pt3lqmr})
fulfils the three-loop iterative formula (\ref{eq:ite3}) for $n=6$.
Thus, also for the quasi-multi-Regge kinematics of \eqn{eq:negqmr6pt}
the quantities $R_6^{(2)}$ and $R_6^{(3)}$ vanish.
Because no new vertices or coefficient functions occur in the two- and three-loop expansion
of \eqn{loopnqmrsq} for $n > 6$, we conclude that the two- and three-loop expansions
of \eqn{loopnqmrsq} fulfil the two- and three-loop iterative formulas (\ref{eq:ite2}) and (\ref{eq:ite3}). Furthermore, it is straightforward to extend the proof of Section~\ref{sec:proof} to the kinematics with a pair of gluons emitted at either side or at each end of the ladder, and hence the BDS
ansatz is fulfilled in quasi-multi-Regge kinematics for any $n$ or, in other words, the quantities $R_n^{(l)}$ vanish in the quasi-multi-Regge kinematics (\ref{qmrksqnpt}), for any $n$ and for any $l$.
Continuing the kinematics (\ref{eq:negqmr6pt}) to the physical region
where $s,\, s_1,\, s_{34},\, s_{56}$ are positive and $t$ is negative,
the conformal invariants (\ref{thrinvar}) become~\cite{Brower:2008nm}
\begin{equation}
u_1 \simeq 1\, ,\quad
u_2 \simeq \frac{(|p_{3\perp}|^2 + p_4^+p_3^-)\, s_{56}}{|q_\perp|^2\, (s_{45}+s_{46})}
\simeq {\cal O} \left(\frac{t}{s}\right)\, ,\quad
u_3 \simeq \frac{(|p_{6\perp}|^2 + p_6^+p_5^-)\, s_{34} }{|q_\perp|^2\, (s_{35}+s_{45})}
\simeq {\cal O} \left(\frac{t}{s}\right)\, ,\label{thrinvarqmrk}
\end{equation}
thus, just like for the multi-Regge kinematics (\ref{eq:mrk6pt}),
$u_1$ is close to 1, while $u_2$ and $u_3$ are very small, in fact sub-leading
to the desired accuracy.
\section{What lies beyond?}
\label{sec:outlook}
From the analysis of Sects.~\ref{sec:bdsmrk} and \ref{sec:quasi}, it is
clear that no difference between the Regge factorisation and the BDS ansatz
will be found, unless there is a contribution from
coefficient functions which appear for the first time
in $n$-gluon amplitudes, with $n\ge 6$.
Introducing this type of coefficient function means considering even less restrictive
multi-Regge kinematics.
In this Section, we examine the two simplest such instances: a cluster of
two gluons along the ladder, and a cluster of three gluons at one end of the
ladder.
\subsection{Six-point amplitude in the quasi-multi-Regge kinematics of a pair
along the ladder}
\label{sec:ampnqmrc}
In the quasi-multi-Regge kinematics of \sec{sec:nptampqmrc}, where the outgoing
gluons are strongly ordered in rapidity, except for the central pair,
\begin{equation}
y_3 \gg y_4 \simeq y_5 \gg y_6;\qquad |p_{3\perp}| \simeq |p_{4\perp}|
\simeq |p_{5\perp}| \simeq|p_{6\perp}|\,
,\label{qmrk6ptc}
\end{equation}
the high-energy prescription is
\begin{eqnarray}
m_6(1,2,3,4,5,6) &=& s \left[g\, C(p_2,p_3,\tau) \right]\,
{1\over t_2}\, \left({-s_2\over \tau}\right)^{\alpha(t_2)} \nonumber\\
&\times& \left[g^2\,W(q_2,q_1,p_4,p_5,\tau)\right]\,
{1\over t_1}\, \left({-s_1\over \tau}\right)^{\alpha(t_1)}
\left[g\, C(p_1,p_6,\tau) \right]\, , \label{2lLip6pt}
\end{eqnarray}
where $p_4+p_5=q_2-q_1$, and with
$t_1=s_{61}$, $t_2=s_{23}$, $s_1=s_{56}$ and $s_2=s_{34}$ as illustrated in~\fig{fig:6point}(a).
In order for the amplitude $m_6$ to be real,
\eqn{2lLip6pt} is taken in the region where all the invariants are negative.
Thus, the quasi-multi-Regge kinematics (\ref{qmrk6ptc}) become,
\begin{equation}
-s \gg -s_{1}, -s_{2} \gg -s_{45}, -t_1, -t_2\, .\label{eq:mrk2lLip}
\end{equation}
In \eqn{2lLip6pt} a new coefficient function occurs: the vertex for the emission
of two gluons along the ladder,
$W(q_2,q_1,p_4,p_5,\tau)$, which we shall call the two-gluon Lipatov vertex.
Although \eqn{2lLip6pt} can be defined for a generic helicity configuration,
the MHV amplitude requires the two-gluon Lipatov vertex to have two gluons
of equal helicity. $W$ can be expanded in the rescaled coupling,
\begin{eqnarray}
W(q_2,q_1,p_4,p_5,\tau) &=& W^{(0)}(q_2,q_1,p_4,p_5) \nonumber\\
&\times& \left(1 + \bar\gs^{2} \bar W^{(1)}(t_1,t_2,s_{45},\tau)
+ \bar\gs^4 \bar W^{(2)}(t_1,t_2,s_{45},\tau) + {\cal O} (\bar\gs^{6}) \right)\, .\label{2lipvert}
\end{eqnarray}
The tree approximation, $W^{(0)}(q_2,q_1,p_4,p_5)$, was
computed in Refs.~\cite{DelDuca:1995ki,Fadin:1996nw}. The one-loop coefficient,
$W^{(1)}(t_1,t_2,s_{45},\tau)$, is known for the equal-helicity
configuration~\cite{Bartels:2008ce}.
Expanding \eqn{2lLip6pt} to one-, two-, and three-loop accuracy, we obtain
\begin{eqnarray}
m_6^{(1)} &=& \bar\alpha^{(1)}(t_1) L_1 + \bar\alpha^{(1)}(t_2) L_2
+ \bar C^{(1)}(t_1,\tau) + \bar C^{(1)}(t_2,\tau) + \bar W^{(1)}(t_1,t_2,s_{45},\tau)\, , \nonumber\\
m_6^{(2)} &=& \frac{1}{2} \left( m_6^{(1)} \right)^2
+ \bar\alpha^{(2)}(t_1) L_1 + \bar\alpha^{(2)}(t_2) L_2 \label{w2loop}\\
&+& \bar C^{(2)}(t_1,\tau) + \bar C^{(2)}(t_2,\tau) +
\bar W^{(2)}(t_1,t_2,s_{45},\tau) \nonumber\\
&-& \frac{1}{2} \left( \bar C^{(1)}(t_1,\tau) \right)^2
- \frac{1}{2} \left( \bar C^{(1)}(t_2,\tau) \right)^2
- \frac{1}{2} \left( \bar W^{(1)}(t_1,t_2,s_{45},\tau) \right)^2\, ,\nonumber\\
m_6^{(3)} &=& m_6^{(2)}\,m_6^{(1)}-{1\over 3}\left(m_6^{(1)}\right)^3 + \bar\alpha^{(3)}(t_1) L_1+ \bar\alpha^{(3)}(t_2) L_2\label{w3loop}\\
&+&\bar C^{(3)}(t_1,\tau) + \bar C^{(3)}(t_2,\tau) +\bar W^{(3)}(t_1, t_2,s_{45},\tau)\nonumber\\
&-&\bar C^{(2)}(t_1,\tau)\bar C^{(1)}(t_1,\tau)-\bar C^{(2)}(t_2,\tau)\bar C^{(1)}(t_2,\tau)-\bar W^{(2)}(t_1,t_2,s_{45},\tau)\bar W^{(1)}(t_1,t_2,s_{45},\tau)\nonumber\\
&+&{1\over 3}\left(\bar C^{(1)}(t_1,\tau)\right)^3+{1\over 3}\left(\bar C^{(1)}(t_2,\tau)\right)^3+{1\over 3}\left(\bar W^{(1)}(t_1,t_2,s_{45},\tau)\right)^3,\,\nonumber
\end{eqnarray}
with $L_i=\ln(-s_i/\tau)$ and $i=1,2$, and where $m_6^{(1)}$ must be known
to ${\cal O} (\epsilon^2)$ in \eqn{w2loop} and $m_6^{(1)}$ and $m_6^{(2)}$ to ${\cal O} (\epsilon^4)$ and ${\cal O} (\epsilon^2)$ respectively in \eqn{w3loop}.
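The expansions \eqn{w2loop} and \eqn{w3loop} follow from truncated-series algebra applied to the product of three coefficient-function factors and two Regge exponentials. A numerical sanity check of this generic pattern, with random placeholder coefficients (illustrative stand-ins, not the true $\bar C^{(k)}$, $\bar W^{(k)}$ and $\alpha^{(k)}$):

```python
import math
import random

def mul(p, q, order=3):
    # product of two power series in the coupling x, truncated at x^order
    r = [0.0] * (order + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= order:
                r[i + j] += a * b
    return r

def exp_series(p, order=3):
    # exp of a series with vanishing constant term, truncated at x^order
    r = [1.0] + [0.0] * order
    term = [1.0] + [0.0] * order
    for k in range(1, order + 1):
        term = mul(term, p, order)
        r = [c + t / math.factorial(k) for c, t in zip(r, term)]
    return r

random.seed(2)
# placeholder expansions for C(t1), C(t2) and the two-gluon vertex W,
# and for the trajectories alpha(t1), alpha(t2) -- NOT the true values
factors = [[1.0] + [random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
alphas = [[0.0] + [random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
logs = [1.5, 0.7]  # stand for L1 = ln(-s_1/tau), L2 = ln(-s_2/tau)

m = [1.0, 0.0, 0.0, 0.0]
for f in factors:
    m = mul(m, f)
for a, L in zip(alphas, logs):
    m = mul(m, exp_series([c * L for c in a]))
m1, m2, m3 = m[1], m[2], m[3]

assert abs(m1 - (sum(f[1] for f in factors)
                 + sum(a[1] * L for a, L in zip(alphas, logs)))) < 1e-12
# pattern of (w2loop): 1/2 m1^2 + trajectory terms + (F2 - 1/2 F1^2) per factor
rhs2 = 0.5 * m1**2 + sum(a[2] * L for a, L in zip(alphas, logs)) \
       + sum(f[2] - 0.5 * f[1]**2 for f in factors)
# pattern of (w3loop): m2 m1 - 1/3 m1^3 + trajectory terms
#                      + (F3 - F2 F1 + 1/3 F1^3) per factor
rhs3 = m2 * m1 - m1**3 / 3 + sum(a[3] * L for a, L in zip(alphas, logs)) \
       + sum(f[3] - f[2] * f[1] + f[1]**3 / 3 for f in factors)
assert abs(m2 - rhs2) < 1e-12 and abs(m3 - rhs3) < 1e-12
```

The per-factor combinations appearing above are simply the coefficients of the logarithm of each factor, reflecting that the logarithm of the product is additive.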
Because for $n=6$ we expect to find a remainder function $R_6^{(2)}$,
combining the iterative formula (\ref{eq:discr}) with the two-loop expansion
(\ref{w2loop}), we obtain an iterative formula for the vertex $W^{(2)}$,
\begin{eqnarray}
W^{(2)}(t_1,t_2,s_{45},\tau,\epsilon) &=& {1\over 2} \left[ W^{(1)}(t_1,t_2,s_{45},\tau,\epsilon)\right]^2
\label{eq:wif}\\
&+& {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, W^{(1)}(t_1,t_2,s_{45},\tau,2\epsilon)
+ R_6^{(2)}(u_1^W,u_2^W,u_3^W) + {\cal O} (\epsilon)\, ,\nonumber
\end{eqnarray}
where the one-loop coefficient, $W^{(1)}(\epsilon)$, is needed to ${\cal O} (\epsilon^2)$.
Thus, a remainder function $R_6^{(2)}$ for the quasi-multi-Regge kinematics (\ref{eq:mrk2lLip})
may occur in the two-loop iteration of the two-gluon Lipatov vertex.
Using the Mandelstam invariants of \sec{sec:nptampqmrc}, the conformal invariants (\ref{thrinvar}) become
\begin{eqnarray}
u_1&\rightarrow& u_1^W = \frac{s_{45}}{(p_4^++p_5^+)(p_4^-+p_5^-)} \simeq {\cal O} (1)\, , \nonumber\\
u_2&\rightarrow& u_2^W= \frac{|p_{3\perp}|^2 p_5^+p_6^-}{(|p_{3\perp}+p_{4\perp}|^2 + p_5^+p_4^-)
(p_4^++p_5^+)p_6^- }\simeq {\cal O} (1)\, , \nonumber\\
u_3&\rightarrow& u_3^W = \frac{|p_{6\perp}|^2 p_3^+p_4^- }{p_3^+ (p_4^-+p_5^-)
(|p_{3\perp}+p_{4\perp}|^2 + p_5^+p_4^-) }\simeq {\cal O} (1)\,
,\label{thrinvarqmrkc}
\end{eqnarray}
{\it i.e.} all the invariants yield a non-vanishing contribution, which is in general
different from unity.
\begin{figure}[!t]
\begin{fmffile}{sixpoints}
\qquad\quad(a) \parbox{60mm}{\begin{fmfgraph*}(150,115)
\fmfstraight
\fmfleft{p1,p2}
\fmfright{x1,x2}
\fmf{phantom}{p1,u1,v16,u2,p6,x1}
\fmf{phantom}{p2,o1,v23,o2,p3,x2}
\fmf{phantom}{p6,y1,xy1,x1}
\fmf{phantom}{p3,y2,xy2,x2}
\fmffreeze
\fmf{phantom}{v16,vx1,vx2,vc,vx3,x4,v23}
\fmf{phantom}{p3,px1,px2,p4,pc,p5,px3,px4,p6}
\fmf{phantom}{x1,xx1,xx2,xx3,xc,xx4,xx5,xx6,x2}
\fmf{phantom}{y1,yy1,yy2,yy3,yc,yy4,yy5,yy6,y2}
\fmffreeze
\fmf{zigzag,label=$q_2$,label.side=right,l.d=0.05w}{v23,vc}
\fmf{zigzag,label=$q_1$,label.side=right,l.d=0.05w}{vc,v16}
\fmf{gluon,label=$p_2$,label.side=left,l.d=0.03w}{p2,v23}
\fmf{gluon,label=$p_3$,label.side=left,l.d=0.03w}{v23,p3}
\fmf{gluon,label=$p_1$,label.side=left,l.d=0.03w}{p1,v16}
\fmf{gluon,label=$p_6$,label.side=left,l.d=0.03w}{v16,p6}
\fmffreeze
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=0.09w,fore=green}{v23}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v16}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.21w,fore=green}{vc}
\fmffreeze
\fmf{gluon,label=$p_4$}{vc,p4}
\fmf{gluon,label=$p_5$,l.d=0.06w,label.side=right}{vc,p5}
\fmf{plain,tension=0.2,right=0.3,label=$s_1$}{y1,yy3}
\fmf{plain,tension=0.2,left=0.3,label=$s_2$}{y2,yy4}
\end{fmfgraph*}}
(b) \parbox{20mm}{\begin{fmfgraph*}(150,120)
\fmfstraight
\fmfleft{p1,p2}
\fmfright{p6,p3}
\fmf{phantom}{p1,v16,p6}
\fmf{phantom}{p2,v23,p3}
\fmf{phantom}{p6,aaa,p3}
\fmf{phantom}{p3,p4,p5,aaa}
\fmffreeze
\fmf{phantom}{v23,bbb,v16}
\fmf{phantom}{bbb,u1,u2,u3,u4,u5,v23}
\fmf{phantom}{bbb,uu1,uu2,uu3,uu4,v16}
\fmffreeze
\fmf{phantom,label=$q$,label.side=right,l.d=0.05w}{bbb,uu3}
\fmffreeze
\fmf{zigzag}{v16,v23}
\fmf{gluon,label=$p_1$,label.side=left,l.d=0.03w}{p1,v16}
\fmf{gluon,label=$p_6$,label.side=left,l.d=0.03w}{v16,p6}
\fmf{gluon,label=$p_2$,label.side=left,l.d=0.03w}{p2,v23}
\fmf{gluon,label=$p_3$,label.side=left,l.d=0.03w}{v23,p3}
\fmf{gluon}{u5,p4}
\fmf{gluon,label=$p_5$,label.side=right,l.d=0.06w}{u4,p5}
\fmffreeze
\fmfv{label=$p_4$,l.a=-10}{p4}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.09w,fore=green}{v16}
\fmfv{decor.shape=circle,decor.filled=shaded,decor.size=.30w,fore=green}{u5}
\end{fmfgraph*}}
\vskip 1cm
\end{fmffile}
\caption{\label{fig:6point}Six-point amplitude in the quasi-multi-Regge kinematics of (a) a pair along the ladder and (b) three-of-a-kind.}
\end{figure}
\subsection{Six-point amplitude in the quasi-multi-Regge kinematics of three-of-a-kind}
\label{sec:amp6qmr3}
In the quasi-multi-Regge kinematics of \sec{sec:kin6qmr3}, where the outgoing
gluons are emitted as a cluster of three at one end of the ladder and as a single gluon at the other end,
\begin{equation}
y_3 \simeq y_4 \simeq y_5 \gg y_6;\qquad
|p_{3\perp}| \simeq |p_{4\perp}| \simeq |p_{5\perp}| \simeq|p_{6\perp}|\,
,\label{qmrk6pt3}
\end{equation}
the high-energy prescription is
\begin{eqnarray}
m_6(1,2,3,4,5,6) &=& s \left[g\, B(p_2,p_3,p_4,p_5,\tau) \right]\,
{1\over t}\, \left({-s_1\over \tau}\right)^{\alpha(t)} \left[g\, C(p_1,p_6,\tau) \right]\, , \label{B3Lip6pt}
\end{eqnarray}
where $q=p_1+p_6$, as shown in~\fig{fig:6point}(b), $t=q^2$ and $s=s_{12}$. $s_1$ can take
any value among $s_{36}$, $s_{46}$, and $s_{56}$, the differences between them
being of the order of $s_{345}$, and thus sub-leading with respect to $s$.
In order for the amplitude $m_6$ to be real,
\eqn{B3Lip6pt} is taken in the region where all the invariants are negative.
Thus, the quasi-multi-Regge kinematics (\ref{qmrk6pt3}) become,
\begin{equation}
-s \gg -s_{1} \gg -s_{34},-s_{45},-s_{35}, -t\, .\label{eq:mrkB3Lip}
\end{equation}
In \eqn{B3Lip6pt} a new coefficient function occurs: the one for the emission
of three gluons at one end of the ladder,
$B(p_2,p_3,p_4,p_5,\tau)$. $B$ can be expanded in the rescaled coupling,
\begin{equation}
\begin{split}
B(p_3,p_4,p_5,&\tau) = B^{(0)}(p_3,p_4,p_5)\\
&\times \left(1 + \bar\gs^{2} \bar B^{(1)}(t,s_{34},s_{45},s_{35},\tau)
+ \bar\gs^4 \bar B^{(2)}(t,s_{34},s_{45},s_{35},\tau) + {\cal O} (\bar\gs^{6}) \right)\, .\label{B3ipvert}
\end{split}
\end{equation}
The tree approximation, $B^{(0)}(p_3,p_4,p_5)$, was
computed in Ref.~\cite{DelDuca:1999ha}.
Expanding \eqn{B3Lip6pt} to one-, two- and three-loop accuracy, we obtain
\begin{eqnarray}
m_6^{(1)} &=& \bar\alpha^{(1)}(t) L
+ \bar B^{(1)}(t,s_{34},s_{45},s_{35},\tau) + \bar C^{(1)}(t,\tau)\, , \nonumber\\
m_6^{(2)} &=& \frac{1}{2} \left( m_6^{(1)} \right)^2
+ \bar\alpha^{(2)}(t) L \label{B2loop}
+ \bar B^{(2)}(t,s_{34},s_{45},s_{35},\tau) + \bar C^{(2)}(t,\tau) \\
&-& \frac{1}{2} \left( \bar B^{(1)}(t,s_{34},s_{45},s_{35},\tau) \right)^2
- \frac{1}{2} \left( \bar C^{(1)}(t,\tau) \right)^2 ,\nonumber\\
m_6^{(3)} &=& m_6^{(2)}m_6^{(1)}-\frac{1}{3} \left( m_6^{(1)} \right)^3
+ \bar\alpha^{(3)}(t) L \label{B3loop}
+ \bar B^{(3)}(t,s_{34},s_{45},s_{35},\tau) + \bar C^{(3)}(t,\tau) \\
&-& \bar B^{(2)}(t,s_{34},s_{45},s_{35},\tau)\bar B^{(1)}(t,s_{34},s_{45},s_{35},\tau) - \bar C^{(2)}(t,\tau)\bar C^{(1)}(t,\tau)\nonumber\\
&+& \frac{1}{3} \left( \bar B^{(1)}(t,s_{34},s_{45},s_{35},\tau) \right)^3
+ \frac{1}{3} \left( \bar C^{(1)}(t,\tau) \right)^3,\nonumber
\end{eqnarray}
with $L=\ln(-s_1/\tau)$, and where $m_6^{(1)}$ must be known
to ${\cal O} (\epsilon^2)$ in \eqn{B2loop} and $m_6^{(1)}$ and $m_6^{(2)}$ to ${\cal O} (\epsilon^4)$ and to ${\cal O} (\epsilon^2)$ respectively in \eqn{B3loop}.
Because for $n=6$ we expect to find a remainder function $R_6^{(2)}$,
combining the iterative formula (\ref{eq:discr}) with the two-loop expansion
(\ref{B2loop}), we obtain an iterative formula for the vertex $B^{(2)}$,
\begin{eqnarray}
B^{(2)}(t,s_{34},s_{45},s_{35},\tau,\epsilon) &=&
{1\over 2} \left[ B^{(1)}(t,s_{34},s_{45},s_{35},\tau,\epsilon)\right]^2 \label{eq:avert3g}\\
&+& {2\,G^2(\epsilon)\over G(2\epsilon)} f^{(2)}(\epsilon)\, B^{(1)}(t,s_{34},s_{45},s_{35},\tau,2\epsilon)
+ 2\, Const^{(2)} \nonumber\\
&+& R_6^{(2)}(u_1^B,u_2^B,u_3^B)
+ {\cal O} (\epsilon)\, ,\nonumber
\end{eqnarray}
where the one-loop coefficient, $B^{(1)}(\epsilon)$, is needed to ${\cal O} (\epsilon^2)$.
Thus, a remainder function $R_6^{(2)}$ for the quasi-multi-Regge kinematics (\ref{eq:mrkB3Lip})
may occur in the two-loop iteration of the coefficient function for the emission of three gluons
at one end of the ladder.
In the limit $y_3\gg y_4\simeq y_5$, the kinematics (\ref{qmrk6pt3}) reduce to
\eqn{qmrk6ptc} and the prescription (\ref{B3Lip6pt}) reduces to \eqn{2lLip6pt}.
Then the coefficient function $B$ factors out into the two-gluon Lipatov vertex $W$
and the coefficient function for the emission of a gluon, linked by a reggeised
propagator~\cite{DelDuca:1999ha}. Accordingly, the remainder function
$R_6^{(2)}(u_1^B,u_2^B,u_3^B)$
in \eqn{eq:avert3g} reduces to $R_6^{(2)}(u_1^W,u_2^W,u_3^W)$ in \eqn{eq:wif}.
Using the Mandelstam invariants of \sec{sec:kin6qmr3},
the conformal invariants (\ref{thrinvar}) become~\cite{Brower:2008nm}
\begin{eqnarray}
u_1&\rightarrow& u_1^B = \frac{s\, s_{45}}{s_{345} (p_4^++p_5^+) p_6^-} \simeq {\cal O} (1)\, , \nonumber\\
u_2&\rightarrow& u_2^B = \frac{(|p_{3\perp}|^2 + (p_4^++p_5^+) p_3^- ) p_5^+p_6^-}
{(p_4^++p_5^+)p_6^-
(|p_{3\perp}+p_{4\perp}|^2 + (p_3^-+p_4^-) p_5^+) } \simeq {\cal O} (1)\, , \nonumber\\
u_3&\rightarrow& u_3^B = \frac{|p_{6\perp}|^2 s_{34} }{s_{345}
(|p_{3\perp}+p_{4\perp}|^2 + (p_3^-+p_4^-) p_5^+)
}\simeq {\cal O} (1)\,
,\label{thrinvarqmrk3g}
\end{eqnarray}
{\it i.e.} all the invariants are of similar size.
\section{Conclusions}
In this work we investigated the high-energy limit of
a colour-stripped MHV amplitude, an analysis based on the Regge factorisation of the amplitude into a ladder of coefficient
functions and vertices linked by reggeised propagators~\cite{DelDuca:2008pj}.
We showed explicitly that in the Euclidean region
two- and three-loop $n$-gluon amplitudes in multi-Regge
kinematics are fully consistent with the Bern-Dixon-Smirnov ansatz, and in \sec{sec:proof} we proved
that this result holds true at any loop accuracy. In particular, this implies
that in the Euclidean region the breakdown of the iterative structure of the two-loop amplitudes, occurring in the
two-loop six-point amplitude, cannot be resolved by multi-Regge kinematics,
{\it i.e.} the remainder function $R_6^{(2)}$
is sub-leading in the multi-Regge kinematics.
In \sec{sec:quasi} we showed that similar conclusions can be drawn for less restrictive
multi-Regge kinematics,
namely the kinematics where all the outgoing gluons are strongly ordered in
rapidity, but for a pair of gluons either at one end or at both ends of the ladder.
By giving explicit examples for the two- and three-loop six-point amplitude, we argued
that in this case as well the Regge factorisation of the amplitude is consistent
with the iterative structure implied by the BDS ansatz. The structure of the high-energy prescription ensures that
this result is valid for an arbitrary number of loops.
Finally, in order to find kinematics which might shed light on the
violation of the
BDS ansatz for the two-loop six-point amplitude, in \sec{sec:outlook} we considered kinematics which occur only for
$n$-gluon amplitudes with $n\ge 6$, and thus for which we could not invoke the
BDS iterative structure. We showed that the iterative structures for the new two-loop functions that appear in these kinematics
might have a dependence on the remainder function $R_6^{(2)}(u_1,u_2,u_3)$, where $u_1$, $u_2$, $u_3$ are the conformal invariants, and therefore we argued that these kinematical limits could provide some information on
this quantity.
This suggestion is supported by the observation that, while in the multi-Regge kinematics of
\sec{sec:bdsmrk} and in the quasi-multi-Regge
kinematics of \sec{sec:quasi} the three conformal cross ratios (\ref{thrinvar})
all took limiting values, in the more general quasi-multi-Regge kinematics of
\sec{sec:outlook} they are allowed to vary over a range defined by the kinematic
invariants.
\section*{Acknowledgements}
We thank Lance Dixon, Vladimir Smirnov and Gabriele Travaglini for useful discussions.
CD thanks the IPPP Durham and the LNF Frascati for
the warm hospitality at various stages of this work. CD is a research fellow of the \emph{Fonds National de la Recherche Scientifique}, Belgium. This work was partly supported by MIUR under contract 2006020509\_004,
and by the EC Marie-Curie Research Training Network ``Tools and Precision Calculations for Physics Discoveries at Colliders'' under contract MRTN-CT-2006-035505. EWNG gratefully acknowledges the support of the Wolfson Foundation and the Royal Society.
\section*{Erratum}
We would also like to thank Lance Dixon and Jochen Bartels for pointing out to us that the factorised form conjectured in Eq.~(3.4) is not valid in
the Minkowski region where the centre-of-mass energy squared $s$ and the energy squared $s_2$
of the two gluons emitted along the ladder are time-like while all other invariants stay space-like.
Eq.~(3.4) is valid in the Euclidean region, where all invariants are
space-like, and in the physical region, where the $s$-type invariants are time-like and the $t$-type invariants are space-like.
This error is corrected in the present version, where we have made it clear that we are referring to the Euclidean and to the physical regions only. The non-commutativity of the high-energy limit and the $\epsilon$ expansion described in Appendix C of the previous version
is no longer relevant to the discussion, and Appendix C has been removed.
\section{Appendix}
For the evaluation we use $mIoU$ (mean intersection over union), the Dice score and the $F1$-score.
The $mIoU$ metric was used to compare our result to already existing approaches. For all images and $k$ classes it is defined as follows:
$$
mIoU = \frac{1}{k} \sum\limits_{i = 1}^k \frac{TP_{ii}}{\sum\limits_{j = 1}^k FN_{ij} +\sum\limits_{j = 1}^k FP_{ij} - TP_{ii}},
$$
where $TP$, $FP$, $FN$ are numbers of true positive, false positive
and false negative pixels respectively.
The Dice score was used to compare the results of our weakly-supervised model to the first-place fully-supervised solution on SIIM-ACR Pneumothorax \cite{siimpneumothorax}. The metric is defined as follows:
$$
Dice = \frac{2\times TP}{(TP + FP) + (TP + FN)}
$$
where $TP$, $FP$, $FN$ are numbers of true positive, false positive
and false negative pixels respectively. If all pixels are true negative, the prediction is considered correct and the metric equals 1.
The $F1$-score was used to compare approaches on the first step of our pipeline --- classification.
The metric is defined as follows:
$$
F1 = \frac{2\times precision \times recall}{precision + recall}
$$
where $precision=\frac{TP}{TP + FP}$, $recall=\frac{TP}{TP + FN}$, and $TP$, $FP$, $FN$ are the numbers of true positive, false positive and false negative predictions respectively.
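For reference, all three metrics can be computed directly from true/false positive/negative counts. The following pure-Python sketch is illustrative, not the evaluation code used in our experiments:

```python
def iou(tp, fp, fn):
    # per-class intersection over union; mIoU is the mean over classes
    return tp / (tp + fp + fn)

def dice(tp, fp, fn):
    # Dice = 2*TP / ((TP+FP) + (TP+FN)); an all-true-negative prediction scores 1
    if tp == fp == fn == 0:
        return 1.0
    return 2 * tp / ((tp + fp) + (tp + fn))

def f1(tp, fp, fn):
    # harmonic mean of precision and recall
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that for a single binary class the Dice score and the $F1$-score coincide algebraically, which is why the Dice score is often described as a pixel-level $F1$-score.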
\section{Conclusions}
We present a novel method of weakly-supervised semantic segmentation that demonstrated its efficiency in detecting anomalous regions on chest X-ray images. In particular, we propose a three-step approach to weakly-supervised semantic segmentation that uses only image-level labels as supervision. We customize and extend previous work by including supplementary steps such as regularization, IRNet, and various post-processing techniques. The method is general, domain-independent, and explainable via localization maps at each step. We evaluated it on two datasets of different nature; it can also be applied to other medical problems.
\section{Experiments and Results}
\subsection{Reproducibility}
\label{repro}
PyTorch \cite{NEURIPS2019_9015} was used for implementing and training all steps of our approach: extracting localization maps via the classification networks, improving obtained maps with IRNet \cite{ahn2019weakly} and segmenting the image during segmentation task. All the experiments were performed on four Nvidia Tesla K80 GPUs.
\subsection{Datasets and evaluation metric}
We conduct experiments on two datasets: PASCAL VOC 2012 \cite{pascal-voc-2012} and SIIM-ACR Pneumothorax \cite{siimpneumothorax}. We evaluate the quality of our pseudo-ground-truth and the performance of the segmentation model trained on them using mIoU.
\par
PASCAL VOC 2012 \cite{pascal-voc-2012} is an image segmentation benchmark dataset containing 20 object classes, and a background class. As in other works on weakly-supervised segmentation, we train our models using augmented 10,582 training images with image-level labels. We report mIoU for 1,449 validation images.
\par
SIIM-ACR Pneumothorax \cite{siimpneumothorax} is a competition that provides an open dataset of chest X-ray images with pixel-wise annotation for regions affected by Pneumothorax: a collapsed lung, where an abnormal volume of air forms in the pleural space between the lung and the chest wall. This dataset was formed from a subset of the ChestX-ray14 dataset \cite{wang2017chestx}, relabeled by professional radiologists and additionally annotated on a pixel level. The competition has two stages; ground-truth labels are provided only for the first stage, while the second is evaluated on the competition website. Thus, we divided the images from the first stage into three sets: train, validation and test. In total, the dataset contains 12,047 frontal-view chest X-ray cases. We use 2,379 positive and 8,296 negative images for training, 145 and 541 for validation, and 145 and 541 for the test set.
\subsection{Data challenges}
\label{data}
The SIIM-ACR Pneumothorax \cite{siimpneumothorax} dataset has a severe class-imbalance problem: the number of normal cases is approximately four times the number of positive ones. In order to prevent overfitting towards healthy patients, we use various augmentation techniques such as scaling, rotation, blur, brightness adjustment, and horizontal flipping. We also add sampling to our data loader during training, which maintains a constant ratio between the negative and positive classes. Another challenge in this dataset is the size of the regions of interest: Pneumothorax usually affects a very small area of the lungs, resulting in a high imbalance among the image pixels. We address this problem by adding a weight for the positive class to the binary cross-entropy loss. Due to these data challenges, we evaluate the performance of our method on SIIM-ACR Pneumothorax not only for all images in the validation and test sets, but also separately for positive cases; see Table \ref{siimeval}.
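The two balancing ingredients can be sketched as follows. The equal-frequency sampling weights and the negative-to-positive loss weight below are illustrative choices (in PyTorch these roles are typically played by `WeightedRandomSampler` and the `pos_weight` argument of `BCEWithLogitsLoss`); the actual ratio used in our loader is a tuning parameter:

```python
def sampler_weights(labels):
    # inverse-frequency sampling weights: each class is drawn equally often
    pos = sum(labels)
    neg = len(labels) - pos
    return [1.0 / pos if y == 1 else 1.0 / neg for y in labels]

def bce_pos_weight(labels):
    # weight on the positive class in the binary cross-entropy loss,
    # chosen here as the negative-to-positive ratio (~3.5 at image level)
    pos = sum(labels)
    neg = len(labels) - pos
    return neg / pos

labels = [1] * 2379 + [0] * 8296  # SIIM-ACR Pneumothorax training split
w = sampler_weights(labels)
```

With these weights, the total sampling mass assigned to the positive and negative classes is identical, so a weighted sampler draws the two classes in a 1:1 ratio.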
\begin{table}[b!]
\centering
\caption{Influence of DropBlock on PASCAL VOC 2012. Comparison of the classification model trained without regularization to a model with DropBlock.}
\vspace{0.5em}
\begin{tabular}{l|P{3.5cm}}
\hline
Model & Multilabel F1-score (\%) \\
\specialrule{.1em}{.05em}{.05em}
ResNet50 & 88.08 \\
\hline
ResNet50 with DropBlock regularization & 88.2 \\
\hline
\end{tabular}
\label{dropblock}
\end{table}
\begin{table}[b!]
\centering
\caption{Comparison of CAM generation techniques on PASCAL VOC 2012.}
\vspace{0.5em}
\begin{tabular}{l|l|l|l}
\hline
Classification model & CAM extraction method & mIoU train & mIoU val \\
\specialrule{.1em}{.05em}{.05em}
VGG16 & Cam-Grad & 0.4137 & 0.3511\\
\hline
VGG16 & Cam-Grad++ & \bfseries 0.4176 & \bfseries 0.3941 \\
\hline
\end{tabular}
\label{pascalcam}
\end{table}
\subsection{Experiments}
\subsubsection{Step 1. CAM generation}
For classification we implement ResNet50 and VGG16. As suggested in previous work \cite{jiang2019integral}, we added three convolutional layers on top of the fully-convolutional backbone, each followed by a ReLU. The conducted experiments show that adding DropBlock regularization to our classification models improves their performance, see Table \ref{dropblock}.
For both datasets, the best classification results were achieved using VGG16, which was, thus, selected as the final model for this task. We test two methods for generating pseudo-annotations: Grad-CAM \cite{selvaraju2017grad} and Grad-CAM++ \cite{chattopadhay2018grad}. Our experiments show that Grad-CAM++ \cite{chattopadhay2018grad}, which utilizes a regularization that Grad-CAM lacks, provides better object localization through visual explanations of model predictions; cf. Table \ref{pascalcam}.
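For orientation, plain Grad-CAM weights each activation map of the last convolutional layer by the global average of its gradients and applies a ReLU to the weighted sum; Grad-CAM++ replaces these weights with higher-order gradient terms. A minimal dependency-free sketch of the plain Grad-CAM combination step:

```python
def grad_cam(activations, gradients):
    # activations, gradients: [channel][row][col] arrays taken from the
    # last convolutional layer of the classifier
    C = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Grad-CAM channel weights: global average pooling of the gradients
    weights = [sum(map(sum, gradients[c])) / (H * W) for c in range(C)]
    # weighted sum of activation maps, followed by a ReLU
    return [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(C)))
             for j in range(W)] for i in range(H)]
```

In practice the tensors come from a forward/backward pass of the trained classifier, and the resulting map is upsampled to the input resolution.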
\subsubsection{Step 2. IRNet}
For both datasets, as post-processing of the maps produced at Step 1, we use thresholding and then refine the pseudo-masks with dense CRF to better capture object shapes. The resulting annotations are used to train IRNet.
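The thresholding can be as simple as binarizing the peak-normalized map; the threshold value below is a hypothetical hyper-parameter, and the dense-CRF refinement step is omitted:

```python
def cam_to_pseudo_mask(cam, fg_thresh=0.3):
    # binarize a CAM into a foreground/background pseudo ground-truth mask;
    # the map is first normalized by its peak activation
    peak = max(max(row) for row in cam)
    if peak <= 0.0:
        return [[0 for _ in row] for row in cam]
    return [[1 if v / peak >= fg_thresh else 0 for v in row] for row in cam]
```

The resulting binary masks would then be passed through the CRF before being used as training targets for IRNet.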
\subsubsection{Step 3. Segmentation}
The obtained maps after IRNet step are used as the pseudo-labels for segmentation. We implement three networks to complete this task: U-Net \cite{ronneberger2015u}, DeepLabv3 \cite{chen2017rethinking}, and DeepLabv3+ \cite{chen2018encoder}. We report results on PASCAL VOC 2012 produced by DeepLabv3+, as it shows better performance than DeepLabv3 with the same ResNet50 backbone. For SIIM-ACR Pneumothorax, however, U-Net with SEResNeXt50 \cite{hu2018squeeze} backbone shows the best results.
\subsection{Training and optimization}
\label{optim}
For all the models, three optimization methods are examined: SGD, Adam and RAdam \cite{liu2019variance}. For Pneumothorax segmentation, the SGD optimizer is applied with the learning rate initialized at 6e-5 and gradually decreased each epoch, while momentum is set to 0.9 and weight decay to 1e-6. The network input size is 512x512, the batch size is 48, and batches are balanced according to the class distribution, using augmentations to increase the sample of positive cases.
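In PyTorch, the optimizer configuration above corresponds roughly to the following fragment. The exponential decay factor, as well as the `train_one_epoch` helper, `loader` and `epochs`, are illustrative placeholders; the text only specifies a gradually decreasing learning rate:

```python
import torch

model = ...  # the U-Net / DeepLab segmentation network
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=6e-5,           # initial learning rate
    momentum=0.9,
    weight_decay=1e-6,
)
# decrease the learning rate each epoch (the decay factor is illustrative)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

for epoch in range(epochs):
    train_one_epoch(model, loader, optimizer)  # hypothetical training helper
    scheduler.step()
```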
\subsection{Results}
\par
Comparing segmentation methods on the same chest X-ray datasets is not simple due to the difficulty of finding public medical data. Moreover, this work is the first to present results of weakly-supervised segmentation methods on the SIIM-ACR Pneumothorax \cite{siimpneumothorax} data. However, the performance of our models is comparable to that of Ouyang et al. \cite{ouyang2019weakly}, who reported their scores on a closed, self-collected Pneumothorax dataset. These authors train their method with different combinations of well- and weakly-annotated data, whereas our method uses only image-level labels. In Table \ref{siimeval} we show how the results improve with each step of our approach; the result of the Ouyang et al. \cite{ouyang2019weakly} model trained on 400 weakly-annotated and 400 well-annotated cases is specified too.
\begin{table}[h!]
\vspace{-1.5em}
\centering
\caption{Results on SIIM-ACR Pneumothorax validation and test sets after each step of our method. Calculated for only positive cases (pos.), and for the whole set, including the healthy patients (all). The Ouyang et al. method, whose result is demonstrated, was trained on 400 weakly-annotated and 400 well-annotated cases.}
\vspace{1.0em}
\begin{tabularx}{\textwidth}{l|l|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}}
\hline
\multicolumn{2}{c}{} & \multicolumn{2}{|c}{mIoU val} & \multicolumn{2}{|c}{mIoU test} \\
\hline
Dataset & Method & pos. & all & pos. & all \\
\specialrule{.1em}{.05em}{.05em}
SIIM-ACR Pneum. \cite{siimpneumothorax} &Step 1. CAM & 0.117 & 0.7633 & 0.142 & 0.7590\\
\hline
SIIM-ACR Pneum. \cite{siimpneumothorax} &Step 2. IRNet & 0.122 & 0.7645 & 0.154 & 0.7607 \\
\hline
SIIM-ACR Pneum. \cite{siimpneumothorax} &Step 3. Segm. & 0.148 & 0.7649 & 0.162 & 0.7677\\
\specialrule{.1em}{.05em}{.05em}
Custom \cite{ouyang2019weakly} & Ouyang et al.\cite{ouyang2019weakly} & - & - & - & 0.669 \\
\hline
\end{tabularx}
\label{siimeval}
\end{table}
\begin{figure}[t!]
\centering
\subfloat[Image] {
\includegraphics[width=0.18\linewidth]{images/localization/20_image.png}
}
\subfloat[Step1.CAM] {
\includegraphics[width=0.18\linewidth]{images/localization/20_cam.png}
}
\subfloat[Step2.IRNet] {
\includegraphics[width=0.18\linewidth]{images/localization/20_irn.png}
}
\subfloat[Step3.Segm] {
\includegraphics[width=0.18\linewidth]{images/localization/20_segment.png}
}
\subfloat[Mask] {
\includegraphics[width=0.18\linewidth]{images/localization/20_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/localization/133_image.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/localization/133_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/localization/133_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/localization/133_segment.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/localization/133_gt.png}
}
\caption{Pneumothorax localization maps for (a) a random image from the test set at each consecutive step of our method: (b) map after CAM extraction, (c) map improved by IRNet trained on the outcomes of step 1, (d) prediction of U-Net trained on the step 2 results, all compared to (e) the ground-truth mask.}
\label{fig:siimlocalization}
\end{figure}
\begin{figure}[t!]
\vspace{-1.5em}
\centering
\subfloat[Image] {
\includegraphics[width=0.18\linewidth]{images/pneum/102_image.png}
}
\subfloat[Step1.CAM] {
\includegraphics[width=0.18\linewidth]{images/pneum/102_cam.png}
}
\subfloat[Step2.IRNet] {
\includegraphics[width=0.18\linewidth]{images/pneum/102_irn.png}
}
\subfloat[Step3.Segm] {
\includegraphics[width=0.18\linewidth]{images/pneum/102_segment.png}
}
\subfloat[Mask] {
\includegraphics[width=0.18\linewidth]{images/pneum/102_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/87_image.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/87_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/87_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/87_segment.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/87_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/127_image.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/127_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/127_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/127_segment.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pneum/127_gt.png}
}
\caption{Segmentation predictions for (a) a random image from the SIIM-ACR Pneumothorax test set produced at each step of our approach: (b) CAM extraction, (c) IRNet, (d) U-Net segmentation, compared to (e) the ground-truth mask.}
\label{fig:siimimages}
\end{figure}
We present the method's explainability via disease localization regions; cf. Figure \ref{fig:siimlocalization}. We provide qualitative segmentation results on validation images from both datasets in Figure \ref{fig:siimimages} and Figure \ref{fig:pascalevl}. We show the resulting maps at each step of our method; the figures demonstrate how the performance improves after each step. We achieve results comparable to state-of-the-art methods on PASCAL VOC 2012; cf. Table \ref{pascalresults}.
\par
We evaluate our method on the second-stage test set on the competition server \cite{siimpneumothorax} to compare it against a fully-supervised upper-performance limit. We achieve a 0.769 Dice score, while the first-place solution reached 0.868 using pixel-level labels for training. Our method demonstrates the feasibility of using only image-level annotations for semantic segmentation on chest X-rays; nevertheless, matching or surpassing fully supervised networks remains a challenge for weakly-supervised approaches.
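The reported mIoU and Dice scores follow the standard definitions; a minimal sketch, under the assumption that an empty prediction on an empty ground-truth mask (a correctly handled healthy patient) scores 1.0:

```python
import numpy as np

def iou_and_dice(pred, gt):
    """IoU and Dice for a pair of binary masks.

    Empty-vs-empty counts as a perfect match (1.0, 1.0), which is how
    healthy, pneumothorax-free cases are assumed to be scored here.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0, 1.0
    iou = inter / union
    dice = 2 * inter / (pred.sum() + gt.sum())
    return iou, dice
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), so rankings under the two metrics agree for a single mask pair, though dataset averages can differ.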
\vspace{-2em}
\begin{table}
\centering
\caption{Comparison of weakly-supervised semantic segmentation methods on PASCAL VOC 2012 validation set. Our approach is evaluated after each of the proposed steps, where each step is trained on the outcomes of the previous one.}
\vspace{1.0em}
\begin{tabularx}{\textwidth}{l|P{3.5cm}|P{3.5cm}}
\hline
Method & Year & mIoU \\
\specialrule{.1em}{.05em}{.05em}
Our method. Step 1. CAM & 2020 & 0.479 \\ \hline
Our method. Step 2. IRNet & 2020 & 0.631 \\ \hline
Our method. Step 3. Segmentation & 2020 & \bfseries 0.646 \\
\specialrule{.1em}{.05em}{.05em}
IRNet \cite{ahn2019weakly} & 2019 & 0.635 \\ \hline
FickleNet \cite{lee2019ficklenet} & 2019 & \bfseries 0.649 \\ \hline
DSRG (ResNet101) \cite{huang2018weakly} & 2018 & 0.614 \\ \hline
SEC \cite{kolesnikov2016seed} & 2016 & 0.507 \\
\hline
\end{tabularx}
\label{pascalresults}
\end{table}
\vspace{-4em}
\begin{figure}[H]
\centering
\subfloat[Image] {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_002835_image.jpg}
}
\subfloat[Step1.CAM] {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_002835_cam.png}
}
\subfloat[Step2.IRNet] {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_002835_irn.png}
}
\subfloat[Step3.Segm] {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_002835_segm.png}
}
\subfloat[Mask] {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_002835_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_004069_image.jpg}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_004069_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_004069_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_004069_segm.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_004069_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005304_image.jpg}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005304_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005304_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005304_segm.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005304_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_006784_image.jpg}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_006784_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_006784_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_006784_segm.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2008_006784_gt.png}
}
\\[-1.5ex]
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005845_image.jpg}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005845_cam.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005845_irn.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005845_segm.png}
}
\subfloat {
\includegraphics[width=0.18\linewidth]{images/pascal/2007_005845_gt.png}
}
\caption{Visualization of segmentation predictions on PASCAL VOC 2012 for (a) an input image at each step of our approach: (b) CAM extraction, (c) IRNet, (d) DeepLabv3+ segmentation, compared to (e) the ground-truth mask.}
\label{fig:pascalevl}
\end{figure}
\section{Clinical Relevance}
During the diagnostic procedure the final decision maker is a doctor, while AI-powered decision support systems can assist by detecting regions of interest and presenting the data in a convenient format. With an automatic image segmentation solution, the healthcare provider can reach higher efficiency by saving the doctors' time spent on the primary analysis of images, while simultaneously increasing diagnostic accuracy by providing a second opinion. The major obstacle to building such a solution is the lack of large amounts of pixel-wise labeled data, which are extremely costly in terms of the expert time required for annotation. With our approach, which requires only image-level annotations, these costs can be reduced dramatically. In the long run, this allows cheaper deployment of segmentation models and facilitates research in the area by overcoming the problem of collecting datasets with pixel-wise annotations. Saving doctors' time on diagnosis is especially important during disease outbreaks such as COVID-19, when a large number of people are affected and the amount of required screening grows rapidly.
\par
Using our method on medical images can automate parts of the radiology workflow, cutting operational costs for hospitals. The proposed approach was designed to be general and applicable to other medical tasks, for example the detection of various thoracic diseases.
\section{Introduction}
Applications of Convolutional Neural Networks to medical images have recently produced efficient solutions for a wide variety of medical problems, such as segmentation of lung nodules in computed tomography (CT) scans \cite{gruetzemacher20183d}, lesion detection in mammography images \cite{abdelhafiz2019deep}, segmentation of brain gliomas from MRI images \cite{archa2018segmentation}, and others \cite{ronneberger2015u,skourt2018lung}.
One of the greatest challenges for using deep learning methods in medicine is the lack of large annotated datasets, especially with pixel-level labeled data.
Creating such datasets is often very expensive and time consuming. For instance, Lin et al. \cite{lin2014microsoft} calculated that collecting bounding boxes for each class is about 15 times faster than producing a ground-truth pixel-wise segmentation mask; getting image-level labels is even easier. Moreover, domain expertise is required to label medical data, which poses another challenge as the doctor's time is costly and could more effectively be used for a patient's diagnosis and disease treatment. Working with image-level annotations also decreases the probability of disagreement between experts, since pixel-wise annotations tend to have more noise and vary among labelers.
\par
To decrease the resources spent on labeling while preserving its quality, we propose a novel weakly-supervised approach to image segmentation that uses only image-level labels. Our method is domain-independent; we have tested it on several distant datasets, including the popular PASCAL VOC 2012 \cite{pascal-voc-2012} and a medical dataset, SIIM-ACR Pneumothorax \cite{siimpneumothorax}. We achieve a 64.6 mean intersection-over-union (mIoU) score on the PASCAL VOC 2012 \cite{pascal-voc-2012} validation set. Our method is capable of segmenting medical images with limited supervision, achieving a 76.77 mIoU score on the test set of the SIIM-ACR Pneumothorax dataset \cite{siimpneumothorax}. The automatic approach to finding Pneumothorax could be used to triage chest radiographs requiring priority interpretation, to rapidly identify critical cases, and to provide a second opinion for radiologists to make a more confident disease diagnosis.
\section{Methodology}
Our method can be split into three consecutive steps: Class Activation Maps generation, map enhancement with Inter-pixel Relation Network, and segmentation. After each step, we also add one or more post-processing techniques such as CRF, thresholding, noise filtering (small regions with low confidence).
\par
\subsubsection{Step 1. CAM generation} First, we train fully-supervised classification models on image-level labels. The two architectures tested for this step were ResNet50 \cite{he2016deep} and VGG16 \cite{simonyan2014very} with three additional convolutional layers followed by ReLU activations. We also replace stride with dilation \cite{yu2015multi} in the last convolutional layers to increase the size of the final feature map, decreasing the output stride from 32 to 8. We improve the classification performance by including a regularization term, inspired by FickleNet \cite{lee2019ficklenet}. For this, we use DropBlock \cite{ghiasi2018dropblock}---a dropout technique which, to the best of our knowledge, has not been tried in previous works on weakly-supervised segmentation. The trained models are then used to retrieve activation maps by applying the Grad-CAM++ \cite{chattopadhay2018grad} method. The resulting maps serve as pseudo-labels for the segmentation task.
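The aggregation step shared by the CAM family can be sketched as follows. This is plain Grad-CAM, where each channel weight is the spatial mean of its gradient; Grad-CAM++, used in this work, differs only in how the channel weights are computed, and this sketch is not the exact implementation:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM aggregation.

    feature_maps, gradients: arrays of shape (K, H, W) -- the last
    convolutional activations and the gradients of the class score
    with respect to them. Returns a normalized (H, W) activation map.
    """
    weights = gradients.mean(axis=(1, 2))               # (K,) channel weights
    cam = np.tensordot(weights, feature_maps, axes=1)   # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                            # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```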
\par
\subsubsection{Step 2. IRNet} In the second step, IRNet \cite{ahn2019weakly} takes the generated CAM and trains two output branches that predict a displacement vector field and a class boundary map, respectively. They take feature maps from all five levels of the same shared ResNet50 \cite{he2016deep} backbone. The main advantage of IRNet \cite{ahn2019weakly} is its ability to improve boundaries between different object classes. We train it on the generated maps, so no extra supervision is required. This step allows us to obtain better pseudo-labels before proceeding to segmentation. To the best of our knowledge, this approach has not been used in the medical imaging domain before.
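The random-walk propagation at the core of IRNet's refinement can be sketched as below, assuming the pairwise affinity matrix is already given (in IRNet it is derived from the learned boundary map; here it is just an input):

```python
import numpy as np

def random_walk_refine(cam, affinity, n_iters=10):
    """Propagate CAM scores with a row-normalized affinity matrix.

    cam: (N,) flattened activation scores; affinity: (N, N) non-negative
    pairwise affinities between pixels. Repeated multiplication by the
    row-stochastic transition matrix diffuses activations within regions
    of high mutual affinity while boundaries (low affinity) block the flow.
    """
    trans = affinity / (affinity.sum(axis=1, keepdims=True) + 1e-8)
    out = cam.copy()
    for _ in range(n_iters):
        out = trans @ out
    return out
```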
\par
\subsubsection{Step 3. Segmentation} For the segmentation step, we train DeepLabv3+ \cite{chen2018encoder} and U-Net \cite{ronneberger2015u} models with different backbones, which have proven to produce reliable results in fully supervised semantic segmentation of medical images \cite{skourt2018lung,ronneberger2015u}. The backbones used include ResNet50 \cite{he2016deep} and SEResNeXt50 \cite{hu2018squeeze}. We modify the binary cross-entropy (BCE) loss during segmentation by adding a weight to the positive class to prevent overfitting towards normal cases.
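A sketch of the class-weighted BCE; the weight value in the signature is illustrative, as the text does not state the one used in training:

```python
import numpy as np

def weighted_bce(pred, target, pos_weight=3.0, eps=1e-7):
    """Binary cross-entropy with an extra weight on the positive class.

    pred: predicted probabilities in (0, 1); target: binary labels.
    pos_weight > 1 counteracts the dominance of healthy (all-background)
    pixels; pos_weight=3.0 here is an illustrative choice.
    """
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    loss = -(pos_weight * target * np.log(pred)
             + (1 - target) * np.log(1 - pred))
    return loss.mean()
```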
\section{Related Work}
The objective of weakly-supervised segmentation is to create models capable of pixel-wise segmentation based on image-level labels. The existing approaches can be categorized by their methodologies into four groups: Expectation-Ma\-xi\-mi\-za\-tion, Multiple Instance Learning, Self-Supervised Learning, and Object Proposal Class Inference \cite{chan2019comprehensive}. In this paper, we follow the self-supervised paradigm, which suggests training a fully supervised segmentation model on the created pseudo-pixel-level annotations, also known as Class Activation Maps (CAM) \cite{zhou2016learning}, which are extracted from the classification network.
This paradigm is the most challenging one as it leads to the least informative form of weak supervision providing no location information for the objects. However, judging from the quantitative performance on PASCAL VOC 2012 \cite{pascal-voc-2012} validation set, the top five methods of weakly-supervised segmentation use the self-supervised learning approach \cite{chan2019comprehensive}.
\par
Many methods of self-supervised learning for semantic segmentation have recently been suggested. Kolesnikov et al. \cite{kolesnikov2016seed} propose the Seed Expand Constrain (SEC) method, which trains a CNN, applies CAM to produce pseudo-ground-truth segments, and then trains a Fully Convolutional Network (FCN) optimizing three losses: one for the generated seeds, another for the image-level label, and, finally, a constraint loss against the maps processed by Conditional Random Fields (CRF). Huang et al. \cite{huang2018weakly} introduce Deep Seeded Region Growing (DSRG), which propagates class activations from high-confidence regions to adjacent regions with a similar visual appearance by applying a region-growing algorithm to the generated CAM. Lee et al. \cite{lee2019ficklenet} present FickleNet, which trains a CNN at the image level with a regularization step implemented as a center-fixed spatial dropout in the later convolutional layers, and then runs Grad-CAM \cite{selvaraju2017grad} multiple times to generate thresholded pseudo-labels for a segmentation step. Another approach, proposed by Ahn et al. \cite{ahn2019weakly}, suggests using IRNet \cite{ahn2019weakly}, which takes the random walk from low-displacement-field centroids in the CAM up to the class boundaries as the pseudo-ground-truths for training an FCN. Ahn et al. \cite{ahn2019weakly} focus on the segmentation of individual instances, estimating two types of features in addition to CAM: a class-agnostic instance map and pairwise semantic affinities.
\par
The weakly-supervised semantic segmentation on medical datasets has been explored in \cite{lu2020weakly,agarwal2020weakly,demiray2019weakly,cai2018accurate,qu2019weakly}. On the other hand, Ouyang et al. \cite{ouyang2019weakly} combine the weakly-annotated data with well-annotated cases to segment Pneumothorax in chest X-rays. In our approach, we do not use any form of supervision besides image-level labels. We focus on developing a standardized method, which is efficient for various data types, especially for medical images.
\section*{Acknowledgements}
This research was supported by SoftServe and the Faculty of Applied Sciences at Ukrainian Catholic University (UCU), whose collaboration allowed the creation of the SoftServe Research Group at UCU. The authors thank Rostyslav Hryniv for helpful and valuable feedback.
\bibliographystyle{splncs04}
\section{White Paper Information}
\begin{enumerate}
\item {\bf Science Category:} the basic science theme of this project is the Milky Way Structure and Formation. However, since it is based on variable stars as population tracers and distance indicators, it is also related to the Explore the Changing Sky theme.
\item {\bf Survey Type Category:} mini survey.
\item {\bf Observing Strategy Category:} this is a project aimed at detecting variable stars in the MW Bulge. Therefore, it is an integrated program with science that hinges on the combination of pointing and detailed observing strategy.
\item {\bf Author Information}\\
$^{1}$Universit\`a di Roma Tor Vergata \\
$^{2}$INAF--Osservatorio Astronomico di Roma \\
$^{3}$INAF--Osservatorio Astronomico di Capodimonte \\
$^{4}$Space Science Data Center–ASI \\
$^{5}$Universidade Federal do Rio Grande do Sul \\
$^{6}$Instituto Milenio de Astrof\'isica \\
$^{7}$Universidad Andr\'es Bello \\
$^{8}$INAF--OAS Osservatorio di Astrofisica \& Scienza dello Spazio di Bologna \\
$^{9}$Space Telescope Science Institute \\
$^{10}$Pontificia Universidad Cat\'olica de Chile \\
$^{11}$Dartmouth College \\
$^{12}$University of Michigan-Dearborn \\
$^{13}$Universit\`e C\^ote d'Azur \\
$^{14}$University of Central Lancashire \\
$^{15}$Universit\`a di Pisa \\
$^{16}$INFN, Sezione di Pisa \\
$^{17}$UK Astronomy Technology Centre, Royal Observatory \\
$^{18}$INAF--Osservatorio Astrofisico di Arcetri \\
$^{19}$Saint Martin's University \\
$^{20}$Zentrum f\"ur Astronomie der Universit\"at Heidelberg \\
$^{21}$Iowa State University \\
$^{22}$National Optical Astronomy Observatory \\
$^{23}$The University of Tokyo \\
$^{24}$Instituto de Astrof\'sica de Canarias \\
$^{25}$Universidad de La Laguna \\
$^{26}$Universit\`a di Roma La Sapienza \\
$^{27}$Florida Atlantic University \\
$^{28}$INAF--Osservatorio Astronomico di Trieste \\
$^{29}$INAF--Osservatorio Astronomico d'Abruzzo \\
$^{30}$Liverpool John Moores University \\
$^{31}$University of Texas \\
$^{32}$Dominion Astrophysical Observatory \\
$^{33}$National Research Council of Canada \\
$^{34}$Konkoly Observatory \\
$^{35}$INAF--Osservatorio Astronomico di Collurania \\
$^{36}$European Southern Observatory \\
$^{37}$Leibniz Institut fuer Astrophysik Potsdam - AIP\\
$^{38}$Department of Physics \& Astronomy, The University of California\\
$^{39}$Las Cumbres Observatory\\
$^{40}$Villanova University, Dept. of Astrophysics and Planetary Science
\end{enumerate}
\clearpage
\section{Scientific Motivation}
This experiment is aimed at disentangling the stellar content
of the Galactic Bulge using variable stars, since they have the
advantage to provide individual distance, age, metallicity and
reddening estimates. We focus our attention on variables tracing old
(RR Lyraes, RRLs; Type II Cepheids, TIICs; t$>$10 Gyr),
intermediate-age (Miras, t$\sim$0.5–10 Gyr), and young
(Classical Cepheids, CC; t$\sim$10–300 Myr) stellar
populations (Bono et al. 2015; Matsunaga et al. 2016).
The Galactic Bulge, which is mainly old with a younger tail,
makes up about 25\% of the total MW stellar mass (Valenti et al. 2016).
Recent photometric and spectroscopic investigations revealed that
the Bulge contains two main components. The old and/or metal-poor one,
traced either with RRL
or with metal-poor Red Clump (RC) stars, is rounder, rotates slower and
has a shallower gradient in radial velocity dispersion. The metal-rich
one is traced with RC stars, it is arranged in a bar that flares up into a
boxy/peanut structure in its outer region, rotates faster, and
has a steeper gradient in radial velocity dispersion (Ness et al. 2012;
Rojas-Arriagada et al. 2014; Pietrukowicz et al.
2015; Kunder et al. 2016; Zoccali et al. 2017).
Recent spectroscopic surveys mainly based on either giants
(BRAVA; Shen et al. 2010) or RC stars (ARGOS; GIBS;
Freeman et al. 2013; Zoccali et al. 2014)
suggest that Bulge stars undergo cylindrical rotation. On
the other hand, BRAVA-RR used RRLs and found much slower rotations,
and higher velocity dispersions (Barbuy et al. 2018). Moreover, it is
not clear yet whether Bulge RRLs trace either the main Bar or the spheroidal
component (D\'ek\'any et al. 2013;
Pietrukowicz et al. 2015; Kunder et al. 2018).
\vspace{-0.35truecm}
\begin{center}
\bf Why a shallow minisurvey
\end{center}
\vspace{-0.35truecm}
$\bullet$ {\em 3D Bulge Structure} --
We have recently developed a new algorithm to estimate reddening, distance and metallicity
(REDIME) by using optical/NIR (BVIJHK) bands (Bono et al. 2018). The key advantage of
this approach is that we can provide the 3D structure of the Bulge, a 3D reddening map and
a homogeneous metallicity distribution of the entire sample of RRLs using ``blue'' ($ugr$) and
``red'' ($izy$) LSST bands. The zero-point of the metallicity distribution might be affected
by the accuracy of the adopted reddening law and of the distance diagnostic. However,
we are interested in the differential variation and the accuracy is $\sim$0.2 dex.
This means new constraints on the occurrence of a metallicity gradient across the
Bulge (Hill et al. 2011; Zoccali et al. 2017); the shape
of the Bulge in the four quadrants; the real extent and geometry of both inner and
long Bar (Hammersley et al. 2000; Athanassoula 2005). We are also interested in
estimating the position angle and the inclination of the Bar by using old (RRLs, TIICs),
intermediate age (RC, Miras) and young (classical Cepheids) stars to constrain its
secular stability (Wegg \& Gerhard 2013).
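The distance estimates above ultimately reduce to the extinction-corrected distance modulus; a minimal sketch with illustrative absolute-magnitude, extinction, and reddening-law values (not the REDIME calibration):

```python
def distance_from_rrl(m_band, M_abs, A_band):
    """Heliocentric distance in kpc from the distance modulus:
    m - M = 5 log10(d_pc) - 5 + A."""
    mu0 = m_band - M_abs - A_band          # de-reddened distance modulus
    return 10 ** ((mu0 + 5) / 5) / 1000.0  # pc -> kpc

def extinction_from_colors(m_blue, m_red, intrinsic_color, R=1.5):
    """Extinction from the color excess, assuming a fixed reddening-law
    coefficient R (illustrative value): A = R * [(m_blue - m_red) - color_0]."""
    excess = (m_blue - m_red) - intrinsic_color
    return R * excess
```

For example, an RRL observed at 15.0 mag in a band with an assumed absolute magnitude of 0.6 and 0.4 mag of extinction lies at roughly 6.3 kpc, i.e. on the near side of the Bulge.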
$\bullet$ {\em Bulge stellar populations} --
The current structure of the Bulge mainly relies on RC stars, i.e. old/intermediate
age stellar tracers. However, solid theoretical (Salaris et al. 2003) and empirical
(Stetson et al. 2011) evidence indicates that RC stars are intermediate-mass, central helium burning
stars, while red HB stars are low-mass, central helium burning stars. The difference in visual magnitude, at fixed metal
content, is at least of the order of 0.5 magnitude, while the optical colors are, at solar chemical compositions, quite
similar (see Fig.~1). The two subpopulations have never been identified in the Bulge due to a mix between photometric
error and differential reddening. Data plotted in Fig.~2 indicate that LSST can trace the variation
between the two different sub-populations across the entire Bulge.
We also plan to use the equivalent of the C$_{UBI}$ ([U-B]-[B-I]) photometric index (Monelli et al. 2016),
but for the SDSS bands C$_{ugi}$ ([u-g]-[g-i]) to separate old and intermediate-age Bulge stars
(Fabrizio et al. 2016). Moreover, the spectral energy distribution ($ugri$ bands) to separate
Disk and Bulge stars (Calamida et al. 2017). The reddest ($zy$) LSST bands can overcome thorny
problems with differential reddening (right panel in Fig.~2).
\vspace{-0.25truecm}
\begin{center}
\bf Why a deep minisurvey
\end{center}
\vspace{-0.35truecm}
$\bullet$ {\em Deep into the darkness} -- The current optical photometric survey
is strongly limited in the two innermost degrees
above and below the Galactic plane. The absorption in these regions ranges from
$A_K \sim 1$ to $A_K \sim 1.8$ mag, this means that in the visual band,
$A_V$ ranges from 10 to almost 19(!) magnitudes. However, it is significantly smaller in
the redder LSST bands (see Fig.~3). Note that VVV is opening the path
(Contreras Ramos et al. 2018), but the identification of variable stars is
more difficult because the luminosity amplitude in the $K$-band is a factor of two
smaller than in the $iz$-bands. This means that LSST can provide a complete
census of RRLs even in these highly reddened regions.
These new reddening maps and MDFs cover the entire Bulge and the Galactic center,
thus providing the opportunity to determine the density profile of old stellar
populations. Note that the Bulge and the Halo density profiles in the inner
regions of the Galaxy are expected to be different: the former being
steeper than the latter (Wegg \& Gerhard 2013; P\'erez-Villegas et al. 2017;
Kunder et al. 2018; Valenti et al. 2018). There is evidence that the Bulge includes a modest fraction of
dark matter (15-20\%). This means a core or a mild cusp in the density profile of
the dark matter halo (Portail et al. 2017) that can be easily traced with RRLs.
$\bullet$ {\em Stellar populations beyond the Galactic center} --
NIR time series data collected with 1-4m class telescopes have
revealed a sizable sample of classical Cepheids located in and
beyond the Galactic center (Matsunaga et al. 2011;
D\'ek\'any et al. 2015; Matsunaga et al. 2018; Inno et al. 2019).
This means the opportunity to investigate young (10-250 Myr)
stellar tracers in a region of the Disk in which our knowledge of the radial
distribution and its scale height is quite poor. More recently,
Kains et al. (2018) found more than 2,500 variables in a modest
FoV (VIMOS at VLT) and among them more than 100 are candidate CCs that
appear to be located beyond the Galactic center. The limiting
magnitudes of the deep minisurvey will allow
us to trace the young population in a large fraction of the Disk.
$\bullet$ {\em Absolute age distribution} -- Absolute age estimates based on the magnitude of the
main sequence turn off (MSTO) are affected by uncertainties in distance and in reddening correction. The
difference in magnitude between the MS knee and the MSTO is independent of these uncertainties (Bono et al. 2010b).
This means absolute ages that are at least a factor of two more accurate than the classical ones. We plan to trace
the possible occurrence of multiple ancient star formation events using the MS knee in $izy$ bands (23.5-24.0 mag).
{\bf Why LSST}. The Bulge is one of the main reasons why the ground-based observing
facilities are mainly developed in the Southern Hemisphere. The unique
optical characteristics of LSST and the fact that it is the first
experiment collecting deep multi-band time series data
over a long time interval, will allow us to provide a complete census
of the Bulge stellar content.
\clearpage
\begin{figure}
\centering
\includegraphics[width=0.36\textwidth]{VVV_LSST_new.pdf}
\includegraphics[width=0.40\textwidth]{cmd_vimos_var_double_new.jpg}
\vspace{-0.5cm}
\caption{\footnotesize \textit{Left:} Distribution in Galactic coordinates of the Bulge
and Thin Disk regions covered in the NIR bands by VVV and VVVX (red and blue boxes).
The black dots display the RRLs detected by OGLE IV
(Pietrukowicz et al. 2015), while the yellow ones the RRLs detected by VVV (Contreras Ramos et al. 2018). The green
and the magenta circles mark the Galactic center and Bulge low-reddening regions (Dutra et al. 2002). The grey area
shows the FoV of LSST.
\textit{Right:} $I$, $V-I$ CMD of the new variables ($\sim$2,500) identified by Kains et al. (2018).}
\label{fig:fig1}
\end{figure}
\begin{figure}
\centering
\vspace{-0.3cm}
\includegraphics[width=0.78\textwidth]{cmd_LSST_new.png}
\vspace{-0.3cm}
\caption{\footnotesize \textit{Left}: $r$,$u-z$ synthetic CMD for two different
stellar populations characterized by different metal content and chemical composition
(see labeled values).
The yellow triangles display RRLs, light yellow dots mark red HB stars and the old
RGB bump, and light blue dots mark RC stars and the intermediate-age RGB bump.
\textit{Middle}: same as the left, but the stars were randomly perturbed by assuming a mean
reddening typical of the Baade window ($A_K$=0.5 mag).
\textit{Right}: $y$, $z-y$ CMD, where the stars were randomly perturbed from the theoretical CMD by
assuming a mean reddening of $A_K$=1 mag.}
\label{fig:cmd}
\end{figure}
\begin{figure}
\vspace{-0.37cm}
\centering
\includegraphics[width=0.76\textwidth,trim={0cm 0.5cm 0cm 0cm},clip]{lsst_magn_distr.pdf}
\vspace{-0.2cm}
\includegraphics[width=0.76\textwidth]{lsst_magn_maps.png}
\vspace{-0.3cm}
\caption{\footnotesize \textit{Top:} From left to right, un-reddened magnitude distributions of Bulge RRLs detected by OGLE-IV ($V,
I$). They were un-reddened by using the reddening map provided by Gonzalez et al. (2012) and the reddening law by Cardelli
et al. (1989). The expected un-reddened magnitude distributions in LSST bands ($u, g, r, i, z, y$) are also displayed. The
$V,I$ bands were transformed into the LSST bands by using Jordi et al. (2006) and the mean RRL colors provided by
Vivas et al. (2017) and Coppola et al. (2011).
\textit{Bottom:} Apparent magnitude distribution of Bulge RRLs in
Galactic coordinates for LSST bands. The color coding is plotted on top
of the panels. The grey color marks areas in which the
RRLs are fainter than 27 mag.}
\label{fig:distr_mag}
\end{figure}
\clearpage
\section{Technical Description}
\subsection{High-level description}
We plan to collect exposures with the same cadence as the WFD survey, alternating shallow and deep exposures (see below). We will then distinguish between a shallow minisurvey and a deep minisurvey. \\
It is important to stress that all the subsequent discussion is based on the experience of the TVS Crowded Field Photometry Task Force (CFTF), of which one of us is the chair. The goal of the CFTF was to study the efficiency of the detection of variable stars (mainly RRLs) in very crowded fields, and to identify the best data-analysis strategies to find and characterize the variables. We focused on a DECam dataset (NOAO 2013A-0719, PI: A. Saha) of the Bulge, with characteristics (photometric depth, crowding, pixel scale) similar to LSST, and for which the variables were already known from OGLE-IV. The final outcome of the CFTF was that all the known bright variables were correctly retrieved with our data-analysis approach. Furthermore, several new variables were identified thanks to a new period-search algorithm (Dall'Ora et al. 2019, in preparation).
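The CFTF period-search algorithm is unpublished; as an illustration of the task, a classic Lafler--Kinman-style string-length search can be sketched as follows (this is not the CFTF algorithm):

```python
import numpy as np

def string_length(times, mags, period):
    """String length of a light curve folded at a trial period: the sum of
    magnitude jumps between phase-ordered points. Minima over trial
    periods mark candidate periodicities."""
    phase = (times / period) % 1.0
    order = np.argsort(phase)
    m = mags[order]
    return np.abs(np.diff(m)).sum() + abs(m[0] - m[-1])  # close the loop

def best_period(times, mags, trial_periods):
    """Return the trial period minimizing the string length."""
    lengths = [string_length(times, mags, p) for p in trial_periods]
    return trial_periods[int(np.argmin(lengths))]
```

Irregular sampling (typical of survey cadences) actually helps here, since it suppresses the aliasing produced by strictly regular visits.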
\vspace{.3in}
\subsection{Footprint -- pointings, regions and/or constraints}
\begin{itemize}
\item The shallow minisurvey covers an area of $-20 \lesssim l \lesssim +20$ deg and $-15 \lesssim b \lesssim +10$ deg, in all the $u,g,r,i,z,y$ bands.
\item The deep minisurvey is restricted to an area of $-20 \lesssim l \lesssim +20$ deg and $-3 \lesssim b \lesssim +3$ deg and to the $i,z,y$ bands.
\end{itemize}
\subsection{Image quality}
For the shallow minisurvey we can accept seeing of the order of $\sim 1$ arcsec.
The deep minisurvey will be conducted in very highly crowded regions and, even if
only the reddest bands (which provide smaller FWHMs) are requested,
it is mandatory to ask for the median seeing at Cerro Pach\'on ($\sim 0.7$ arcsec).
\subsection{Individual image depth and/or sky brightness}
\begin{itemize}
\item The shallow minisurvey is made of 5s+5s exposures in all the bands. This will allow us to cover the magnitude ranges shown in Tab.~\ref{table_shallow}.
\item The deep minisurvey will be conducted only in the $i,z,y$ bands, aiming at identifying
RRLs and RC stars in highly reddened regions of the Galactic Bulge. Indeed, according to
Fig.~\ref{fig:distr_mag}, in the inner regions we expect to find the bulk of RRLs at $i,z \sim 24.5$ mag and at $y\sim$ 24 mag. To reach these limits with a SNR of at least $\sim 5$, we need exposure times of 60, 150 and 300s in the $i,z,y$ bands, respectively. Moreover, in the less reddened (i.e. more external) regions these limits will allow us to cover the faint end of the luminosity distribution of the RRLs.
\end{itemize}
The overall proposed strategy is to collect in the internal regions
(see Footprint section) the shallow + deep exposures alternatively,
according to the sequence: $u,g,r,(i_{shallow},i_{deep})$,
$(z_{shallow}$,$z_{deep}),(y_{shallow},y_{deep})$.
This strategy minimizes the pointing and changing filter overheads.
In the external Bulge regions we only plan to collect the shallow exposures.
\begin{table}
\centering
\caption{Expected saturation and $5\sigma$ limits for the shallow survey.}
\label{table_shallow}
\begin{tabular}{|ccc|}
\hline
band & saturation & $5\sigma$ \\
& (mag) & (mag) \\
\hline
u & 13.5 & 22.2 \\
g & 14.5 & 23.6 \\
r & 14.6 & 23.1 \\
i & 14.6 & 22.7 \\
z & 14.1 & 22.1 \\
y & 12.7 & 21.3 \\
\hline
\end{tabular}
\end{table}
These magnitudes have been computed with a custom ETC, which is based on the saturation and on the $5\sigma$ limits listed in https://smtn-002.lsst.io/ and in \\https://www.lsst.org/sites/default/files/docs/sciencebook/SB\_3.pdf.
We do not have special requirements on the sky brightness, since we ask for short exposures in the bluest bands. However, the SNR would benefit from grey time.
\subsection{Co-added image depth and/or total number of visits}
The final depth is not really relevant for this project, since we are interested in time-series of variable stars. However, we remark that the RC stars, which are static stars that we use as population tracers, are already retrieved with a single visit. Finally, we note that the final stacked image will be of great interest for all studies of Galactic structure.
\subsection{Number of visits within a night}
There are no constraints on the number of visits per night, since we adopt the WFD cadence.
However, since RRL light curves change significantly on short timescales, we ask for a gap of
at least one hour between two consecutive visits to the same pointing.
Moreover, we stress again that for the internal regions we ask to collect the shallow and
deep exposures back to back, to save the overhead time for the filter change.
\subsection{Distribution of visits over time}
There are no particular timing or scheduling requirements. We performed a MAF analysis with the PeriodicStarFit Jupyter notebook, which adopts the WFD cadence, to check the efficiency of the adopted strategy (see Sect. 4).
\subsection{Filter choice}
For the external, less reddened regions (shallow minisurvey) we ask for all six $u,g,r,i,z,y$ bands, since the combination of all the filters allows us to use the REDIME technique and to disentangle the stellar populations of different metallicity. Moreover, these bands are also requested for the more internal regions, to study the foreground population. For the deep minisurvey, restricted to the internal regions only, we ask only for the $i,z,y$ bands, since they are less affected by the absorption.
\subsection{Exposure constraints}
The 5s+5s exposures of the shallow minisurvey are designed both to avoid saturation for the brightest RRL and RC stars and to reach a reasonable depth at the $5\sigma$ level. Indeed, the median value of the magnitude distribution in all the bands is always reached with the total 10s exposure. The exposures of the deep minisurvey are dictated by the need to reach the RRLs in the more reddened regions.
Finally, it is important to note that the deep exposures saturate at a brightness level which is well within the dynamic range of the shallow survey, being $17.9, 18.9, 19.0, 19.0, 18.5, 17.1$ mag in the $u,g,r,i,z,y$ bands, respectively.
\subsection{Other constraints}
The current experiment has an excellent overlap with the LSST minisurvey suggested by Gonzalez and collaborators.
The reason is twofold:
i) they are very much interested in tracing static stars across the Bulge and in particular in highly reddened regions.
ii) they plan to complement multiband optical photometry collected with LSST with NIR photometry collected by VVV and VVVX,
radial velocity measurements with multi-object spectrographs, and in particular, with new kinematical and dynamical
models of the MW formation and evolution. The two groups have a significant fraction of their science in common.
The current experiment has no overlap with the minisurvey suggested by Clementini \& Musella,
focused on a sample of MW dwarf satellites, since in the area we plan to cover there is only
one dwarf galaxy (Sagittarius dSph). Moreover, their observing strategy differs significantly
from ours, since they plan to use the same exposure time as the WFD survey (15/30 s),
whereas we suggest shorter exposures for the shallow minisurvey and longer exposures for the deep
minisurvey. Note that the current observing strategy will allow us to identify the RRLs
belonging to the Sagittarius dSph and to the Sagittarius stream, since they are on average
$\sim$2.5 magnitudes fainter than the RRLs in the Bulge. The Sagittarius stream has already been
traced in the Halo and in the Bulge, but we still lack solid constraints on the Sagittarius
stream in the innermost Galactic regions.
The same applies to the Galactic plane surveys proposed by R. Street and collaborators, and by M. Lund and collaborators.
They propose to increase the cadence, in order to improve the detection of variable sources for
a variety of science cases. However, the quoted WPs rely on the 15/30 s WFD observing strategy,
which affects the identification and characterization of both the bright and the faint/reddened stellar tracers
we plan to use for our science. It is worth mentioning that the standard 15/30 s visits
will allow us to trace some of the variables we are interested in, but their spatial distribution would
be limited to partially reddened Bulge regions. This patchy sampling is far from the complete and
homogeneous coverage required by a detailed survey. Note that the science driver (REDIME) of the shallow
minisurvey relies on all six $u,g,r,i,z,y$ LSST bands.
Note that Gaia will provide a complete census of both TIICs and Miras in low
reddening regions of the Galactic Bulge. These objects will be saturated in almost all the LSST bands
at the distance of the Bulge, thus further supporting the complementarity between the two experiments.
Moreover, we can determine proper motions for all of our sample, either over the 10 year LSST mission, or by using existing optical and near-infrared surveys as a first epoch. Radial velocities and stellar abundances can be obtained with multi-object spectrographs such as MOONS/VLT, 4MOST/VISTA and AAOMEGA.
Finally, let us mention two independent astrophysical fields that will benefit greatly from the observing
strategy and cadence we are proposing for both the shallow and the deep minisurvey.
i) {\em Microlensing} -- The coupling between large FoV, cadence and number of visits in different
photometric bands will provide unique opportunities to identify both short and long microlensing
events (Navarro et al. 2018).
ii) {\em Galactic Supernovae} -- There are reasons to believe that a significant fraction of Galactic
supernovae are hidden by the Disk and the Bulge. The duration of the LSST experiment and the cadence
we are suggesting will allow us to possibly identify these rare events.
\subsection{Estimated time requirement}
According to the LSST overheads, the expected total time for the internal fields (shallow + deep minisurvey) is:
\begin{itemize}
\item slew and setting the $u$ filter: 120s
\item 10 seconds (shallow) + 2 seconds shutter open/close: 12s
\item change filter to $g$ band: 120s
\item repeat in the $g$ band: 12s
\item change filter to $r$ band: 120s
\item repeat in the $r$ band: 12s
\item change filter to $i$ band: 120s
\item 10s (shallow) + 60s (deep) + 4s shutter: 74s
\item change filter to $z$ band: 120s
\item 10s (shallow) + 150s (deep) + 4s shutter: 164s
\item change filter to $y$ band: 120s
\item 10s (shallow) + 300s (deep) + 4s shutter: 314s
\end{itemize}
for a time requirement per visit of 1308 seconds (21.8 minutes).
Taking into account 825 visits, the total time per pointing is $\sim 299.75$ hours.\\
For the external regions we have:
\begin{itemize}
\item slew and setting the $u$ filter: 120s
\item 10 seconds (shallow) + 2 seconds shutter open/close: 12s
\item change filter to $g$ band: 120s
\item repeat in the $g$ band: 12s
\item change filter to $r$ band: 120s
\item repeat in the $r$ band: 12s
\item change filter to $i$ band: 120s
\item repeat in the $i$ band: 12s
\item change filter to $z$ band: 120s
\item repeat in the $z$ band: 12s
\item change filter to $y$ band: 120s
\item repeat in the $y$ band: 12s
\end{itemize}
for a time requirement per visit of 792 seconds (13.2 minutes).
Taking into account 825 visits, the total time per pointing is $\sim 181.5$ hours.\\
Since the area surveyed in the internal regions is $240$ square degrees ($25$ LSST pointings), the total
time requested for the deep minisurvey is $\sim 7,494$ hours.
The area surveyed in the external regions is $760$ square degrees ($80$ LSST pointings), so the total time
requested for the shallow minisurvey is $14,520$ hours.\\
In total, the time needed for both surveys would be $\sim 22,014$ hours. This number has to be compared to our estimated
time needed by the WFD survey to cover the same total area: $21,945$ hours ($912$ seconds $\times 825$ visits
$\times 105$ pointings).
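The time bookkeeping above can be cross-checked with a short script. This is only a sketch that assumes the per-step values quoted in the text (120 s per slew/filter change and 2 s of shutter overhead per exposure), not official LSST overhead figures:

```python
# Cross-check of the per-visit time budget quoted in the text.
# Assumed per-step values (taken from the text, not official LSST overheads):
FILTER_OVERHEAD = 120   # s, slew and/or filter change
SHUTTER = 2             # s, shutter open/close per exposure

def visit_time(exposures_per_band):
    """exposures_per_band: one list of exposure times (s) per band."""
    total = 0
    for band in exposures_per_band:
        total += FILTER_OVERHEAD                  # slew or filter change
        total += sum(band) + SHUTTER * len(band)  # open-shutter time + overheads
    return total

def total_hours(per_visit, visits=825, pointings=1):
    return per_visit * visits * pointings / 3600.0

# Internal fields: 10 s shallow blocks in u,g,r, shallow+deep blocks in i,z,y.
internal = visit_time([[10], [10], [10], [10, 60], [10, 150], [10, 300]])
# External fields: 10 s shallow blocks in all six bands.
external = visit_time([[10]] * 6)

print(internal, external)                   # 1308 792
print(total_hours(internal, pointings=25))  # 7493.75 (deep minisurvey)
print(total_hours(external, pointings=80))  # 14520.0 (shallow minisurvey)
```

The last two figures reproduce the per-survey totals derived above.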
\vspace{.3in}
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|}
\hline
Properties & Importance \hspace{.3in} \\
\hline
Image quality & 2 \\
Sky brightness & 2 \\
Individual image depth & 1 \\
Co-added image depth & 3 \\
Number of exposures in a visit & 1 \\
Number of visits (in a night) & 2 \\
Total number of visits & 2 \\
Time between visits (in a night) & 2 \\
Time between visits (between nights) & 2 \\
Long-term gaps between visits & 2 \\
\hline
\end{tabular}
\caption{{\bf Constraint Rankings:} Summary of the relative importance of various survey strategy constraints. 1=very important, 2=somewhat important, 3=not important.}
\label{tab:obs_constraints}
\end{table}
\subsection{Technical trades}
This is a long-term, time-series project. It will of course benefit from a good sampling of the light curves, with image quality (FWHM, sky brightness) as uniform as possible. However, in our experience excellent results can be achieved even with highly non-uniform datasets [i.e. different instruments/telescopes with very different photometric depths and image quality, and no ad-hoc observing strategy, see e.g. Fiorentino et al. 2017]. Therefore, there are no truly unacceptable trades for this project. The only constraints are the good seeing conditions for the inner regions of the Bulge, together with the proposed exposure times.
\section{Performance Evaluation}
The VESTALE observing strategy will allow us to secure
accurate photometry (1\% level) for both old (t~$>$ 10 Gyr) and
intermediate age (1$\lesssim$ t $\lesssim$ 9 Gyr) stellar tracers.
The LSST multi-band photometry will be compared with similar Bulge data
collected with DECam at the 4m Blanco telescope. This offers the opportunity
to validate the approach adopted to perform the photometry in crowded
stellar fields, and in particular the algorithms adopted to identify
and characterize stellar variability in such fields
(see the Task Force CFTF). Indeed, VESTALE overlaps with optical ($V, I$; OGLE IV, Kains et al. 2018, see Fig. 1, right panel), NIR ($Z, Y, J, H, K$; VVV, VVVX)
and SDSS ($u, g, r, i, z$, Vivas et al. 2017) photometric time series data.
To have a quantitative reference of the expected performance, we ran a MAF simulation with the PeriodicStarFit Jupyter notebook, which estimates the detected fraction of the input variable stars.
Adopting the expected reddened median magnitudes of the RRLs (with respect to the total distribution), the simulation shows that we can correctly retrieve $\sim 40\%$ of the periods after only one year (see Fig. 4). We stress that the plot shows the fraction of the detected variables at the median magnitude level, which is $19.9,18.6,18.2,17.8,17.4,16.8$ mag in the $u,g,r,i,z,y$ bands, respectively.
This means that the efficiency is higher at brighter magnitudes. Indeed, it is almost $100 \%$ after one year for the bright end of the magnitude distribution. Fig. 5 and Fig. 6 show the same analysis, but for the TIICs and the CCs. All the simulations are based on the known sample of variables released by OGLE IV. In particular, we want to stress that the simulation on CCs depicts the worst case (short-period, small-amplitude variables), and has to be considered as an ``acid test''.
It is worth mentioning that, after the first observing season, we are going to get efficiencies comparable to those of Gaia and of the currently available Galactic plane surveys.
As a technical comment, we stress that we could not change the (15 + 15) s WFD visit in our simulations, so in the PeriodicStarFit Jupyter notebook we simply scaled the reference magnitudes by the differences in the expected flux ratios on the basis of our exposure times. Moreover, we ran our simulation over the entire WFD area. Our analysis could be improved with an {\textit {ad-hoc}} simulation on the actual area and with the actual exposure times. The PeriodicStarFit notebook is available in the standard \\
maf\_local/sims\_maf\_contrib/science/periodicVariables directory. It is based on the \\
/home/docmaf/maf\_local/sims\_maf\_contrib/mafContrib/periodicStarMetric.py code.
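A rough sketch of this rescaling: assuming the standard flux--magnitude relation, emulating a $t$-second exposure inside a simulator hard-wired to 30 s visits amounts to shifting the input magnitudes by $\Delta m = 2.5\log_{10}(30/t)$. The notebook internals may differ in detail:

```python
import math

# Magnitude offset that emulates a t_new-second exposure inside a simulator
# hard-wired to t_wfd-second visits: Delta m = 2.5 log10(t_wfd / t_new).
# Positive offsets mean our exposure is shallower than the WFD visit.
def magnitude_offset(t_new, t_wfd=30.0):
    return 2.5 * math.log10(t_wfd / t_new)

# Shallow minisurvey (10 s total) and the deep i, z, y exposures:
for t in (10, 60, 150, 300):
    print(t, round(magnitude_offset(t), 2))   # 10 -> 1.19, ..., 300 -> -2.5
```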
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{map_140days.pdf}
\caption{\footnotesize Fraction of RR Lyrae stars detected after one year. For the simulation, we adopted an average period of 0.6d and an average amplitude of 0.5 mag at the expected median level of the RRLs magnitudes distribution. The map is based on the baseline2018a simulation, and it includes all the sky covered by the WFD survey. However, only the central part of the map is relevant for our project.}
\label{fig:opsim_140}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{t2cep_1year.pdf}
\caption{\footnotesize Fraction of Type II Cepheids stars detected after one year. For the simulation, we adopted an average period of 2.0d and an average amplitude of 0.6 mag and at the expected median level of the TIICs magnitudes distribution. The map is based on the baseline2018a simulation, and it includes all the sky covered by the WFD survey. However, only the central part of the map is relevant for our project.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{cepheids_1year.pdf}
\caption{\footnotesize Fraction of Classical Cepheids detected after one year. For the simulation, we adopted a period of 1.0d, corresponding to the mode of the CCs period distribution, and an amplitude of 0.15 mag (corresponding to the typical amplitude of the 1.0d CCs), at the expected median level of the 1.0d CCs magnitude distribution. The map is based on the baseline2018a simulation, and it includes all the sky covered by the WFD survey. However, only the central part of the map is relevant for our project. Note also that in this simulation we studied the worst case, i.e. short-period and small-amplitude CCs.}
\end{figure}
\vspace{.6in}
\section{Special Data Processing}
There are no special data processing requirements, since we adopt the same visit strategy as the WFD survey, with different exposure times.
\vspace{.6in}
\section{Acknowledgement} This work was developed within the Transient and Variable Stars Science Collaboration (TVS)
and the authors acknowledge the support of TVS in the preparation of this paper.
\section{References}
Athanassoula, E. 2005, MNRAS, 358, 1477 \\
Barbuy, B., Chiappini, C., \& Gerhard, O. 2018,ARA\&A, 56, 223 \\
Bono, G., Iannicola, G., Braga, V. F., et al. 2018, ApJ, accepted, arXiv181107069B \\
Bono, G., Stetson, P. B., Walker, A. R., et al. 2010a, PASP, 122, 651 \\
Bono, G., Stetson, P. B., VandenBerg, D. A., et al. 2010b, ApJL, 708, L74 \\
Bono, G., Genovali, K., Lemasle, B., et al. 2015, in ASPCS, Vol. 491, Fifty Years of Wide Field Studies in the Southern Hemisphere: Resolved Stellar Populations of the Galactic Bulge and Magellanic Clouds, ed. S. Points \& A. Kunder, 148 \\
Calamida, A., Strampelli, G., Rest, A., et al. 2017, AJ, 153, 175 \\
Cardelli, J. A., Clayton, G. C., \& Mathis, J. S. 1989, ApJ, 345, 245 \\
Carollo, D., Beers, T. C., Lee, Y. S., et al. 2007, Nature, 450, 1020 \\
Catchpole, R. M., Whitelock, P. A., Feast, M. W., et al. 2016, MNRAS, 455, 2216 \\
Contreras Ramos, R., Minniti, D., Gran, F., et al. 2018, ApJ, 863, 79 \\
Coppola, G., Dall'Ora, M., Ripepi, V., et al. 2011, MNRAS, 416, 1056 \\
D\'ek\'any, I., Minniti, D., Catelan, M., et al. 2013, ApJL, 776, L19 \\
D\'ek\'any, I., Minniti, D., Majaess, D., et al. 2015, ApJL, 812, L29 \\
Dutra, C. M., Santiago, B. X., \& Bica, E. 2002, A\&A, 381, 219 \\
Fabrizio, M., Bono, G., Nonino, M., et al. 2016, ApJ, 830, 126 \\
Fiorentino, G., Bono, G., Monelli, M., et al. 2015, ApJL, 798, L12 \\
Fiorentino, G., Monelli, M., Stetson, P. B., et al. 2017, A\&A, 599, A125 \\
Freeman, K., Ness, M., Wylie-de-Boer, E., et al. 2013, MNRAS, 428, 3660 \\
Gonzalez, O. A., Rejkuba, M., Zoccali, M., et al. 2012, A\&A, 543, A13 \\
Hammersley, P. L., Garz\'on, F., Mahoney, T. J., L\'opez-Corredoira, M., \& Torres, M. A. P. 2000, MNRAS, 317, L45 \\
Hill, V., Lecureur, A., G\'omez, A., et al. 2011, A\&A, 534, A80 \\
Inno, L., Urbaneja, M. A., Matsunaga, N., et al. 2019, MNRAS, 482, 83 \\
Jordi, K., Grebel, E. K., \& Ammon, K. 2006, A\&A, 460, 339 \\
Kains, N., Calamida, A., Rejkuba, M., et al. 2018, MNRAS, arXiv:1805.01898 \\
Kinman, T. D., Cacciari, C., Bragaglia, A., Smart, R., \& Spagna, A. 2012, MNRAS, 422, 2116 \\
Kunder, A., \& Chaboyer, B. 2008, AJ, 136, 2441 \\
Kunder, A., Rich, R. M., Koch, A., et al. 2016, ApJL, 821, L25 \\
Kunder, A., Valenti, E., Dall'Ora, M., et al. 2018, SSRv, 214, 90 \\
Matsunaga, N., Bono, G., Chen, X., et al. 2018, SSRv, 214, 74 \\
Matsunaga, N., Kawadu, T., Nishiyama, S., et al. 2011, Nature, 477, 188 \\
Matsunaga, N., Feast, M. W., Bono, G., et al. 2016, MNRAS, 462, 414 \\
McCarthy, I. G., Font, A. S., Crain, R. A., et al. 2012, MNRAS, 420, 2245 \\
Monelli, M., Milone, A. P., Fabrizio, M., et al. 2014, ApJ, 796, 90 \\
Navarro, M. G., Minniti, D., Contreras-Ramos, R., 2018, ApJ, 865, 5 \\
Ness, M., Freeman, K., Athanassoula, E., et al. 2012, ApJ, 756, 22 \\
P\'erez-Villegas, A., Portail, M., \& Gerhard, O. 2017, MNRAS, 464, L80 \\
Pietrukowicz, P., Koz\l owski, S., Skowron, J., et al. 2015, ApJ, 811, 113 \\
Rojas-Arriagada, A., Recio-Blanco, A., Hill, V., et al. 2014, A\&A, 569, A103 \\
Salaris, M., Percival, S., \& Girardi, L. 2003, MNRAS, 345, 1030 \\
Sch\"onrich, R., Asplund, M., \& Casagrande, L. 2014, ApJ, 786, 7 \\
Shen, J., Rich, R. M., Kormendy, J., et al. 2010, ApJL, 720, L72 \\
Stetson, P. B., Monelli, M., Fabrizio, M., et al. 2011, The Messenger, 144, 32 \\
Valenti, E., Zoccali, M., Gonzalez, O. A., et al. 2016, A\&A, 587, L6 \\
Valenti, E., Zoccali, M., Mucciarelli, A., et al. 2018, A\&A, 616, A83 \\
Vivas, A. K., Saha, A., Olsen, K., et al. 2017, AJ, 154, 85 \\
Wegg, C., \& Gerhard, O. 2013, MNRAS, 435, 1874 \\
Zoccali, M., Valenti, E., \& Gonzalez, O. A. 2018, A\&A, 618, A147 \\
Zoccali, M., Gonzalez, O. A., Vasquez, S., et al. 2014, A\&A, 562, A66 \\
Zoccali, M., Vasquez, S., Gonzalez, O. A., et al. 2017, A\&A, 599, A12 \\
\end{document}
\section{Introduction}
Our current world is more complex than ever: emerging technologies continue to develop, providing a solid foundation for stronger growth while changing people's lifestyles; the pandemic has had a huge impact on economics \cite{maital2020global}, education \cite{setiawan2020covid}, lifestyle \cite{hashem2020examining}, and resident education and adaptations \cite{chertoff2020early}, while structural vulnerabilities and dynamic inequalities have been enhanced \cite{leach2021post}; the post-pandemic economic recovery has not yet begun, but health risks have just emerged; the energy crisis continues to affect supply chain security \cite{hutter2022russia}; at the same time, different countries face different opportunities and problems due to their cultural traditions, development processes, and geopolitics. We hope to express these complex situations simply and abstractly. We hope to provide inspiring ideas from macro policies to specific issues. We hope to see effective, practical, comprehensive, and innovative solutions.
Universal Village is a new concept proposed by MIT’s Universal Village Program, which advocates promoting harmony between man and nature through the prudent use of technology and addressing the environmental challenges brought about by rapid urbanization \cite{cao2018preliminary}. It is also the original intention of the UV conference series to comprehensively use various technologies to achieve the goal of an ideal society. The 6$^{th}$ IEEE International Conference on Universal Village (UV2022) features the theme of “Post-Pandemic Reflection on Health, Harmony, and Sustainability: Mobility and Virtual Connection; Diversity and System Efficiency; Responsiveness and Resilience; Inclusiveness and Integration,” focuses on significant topics in the post-pandemic era.
As a satellite activity of IEEE UV2022, the 1$^{st}$ IEEE UV2022 Mathematical Modelling Competition is held to apply mathematical modeling methods to practical problems. The availability of fast and powerful computers has made it possible to mathematize complex problems in industry and commerce \cite{towers2020guide} and solve them better. Common mathematical modeling problems include optimization, evaluation, prediction, etc.; the commonly used methods include integer programming, linear programming, nonlinear programming, graph theory, analytic hierarchy process, regression prediction, principal component analysis, etc.
This short paper officially publishes the problems of the 1$^{st}$ IEEE UV2022 Mathematical Modelling Competition. Participants should choose one problem, carefully analyze the competition problems, understand the relevant background, search and organize related material, build mathematical models, write programs to solve the models, and complete report writing. The paper needs to contain the abstract, the introduction/background, the problem statement, details of models and algorithms, the sensitivity analysis, strengths and weaknesses, and the conclusion.
\section{Problem A: Smart City Development Index}
\subsection{Background}
The world population living in urban areas will increase to $66\%$ by 2030, according to the UN \cite{un}. The level of city development is directly related to the quality of human life. The Smart City is an evolving concept about improving the function of cities using information and communication technologies \cite{batty2012smart}. As increasing population, pollution, congestion, and resource usage, together with increasingly strict energy and environmental requirements, continue to affect quality of life \cite{chourabi2012understanding}, smart cities nowadays should be able to apply new technologies to solve or alleviate these problems.
A fair, reasonable, and comprehensive city development evaluation index can help compare different cities' situations and guide today's urban construction.
Take Hangzhou as an example, in 2016, Hangzhou created the first "city brain" in China. Driven by this, the pace of Hangzhou's exploration of urban digital construction has been accelerating. At the city-wide digital economy high-quality development conference held in September 2022, Hangzhou proposed to build the city with the highest digital economy development level in China. Similarly, facing the difficulties of urban operation and management, Harbin is constantly deepening and expanding smart application scenarios, realizing smart governance of the city through innovation and deepening the construction of smart applications.
The evaluation should be done in the following aspects, which are known as UV subsystems.
\begin{itemize}
\item {Smart Home and Community}
\item {Smart Medicine and Healthcare \cite{zhang2020evaluation}}
\item {ITS, Urban Planning and Crowd Management \cite{xu2020evaluation}}
\item {Smart Energy Management \cite{yang2020evaluation}}
\item {Smart City Infrastructure \cite{wu2020evaluation}}
\item {Smart Response System for City Emergency \cite{yang2020evaluation}}
\item {Smart Environmental Protection \cite{yuan2020evaluation}}
\item {Smart Humanity \cite{cao2020evaluation}}
\end{itemize}
Problem A focuses on building an index for smart city development evaluation and applying this index to Hangzhou and Harbin.
\subsection{Tasks}
1) Task 1:
Define a "Smart City Development Index" as a metric to measure the success of smart city development. We encourage the participant to consider all eight UV subsystems in the index.
2) Task 2:
Research the recent development of Hangzhou and Harbin. Use the proposed metric to evaluate the development level for these two cities.
3) Task 3:
Choose a city in a country other than China and research the recent development. Use the proposed metric to evaluate the development level of this city.
4) Task 4:
Predict the future change in each subsystem in the next ten years. Predict the future change in Hangzhou and Harbin's proposed "Smart City Development Index" value in the next ten years.
5) Task 5:
Based on situations in Hangzhou and Harbin, make development proposals and formulate plans for these two cities.
\subsection{Possible Useful Links}
1) Hangzhou Statistical Yearbook:
\emph{http://tjj.hangzhou.gov.cn/col/col1229453592/index.html} \\
2) Harbin Statistical Yearbook:
\emph{http://harbin.gov.cn/col/col39/index.html}
\section{Problem B: Vaccine Allocation}
\subsection{Background}
The pandemic in the past three years has brought huge disasters to human beings and changed people's way of life. The emergence of the Omicron variant of SARS-CoV-2 last winter made the epidemic spread more quickly. Vaccines, which have saved tens of millions of lives globally \cite{watson2022global}, remain the most important method for controlling COVID-19 and shifting the pandemic to the next phase \cite{del2022winter}.
With the change in China's pandemic control policies, it is more important to promote vaccination, especially among the elderly.
To facilitate vaccinating citizens, we expect to open more vaccination points in central hospitals, community hospitals, and health centers. However, due to the cost of vaccine transportation and storage, we must consider how to distribute vaccines to central hospitals, community hospitals, and health centers.
Problem B focuses on designing a reasonable vaccine allocation plan to ensure the vaccination demand and consider the cost issue.
\subsection{Tasks}
1) Task 1:
Predict and visualize national daily vaccination numbers for the next three months.
2) Task 2:
Considering the number of nearby residents, transportation convenience, number of medical staff, vaccine storage and transportation costs, and avoiding excessive gathering of people during vaccination, design a vaccine allocation plan for central hospitals, community hospitals, and health centers.
3) Task 3:
Taking Hangzhou Gongshu District and Harbin Daoli District as examples, calculate the number or proportion of vaccines distributed by central hospitals, community hospitals, and health centers in the two districts.
4) Task 4:
Briefly write a note on vaccine allocation (e.g., prioritizing the elderly, etc.)
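As a purely illustrative sketch (not a proposed solution), one of the simplest models for Task 2 is a demand-weighted proportional split across vaccination sites; all site figures and weights below are invented:

```python
# Toy illustration: split a fixed vaccine stock across sites in proportion
# to a demand score combining nearby residents and medical staff.
# Weights (0.7 / 0.3) and site data are made up for demonstration only.
def allocate(stock, sites, w_residents=0.7, w_staff=0.3):
    scores = {name: w_residents * r + w_staff * s
              for name, (r, s) in sites.items()}
    total = sum(scores.values())
    alloc = {name: int(stock * sc / total) for name, sc in scores.items()}
    # Give any doses lost to integer rounding to the highest-scoring site.
    alloc[max(scores, key=scores.get)] += stock - sum(alloc.values())
    return alloc

sites = {"central hospital": (50000, 200),    # (nearby residents, staff)
         "community hospital": (20000, 60),
         "health center": (8000, 15)}
print(allocate(10000, sites))
```

A full solution would also fold in transportation convenience, cold-chain costs, and crowding constraints, e.g. as a linear or integer program.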
\subsection{Possible Useful Links}
1) National COVID-19 vaccination status:
\emph{http://www.nhc.gov.cn/xcs/yqjzqk/list\_gzbd.shtml}
\section{Problem C: LinkNYC in China}
\subsection{Background}
In 2016, the New York City government and Google-backed CityBridge jointly launched and built a public communication project - LinkNYC, to redesign telecommunication to activate the "Twenty-First-Century Creative City" \cite{maier20183}. It appears as kiosks on New York streets where people can get free Wi-Fi, charge their phones, use city services and maps for directions, and make free calls within the U.S. and to emergency calls. It is 10 feet tall and is equipped with displays, cameras, tablets, speakers, microphones, and sensors. The original intention of creating LinkNYC was to make the city better meet the needs of citizens.
\begin{figure}[H]
\begin{minipage}{0.8\linewidth}
\vspace{3pt}
\centerline{\includegraphics[width=\textwidth]{linknyc.png}}
\end{minipage}
\caption{LinkNYC's kiosks; the figure is from \cite{linknyc}. The functions include: 1) using personal devices to connect to LinkNYC's superfast free Wi-Fi;
2) getting access to city services, maps, and directions from the tablet; 3) making free phone calls anywhere in the U.S., using the tablet or the tactile keypad and microphone, plugging in personal headphones for more privacy; 4)
using the dedicated red 911 button in the event of an emergency; 5) charging your device in a power-only USB port; 6) enjoying more room on the sidewalk with Link's sleek, ADA-compliant design; 7) viewing public service announcements and more relevant advertising on two 55'' HD displays.}
\end{figure}
Problem C focuses on estimating how many such kiosks are needed in a city. Also, it is expected to design a sustainable profit model.
\subsection{Tasks}
1) Task 1:
If we introduce the LinkNYC to China and build information kiosks in Hangzhou and Harbin, approximately how many information kiosks need to be built to meet the needs of citizens and avoid the waste of resources? Please estimate the number of kiosks needed in each district of Hangzhou and Harbin.
2) Task 2:
Please design the functions included in the kiosk introduced in China. We hope that the information kiosk includes as many free convenience functions as possible and brings profits through commercial models such as advertisements and some paid functions. Please create a profit model to illustrate.
3) Task 3:
Please specify the upper time limit for each user to use the kiosk so that everyone can fully use the service and avoid others waiting for a long time.
3) Task 4:
Please give your suggestions on promoting this information kiosk in Chinese cities.
\subsection{Possible Useful Links}
1) Official Site of LinkNYC:
\emph{https://www.link.nyc/}
\section{Conclusion}
This paper introduces the background and problems of the 1$^{st}$ IEEE UV2022 Mathematical Modelling Competition, a satellite activity of the 6$^{th}$ IEEE International Conference on Universal Village. The competition aims to call for solutions based on mathematical modeling methods for real-world problems. The problems are the smart city development index design, the vaccine allocation problem, and the introduction of LinkNYC kiosks to China. Participants are expected to choose one problem, according to the background and each task, then do the analysis, modeling, programming, and writing.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
When we are faced with a problem, we try to recall similar problems that
we have faced in the past, so that we can transfer our knowledge from
past experience to the current problem. We make an analogy between the
past situation and the current situation, and we use the analogy to transfer
knowledge \shortcite{gentner83,minsky86,holyoak95,hofstadter01,hawkins04}.
In his survey of the computational modeling of analogy-making, French
\citeyear{french02} cites Structure Mapping Theory (SMT) \shortcite{gentner83}
and its implementation in the Structure Mapping Engine (SME)
\cite{falkenhainer89} as the most influential work on modeling of
analogy-making. In SME, an analogical mapping $M: A \rightarrow B$ is from a source
$A$ to a target $B$. The source is more familiar, more known, or more concrete,
whereas the target is relatively unfamiliar, unknown, or abstract. The
analogical mapping is used to transfer knowledge from the source to the target.
Gentner \citeyear{gentner83} argues that there are two kinds of similarity,
attributional similarity and relational similarity. The distinction between
attributes and relations may be understood in terms of predicate logic. An
attribute is a predicate with one argument, such as {\sc large}($X$), meaning
$X$ is large. A relation is a predicate with two or more arguments, such
as {\sc collides\_with}($X,Y$), meaning $X$ collides with $Y$.
The Structure Mapping Engine prefers mappings based on relational
similarity over mappings based on attributional similarity \shortcite{falkenhainer89}.
For example, SME is able to build a mapping from a representation of the
solar system (the source) to a representation of the Rutherford-Bohr model
of the atom (the target). The sun is mapped to the nucleus, planets are
mapped to electrons, and mass is mapped to charge.
Note that this mapping emphasizes relational similarity. The sun and the
nucleus are very different in terms of their attributes: the sun is
very large and the nucleus is very small. Likewise, planets and electrons
have little attributional similarity. On the other hand, planets revolve
around the sun like electrons revolve around the nucleus. The mass of the
sun attracts the mass of the planets like the charge of the nucleus attracts
the charge of the electrons.
Gentner \citeyear{gentner91} provides evidence that children rely primarily
on attributional similarity for mapping, gradually switching over to
relational similarity as they mature. She uses the terms
{\em mere appearance} to refer to mapping based mostly on
attributional similarity, {\em analogy} to refer to mapping based
mostly on relational similarity, and {\em literal similarity} to
refer to a mixture of attributional and relational similarity.
Since we use analogical mappings to solve problems and make
predictions, we should focus on structure, especially causal relations,
and look beyond the surface attributes of things \shortcite{gentner83}.
The analogy between the solar system and the Rutherford-Bohr model
of the atom illustrates the importance of going beyond mere appearance, to
the underlying structures.
Figures \ref{fig:solar} and \ref{fig:atom} show the LISP representations
used by SME as input for the analogy between the solar system and the
atom \shortcite{falkenhainer89}. Chalmers, French, and Hofstadter
\citeyear{chalmers92} criticize SME's requirement for complex hand-coded
representations. They argue that most of the hard work is done by the human who
creates these high-level hand-coded representations, rather than by SME.
\begin{table}[htbp]
\footnotesize
\tt
\centering
\begin{tabular}{|l|}
\hline
(defEntity sun :type inanimate) \\
(defEntity planet :type inanimate) \\
\\
(defDescription solar-system \\
\hspace{20pt} entities (sun planet) \\
\hspace{20pt} expressions (((mass sun) :name mass-sun) \\
\hspace{40pt} ((mass planet) :name mass-planet) \\
\hspace{40pt} ((greater mass-sun mass-planet) :name >mass) \\
\hspace{40pt} ((attracts sun planet) :name attracts-form) \\
\hspace{40pt} ((revolve-around planet sun) :name revolve) \\
\hspace{40pt} ((and >mass attracts-form) :name and1) \\
\hspace{40pt} ((cause and1 revolve) :name cause-revolve) \\
\hspace{40pt} ((temperature sun) :name temp-sun) \\
\hspace{40pt} ((temperature planet) :name temp-planet) \\
\hspace{40pt} ((greater temp-sun temp-planet) :name >temp) \\
\hspace{40pt} ((gravity mass-sun mass-planet) :name force-gravity) \\
\hspace{40pt} ((cause force-gravity attracts-form) :name why-attracts))) \\
\hline
\end{tabular}
\normalsize
\rm
\figcaption {The representation of the solar system in SME
\shortcite{falkenhainer89}.}
\label{fig:solar}
\end{table}
\begin{table}[htbp]
\footnotesize
\tt
\centering
\begin{tabular}{|l|}
\hline
(defEntity nucleus :type inanimate) \\
(defEntity electron :type inanimate) \\
\\
(defDescription rutherford-atom \\
\hspace{20pt} entities (nucleus electron) \\
\hspace{20pt} expressions (((mass nucleus) :name mass-n) \\
\hspace{40pt} ((mass electron) :name mass-e) \\
\hspace{40pt} ((greater mass-n mass-e) :name >mass) \\
\hspace{40pt} ((attracts nucleus electron) :name attracts-form) \\
\hspace{40pt} ((revolve-around electron nucleus) :name revolve) \\
\hspace{40pt} ((charge electron) :name q-electron) \\
\hspace{40pt} ((charge nucleus) :name q-nucleus) \\
\hspace{40pt} ((opposite-sign q-nucleus q-electron) :name >charge) \\
\hspace{40pt} ((cause >charge attracts-form) :name why-attracts))) \\
\hline
\end{tabular}
\normalsize
\rm
\figcaption {The Rutherford-Bohr model of the atom
in SME \shortcite{falkenhainer89}.}
\label{fig:atom}
\end{table}
Gentner, Forbus, and their colleagues have attempted to avoid hand-coding
in their recent work with SME.\footnote{Dedre Gentner, personal communication,
October 29, 2008.} The CogSketch system can generate LISP representations
from simple sketches \cite{forbus08}. The Gizmo system can generate
LISP representations from qualitative physics models \cite{yan05}. The
Learning Reader system can generate LISP representations from natural language
text \shortcite{forbus07}. These systems do not require LISP input.
However, the CogSketch user interface requires the person who draws the
sketch to identify the basic components in the sketch and hand-label
them with terms from a knowledge base derived from OpenCyc.
Forbus et al. \citeyear{forbus08} note that OpenCyc
contains more than 58,000 hand-coded concepts, and they have added further
hand-coded concepts to OpenCyc, in order to support CogSketch.
The Gizmo system requires the user to hand-code a physical model,
using the methods of qualitative physics \shortcite{yan05}. Learning Reader
uses more than 28,000 phrasal patterns, which were derived from
ResearchCyc \shortcite{forbus07}. It is evident that SME still requires
substantial hand-coded knowledge.
The work we present in this paper is an effort to avoid
complex hand-coded representations.
Our approach is to combine ideas from SME \shortcite{falkenhainer89}
and Latent Relational Analysis (LRA) \shortcite{turney06}. We call
the resulting algorithm the Latent Relation Mapping Engine (LRME).
We represent the semantic relation between two terms using a
vector, in which the elements are derived from
pattern frequencies in a large corpus of raw text. Because the
semantic relations are automatically derived from a corpus, LRME
does not require hand-coded representations of relations. It only
needs a list of terms from the source and a list of terms from
the target. Given these two lists, LRME uses the corpus to build
representations of the relations among the terms, and then it
constructs a mapping between the two lists.
Tables \ref{tab:input} and
\ref{tab:output} show the input and output of LRME for
the analogy between the solar system and the Ruther\-ford-Bohr model of
the atom. Although some human effort is involved in constructing
the input lists, it is considerably less effort than SME requires for its input
(contrast Figures \ref{fig:solar} and \ref{fig:atom} with Table~\ref{tab:input}).
\begin{table}[htbp]
\centering
\begin{minipage}{0.3\textwidth}
\centering
\begin{tabular}{l}
\hline
\textbf{Source $A$} \\
\hline
planet \\
attracts \\
revolves \\
sun \\
gravity \\
solar system \\
mass \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.3\textwidth}
\centering
\begin{tabular}{l}
\hline
\textbf{Target $B$} \\
\hline
revolves \\
atom \\
attracts \\
electromagnetism \\
nucleus \\
charge \\
electron \\
\hline
\end{tabular}
\end{minipage}
\caption {The representation of the input in LRME.}
\label{tab:input}
\end{table}
\begin{table}[htbp]
\centering
\begin{tabular}{lcl}
\hline
\textbf{Source $A$} & \textbf{Mapping $M$} & \textbf{Target $B$} \\
\hline
solar system & $\rightarrow$ & atom \\
sun & $\rightarrow$ & nucleus \\
planet & $\rightarrow$ & electron \\
mass & $\rightarrow$ & charge \\
attracts & $\rightarrow$ & attracts \\
revolves & $\rightarrow$ & revolves \\
gravity & $\rightarrow$ & electromagnetism \\
\hline
\end{tabular}
\caption {The representation of the output in LRME.}
\label{tab:output}
\end{table}
Scientific analogies, such as the analogy between the solar system and
the Rutherford-Bohr model of the atom, may seem esoteric, but we believe
analogy-making is ubiquitous in our daily lives. A potential practical
application for this work is the task of identifying semantic roles
\shortcite{gildea02}. Since roles are relations, not attributes, it is
appropriate to treat semantic role labeling as an analogical mapping problem.
For example, the {\sc Judgement} semantic frame contains
semantic roles such as {\sc judge}, {\sc evaluee}, and {\sc reason}, and the
{\sc Statement} frame contains roles such as {\sc speaker}, {\sc addressee},
{\sc message}, {\sc topic}, and {\sc medium} \shortcite{gildea02}. The task
of identifying semantic roles is to automatically label sentences with their
roles, as in the following examples \shortcite{gildea02}:
\begin{myitemize}
\item $[${\em Judge} She] \textbf{blames} [{\em Evaluee} the Government]
[{\em Reason} for failing to do enough to help].
\item $[${\em Speaker} We] \textbf{talked} [{\em Topic} about the proposal]
[{\em Medium} over the phone].
\end{myitemize}
\noindent If we have a training set of labeled sentences and a testing
set of unlabeled sentences, then we may view the task of labeling the
testing sentences as a problem of creating analogical mappings between
the training sentences (sources) and the testing sentences (targets).
Table~\ref{tab:roles} shows how ``She blames the Government
for failing to do enough to help.'' might be mapped to ``They blame the
company for polluting the environment.'' Once a mapping has been found,
we can transfer knowledge, in the form of semantic role labels, from
the source to the target.
\begin{table}[htbp]
\centering
\begin{tabular}{lcl}
\hline
\textbf{Source $A$} & \textbf{Mapping $M$} & \textbf{Target $B$} \\
\hline
she & $\rightarrow$ & they \\
blames & $\rightarrow$ & blame \\
government & $\rightarrow$ & company \\
failing & $\rightarrow$ & polluting \\
help & $\rightarrow$ & environment \\
\hline
\end{tabular}
\caption {Semantic role labeling as analogical mapping.}
\label{tab:roles}
\end{table}
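As an informal sketch (ours, not part of any semantic role labeling system), once a bijective mapping has been found, transferring role labels from source to target reduces to a dictionary lookup; the labels and mapping below follow the examples above:

```python
# Toy sketch: transferring semantic role labels through an analogical
# mapping M from a labeled source sentence to an unlabeled target.
# The mapping and role labels are illustrative only.

source_labels = {
    "she": "Judge",
    "government": "Evaluee",
    "failing": "Reason",
}

# Bijective mapping M: source terms -> target terms.
M = {
    "she": "they",
    "blames": "blame",
    "government": "company",
    "failing": "polluting",
    "help": "environment",
}

# Transfer: each labeled source term passes its label to its image under M.
target_labels = {M[term]: role for term, role in source_labels.items()}
print(target_labels)  # {'they': 'Judge', 'company': 'Evaluee', 'polluting': 'Reason'}
```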
In Section~\ref{sec:hypotheses}, we briefly discuss the hypotheses behind
the design of LRME. We then precisely define the task that is performed
by LRME, a specific form of analogical mapping, in Section~\ref{sec:task}.
LRME builds on Latent Relational Analysis (LRA), hence we summarize
LRA in Section~\ref{sec:lra}. We discuss potential applications of LRME in
Section~\ref{sec:apps}.
To evaluate LRME, we created twenty analogical mapping problems, ten
science analogy problems \shortcite{holyoak95} and ten common metaphor problems
\shortcite{lakoff80}. Table~\ref{tab:input} is one of the science analogy
problems. Our intended solution is given in Table~\ref{tab:output}.
To validate our intended solutions, we gave our colleagues the lists of terms
(as in Table~\ref{tab:input}) and asked them to generate mappings between the lists.
Section~\ref{sec:problems} presents the results of this experiment.
Across the twenty problems, the average agreement with our intended
solutions (as in Table~\ref{tab:output}) was 87.6\%.
The LRME algorithm is outlined in Section~\ref{sec:lrme}, along with its
evaluation on the twenty mapping problems. LRME achieves an accuracy
of 91.5\%. The difference between this performance and the human average
of 87.6\% is not statistically significant.
Section~\ref{sec:attributes} examines a variety of alternative approaches
to the analogy mapping task. The best approach achieves an accuracy of
76.8\%, but this approach requires hand-coded part-of-speech tags.
This performance is significantly below LRME and human performance.
In Section~\ref{sec:discussion}, we discuss some questions that are
raised by the results in the preceding sections. Related work is
described in Section~\ref{sec:related}, future work and limitations
are considered in Section~\ref{sec:future}, and we conclude in
Section~\ref{sec:conclusion}.
\section{Guiding Hypotheses}
\label{sec:hypotheses}
In this section, we list some of the assumptions that have guided the
design of LRME. The results we present in this paper do not necessarily
require these assumptions, but it might be helpful to the reader, to
understand the reasoning behind our approach.
\begin{myenumerate}
\item \textbf{Analogies and semantic relations:} Analogies are
based on semantic relations \shortcite{gentner83}. For example, the analogy
between the solar system and the Ruther\-ford-Bohr model of the atom is
based on the similarity of the semantic relations among the concepts
involved in our understanding of the solar system to the semantic
relations among the concepts involved in the Ruther\-ford-Bohr model of the atom.
\item \textbf{Co-occurrences and semantic relations:} Two terms have an
interesting, significant semantic relation if and only if they tend
to co-occur within a relatively small window (e.g., five words) in a relatively
large corpus (e.g., $10^{10}$ words). Having an interesting semantic relation causes
co-occurrence and co-occurrence is a reliable indicator of an interesting
semantic relation \shortcite{firth57}.
\item \textbf{Meanings and semantic relations:} Meaning has more to do with
relations among words than individual words. Individual words tend
to be ambiguous and polysemous. By putting two words into a pair, we constrain
their possible meanings. By putting words into a sentence, with multiple
relations among the words in the sentence, we constrain the possible meanings
further. If we focus on word pairs (or tuples), instead of individual words,
word sense disambiguation is less problematic. Perhaps a word has no sense
apart from its relations with other words \shortcite{kilgarriff97}.
\item \textbf{Pattern distributions and semantic relations:} There is a
many-to-many mapping between semantic relations and the patterns in which
two terms co-occur. For example, the relation ${\rm CauseEffect}(X,Y)$ may be
expressed as ``$X$ causes $Y$'', ``$Y$ from $X$'', ``$Y$ due to $X$'',
``$Y$ because of $X$'', and so on. Likewise, the pattern ``$Y$ from $X$''
may be an expression of ${\rm CauseEffect}(X,Y)$ (``sick from bacteria'')
or ${\rm OriginEntity}(X,Y)$ (``oranges from Spain''). However,
for a given $X$ and $Y$, the statistical distribution of patterns in which $X$ and $Y$
co-occur is a reliable signature of the semantic relations between $X$ and $Y$
\shortcite{turney06}.
\end{myenumerate}
\noindent To the extent that LRME works, we believe its success lends some
support to these hypotheses.
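Hypothesis 4 can be illustrated with a deliberately tiny sketch (the "corpus" and the joining patterns are invented for illustration; a real system would use pattern frequencies from a large corpus): pairs linked by the same relation tend to share a distribution of joining patterns, which can be compared with cosine similarity.

```python
# Minimal sketch of hypothesis 4: the distribution of co-occurrence
# patterns is a signature of the semantic relation between two terms.
# The toy corpus below is invented for illustration.
from collections import Counter
from math import sqrt

corpus = [
    "sick from bacteria", "sick because of bacteria",
    "death from disease", "death because of disease",
    "oranges from Spain", "wine from France",
]

def pattern_vector(x, y):
    """Count the joining patterns observed between terms x and y."""
    counts = Counter()
    for phrase in corpus:
        words = phrase.split()
        if x in words and y in words:
            # Replace the two terms with slots; the rest is the pattern.
            counts[" ".join(w if w not in (x, y) else "_" for w in words)] += 1
    return counts

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Cause-effect pairs share a pattern signature; the origin-entity pair
# shares only the ambiguous "_ from _" pattern, so its similarity is lower.
print(cosine(pattern_vector("sick", "bacteria"),
             pattern_vector("death", "disease")))   # high (same signature)
print(cosine(pattern_vector("sick", "bacteria"),
             pattern_vector("oranges", "Spain")))   # lower
```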
\section{The Task}
\label{sec:task}
In this paper, we examine algorithms that generate analogical mappings.
For simplicity, we restrict the task to generating {\em bijective} mappings;
that is, mappings that are both {\em injective} (one-to-one; there is no
instance in which two terms in the source map to the same term in the target)
and {\em surjective} (onto; the source terms cover all of the target terms;
there is no target term that is left out of the mapping). We assume that
the entities that are to be mapped are given as input. Formally, the input
$I$ for the algorithms is two sets of terms, $A$ and $B$.
\begin{equation}
I = \left \{ \left \langle A, B \right \rangle \right \}
\end{equation}
\noindent Since the mappings are bijective, $A$ and $B$ must contain
the same number of terms, $m$.
\begin{align}
A & = \left \{ a_1, a_2, \ldots, a_m \right \} \\
B & = \left \{ b_1, b_2, \ldots, b_m \right \}
\end{align}
\noindent A term, $a_i$ or $b_j$, may consist of a single word ({\em planet}) or
a compound of two or more words ({\em solar system}). The words may be any
part of speech (nouns, verbs, adjectives, or adverbs). The output $O$ is a
bijective mapping $M$ from $A$ to $B$.
\begin{align}
O & = \left \{ M: A \rightarrow B \right \} \\
M(a_i) & \in B \\
M(A) & = \left \{ M(a_1), M(a_2), \ldots, M(a_m) \right \} = B
\end{align}
\noindent The algorithms that we consider here
can accept a batch of multiple independent mapping problems
as input and generate a mapping for each one as output.
\begin{align}
I & = \left \{ \left \langle A_1, B_1 \right \rangle,
\left \langle A_2, B_2 \right \rangle, \ldots,
\left \langle A_n, B_n \right \rangle \right \} \\
O & = \left \{ M_1: A_1 \rightarrow B_1,
M_2: A_2 \rightarrow B_2, \ldots,
M_n: A_n \rightarrow B_n \right \}
\end{align}
Suppose the terms in $A$ are in some arbitrary order $\mathbf{a}$.
\begin{equation}
\mathbf{a} = \left \langle a_1, a_2, \ldots, a_m \right \rangle
\end{equation}
\noindent The mapping function $M: A \rightarrow B$, given $\mathbf{a}$,
determines a unique ordering $\mathbf{b}$ of $B$.
\begin{equation}
\mathbf{b} = \left \langle M(a_1), M(a_2), \ldots, M(a_m) \right \rangle
\end{equation}
\noindent Likewise, an ordering $\mathbf{b}$ of $B$, given $\mathbf{a}$,
defines a unique mapping function $M$. Since there are $m!$ possible
orderings of $B$, there are also $m!$ possible mappings from $A$ to $B$.
The task is to search through the $m!$ mappings and find the best one.
(Section~\ref{sec:problems} shows that there is a relatively high
degree of consensus about which mappings are best.)
Let $P(A,B)$ be the set of all $m!$ bijective mappings from $A$ to $B$.
($P$ stands for {\em permutation}, since each mapping corresponds to
a permutation.)
\begin{align}
P(A,B) & = \left \{ M_1, M_2, \ldots, M_{m!} \right \} \\
m & = \left | A \right | = \left | B \right | \\
m! & = \left | P(A,B) \right |
\end{align}
\noindent In the following experiments, $m$ is $7$ on average and $9$
at most, so $m!$ is usually around $7! = 5,040$ and at most $9! = 362,880$.
It is feasible for us to exhaustively search $P(A,B)$.
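The feasibility claim is easy to check with a short sketch (ours, for illustration): each ordering of $B$ corresponds to one bijective mapping, and the number of orderings stays small for $m \le 9$.

```python
# Sketch: the search space P(A, B) contains m! bijective mappings,
# which is small enough to enumerate exhaustively for m <= 9.
from itertools import permutations
from math import factorial

A = ["solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity"]
B = ["atom", "nucleus", "electron", "charge", "attracts", "revolves",
     "electromagnetism"]
m = len(A)

print(factorial(7), factorial(9))  # 5040 362880

# Each ordering b of B defines one bijective mapping M: A -> B.
mappings = [dict(zip(A, b)) for b in permutations(B)]
assert len(mappings) == factorial(m)
```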
We explore two basic kinds of algorithms for generating analogical
mappings, algorithms based on {\em attributional similarity} and
algorithms based on {\em relational similarity} \shortcite{turney06}.
The attributional similarity between two words, ${\rm sim_a}(a,b) \in \Re$,
depends on the degree of correspondence between the properties of $a$ and $b$.
The more correspondence there is, the greater their attributional similarity.
The relational similarity between two {\em pairs} of words,
${\rm sim_r}(a\!:\!b, c\!:\!d) \in \Re$, depends on the degree of correspondence
between the relations of $a\!:\!b$ and $c\!:\!d$. The more correspondence there is,
the greater their relational similarity. For example, {\em dog} and {\em wolf}
have a relatively high degree of attributional similarity, whereas
{\em dog}$\,:\,${\em bark} and {\em cat}$\,:\,${\em meow} have a relatively
high degree of relational similarity.
Attributional mapping algorithms seek the mapping (or mappings) $M_{\rm a}$ that
maximizes the sum of the attributional similarities between the terms in $A$
and the corresponding terms in $B$. (When there are multiple mappings that
maximize the sum, we break the tie by randomly choosing one of them.)
\begin{equation}
\label{eqn:att-alg}
M_{\rm a} = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\; \sum_{i=1}^{m} {\rm sim_a}(a_i, M(a_i))
\end{equation}
Relational mapping algorithms seek the mapping (or mappings) $M_{\rm r}$ that
maximizes the sum of the relational similarities.
\begin{equation}
\label{eqn:rel-alg}
M_{\rm r} = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\; \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j))
\end{equation}
\noindent In (\ref{eqn:rel-alg}), we assume that ${\rm sim_r}$ is
symmetrical. For example, the degree of relational similarity between
{\em dog}$\,:\,${\em bark} and {\em cat}$\,:\,${\em meow} is the
same as the degree of relational similarity between
{\em bark}$\,:\,${\em dog} and {\em meow}$\,:\,${\em cat}.
\begin{equation}
\label{eqn:symmetry}
{\rm sim_r}(a\!:\!b, c\!:\!d) = {\rm sim_r}(b\!:\!a, d\!:\!c)
\end{equation}
\noindent We also assume that ${\rm sim_r}(a\!:\!a, b\!:\!b)$ is
not interesting; for example, it may be some constant value for
all $a$ and $b$. Therefore (\ref{eqn:rel-alg}) is designed so that
$i$ is always less than $j$.
Let ${\rm score_r}(M)$ and ${\rm score_a}(M)$ be defined as follows.
\begin{align}
\label{eqn:score-r}
{\rm score_r}(M)
& = \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j)) \\
\label{eqn:score-a}
{\rm score_a}(M)
& = \sum_{i=1}^{m} {\rm sim_a}(a_i, M(a_i))
\end{align}
\noindent Now $M_{\rm r}$ and $M_{\rm a}$ may be defined in terms
of ${\rm score_r}(M)$ and ${\rm score_a}(M)$.
\begin{align}
M_{\rm r}
& = \operatornamewithlimits{arg\,max}_{M \in P(A,B)} {\rm score_r}(M) \\
M_{\rm a}
& = \operatornamewithlimits{arg\,max}_{M \in P(A,B)} {\rm score_a}(M)
\end{align}
\noindent $M_{\rm r}$ is the best mapping according to
${\rm sim_r}$ and $M_{\rm a}$ is the best mapping according to
${\rm sim_a}$.
Recall Gentner's \citeyear{gentner91} terms, discussed in
Section~\ref{sec:intro}, {\em mere appearance} (mostly attributional
similarity), {\em analogy} (mostly relational similarity), and
{\em literal similarity} (a mixture of attributional and relational
similarity). We take it that $M_{\rm r}$ is an abstract model of
mapping based on analogy and $M_{\rm a}$ is a model
of mere appearance. For literal similarity, we can combine
$M_{\rm r}$ and $M_{\rm a}$, but we should take care to normalize
${\rm score_r}(M)$ and ${\rm score_a}(M)$ before we combine them.
(We experiment with combining them in Section~\ref{subsec:hybrids}.)
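The two selection rules can be sketched directly from the definitions of ${\rm score_r}$ and ${\rm score_a}$; the toy similarity function below is a stand-in for a real corpus-based measure, and the hand-listed "analogous" pairs are ours, chosen only to make the example run.

```python
# Sketch of the selection rules for M_a and M_r: exhaustive search over
# the m! orderings of B, scoring each candidate mapping.
from itertools import permutations

def score_a(a, b, sim_a):
    """Sum of attributional similarities sim_a(a_i, M(a_i))."""
    return sum(sim_a(a[i], b[i]) for i in range(len(a)))

def score_r(a, b, sim_r):
    """Sum over i < j of sim_r(a_i:a_j, M(a_i):M(a_j))."""
    m = len(a)
    return sum(sim_r((a[i], a[j]), (b[i], b[j]))
               for i in range(m) for j in range(i + 1, m))

def best_mapping(A, B, score, sim):
    """Exhaustively search the m! orderings of B for the best mapping."""
    b_best = max(permutations(B), key=lambda b: score(A, b, sim))
    return dict(zip(A, b_best))

# Toy relational similarity: reward pairs we stipulate to be analogous.
analogous = {(("sun", "planet"), ("nucleus", "electron")),
             (("sun", "mass"), ("nucleus", "charge")),
             (("planet", "mass"), ("electron", "charge"))}

def sim_r(p, q):
    # Symmetry, as in the text: sim_r(a:b, c:d) = sim_r(b:a, d:c).
    return 1.0 if (p, q) in analogous or (p[::-1], q[::-1]) in analogous else 0.0

print(best_mapping(["sun", "planet", "mass"],
                   ["nucleus", "electron", "charge"], score_r, sim_r))
# {'sun': 'nucleus', 'planet': 'electron', 'mass': 'charge'}
```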
\section{Latent Relational Analysis}
\label{sec:lra}
LRME uses a simplified form of Latent Relational Analysis (LRA)
\shortcite{turney05b,turney06} to calculate the relational similarity
between pairs of words. We will briefly describe past work with LRA before
we present LRME.
LRA takes as input $I$ a set of word pairs and generates as output $O$ the
relational similarity ${\rm sim_r}(a_i\!:\!b_i, a_j\!:\!b_j)$ between any
two pairs in the input.
\begin{align}
I & = \left \{ a_1\!:\!b_1, a_2\!:\!b_2, \ldots, a_n\!:\!b_n \right \} \\
O & = \left \{ {\rm sim_r} : I \times I \rightarrow \Re \right \}
\end{align}
\noindent LRA was designed to evaluate proportional analogies.
Proportional analogies have the form $a\!:\!b\!::\!c\!:\!d$,
which means ``$a$ is to $b$ as $c$ is to $d$''. For example,
{\em mason}$\,:\,${\em stone}$\,::\,${\em carpenter}$\,:\,${\em wood}
means ``mason is to stone as carpenter is to wood''. A mason is an artisan
who works with stone and a carpenter is an artisan who works with wood.
We consider proportional analogies to be a special case of bijective
analogical mapping, as defined in Section~\ref{sec:task}, in which
$\left | A \right | = \left | B \right | = m = 2$.
For example, $a_1\!:\!a_2\!::\!b_1\!:\!b_2$ is equivalent to $M_0$ in
(\ref{eqn:ab2}).
\begin{equation}
\label{eqn:ab2}
A = \left \{ a_1, a_2 \right \},\;
B = \left \{ b_1, b_2 \right \},\;
M_0(a_1) = b_1,\;
M_0(a_2) = b_2.
\end{equation}
\noindent From the definition of ${\rm score_r}(M)$ in
(\ref{eqn:score-r}), we have the following result for $M_0$.
\begin{equation}
\label{eqn:rel-qual}
{\rm score_r}(M_0) =
{\rm sim_r}(a_1\!:\!a_2, M_0(a_1)\!:\!M_0(a_2)) =
{\rm sim_r}(a_1\!:\!a_2, b_1\!:\!b_2)
\end{equation}
\noindent That is, the quality of the proportional analogy
{\em mason}$\,:\,${\em stone}$\,::\,${\em carpenter}$\,:\,${\em wood}
is given by ${\rm sim_r}(mason\!:\!stone, carpenter\!:\!wood)$.
Proportional analogies may also be evaluated using attributional
similarity. From the definition of ${\rm score_a}(M)$ in
(\ref{eqn:score-a}), we have the following result for $M_0$.
\begin{equation}
\label{eqn:att-qual}
{\rm score_a}(M_0) =
{\rm sim_a}(a_1, M_0(a_1)) + {\rm sim_a}(a_2, M_0(a_2)) =
{\rm sim_a}(a_1, b_1) + {\rm sim_a}(a_2, b_2)
\end{equation}
\noindent For attributional similarity, the quality of the
proportional analogy
{\em mason}$\,:\,${\em stone}$\,::\,${\em carpenter}$\,:\,${\em wood}
is given by ${\rm sim_a}(mason,carpenter) + {\rm sim_a}(stone,wood)$.
LRA only handles proportional analogies. The main contribution of LRME
is to extend LRA beyond proportional analogies to bijective analogies
for which $m > 2$.
Turney \citeyear{turney06} describes ten potential applications of
LRA: recognizing proportional analogies, structure
mapping theory, modeling metaphor, classifying semantic relations,
word sense disambiguation, information extraction, question answering,
automatic thesaurus generation, information retrieval, and
identifying semantic roles. Two of these applications (evaluating
proportional analogies and classifying semantic relations) are
experimentally evaluated, with state-of-the-art results.
Turney \citeyear{turney06} compares the performance of
relational similarity (\ref{eqn:rel-qual}) and attributional
similarity (\ref{eqn:att-qual}) on the task of solving 374
multiple-choice proportional analogy questions from the SAT college
entrance test. LRA is used to measure relational similarity and a
variety of lexicon-based and corpus-based algorithms are
used to measure attributional similarity. LRA
achieves an accuracy of 56\% on the 374 SAT questions, which
is not significantly different from the average human score
of 57\%. On the other hand, the best performance by attributional
similarity is 35\%. The results show that
attributional similarity is better than random guessing, but not
as good as relational similarity. This
result is consistent with Gentner's \citeyear{gentner91}
theory of the maturation of human similarity judgments.
Turney \citeyear{turney06} also applies LRA to the task of
classifying semantic relations in noun-modifier expressions.
A noun-modifier expression is a phrase, such as {\em laser
printer}, in which the head noun ({\em printer}) is preceded
by a modifier ({\em laser}). The task is to identify the semantic
relation between the noun and the modifier. In this case, the
relation is {\em instrument}; the laser is an {\em instrument}
used by the printer. On a set of 600 hand-labeled noun-modifier
pairs with five different classes of semantic relations, LRA
attains 58\% accuracy.
Turney \citeyear{turney08} employs a variation of LRA for solving
four different language tests, achieving 52\% accuracy on SAT
analogy questions, 76\% accuracy on TOEFL synonym questions,
75\% accuracy on the task of distinguishing synonyms from antonyms,
and 77\% accuracy on the task of distinguishing words that are
similar, words that are associated, and words that are both
similar and associated. The same core algorithm is used for
all four tests, with no tuning of the parameters to the particular
test.
\section{Applications for LRME}
\label{sec:apps}
Since LRME is an extension of LRA, every potential application of LRA is
also a potential application of LRME. The advantage of LRME over LRA is
the ability to handle bijective analogies when $m > 2$ (where
$m = \left | A \right | = \left | B \right |$). In this section,
we consider the kinds of applications that might benefit from this ability.
In Section~\ref{subsec:experiments}, we evaluate LRME on science analogies
and common metaphors, which supports the claim that these two applications
benefit from the ability to handle larger sets of terms. In
Section~\ref{sec:intro}, we saw that identifying semantic
roles \shortcite{gildea02} also involves more than two terms, and
we believe that LRME will be superior to LRA for semantic role labeling.
Semantic relation classification usually assumes that the relations
are binary; that is, a semantic relation is a connection between
two terms \shortcite{rosario01,nastase03,turney06,girju07}. Yuret
observed that binary relations may be linked by
underlying $n$-ary relations.\footnote{Deniz Yuret, personal
communication, February 13, 2007. This observation was in the context
of our work on building the datasets for SemEval 2007
Task 4 \shortcite{girju07}.} For example, Nastase and
Szpakowicz \citeyear{nastase03} defined a taxonomy of 30 binary semantic relations.
Table~\ref{tab:n-ary} shows how six binary relations from Nastase and
Szpakowicz \citeyear{nastase03} can be covered by one \mbox{5-ary} relation,
Agent:Tool:Action:Affected:Theme. An Agent uses a Tool to perform an Action.
Somebody or something is Affected by the Action. The whole event can be
summarized by its Theme.
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lll}
\hline
\multicolumn{2}{c}{\textbf{Nastase and Szpakowicz \citeyear{nastase03}}} & \\
\cline{1-2}
\textbf{Relation} & \textbf{Example} & \textbf{Agent:Tool:Action:Affected:Theme} \\
\hline
agent & student protest & Agent:Action \\
purpose & concert hall & Theme:Tool \\
beneficiary & student discount & Affected:Action \\
instrument & laser printer & Tool:Agent \\
object & metal separator & Affected:Tool \\
object property & sunken ship & Action:Affected \\
\hline
\end{tabular}
\normalsize
\caption {How six binary semantic relations from Nastase and
Szpakowicz \citeyear{nastase03} can be viewed as different fragments
of one \mbox{5-ary} semantic relation.}
\label{tab:n-ary}
\end{table}
In SemEval Task 4, we found it easier to manually tag the datasets
when we expanded binary relations to their underlying \mbox{$n$-ary}
relations \shortcite{girju07}. We believe that this expansion would
also facilitate automatic classification of semantic relations.
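The idea that binary relations are fragments of an underlying $n$-ary relation can be sketched as a data structure (the event encoding and example values are ours, chosen to match the Tool:Agent row for {\em instrument} in the table above):

```python
# Toy sketch: binary semantic relations viewed as fragments (slot pairs)
# of one 5-ary relation Agent:Tool:Action:Affected:Theme.
# The event values below are illustrative only.

event = {
    "Agent": "printer",
    "Tool": "laser",
    "Action": "printing",
    "Affected": "paper",
    "Theme": "document",
}

def fragment(event, slot1, slot2):
    """A binary relation is just a pair of slots from the n-ary relation."""
    return (event[slot1], event[slot2])

# 'instrument' (laser printer) selects the Tool:Agent fragment.
print(fragment(event, "Tool", "Agent"))  # ('laser', 'printer')
```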
The results in Section~\ref{subsec:coherence} suggest that all of
the applications for LRA that we discussed in Section~\ref{sec:lra}
might benefit from being able to handle bijective analogies when $m > 2$.
\section{The Mapping Problems}
\label{sec:problems}
To evaluate our algorithms for analogical mapping, we created twenty mapping
problems, given in Appendix A. The twenty problems consist of ten science
analogy problems, based on examples of analogy in science from Chapter~8 of
Holyoak and Thagard \citeyear{holyoak95}, and ten common metaphor problems,
derived from Lakoff and Johnson \citeyear{lakoff80}.
The tables in Appendix A show our intended mappings for each of the twenty
problems. To validate these mappings, we invited our colleagues in the
Institute for Information Technology to participate in
an experiment. The experiment was hosted on a web server (only accessible
inside our institute) and people participated anonymously, using their web
browsers in their offices. There were 39 volunteers who began the
experiment and 22 who went all the way to the end. In our analysis,
we use only the data from the 22 participants who completed all of the
mapping problems.
The instructions for the participants are in Appendix A. The sequence
of the problems and the order of the terms within a problem were
randomized separately for each participant, to remove any effects
due to order. Table~\ref{tab:agreement} shows the agreement between
our intended mapping and the mappings generated by the participants.
Across the twenty problems, the average agreement was 87.6\%, which is
higher than the agreement figures for many linguistic annotation tasks.
This agreement is impressive, given that the participants had minimal
instructions and no training.
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lllrl}
\hline
\textbf{Type} & \textbf{Mapping} & \textbf{Source $\rightarrow$ Target}
& \textbf{Agreement} & \textbf{$m$} \\
\hline
& A1 & solar system $\rightarrow$ atom & 90.9 & 7 \\
& A2 & water flow $\rightarrow$ heat transfer & 86.9 & 8 \\
& A3 & waves $\rightarrow$ sounds & 81.8 & 8 \\
& A4 & combustion $\rightarrow$ respiration & 79.0 & 8 \\
science
& A5 & sound $\rightarrow$ light & 79.2 & 7 \\
analogies
& A6 & projectile $\rightarrow$ planet & 97.4 & 7 \\
& A7 & artificial selection $\rightarrow$ natural selection & 74.7 & 7 \\
& A8 & billiard balls $\rightarrow$ gas molecules & 88.1 & 8 \\
& A9 & computer $\rightarrow$ mind & 84.3 & 9 \\
& A10 & slot machine $\rightarrow$ bacterial mutation & 83.6 & 5 \\
\hline
& M1 & war $\rightarrow$ argument & 93.5 & 7 \\
& M2 & buying an item $\rightarrow$ accepting a belief & 96.1 & 7 \\
& M3 & grounds for a building $\rightarrow$ reasons for a theory & 87.9 & 6 \\
& M4 & impediments to travel $\rightarrow$ difficulties & 100.0 & 7 \\
common
& M5 & money $\rightarrow$ time & 77.3 & 6 \\
metaphors
& M6 & seeds $\rightarrow$ ideas & 89.0 & 7 \\
& M7 & machine $\rightarrow$ mind & 98.7 & 7 \\
& M8 & object $\rightarrow$ idea & 89.1 & 5 \\
& M9 & following $\rightarrow$ understanding & 96.6 & 8 \\
& M10 & seeing $\rightarrow$ understanding & 78.8 & 6 \\
\hline
Average & & & 87.6 & 7.0 \\
\hline
\end{tabular}
\normalsize
\caption {The average agreement between our intended mappings and the
mappings of the 22 participants. See Appendix A for the details.}
\label{tab:agreement}
\end{table}
The column labeled $m$ gives the number of terms in the set of source terms
for each mapping problem (which is equal to the number of terms in the set of
target terms). For the average problem, $m = 7$.
The third column in Table~\ref{tab:agreement} gives a mnemonic that
summarizes the mapping (e.g., solar system $\rightarrow$ atom). Note that
the mnemonic is not used as input for any of the algorithms, nor
was the mnemonic shown to the participants in the experiment.
The agreement figures in Table~\ref{tab:agreement} for each individual
mapping problem are averages over the $m$ mappings for each problem.
Appendix A gives a more detailed view, showing the agreement for
each individual mapping in the $m$ mappings. The twenty problems contain
a total of 140 individual mappings ($20 \times 7$). Appendix A shows that
every one of these 140 mappings has an agreement of 50\% or higher. That is,
in every case, the majority of the participants agreed with our
intended mapping. (There are two cases where the agreement is
exactly 50\%. See problems A5 in Table~\ref{tab:a1-a5} and M5 in
Table~\ref{tab:m1-m5} in Appendix A.)
If we select the mapping that is chosen by the majority of the 22 participants,
then we will get a perfect score on all twenty problems. More precisely,
if we try all $m!$ mappings for each problem, and select the mapping
that maximizes the sum of the number of participants who agree with
each individual mapping in the $m$ mappings, then we will have a
score of 100\% on all twenty problems. This is strong support for
the intended mappings that are given in Appendix A.
In Section~\ref{sec:task}, we applied Gentner's \citeyear{gentner91} categories
-- {\em mere appearance} (mostly attributional similarity), {\em analogy} (mostly
relational similarity), and {\em literal similarity} (a mixture of attributional
and relational similarity) -- to the mappings $M_{\rm r}$ and $M_{\rm a}$,
where $M_{\rm r}$ is the best mapping according to ${\rm sim_r}$ and $M_{\rm a}$
is the best mapping according to ${\rm sim_a}$. The twenty mapping
problems were chosen as analogy problems; that is, the intended mappings
in Appendix A are meant to be relational mappings, $M_{\rm r}$; mappings that
maximize relational similarity, ${\rm sim_r}$. We have tried to avoid
mere appearance and literal similarity.
In Section~\ref{sec:lrme} we use the twenty mapping problems to evaluate
a relational mapping algorithm (LRME), and in Section~\ref{sec:attributes}
we use them to evaluate several different attributional mapping
algorithms. Our hypothesis is that LRME will perform significantly
better than any of the attributional mapping algorithms on the
twenty mapping problems, because they are analogy problems (not mere
appearance problems and not literal similarity problems).
We expect relational and attributional mapping algorithms
would perform approximately equally well on literal similarity problems,
and we expect that mere appearance problems would favour attributional
algorithms over relational algorithms, but we do not test these latter two
hypotheses, because our primary interest in this paper is analogy-making.
Our goal is to test the hypothesis that there is a
real, practical, effective, measurable difference between the output
of LRME and the output of the various attributional
mapping algorithms. A skeptic might claim that relational similarity
${\rm sim_r}(a\!:\!b, c\!:\!d)$ can be reduced to attributional
similarity ${\rm sim_a}(a,c) + {\rm sim_a}(b,d)$; therefore our
relational mapping algorithm is a complicated solution to an illusory
problem. A slightly less skeptical claim is that relational similarity
versus attributional similarity is a valid distinction in cognitive
psychology, but our relational mapping algorithm does not capture
this distinction. To test our hypothesis and refute these skeptical
claims, we have created twenty analogical mapping problems, and we
will show that LRME handles these problems significantly
better than the various attributional mapping algorithms.
\section{The Latent Relation Mapping Engine}
\label{sec:lrme}
The Latent Relation Mapping Engine (LRME) seeks the mapping $M_{\rm r}$
that maximizes the sum of the relational similarities.
\begin{equation}
\label{eqn:lrme}
M_{\rm r} = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\; \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j))
\end{equation}
\noindent We search for $M_{\rm r}$ by exhaustively evaluating all of the
possibilities. Ties are broken randomly. We use a simplified form of
LRA \shortcite{turney06} to calculate ${\rm sim_r}$.
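As an illustrative sketch of this exhaustive search (the actual implementation, described in Section~\ref{subsec:experiments}, is in Perl; here we use Python with a caller-supplied placeholder for ${\rm sim_r}$):

```python
from itertools import permutations

def best_relational_mapping(A, B, sim_r):
    # Exhaustively score every bijection M: A -> B and keep the best.
    # sim_r is a caller-supplied relational similarity over two pairs of
    # terms (a placeholder for the LRA-based measure described below).
    best_score, best_map = float("-inf"), None
    for image in permutations(B):
        M = dict(zip(A, image))
        score = sum(sim_r((A[i], A[j]), (M[A[i]], M[A[j]]))
                    for i in range(len(A))
                    for j in range(i + 1, len(A)))
        if score > best_score:  # note: ties go to the first permutation
            best_score, best_map = score, M  # found, not broken randomly
    return best_map
```

Because the search enumerates all $m!$ permutations, it is only practical for the small values of $m$ in our mapping problems.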
\subsection{Algorithm}
\label{subsec:algorithm}
Briefly, the idea of LRME is to build a pair-pattern matrix $\mathbf{X}$,
in which the rows correspond to pairs of terms and the columns correspond
to patterns. For example, the row $\mathbf{x}_{i:}$ might correspond to
the pair of terms {\em sun}$\,:\,${\em solar system} and the column
$\mathbf{x}_{:j}$ might correspond to the pattern ``$\ast$ $X$ centered $Y$ $\ast$''.
In these patterns, ``$\ast$'' is a wild card, which can match any single
word. The value of an element $x_{ij}$ in $\mathbf{X}$ is based on the frequency
of the pattern for $\mathbf{x}_{:j}$, when $X$ and $Y$ are instantiated by the
terms in the pair for $\mathbf{x}_{i:}$. For example, if we take the pattern
``$\ast$ $X$ centered $Y$ $\ast$'' and instantiate $X:Y$ with the pair
{\em sun}$\,:\,${\em solar system}, then we have the pattern
``$\ast$ sun centered solar system $\ast$'', and thus the value of the
element $x_{ij}$ is based on the frequency of ``$\ast$ sun centered solar system
$\ast$'' in the corpus. The matrix $\mathbf{X}$ is smoothed with a truncated
singular value decomposition (SVD) \shortcite{golub96} and the relational
similarity ${\rm sim_r}$ between two pairs of terms is given by the cosine
of the angle between the two corresponding row vectors in $\mathbf{X}$.
In more detail, LRME takes as input $I$ a set of mapping problems and
generates as output $O$ a corresponding set of mappings.
\begin{align}
I & = \left \{ \left \langle A_1, B_1 \right \rangle,
\left \langle A_2, B_2 \right \rangle, \ldots,
\left \langle A_n, B_n \right \rangle \right \} \\
O & = \left \{ M_1: A_1 \rightarrow B_1,
M_2: A_2 \rightarrow B_2, \ldots,
M_n: A_n \rightarrow B_n \right \}
\end{align}
\noindent In the following experiments, all twenty mapping problems
(Appendix A) are processed in one batch ($n = 20$).
The first step is to make a list $R$ that contains all pairs
of terms in the input $I$. For each mapping problem
$\left \langle A, B \right \rangle$ in $I$, we add to $R$ all pairs
$a_i:a_j$, such that $a_i$ and $a_j$ are members of $A$, $i \ne j$,
and all pairs
$b_i:b_j$, such that $b_i$ and $b_j$ are members of $B$, $i \ne j$.
If $\left | A \right | = \left | B \right | = m$, then there are
$m(m-1)$ pairs from $A$ and $m(m-1)$ pairs from $B$.\footnote{We have
$m(m-1)$ here, not $m(m-1)/2$, because we need the pairs in both orders.
We only want to calculate ${\rm sim_r}$ for one order of the pairs,
because $i$ is always less than $j$ in (\ref{eqn:lrme}); however, to ensure
that ${\rm sim_r}$ is symmetrical, as in (\ref{eqn:symmetry}), we need to
make the matrix $\mathbf{X}$ symmetrical, by having rows in the
matrix for both orders of every pair.}
A typical pair in $R$ would be {\em sun}$\,:\,${\em solar system}.
We do not allow duplicates in $R$; $R$ is a list of
pair types, not pair tokens. For our twenty mapping problems, $R$
is a list of 1,694 pairs.
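The construction of $R$ amounts to collecting ordered pair types; a minimal Python sketch, assuming the input problems are given as a list of $\left \langle A, B \right \rangle$ term-list pairs (an assumed representation):

```python
from itertools import permutations

def build_pair_list(problems):
    # Collect every ordered pair a_i:a_j (i != j) from each source set A
    # and target set B. Using a set keeps pair types, not pair tokens.
    R = set()
    for A, B in problems:
        for terms in (A, B):
            R.update(permutations(terms, 2))  # m(m-1) ordered pairs
    return sorted(R)
```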
For each pair $r$ in $R$, we make a list $S(r)$ of the phrases
in the corpus that contain the pair $r$. Let $a_i:a_j$ be the terms in the
pair $r$. We search in the corpus for all phrases of the following form:
\begin{equation}
\label{eqn:template}
\mbox{\textbf{``[0 to 1 words] $a_i$ [0 to 3 words] $a_j$ [0 to 1 words]''}}
\end{equation}
\noindent If $a_i:a_j$ is in $R$, then $a_j:a_i$ is also in $R$, so
we find phrases with the members of the pairs in both orders,
$S(a_i:a_j)$ and $S(a_j:a_i)$. The search template (\ref{eqn:template})
is the same as the one used by Turney \citeyear{turney08}.
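As a rough illustration, the template in (\ref{eqn:template}) can be approximated by a regular expression; this is an illustrative stand-in, not the actual Wumpus query syntax:

```python
import re

def template_regex(a_i, a_j):
    # Approximate the search template
    # "[0 to 1 words] a_i [0 to 3 words] a_j [0 to 1 words]"
    # as a regular expression over space-separated words.
    w = r"[A-Za-z]+"
    return re.compile(
        rf"(?:{w} )?{re.escape(a_i)}(?: {w}){{0,3}} {re.escape(a_j)}(?: {w})?"
    )
```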
In the following experiments, we search in a corpus of $5 \times 10^{10}$
English words (about 280 GB of plain text), consisting of web pages gathered
by a web crawler.\footnote{The corpus was collected by Charles Clarke at
the University of Waterloo. We can provide copies of the corpus on request.}
To retrieve phrases from the corpus, we use Wumpus \cite{buettcher05}, an
efficient search engine for passage retrieval
from large corpora.\footnote{Wumpus was developed by Stefan B{\"u}ttcher
and it is available at http://www.wumpus-search.org/.}
With the 1,694 pairs in $R$, we find a total of 1,996,464 phrases in the
corpus, an average of about 1,180 phrases per pair. For the pair
$r$ = {\em sun}$\,:\,${\em solar system}, a typical phrase $s$
in $S(r)$ would be ``a sun centered solar system illustrates''.
Next we make a list $C$ of patterns, based on the phrases we have found.
For each pair $r$ in $R$, where $r = a_i:a_j$, if
we found a phrase $s$ in $S(r)$, then we replace $a_i$ in $s$ with $X$
and we replace $a_j$ with $Y$. The remaining words may
be either left as they are or replaced with a wild card symbol ``$\ast$''.
We then replace $a_i$ in $s$ with $Y$ and $a_j$ with $X$, and
replace the remaining words with wild cards or leave them as
they are. If there are $n$ remaining words in $s$, after $a_i$
and $a_j$ are replaced, then we generate $2^{n+1}$ patterns from $s$,
and we add these patterns to $C$. We only add new patterns to $C$;
that is, $C$ is a list of pattern types, not pattern tokens; there
are no duplicates in $C$.
For example, for the pair {\em sun}$\,:\,${\em solar system},
we found the phrase ``a sun centered solar system illustrates''.
When we replace $a_i:a_j$ with $X:Y$, we have
``a $X$ centered $Y$ illustrates''. There are three remaining words,
so we can generate eight patterns, such as ``a $X$ $\ast$ $Y$ illustrates'',
``a $X$ centered $Y$ $\ast$'', ``$\ast$ $X$ $\ast$ $Y$ illustrates'', and so on.
Each of these patterns is added to $C$. Then we replace $a_i:a_j$
with $Y:X$, yielding ``a $Y$ centered $X$ illustrates''. This
gives us another eight patterns, such as ``a $Y$ centered $X$ $\ast$''.
Thus the phrase ``a sun centered solar system illustrates'' generates
a total of sixteen patterns, which we add to $C$.
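This pattern generation step can be sketched as follows (a simplification that assumes each term occurs verbatim exactly once in the phrase and that neither term is a substring of the other):

```python
from itertools import product

def patterns_from_phrase(phrase, a_i, a_j):
    # Substitute the pair's terms with X and Y in both orders, then emit
    # every combination of keeping or wildcarding the n remaining words:
    # 2^(n+1) patterns in total.
    out = set()
    for x, y in ((a_i, a_j), (a_j, a_i)):
        words = phrase.replace(x, "X").replace(y, "Y").split()
        slots = [k for k, w in enumerate(words) if w not in ("X", "Y")]
        for wilds in product((False, True), repeat=len(slots)):
            pattern = list(words)
            for wild, k in zip(wilds, slots):
                if wild:
                    pattern[k] = "*"
            out.add(" ".join(pattern))
    return out
```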
Now we revise $R$, to make a list of pairs that will correspond to
rows in the frequency matrix $\mathbf{F}$. We remove any pairs from $R$
for which no phrases were found in the corpus, when the terms were
in either order. Let $a_i:a_j$ be the terms in the pair $r$.
We remove $r$ from $R$ if both $S(a_i:a_j)$ and $S(a_j:a_i)$ are empty.
We remove such rows because they would correspond
to zero vectors in the matrix $\mathbf{F}$. This reduces $R$ from 1,694
pairs to 1,662 pairs. Let $n_r$ be the number of pairs in $R$.
Next we revise $C$, to make a list of patterns that will correspond
to columns in the frequency matrix $\mathbf{F}$. In the following
experiments, at this stage, $C$ contains millions of patterns, too many for
efficient processing with a standard desktop computer. We need
to reduce $C$ to a more manageable size. We select the patterns that
are shared by the most pairs.
Let $c$ be a pattern in $C$. Let $r$ be a pair in $R$.
If there is a phrase $s$ in $S(r)$, such that there is a pattern generated
from $s$ that is identical to $c$, then we say that $r$ is one of the
pairs that generated $c$. We sort the patterns in $C$ in descending order of the
number of pairs in $R$ that generated each pattern, and we select the top
$tn_r$ patterns from this sorted list. Following Turney \citeyear{turney08},
we set the parameter $t$ to 20; hence $C$ is reduced to the top 33,240
patterns ($tn_r$ = 20 $\times$ 1,662 = 33,240). Let $n_c$ be the
number of patterns in $C$ ($n_c = tn_r$).
Now that the rows $R$ and columns $C$ are defined, we can build
the frequency matrix $\mathbf{F}$.
Let $r_i$ be the $i$-th pair of terms in $R$ (e.g., let $r_i$ be
{\em sun}$\,:\,${\em solar system}) and let $c_j$ be the $j$-th pattern
in $C$ (e.g., let $c_j$ be ``$\ast$ $X$ centered $Y$ $\ast$'').
We instantiate $X$ and $Y$ in the pattern $c_j$ with the
terms in $r_i$ (``$\ast$ sun centered solar system $\ast$'').
The element $f_{ij}$ in $\mathbf{F}$ is the frequency
of this instantiated pattern in the corpus.
Note that we do not need to search again in the corpus for the instantiated
pattern for $f_{ij}$, in order to find its frequency. In the process of creating
each pattern, we can keep track of how many phrases generated the
pattern, for each pair. We can get the frequency for $f_{ij}$ by
checking our record of the patterns that were generated by $r_i$.
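This bookkeeping can be sketched as a single pass over the phrases, where `patterns_from` is a caller-supplied hook standing in for the pattern generation step above:

```python
from collections import defaultdict

def pattern_frequencies(phrase_lists, patterns_from):
    # While generating patterns, record how many phrases of each pair
    # produced each pattern, so that f_ij never requires a second corpus
    # search. phrase_lists maps each pair r to its phrase list S(r);
    # patterns_from turns one phrase into its set of patterns for a pair.
    freq = defaultdict(int)  # (pair, pattern) -> frequency
    for pair, phrases in phrase_lists.items():
        for phrase in phrases:
            for pattern in patterns_from(phrase, pair):
                freq[(pair, pattern)] += 1
    return freq
```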
The next step is to transform the matrix $\mathbf{F}$ of raw frequencies
into a form $\mathbf{X}$ that enhances the similarity measurement. Turney
\citeyear{turney06} used the log entropy transformation, as suggested
by Landauer and Dumais \citeyear{landauer97}. This is a kind of
tf-idf (term frequency times inverse document frequency) transformation,
which gives more weight to elements in the matrix that are statistically
surprising. However, Bullinaria and Levy \citeyear{bullinaria07} recently
achieved good results with a new transformation, called PPMIC
(Positive Pointwise Mutual Information with Cosine); therefore
LRME uses PPMIC. The raw frequencies in $\mathbf{F}$ are used to calculate
probabilities, from which we can calculate the pointwise mutual information
(PMI) of each element in the matrix. Any element with a negative PMI is then
set to zero.
\begin{align}
p_{ij} & = \frac{f_{ij}}{\sum_{i=1}^{n_r} \sum_{j=1}^{n_c} f_{ij}} \\
p_{i*} & = \frac{\sum_{j=1}^{n_c} f_{ij}}{\sum_{i=1}^{n_r} \sum_{j=1}^{n_c} f_{ij}} \\
p_{*j} & = \frac{\sum_{i=1}^{n_r} f_{ij}}{\sum_{i=1}^{n_r} \sum_{j=1}^{n_c} f_{ij}} \\
\label{eqn:pmi}
{\rm pmi}_{ij} & = \log \left ( \frac{p_{ij}}{p_{i*} p_{*j}} \right ) \\
x_{ij} & =
\left\{
\begin{array}{rl}
{\rm pmi}_{ij} & \mbox{if ${\rm pmi}_{ij} > 0$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{align}
Recall that $r_i$ is the \mbox{$i$-th} pair of terms in $R$ (e.g.,
{\em sun}$\,:\,${\em solar system}) and $c_j$ is the \mbox{$j$-th} pattern
in $C$ (e.g., ``$\ast$ $X$ centered $Y$ $\ast$'').
In (\ref{eqn:pmi}), $p_{ij}$ is the estimated probability
of the pattern $c_j$ instantiated with the pair $r_i$
(``$\ast$ sun centered solar system $\ast$''), $p_{i*}$
is the estimated probability of $r_i$, and $p_{*j}$ is
the estimated probability of $c_j$. If $r_i$ and $c_j$ are
statistically independent, then $p_{i*} p_{*j} = p_{ij}$ (by the
definition of independence), and
thus ${\rm pmi}_{ij}$ is zero (since $\log(1) = 0$). If there is an
interesting semantic relation between the terms in $r_i$, and the
pattern $c_j$ captures an aspect of that semantic relation, then
we should expect $p_{ij}$ to be larger than it would be if
$r_i$ and $c_j$ were independent; hence we should find that
$p_{ij} > p_{i*} p_{*j}$, and thus ${\rm pmi}_{ij}$ is positive.
(See Hypothesis~2 in Section~\ref{sec:hypotheses}.) On the
other hand, terms from completely different domains may avoid
each other, in which case we should find that ${\rm pmi}_{ij}$
is negative. PPMIC is designed to give a high value to
$x_{ij}$ when the pattern $c_j$ captures an aspect of the
semantic relation between the terms in $r_i$; otherwise,
$x_{ij}$ should have a value of zero, indicating that the
pattern $c_j$ tells us nothing about the semantic relation
between the terms in $r_i$.
In our experiments, $\mathbf{F}$ has a density of 4.6\% (the
percentage of nonzero elements) and
$\mathbf{X}$ has a density of 3.8\%. The lower density of $\mathbf{X}$
is due to elements with a negative PMI, which are transformed to zero
by PPMIC.
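The PPMI step of PPMIC (the cosine half is applied later, through the normalized row vectors) can be sketched with numpy; this is an illustration of the equations above, not the actual PDL implementation:

```python
import numpy as np

def ppmi(F):
    # Transform raw frequencies into positive pointwise mutual
    # information: cells where the pair and pattern co-occur no more
    # often than independence predicts become zero.
    total = F.sum()
    p_ij = F / total
    p_i = F.sum(axis=1, keepdims=True) / total  # row marginals p_i*
    p_j = F.sum(axis=0, keepdims=True) / total  # column marginals p_*j
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_ij / (p_i * p_j))
    return np.where(pmi > 0, pmi, 0.0)  # clamp negative (and -inf) PMI
```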
Now we smooth $\mathbf{X}$ by applying a truncated singular value decomposition
(SVD) \shortcite{golub96}. We use SVDLIBC to calculate the SVD of
$\mathbf{X}$.\footnote{SVDLIBC is the work of Doug Rohde and it is available
at http://tedlab.mit.edu/$\scriptstyle\sim$dr/svdlibc/.}
SVDLIBC is designed for sparse (low density) matrices.
SVD decomposes $\mathbf{X}$ into the product of three matrices
$\mathbf{U} \mathbf{\Sigma} \mathbf{V}^\mathsf{T}$, where $\mathbf{U}$
and $\mathbf{V}$ are in column
orthonormal form (i.e., the columns are orthogonal and have unit length,
$\mathbf{U}^\mathsf{T} \mathbf{U} = \mathbf{V}^\mathsf{T} \mathbf{V} = \mathbf{I}$)
and $\mathbf{\Sigma}$ is a diagonal matrix of singular values \shortcite{golub96}.
If $\mathbf{X}$ is of rank $r$, then $\mathbf{\Sigma}$ is also of rank $r$.
Let ${\mathbf{\Sigma}}_k$, where $k < r$, be the diagonal matrix formed from the top $k$
singular values, and let $\mathbf{U}_k$ and $\mathbf{V}_k$ be the matrices produced
by selecting the corresponding columns from $\mathbf{U}$ and $\mathbf{V}$. The matrix
$\mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}_k^\mathsf{T}$ is the matrix of rank $k$
that best approximates the original matrix $\mathbf{X}$, in the sense that it
minimizes the approximation errors. That is,
${\bf \hat X} = \mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}_k^\mathsf{T}$
minimizes $\| {{\bf \hat X} - \mathbf{X}} \|_F$
over all matrices ${\bf \hat X}$ of rank $k$, where $\| \cdot \|_F$
denotes the Frobenius norm \shortcite{golub96}. We may think of this matrix
$\mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}_k^\mathsf{T}$ as a smoothed or compressed
version of the original matrix $\mathbf{X}$. Following Turney \citeyear{turney06},
we set the parameter $k$ to 300.
The relational similarity ${\rm sim_r}$ between two pairs in $R$ is
the inner product of the two corresponding rows in
$\mathbf{U}_k \mathbf{\Sigma}_k \mathbf{V}_k^\mathsf{T}$,
after the rows have been normalized to unit length. We can simplify
calculations by dropping $\mathbf{V}_k$ \cite{deerwester90}.
We take the matrix $\mathbf{U}_k \mathbf{\Sigma}_k$ and normalize
each row to unit length. Let $\mathbf{W}$ be the resulting
matrix. Now let $\mathbf{Z}$ be $\mathbf{W} \mathbf{W}^\mathsf{T}$,
a square matrix of size $n_r \times n_r$. This matrix contains the
cosines of all combinations of two pairs in $R$.
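These steps (truncated SVD, dropping $\mathbf{V}_k$, row normalization, and the cosine matrix $\mathbf{Z}$) can be sketched with numpy, again as an illustration rather than the actual SVDLIBC-based implementation:

```python
import numpy as np

def cosine_matrix(X, k):
    # Smooth X with a rank-k truncated SVD, keep W = U_k Sigma_k
    # (dropping V_k), normalize each row to unit length, and return
    # Z = W W^T, whose entry z_ij is the cosine between pairs i and j.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    W = U[:, :k] * s[:k]  # scale each column of U_k by its singular value
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    W = W / np.where(norms > 0.0, norms, 1.0)  # guard all-zero rows
    return W @ W.T
```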
For a mapping problem $\left \langle A, B \right \rangle$ in $I$,
let $a:a'$ be a pair of terms from $A$ and let $b:b'$ be a pair
of terms from $B$. Suppose that $r_i = a:a'$ and $r_j = b:b'$,
where $r_i$ and $r_j$ are the $i$-th and $j$-th pairs in $R$. Then
${\rm sim_r}(a:a', b:b') = z_{ij}$, where $z_{ij}$ is the
element in the $i$-th row and $j$-th column of $\mathbf{Z}$.
If either $a:a'$ or $b:b'$ is not in $R$, because $S(a:a')$,
$S(a':a)$, $S(b:b')$, or $S(b':b)$ is empty, then we set the
similarity to zero. Finally, for each mapping problem in $I$,
we output the map $M_{\rm r}$ that maximizes the sum of the relational
similarities.
\begin{equation}
M_{\rm r} = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\; \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j))
\end{equation}
The simplified form of LRA used here to calculate ${\rm sim_r}$
differs from LRA used by Turney \citeyear{turney06} in several ways.
In LRME, there is no use of synonyms to generate alternate forms of
the pairs of terms. In LRME, there is no morphological processing of the terms.
LRME uses PPMIC \shortcite{bullinaria07} to process the raw frequencies,
instead of log entropy. Following Turney \citeyear{turney08}, LRME uses a
slightly different search template (\ref{eqn:template}) and LRME sets the
number of columns $n_c$ to $tn_r$, instead of using a constant. In
Section~\ref{subsec:experiments}, we evaluate the impact of two of these
changes (PPMIC and $n_c$), but we have not tested the other changes, which
were mainly motivated by a desire for increased efficiency and simplicity.
\subsection{Experiments}
\label{subsec:experiments}
We implemented LRME in Perl, making external calls
to Wumpus for searching the corpus and to SVDLIBC for calculating SVD.
We used the Perl Net::Telnet package for interprocess communication
with Wumpus, the PDL (Perl Data Language) package for matrix manipulations
(e.g., calculating cosines), and the List::Permutor package to
generate permutations (i.e., to loop through $P(A,B)$).
We ran the following experiments on a dual core AMD Opteron 64
computer, running 64 bit Linux. Most of the running time is spent
searching the corpus for phrases. It took 16 hours
and 27 minutes for Wumpus to fetch the 1,996,464 phrases.
The remaining steps took 52 minutes, of which SVD took 10
minutes. The running time could be cut in half by using
RAID 0 to speed up disk access.
Table~\ref{tab:lrme-baseline} shows the performance of LRME in its baseline
configuration. For comparison, the agreement of the 22 volunteers with
our intended mapping has been copied from Table~\ref{tab:agreement}.
The difference between the performance of LRME (91.5\%) and the human participants
(87.6\%) is not statistically significant (paired t-test, 95\% confidence level).
\begin{table}[htbp]
\small
\centering
\begin{tabular}{llrr}
\hline
& & \multicolumn{2}{c}{\textbf{Accuracy}} \\
\cline{3-4}
\textbf{Mapping} & \textbf{Source $\rightarrow$ Target} & \textbf{LRME} & \textbf{Humans} \\
\hline
A1 & solar system $\rightarrow$ atom & 100.0 & 90.9 \\
A2 & water flow $\rightarrow$ heat transfer & 100.0 & 86.9 \\
A3 & waves $\rightarrow$ sounds & 100.0 & 81.8 \\
A4 & combustion $\rightarrow$ respiration & 100.0 & 79.0 \\
A5 & sound $\rightarrow$ light & 71.4 & 79.2 \\
A6 & projectile $\rightarrow$ planet & 100.0 & 97.4 \\
A7 & artificial selection $\rightarrow$ natural selection & 71.4 & 74.7 \\
A8 & billiard balls $\rightarrow$ gas molecules & 100.0 & 88.1 \\
A9 & computer $\rightarrow$ mind & 55.6 & 84.3 \\
A10 & slot machine $\rightarrow$ bacterial mutation & 100.0 & 83.6 \\
\hline
M1 & war $\rightarrow$ argument & 71.4 & 93.5 \\
M2 & buying an item $\rightarrow$ accepting a belief & 100.0 & 96.1 \\
M3 & grounds for a building $\rightarrow$ reasons for a theory & 100.0 & 87.9 \\
M4 & impediments to travel $\rightarrow$ difficulties & 100.0 & 100.0 \\
M5 & money $\rightarrow$ time & 100.0 & 77.3 \\
M6 & seeds $\rightarrow$ ideas & 100.0 & 89.0 \\
M7 & machine $\rightarrow$ mind & 100.0 & 98.7 \\
M8 & object $\rightarrow$ idea & 60.0 & 89.1 \\
M9 & following $\rightarrow$ understanding & 100.0 & 96.6 \\
M10 & seeing $\rightarrow$ understanding & 100.0 & 78.8 \\
\hline
Average & & 91.5 & 87.6 \\
\hline
\end{tabular}
\normalsize
\caption {LRME in its baseline configuration, compared with human performance.}
\label{tab:lrme-baseline}
\end{table}
In Table~\ref{tab:lrme-baseline}, the column labeled {\em Humans} is the average
of 22 people, whereas the {\em LRME} column is only one algorithm (it is not
an average). Comparing an average of several scores to an individual score
(whether the individual is a human or an algorithm) may give a misleading
impression. In the results for any individual
person, there are typically several 100\% scores and a few scores in the 55--75\%
range. The average mapping problem has seven terms. It is not possible to have
exactly one term mapped incorrectly; if there are any incorrect mappings,
then there must be two or more incorrect mappings. This follows from the
nature of bijections. Therefore a score of $5/7 = 71.4\%$ is not uncommon.
Table~\ref{tab:lrme-histogram} looks at the results from another
perspective. The column labeled {\em LRME wrong} gives the number
of incorrect mappings made by LRME for each of the twenty problems.
The five columns labeled {\em Number of people with $N$ wrong} show,
for various values of $N$, how many of the 22 people made $N$ incorrect
mappings. For the average mapping problem, 15 out of 22 participants had a perfect
score ($N = 0$); of the remaining 7 participants, 5 made only two mistakes
($N = 2$). Table~\ref{tab:lrme-histogram} shows more clearly than
Table~\ref{tab:lrme-baseline} that LRME's performance is not significantly
different from (individual) human performance. (For yet another perspective,
see Section~\ref{subsec:analogies-vs-metaphors}).
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lccccccc}
\hline
& \textbf{LRME} & \multicolumn{5}{c}{\textbf{Number of people with $N$ wrong}} & \\
\cline{3-7}
\textbf{Mapping} & \textbf{wrong} & \textbf{$N = 0$}
& \textbf{$N = 1$} & \textbf{$N = 2$}
& \textbf{$N = 3$} & \textbf{$N \ge 4$} & \textbf{$m$} \\
\hline
A1 & 0 & 16 & 0 & 4 & 2 & 0 & 7 \\
A2 & 0 & 14 & 0 & 5 & 0 & 3 & 8 \\
A3 & 0 & 9 & 0 & 9 & 2 & 2 & 8 \\
A4 & 0 & 9 & 0 & 9 & 0 & 4 & 8 \\
A5 & 2 & 10 & 0 & 7 & 2 & 3 & 7 \\
A6 & 0 & 20 & 0 & 2 & 0 & 0 & 7 \\
A7 & 2 & 8 & 0 & 6 & 6 & 2 & 7 \\
A8 & 0 & 13 & 0 & 8 & 0 & 1 & 8 \\
A9 & 4 & 11 & 0 & 7 & 2 & 2 & 9 \\
A10 & 0 & 13 & 0 & 9 & 0 & 0 & 5 \\
\hline
M1 & 2 & 17 & 0 & 5 & 0 & 0 & 7 \\
M2 & 0 & 19 & 0 & 3 & 0 & 0 & 7 \\
M3 & 0 & 14 & 0 & 8 & 0 & 0 & 6 \\
M4 & 0 & 22 & 0 & 0 & 0 & 0 & 7 \\
M5 & 0 & 9 & 0 & 11 & 0 & 2 & 6 \\
M6 & 0 & 15 & 0 & 4 & 3 & 0 & 7 \\
M7 & 0 & 21 & 0 & 1 & 0 & 0 & 7 \\
M8 & 2 & 18 & 0 & 2 & 1 & 1 & 5 \\
M9 & 0 & 19 & 0 & 3 & 0 & 0 & 8 \\
M10 & 0 & 13 & 0 & 3 & 3 & 3 & 6 \\
\hline
Average & 1 & 15 & 0 & 5 & 1 & 1 & 7 \\
\hline
\end{tabular}
\normalsize
\caption {Another way of viewing LRME versus human performance.}
\label{tab:lrme-histogram}
\end{table}
In Table~\ref{tab:lrme-variations}, we examine the sensitivity of LRME
to the parameter settings. The first row shows the accuracy of the
baseline configuration, as in Table~\ref{tab:lrme-baseline}. The next
eight rows show the impact of varying $k$, the dimensionality of the
truncated singular value decomposition, from 50 to 400. The eight rows
after that show the effect of varying $t$, the column factor, from
5 to 40. The number of columns in the matrix ($n_c$) is given by
the number of rows ($n_r$ = 1,662) multiplied by $t$. The second-to-last row
shows the effect of eliminating the singular value decomposition
from LRME. This is equivalent to setting $k$ to 1,662, the number of
rows in the matrix. The final row gives the result when PPMIC
\shortcite{bullinaria07} is replaced with log entropy \shortcite{turney06}.
LRME is not sensitive to any of these manipulations: None
of the variations in Table~\ref{tab:lrme-variations} perform
significantly differently from the baseline configuration
(paired t-test, 95\% confidence level). (This does not necessarily
mean that the manipulations have no effect; rather, it suggests
that a larger sample of problems would be needed to show
a significant effect.)
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lrrrr}
\hline
\textbf{Experiment} & \textbf{$k$} & \textbf{$t$} & \textbf{$n_c$} & \textbf{Accuracy} \\
\hline
baseline configuration
& 300 & 20 & 33,240 & 91.5 \\
\hline
\multirow{8}{*}{varying $k$}
& 50 & 20 & 33,240 & 89.3 \\
& 100 & 20 & 33,240 & 92.8 \\
& 150 & 20 & 33,240 & 91.3 \\
& 200 & 20 & 33,240 & 92.6 \\
& 250 & 20 & 33,240 & 90.6 \\
& 300 & 20 & 33,240 & 91.5 \\
& 350 & 20 & 33,240 & 90.6 \\
& 400 & 20 & 33,240 & 90.6 \\
\hline
\multirow{8}{*}{varying $t$}
& 300 & 5 & 8,310 & 86.9 \\
& 300 & 10 & 16,620 & 94.0 \\
& 300 & 15 & 24,930 & 94.0 \\
& 300 & 20 & 33,240 & 91.5 \\
& 300 & 25 & 41,550 & 90.1 \\
& 300 & 30 & 49,860 & 90.6 \\
& 300 & 35 & 58,170 & 89.5 \\
& 300 & 40 & 66,480 & 91.7 \\
\hline
dropping SVD
& 1662 & 20 & 33,240 & 89.7 \\
\hline
log entropy
& 300 & 20 & 33,240 & 83.9 \\
\hline
\end{tabular}
\normalsize
\caption {Exploring the sensitivity of LRME to various
parameter settings and modifications.}
\label{tab:lrme-variations}
\end{table}
\section{Attribute Mapping Approaches}
\label{sec:attributes}
In this section, we explore a variety of attribute mapping approaches
for the twenty mapping problems. All of these approaches seek the
mapping $M_{\rm a}$ that maximizes the sum of the attributional similarities.
\begin{equation}
M_{\rm a} = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\; \sum_{i=1}^{m} {\rm sim_a}(a_i, M(a_i))
\end{equation}
\noindent We search for $M_{\rm a}$ by exhaustively evaluating all of the
possibilities. Ties are broken randomly. We use a variety of different
algorithms to calculate ${\rm sim_a}$.
\subsection{Algorithms}
\label{subsec:attrib-algo}
In the following experiments, we test five lexicon-based attributional
similarity measures that use
WordNet:\footnote{WordNet was developed by a team at Princeton
and it is available at http://wordnet.princeton.edu/.}
HSO \shortcite{hirst98}, JC \shortcite{jiang97},
LC \shortcite{leacock98}, LIN \shortcite{lin98}, and
RES \shortcite{resnik95}. All five are implemented in the Perl package
WordNet::Similarity,\footnote{Ted Pedersen's WordNet::Similarity
package is at
http://www.d.umn.edu/$\scriptstyle\sim$tpederse/similarity.html.}
which builds on the
WordNet::QueryData\footnote{Jason Rennie's WordNet::QueryData
package is at
http://people.csail.mit.edu/jrennie/WordNet/.}
package. The core idea behind them is to treat WordNet as
a graph and measure the semantic distance between two terms by
the length of the shortest path between them in the graph.
Similarity increases as distance decreases.
HSO works with nouns, verbs, adjectives, and adverbs, but JC, LC, LIN,
and RES only work with nouns and verbs. We used WordNet::Similarity
to try all possible parts of speech and all possible senses for each
input word. Many adjectives, such as {\em true} and {\em valuable}, also
have noun and verb senses in WordNet, so JC, LC, LIN, and RES are still
able to calculate similarity for them. When the raw form of a word is not
found in WordNet, WordNet::Similarity searches for morphological variations
of the word. When there are multiple similarity scores, for
multiple parts of speech and multiple senses, we select the highest
similarity score. When there is no similarity score, because a word
is not in WordNet, or because JC, LC, LIN, or RES could not find
an alternative noun or verb form for an adjective or adverb, we
set the score to zero.
We also evaluate two corpus-based attributional similarity measures:
PMI-IR \shortcite{turney01} and LSA \shortcite{landauer97}.
The core idea behind them is that ``a word is characterized
by the company it keeps'' \shortcite{firth57}.
The similarity of two terms is measured by the similarity
of their statistical distributions in a corpus.
We used the corpus of Section~\ref{sec:lrme} along with
Wumpus to implement PMI-IR (Pointwise Mutual Information with
Information Retrieval). For LSA (Latent Semantic Analysis), we used
the online demonstration.\footnote{The online demonstration of LSA
is the work of a team at the University of Colorado at Boulder.
It is available at http://lsa.colorado.edu/.} We selected the
{\em Matrix Comparison} option with the {\em General Reading up to
1st year college (300 factors)} topic space and the {\em term-to-term}
comparison type. PMI-IR and LSA work with all parts of speech.
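The idea behind PMI-IR can be sketched with document-level co-occurrence counts. The four-document corpus below is purely illustrative; the actual implementation issues queries against the large corpus of Section~\ref{sec:lrme} through Wumpus.

```python
import math

# Hypothetical toy corpus, for illustration only.
docs = [
    "the sun heats the planet",
    "the planet orbits the sun",
    "the nucleus binds the electron",
    "the electron orbits the nucleus",
]

def pmi(word_x, word_y, documents):
    """Pointwise mutual information estimated from document
    co-occurrence: log2( p(x,y) / (p(x) p(y)) )."""
    n = len(documents)
    has = lambda w: sum(1 for d in documents if w in d.split())
    both = sum(1 for d in documents
               if word_x in d.split() and word_y in d.split())
    if both == 0:
        return float("-inf")
    p_xy = both / n
    p_x, p_y = has(word_x) / n, has(word_y) / n
    return math.log2(p_xy / (p_x * p_y))
```

Here {\em sun} and {\em planet} co-occur more often than chance predicts, so their PMI is positive, whereas {\em sun} and {\em electron} never co-occur.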
Our eighth similarity measure is based on the observation that our
intended mappings map terms that have the same part of speech (see
Appendix A). Let ${\rm POS}(a)$ be the part-of-speech tag assigned
to the term $a$. We use part-of-speech tags to define a measure of
attributional similarity,
${\rm sim_{\scriptscriptstyle POS}}(a, b)$, as follows.
\begin{equation}
\label{eqn:pos}
{\rm sim_{\scriptscriptstyle POS}}(a, b) =
\left\{
\begin{array}{rl}
100 & \mbox{if $a = b$} \\
10 & \mbox{if ${\rm POS}(a) = {\rm POS}(b)$} \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
\noindent We hand-labeled the terms in the mapping problems
with part-of-speech tags \shortcite{santorini90}. Automatic
taggers assume that the words that are to be tagged are embedded
in a sentence, but the terms in our mapping problems are not
in sentences, so their tags are ambiguous. We used our
knowledge of the intended mappings to manually disambiguate the
part-of-speech tags for the terms, thus guaranteeing that corresponding
terms in the intended mapping always have the same tags.
For each of the first seven attributional similarity measures
above, we created seven more similarity measures by combining
them with ${\rm sim_{\scriptscriptstyle POS}}(a, b)$. For example,
let ${\rm sim_{\scriptscriptstyle HSO}}(a, b)$ be the Hirst and
St-Onge \citeyear{hirst98} similarity measure. We combine
${\rm sim_{\scriptscriptstyle POS}}(a, b)$ and
${\rm sim_{\scriptscriptstyle HSO}}(a, b)$ by simply adding them.
\begin{equation}
{\rm sim_{\scriptscriptstyle HSO+POS}}(a, b) =
{\rm sim_{\scriptscriptstyle HSO}}(a, b) +
{\rm sim_{\scriptscriptstyle POS}}(a, b)
\end{equation}
\noindent The values returned by ${\rm sim_{\scriptscriptstyle POS}}(a, b)$
range from 0 to 100, whereas the values returned by
${\rm sim_{\scriptscriptstyle HSO}}(a, b)$ are
much smaller. We chose large values in (\ref{eqn:pos}) so that
getting POS tags to match up has more weight than any of the
other similarity measures. The manual POS
tags and the high weight of
${\rm sim_{\scriptscriptstyle POS}}(a, b)$
give an unfair advantage to the attributional mapping approach,
but the relational mapping approach can afford to be generous.
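A minimal sketch of ${\rm sim_{\scriptscriptstyle POS}}$ and the additive combination follows. The tag dictionary and the constant base measure below are hypothetical stand-ins; in the experiments, the tags are our hand labels and the base measure is one of the seven similarity measures above.

```python
def sim_pos(a, b, tags):
    """sim_POS: 100 for identical terms, 10 for a part-of-speech
    match, 0 otherwise (the three cases are mutually exclusive)."""
    if a == b:
        return 100
    if tags.get(a) is not None and tags.get(a) == tags.get(b):
        return 10
    return 0

def add_pos(sim, tags):
    """Combine a base measure with sim_POS by simple addition,
    as in sim_HSO+POS."""
    return lambda a, b: sim(a, b) + sim_pos(a, b, tags)

# Hypothetical hand labels and base measure, for illustration.
tags = {"sun": "NN", "planet": "NN", "revolve": "VB"}
base = lambda a, b: 1.5
hybrid = add_pos(base, tags)
```

Because the base measures return much smaller values than 10 and 100, a POS match dominates the combined score, as intended.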
\subsection{Experiments}
\label{subsec:attrib-exper}
Table~\ref{tab:attributional} presents the accuracy of the
various measures of attributional similarity. The best
result without POS labels is 55.9\% (HSO). The best result with POS
labels is 76.8\% (LIN+POS). The 91.5\% accuracy of LRME
(see Table~\ref{tab:lrme-baseline}) is significantly higher than
the 76.8\% accuracy of LIN+POS (and thus, of course, significantly
higher than everything else in Table~\ref{tab:attributional};
paired t-test, 95\% confidence level). The average human performance
of 87.6\% (see Table~\ref{tab:agreement}) is also significantly higher
than the 76.8\% accuracy of LIN+POS (paired \mbox{t-test}, 95\% confidence
level). In summary, humans and LRME perform significantly better than all
of the variations of attributional mapping approaches that were tested.
\begin{table}[htbp]
\small
\centering
\begin{tabular}{llr}
\hline
\textbf{Algorithm} & \textbf{Reference} & \textbf{Accuracy} \\
\hline
HSO & Hirst and St-Onge \citeyear{hirst98} & 55.9 \\
JC & Jiang and Conrath \citeyear{jiang97} & 54.7 \\
LC & Leacock and Chodorow \citeyear{leacock98} & 48.5 \\
LIN & Lin \citeyear{lin98} & 48.2 \\
RES & Resnik \citeyear{resnik95} & 43.8 \\
PMI-IR & Turney \citeyear{turney01} & 54.4 \\
LSA & Landauer and Dumais \citeyear{landauer97} & 39.6 \\
\hline
POS (hand-labeled) & Santorini \citeyear{santorini90} & 44.8 \\
\hline
HSO+POS & Hirst and St-Onge \citeyear{hirst98} & 71.1 \\
JC+POS & Jiang and Conrath \citeyear{jiang97} & 73.6 \\
LC+POS & Leacock and Chodorow \citeyear{leacock98} & 69.5 \\
LIN+POS & Lin \citeyear{lin98} & 76.8 \\
RES+POS & Resnik \citeyear{resnik95} & 71.6 \\
PMI-IR+POS & Turney \citeyear{turney01} & 72.8 \\
LSA+POS & Landauer and Dumais \citeyear{landauer97} & 65.8 \\
\hline
\end{tabular}
\normalsize
\caption {The accuracy of attributional mapping approaches for a wide
variety of measures of attributional similarity.}
\label{tab:attributional}
\end{table}
\section{Discussion}
\label{sec:discussion}
In this section, we examine three questions that are suggested by the
preceding results. Is there a difference between the science analogy
problems and the common metaphor problems? Is there an advantage
to combining the relational and attributional mapping approaches?
What is the advantage of the relational mapping approach over the
attributional mapping approach?
\subsection{Science Analogies versus Common Metaphors}
\label{subsec:analogies-vs-metaphors}
Table~\ref{tab:agreement} suggests that science analogies may be
more difficult than common metaphors. This is supported by
Table~\ref{tab:sci-met-human}, which shows how the agreement of
the 22 participants with our intended mapping (see Section~\ref{sec:problems})
varies between the science problems and the metaphor problems.
The science problems have a lower average performance and greater
variation in performance. The difference between the science problems
and the metaphor problems is statistically significant (paired t-test,
95\% confidence level).
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lrrr}
\hline
& \multicolumn{3}{c}{\textbf{Average Accuracy}} \\
\cline{2-4}
\textbf{Participant} & \textbf{All 20}
& \textbf{10 Science} & \textbf{10 Metaphor} \\
\hline
1 & 72.6 & 59.9 & 85.4 \\
2 & 88.2 & 85.9 & 90.5 \\
3 & 90.0 & 86.3 & \textbf{93.8} \\
4 & 71.8 & 56.4 & 87.1 \\
5 & \textbf{95.7} & \textbf{94.2} & \textbf{97.1} \\
6 & 83.4 & 83.9 & 82.9 \\
7 & 79.6 & 73.6 & 85.7 \\
8 & \textbf{91.9} & \textbf{95.0} & 88.8 \\
9 & 89.7 & \textbf{90.0} & 89.3 \\
10 & 80.7 & 81.4 & 80.0 \\
11 & \textbf{94.5} & \textbf{95.7} & \textbf{93.3} \\
12 & 90.6 & 87.4 & \textbf{93.8} \\
13 & \textbf{93.2} & 89.6 & \textbf{96.7} \\
14 & \textbf{97.1} & \textbf{94.3} & \textbf{100.0} \\
15 & 86.6 & 88.5 & 84.8 \\
16 & 80.5 & 80.2 & 80.7 \\
17 & \textbf{93.3} & \textbf{89.9} & \textbf{96.7} \\
18 & 86.5 & 78.9 & \textbf{94.2} \\
19 & \textbf{92.9} & \textbf{96.0} & 89.8 \\
20 & 90.4 & 84.1 & \textbf{96.7} \\
21 & 82.7 & 74.9 & 90.5 \\
22 & \textbf{96.2} & \textbf{94.9} & \textbf{97.5} \\
\hline
Average & 87.6 & 84.6 & 90.7 \\
Standard deviation & 7.2 & 10.8 & 5.8 \\
\hline
\end{tabular}
\normalsize
\caption {A comparison of the difficulty of the science problems
versus the metaphor problems for the 22 participants. The numbers
in bold font are the scores that are above the scores of LRME.}
\label{tab:sci-met-human}
\end{table}
The average science problem has more terms (7.4) than the average
metaphor problem (6.6), which might contribute to the difficulty
of the science problems. However, Table~\ref{tab:terms} shows that
there is no clear relation between the number of terms in a problem
($m$ in Table~\ref{tab:agreement}) and the level of agreement.
We believe that people find the metaphor problems easier than the
science problems because these common metaphors are entrenched
in our language, whereas the science analogies are more
peripheral.
\begin{table}[htbp]
\normalsize
\centering
\begin{tabular}{cc}
\hline
\textbf{Num terms} & \textbf{Agreement} \\
\hline
5 & 86.4 \\
6 & 81.3 \\
7 & 91.1 \\
8 & 86.5 \\
9 & 84.3 \\
\hline
\end{tabular}
\normalsize
\caption {The average agreement among the 22 participants as a
function of the number of terms in the problems.}
\label{tab:terms}
\end{table}
Table~\ref{tab:sci-met-alg} shows that the 16 algorithms studied
here perform slightly worse on the science problems than on the
metaphor problems, but the difference is not statistically
significant (paired t-test, 95\% confidence level). We hypothesize
that the attributional mapping approaches are not performing
well enough to be sensitive to subtle differences between
science analogies and common metaphors.
\begin{table}[htb]
\small
\centering
\begin{tabular}{lrrr}
\hline
& \multicolumn{3}{c}{\textbf{Average Accuracy}} \\
\cline{2-4}
\textbf{Algorithm} & \textbf{All 20} & \textbf{10 Science} & \textbf{10 Metaphor} \\
\hline
LRME & 91.5 & 89.8 & 93.1 \\
\hline
HSO & 55.9 & 57.4 & 54.3 \\
JC & 54.7 & 57.4 & 52.1 \\
LC & 48.5 & 49.6 & 47.5 \\
LIN & 48.2 & 46.7 & 49.7 \\
RES & 43.8 & 39.0 & 48.6 \\
PMI-IR & 54.4 & 49.5 & 59.2 \\
LSA & 39.6 & 37.3 & 41.9 \\
\hline
POS & 44.8 & 42.1 & 47.4 \\
\hline
HSO+POS & 71.1 & 66.9 & 75.2 \\
JC+POS & 73.6 & 78.1 & 69.2 \\
LC+POS & 69.5 & 70.8 & 68.2 \\
LIN+POS & 76.8 & 68.8 & 84.8 \\
RES+POS & 71.6 & 70.3 & 72.9 \\
PMI-IR+POS & 72.8 & 65.7 & 79.9 \\
LSA+POS & 65.8 & 69.1 & 62.4 \\
\hline
Average & 61.4 & 59.9 & 62.9 \\
Standard deviation & 14.7 & 15.0 & 15.3 \\
\hline
\end{tabular}
\normalsize
\caption {A comparison of the difficulty of the science problems
versus the metaphor problems for the 16 algorithms.}
\label{tab:sci-met-alg}
\end{table}
Incidentally, these tables give us another view of the performance
of LRME in comparison to human performance. The first row in
Table~\ref{tab:sci-met-alg} shows the performance of LRME on the
science and metaphor problems. In Table~\ref{tab:sci-met-human},
we have marked in bold font the cases where human scores are
greater than LRME's scores. For all 20 problems, there
are 8 such cases; for the 10 science problems, there are 8
such cases; for the 10 metaphor problems, there are 10 such
cases. This is further evidence that LRME's performance is
not significantly different from human performance. LRME is near
the middle of the range of performance of the 22 human participants.
\subsection{Hybrid Relational-Attributional Approaches}
\label{subsec:hybrids}
Recall the definitions of ${\rm score_r}(M)$ and ${\rm score_a}(M)$ given
in Section~\ref{sec:task}.
\begin{align}
{\rm score_r}(M)
& = \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j)) \\
{\rm score_a}(M)
& = \sum_{i=1}^{m} {\rm sim_a}(a_i, M(a_i))
\end{align}
\noindent We can combine the scores by simply adding them or multiplying
them, but ${\rm score_r}(M)$ and ${\rm score_a}(M)$ may be quite different
in the scales and distributions of their values; therefore we first
normalize them to probabilities.
\begin{align}
{\rm prob_r}(M)
& = \frac{{\rm score_r}(M)}{\sum_{M_i \in P(A,B)} {\rm score_r}(M_i)} \\
{\rm prob_a}(M)
& = \frac{{\rm score_a}(M)}{\sum_{M_i \in P(A,B)} {\rm score_a}(M_i)}
\end{align}
\noindent For these probability estimates, we assume that ${\rm score_r}(M) \ge 0$
and ${\rm score_a}(M) \ge 0$. If necessary, a constant value may be added
to the scores, to ensure that they are not negative. Now we can combine
the scores by adding or multiplying the probabilities.
\begin{align}
M_{\rm r+a}
& = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\big( {\rm prob_r}(M) + {\rm prob_a}(M) \big) \\
M_{\rm r \times a}
& = \operatornamewithlimits{arg\,max}_{M \in P(A,B)}
\big( {\rm prob_r}(M) \times {\rm prob_a}(M) \big)
\end{align}
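The two hybrids can be sketched as an exhaustive search over $P(A,B)$. The similarity functions in the usage example are hypothetical toys, and the sketch assumes all scores are non-negative with a positive sum, as required for the probability estimates.

```python
from itertools import permutations

def best_hybrid(A, B, sim_r, sim_a, combine="add"):
    """Normalize score_r and score_a over all candidate mappings to
    probabilities, then return the argmax of their sum or product."""
    maps = [dict(zip(A, perm)) for perm in permutations(B)]
    def score_r(M):
        return sum(sim_r((A[i], A[j]), (M[A[i]], M[A[j]]))
                   for i in range(len(A)) for j in range(i + 1, len(A)))
    def score_a(M):
        return sum(sim_a(a, M[a]) for a in A)
    r = [score_r(M) for M in maps]
    a = [score_a(M) for M in maps]
    pr = [x / sum(r) for x in r]  # assumes non-negative scores, sum > 0
    pa = [x / sum(a) for x in a]
    op = (lambda x, y: x + y) if combine == "add" else (lambda x, y: x * y)
    best = max(range(len(maps)), key=lambda i: op(pr[i], pa[i]))
    return maps[best]
```

For a two-term problem, for instance, the mapping whose single relational term and two attributional terms both score highest wins under either combination.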
Table~\ref{tab:hybrids} shows the accuracy when LRME is combined
with LIN+POS (the best attributional mapping algorithm in
Table~\ref{tab:attributional}, with an accuracy of 76.8\%)
or with HSO (the best attributional mapping algorithm that
does not use the manual POS tags, with an accuracy of 55.9\%).
We try both adding and multiplying probabilities. On its own,
LRME has an accuracy of 91.5\%. Combining LRME with LIN+POS
increases the accuracy to 94.0\%, but this improvement is not
statistically significant (paired t-test, 95\% confidence level).
Combining LRME with HSO results in a decrease in accuracy.
The decrease is not significant when the probabilities are
multiplied (85.4\%), but it is significant when the probabilities
are added (78.5\%).
\begin{table}[htbp]
\small
\centering
\begin{tabular}{lllr}
\hline
\multicolumn{2}{c}{\textbf{Components}} \\
\cline{1-2}
\textbf{Relational} & \textbf{Attributional} & \textbf{Combination} & \textbf{Accuracy} \\
\hline
LRME & LIN+POS & add probabilities & 94.0 \\
LRME & LIN+POS & multiply probabilities & 94.0 \\
LRME & HSO & add probabilities & 78.5 \\
LRME & HSO & multiply probabilities & 85.4 \\
\hline
\end{tabular}
\normalsize
\caption {The performance of four different hybrids of relational
and attributional mapping approaches.}
\label{tab:hybrids}
\end{table}
In summary, the experiments show no significant advantage to
combining LRME with attributional mapping. However, it is possible
that a larger sample of problems would show a significant advantage.
Also, the combination methods we explored (addition and multiplication
of probabilities) are elementary. A more sophisticated approach, such
as a weighted combination, may perform better.
\subsection{Coherent Relations}
\label{subsec:coherence}
We hypothesize that LRME benefits from a kind of coherence among the
relations. On the other hand, attributional mapping approaches do not
involve this kind of coherence.
Suppose we swap two of the terms in a mapping. Let $M$ be the original
mapping and let $M'$ be the new mapping, where $M'(a_1) = M(a_2)$,
$M'(a_2) = M(a_1)$, and $M'(a_i) = M(a_i)$ for $i > 2$. With attributional
similarity, the impact of this swap on the score of the mapping is limited.
Part of the score is not affected.
\begin{align}
{\rm score_a}(M) & = {\rm sim_a}(a_1, M(a_1)) + {\rm sim_a}(a_2, M(a_2))
+ \sum_{i=3}^{m} {\rm sim_a}(a_i, M(a_i)) \\
{\rm score_a}(M') & = {\rm sim_a}(a_1, M(a_2)) + {\rm sim_a}(a_2, M(a_1))
+ \sum_{i=3}^{m} {\rm sim_a}(a_i, M(a_i))
\end{align}
\noindent On the other hand, with relational similarity, the impact
of a swap is not limited in this way. A change to any part of the
mapping affects the whole score. There is a kind of global coherence
to relational similarity that is lacking in attributional similarity.
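The asymmetry can be made concrete by counting summands. Swapping the images of $a_1$ and $a_2$ can change at most two of the $m$ attributional summands, but $2(m-2)+1$ of the $m(m-1)/2$ relational summands: every pair that touches $a_1$ or $a_2$, including the pair $(a_1, a_2)$ itself.

```python
def affected_summands(m):
    """Count the relational pair terms that can change when the images
    of a_1 and a_2 are swapped, versus the total number of pair terms."""
    pairs = [(i, j) for i in range(1, m + 1) for j in range(i + 1, m + 1)]
    touched = [p for p in pairs if 1 in p or 2 in p]
    return len(touched), len(pairs)
```

For $m = 7$, a swap can alter 11 of the 21 relational terms, but only 2 of the 7 attributional terms.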
Testing the hypothesis that LRME benefits from coherence is somewhat
complicated, because we need to design the experiment so that the
coherence effect is isolated from any other effects. To do this, we
move some of the terms outside of the accuracy calculation.
Let $M_*: A \rightarrow B$ be one of our twenty mapping problems,
where $M_*$ is our intended mapping and
$m = \left | A \right | = \left | B \right |$. Let $A'$ be a randomly
selected subset of $A$ of size $m'$. Let $B'$ be $M_*(A')$, the subset
of $B$ to which $M_*$ maps $A'$.
\begin{align}
A' & \subset A \\
B' & \subset B \\
B' & = M_*(A') \\
m' & = \left | A' \right | = \left | B' \right | \\
m' & < m
\end{align}
\noindent There are two ways that we might use LRME to generate a mapping
$M': A' \rightarrow B'$ for this new reduced mapping problem,
{\em internal coherence} and {\em total coherence}.
\begin{myenumerate}
\item \textbf{Internal coherence:} We can select $M'$ based on
$\left \langle A', B' \right \rangle$ alone.
\begin{align}
A' & = \left \{ a_1, ..., a_{m'} \right \} \\
B' & = \left \{ b_1, ..., b_{m'} \right \} \\
M' & = \operatornamewithlimits{arg\,max}_{M \in P(A',B')}
\; \sum_{i=1}^{m'} \sum_{j=i+1}^{m'} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j))
\end{align}
\noindent In this case, $M'$ is chosen based only on the relations
that are internal to $\left \langle A', B' \right \rangle$.
\item \textbf{Total coherence:} We can select $M'$ based on
$\left \langle A, B \right \rangle$ and the knowledge that
$M'$ must satisfy the constraint that $M'(A') = B'$. (This knowledge
is also embedded in internal coherence.)
\begin{align}
A & = \left \{ a_1, ..., a_m \right \} \\
B & = \left \{ b_1, ..., b_m \right \} \\
P'(A,B) & = \left \{ M | \; M \in P(A,B) \; {\rm and} \; M(A') = B' \right \} \\
M' & = \operatornamewithlimits{arg\,max}_{M \in P'(A,B)}
\; \sum_{i=1}^{m} \sum_{j=i+1}^{m} {\rm sim_r}(a_i\!:\!a_j, M(a_i)\!:\!M(a_j))
\end{align}
\noindent In this case, $M'$ is chosen using both the relations
that are internal to $\left \langle A', B' \right \rangle$ and
other relations in $\left \langle A, B \right \rangle$ that are
external to $\left \langle A', B' \right \rangle$.
\end{myenumerate}
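Total coherence can be sketched as a constrained exhaustive search. The toy problem in the usage example is hypothetical; note that the constraint $M(A') = B'$ filters the full mapping space, the score uses every relational term in $\left \langle A, B \right \rangle$, and the winning mapping is restricted to the subproblem before accuracy is measured.

```python
from itertools import permutations

def total_coherence(A, B, A_sub, B_sub, sim_r):
    """Search all mappings of the full problem that satisfy M(A') = B',
    score each with every relational term in <A, B>, and return the
    winner restricted to the subproblem <A', B'>."""
    best, best_score = None, float("-inf")
    for perm in permutations(B):
        M = dict(zip(A, perm))
        if {M[a] for a in A_sub} != set(B_sub):
            continue  # enforce the constraint M(A') = B'
        score = sum(sim_r((A[i], A[j]), (M[A[i]], M[A[j]]))
                    for i in range(len(A)) for j in range(i + 1, len(A)))
        if score > best_score:
            best, best_score = M, score
    return {a: best[a] for a in A_sub}
```

In the test below, the only informative relations involve a term outside the subproblem, so internal coherence alone could not break the tie, but total coherence can.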
Suppose that we calculate the accuracy of these two methods based
only on the sub\-problem $\left \langle A', B' \right \rangle$. At first
it might seem that there is no advantage to total coherence, because
it must explore a larger space of possible mappings than internal
coherence (since $\left | P'(A,B) \right |$ is larger
than $\left | P(A',B') \right |$),
but the additional terms that it explores are not involved in
calculating the accuracy. However, we hypothesize that total
coherence will have a higher accuracy than internal coherence,
because the additional external relations help to select the
correct mapping.
To test this hypothesis, we set $m'$ to 3 and we randomly generated
ten new reduced mapping problems for each of the twenty problems
(i.e., a total of 200 new problems of size 3). The average
accuracy of internal coherence was 93.3\%, whereas the average
accuracy of total coherence was 97.3\%. The difference is
statistically significant (paired t-test, 95\% confidence level).
On the other hand, the attributional mapping approaches cannot
benefit from total coherence, because there is no connection
between the attributes that are in $\left \langle A', B' \right \rangle$
and the attributes that are outside. We can decompose
${\rm score_a}(M)$ into two independent parts.
\begin{align}
A'' & = A \setminus A' \\
A & = A' \cup A'' \\
P'(A,B) & = \left \{ M | \; M \in P(A,B) \; {\rm and} \; M(A') = B' \right \} \\
M' & = \operatornamewithlimits{arg\,max}_{M \in P'(A,B)}
\; \sum_{a_i \in A} {\rm sim_a}(a_i, M(a_i)) \\
& = \operatornamewithlimits{arg\,max}_{M \in P'(A,B)}
\; \left ( \sum_{a_i \in A'} {\rm sim_a}(a_i, M(a_i)) +
\; \sum_{a_i \in A''} {\rm sim_a}(a_i, M(a_i)) \right )
\end{align}
\noindent These two parts can be optimized independently. Thus
the terms that are external to $\left \langle A', B' \right \rangle$
have no influence on the part of $M'$ that covers
$\left \langle A', B' \right \rangle$.
Relational mapping cannot be decomposed into independent parts in
this way, because the relations connect the parts. This gives relational
mapping approaches an inherent advantage over attributional mapping
approaches.
To confirm this analysis, we compared internal and total coherence
using LIN+POS on the same 200 new problems of size 3. The
average accuracy of internal coherence was 88.0\%, whereas the
average accuracy of total coherence was 87.0\%. The difference
is not statistically significant (paired t-test, 95\% confidence
level). (The only reason that there is any difference is that, when
two mappings have the same score, we break the ties randomly. This
causes random variation in the accuracy.)
The benefit from coherence suggests that we can make analogy
mapping problems easier for LRME by adding more terms. The
difficulty is that the new terms cannot be randomly chosen;
they must fit with the logic of the analogy and not overlap
with the existing terms.
Of course, this is not the only important difference between
the relational and attributional mapping approaches.
We believe that the most important difference is that relations
are more reliable and more general than
attributes, when using past experiences to make predictions
about the future \shortcite{hofstadter01,gentner03}.
Unfortunately, this hypothesis is more difficult
to evaluate experimentally than our hypothesis about coherence.
\section{Related Work}
\label{sec:related}
French \citeyear{french02} gives a good survey of computational
approaches to analogy-making, from the perspective of cognitive science
(where the emphasis is on how well computational systems model human
performance, rather than how well the systems perform). We will
sample a few systems from his survey and add a
few more that were not mentioned.
French \citeyear{french02} categorizes analogy-making systems as
{\em symbolic}, {\em connectionist}, or {\em symbolic-connectionist
hybrids}. G{\"a}rdenfors \citeyear{gardenfors04} proposes another
category of representational systems for AI and cognitive science, which he
calls {\em conceptual spaces}. These spatial or geometric
systems are common in information retrieval and machine learning
\shortcite{widdows04,rijsbergen04}. An influential example
is Latent Semantic Analysis \shortcite{landauer97}. The first
spatial approaches to analogy-making began to appear around the same
time as French's \citeyear{french02} survey. LRME takes a spatial
approach to analogy-making.
\subsection{Symbolic Approaches}
Computational approaches to analogy-making date back to {\sc Analogy}
\shortcite{evans64} and Argus \shortcite{reitman65}. Both of these
systems were designed to solve proportional analogies (analogies
in which $\left | A \right | = \left | B \right | = 2$; see
Section~\ref{sec:lra}). {\sc Analogy} could solve proportional
analogies with simple geometric figures and Argus could solve
simple word analogies. These systems used hand-coded rules
and were only able to solve the limited range of problems that
their designers had anticipated and coded in the rules.
French \citeyear{french02} cites Structure Mapping Theory (SMT)
\shortcite{gentner83} and the Structure Mapping Engine (SME)
\shortcite{falkenhainer89} as the prime examples of symbolic
approaches:
\begin{quote}
SMT is unquestionably the most influential work to date on the
modeling of analogy-making and has been applied in a wide range
of contexts ranging from child development to folk physics.
SMT explicitly shifts the emphasis in analogy-making to
the structural similarity between the source and target domains.
Two major principles underlie SMT:
\begin{myitemize}
\item the relation-matching principle: good analogies are determined by
mappings of relations and not attributes (originally only identical
predicates were mapped) and
\item the systematicity principle: mappings of coherent systems of
relations are preferred over mappings of individual relations.
\end{myitemize}
This structural approach was intended to produce a domain-independent
mapping process.
\end{quote}
\noindent LRME follows both of these principles. LRME uses only
relational similarity; no attributional similarity is involved
(see Section~\ref{subsec:algorithm}). Coherent systems of relations
are preferred over mappings of individual relations (see
Section~\ref{subsec:coherence}). However, the spatial (statistical,
corpus-based) approach of LRME is quite different from the symbolic
(logical, hand-coded) approach of SME.
Martin \citeyear{martin92} uses a symbolic approach to handle
conventional metaphors. Gentner, Bowdle, Wolff, and Boronat
\citeyear{gentner01} argue that novel metaphors are processed as
analogies, but conventional metaphors are recalled from memory without
special processing. However, the line between conventional and novel
metaphor can be unclear.
Dolan \citeyear{dolan95} describes an algorithm that can
extract conventional metaphors from a dictionary. A semantic parser is
used to extract semantic relations from the Longman Dictionary of
Contemporary English (LDOCE). A symbolic algorithm finds
metaphorical relations between words, using the extracted
relations.
Veale \citeyear{veale03,veale04} has developed a symbolic approach
to analogy-making, using WordNet as a lexical resource. Using
a spreading activation algorithm, he achieved a score of 43.0\%
on a set of 374 multiple-choice lexical proportional analogy
questions from the SAT college entrance test \shortcite{veale04}.
Lepage \citeyear{lepage98} has demonstrated that a symbolic
approach to proportional analogies can be used for morphology
processing. Lepage and Denoual \citeyear{lepage05} apply
a similar approach to machine translation.
\subsection{Connectionist Approaches}
Connectionist approaches to analogy-making include ACME \shortcite{holyoak89}
and LISA \shortcite{hummel97}. Like symbolic approaches, these systems
use hand-coded knowledge representations, but the search for mappings
takes a connectionist approach, in which there are nodes with weights
that are incrementally updated over time, until the system reaches
a stable state.
\subsection{Symbolic-Connectionist Hybrid Approaches}
The third family examined by French \citeyear{french02} is
hybrid approaches, containing elements of both the
symbolic and connectionist approaches. Examples include
Copycat \shortcite{mitchell93} and Tabletop
\shortcite{french95}. Much of the work in the
Fluid Analogies Research Group (FARG) concerns
symbolic-connectionist hybrids \shortcite{hofstadter95}.
\subsection{Spatial Approaches}
Marx, Dagan, Buhmann, and Shamir \citeyear{marx02} present the
{\em coupled clustering} algorithm, which uses a feature vector
representation to find analogies in collections of text.
For example, given documents on Buddhism and Christianity,
it finds related terms, such as \{{\em school, Mahayana, Zen}\}
for Buddhism and \{{\em tradition, Catholic, Protestant}\} for
Christianity.
Mason \citeyear{mason04} describes the CorMet system
for extracting conventional metaphors from text.
CorMet is based on clustering feature vectors that
represent the selectional preferences of verbs. Given
keywords for the source domain {\em laboratory} and
the target domain {\em finance}, it is able to discover
mappings such as {\em liquid} $\rightarrow$ {\em income}
and {\em container} $\rightarrow$ {\em institution}.
Turney, Littman, Bigham, and Shnayder \citeyear{turney03} present
a system for solving lexical proportional analogy
questions from the SAT college entrance test, which
combines thirteen different modules. Twelve of the
modules use either attributional similarity or
a symbolic approach to relational similarity, but one
module uses a spatial (feature vector) approach to
measuring relational similarity. This module worked
much better than any of the other modules; therefore,
it was studied in more detail by Turney and Littman
\citeyear{turney05a}. The relation between a pair of
words is represented by a vector, in which the elements are
pattern frequencies. This is similar to LRME, but one important
difference is that Turney and Littman \citeyear{turney05a} used
a fixed, hand-coded set of 128 patterns, whereas LRME
automatically generates a variable number of patterns from the
given corpus (33,240 patterns in our experiments here).
Turney \citeyear{turney05b} introduced Latent Relational
Analysis (LRA), which was examined more thoroughly
by Turney \citeyear{turney06}. LRA achieves human-level
performance on a set of 374 multiple-choice proportional
analogy questions from the SAT college entrance exam.
LRME uses a simplified form of LRA. A similar simplification
of LRA is used by Turney \citeyear{turney08}, in a system for
processing analogies, synonyms, antonyms, and associations.
The contribution of LRME is to go beyond proportional
analogies, to larger systems of analogical mappings.
\subsection{General Theories of Analogy and Metaphor}
Many theories of analogy-making and metaphor either do not involve
computation or they suggest general principles and concepts that are
not specific to any particular computational approach.
The design of LRME has been influenced by several theories
of this type \shortcite{gentner83,hofstadter95,holyoak95,hofstadter01,gentner03}.
Lakoff and Johnson \citeyear{lakoff80} provide extensive evidence
that metaphor is ubiquitous in language and thought. We believe that
a system for analogy-making should be able to handle metaphorical
language, which is why ten of our analogy problems are derived
from Lakoff and Johnson \citeyear{lakoff80}. We agree with
their claim that a metaphor does not merely involve a
superficial relation between a couple of words; rather, it
involves a systematic set of mappings between two domains.
Thus our analogy problems involve larger sets of words, beyond
proportional analogies.
Holyoak and Thagard \citeyear{holyoak95} argue that analogy-making
is central in our daily thought, and especially in finding
creative solutions to new problems. Our ten scientific analogies
were derived from their examples of analogy-making in
scientific creativity.
\section{Limitations and Future Work}
\label{sec:future}
In Section~\ref{sec:lra}, we mentioned ten applications for LRA,
and in Section~\ref{sec:apps} we claimed that the results of
the experiments in Section~\ref{subsec:coherence} suggest that
LRME may perform better than LRA on all ten of these applications,
due to its ability to handle bijective analogies when $m > 2$.
Our focus in future work will be testing this hypothesis. In
particular, the task of semantic role labeling, discussed in
Section~\ref{sec:intro}, seems to be a good candidate application
for LRME.
The input to LRME is simpler than the input to SME (compare
Figures \ref{fig:solar} and \ref{fig:atom} in Section~\ref{sec:intro}
with Table~\ref{tab:input}), but there is still some human effort involved
in creating the input. LRME is not immune to the criticism of Chalmers,
French, and Hofstadter \citeyear{chalmers92}, that the human who generates
the input is doing more work than the computer that makes the mappings,
although it is not a trivial matter to find the right
mapping out of 5,040 (7!) choices.
In future work, we would like to relax the requirement that
$\left \langle A, B \right \rangle$ must be a bijection (see
Section~\ref{sec:task}), by adding irrelevant words (distractors)
and synonyms. The mapping algorithm will be forced to decide
what terms to include in the mapping and what terms to leave out.
We would also like to develop an algorithm that can take a proportional
analogy ($m = 2$) as input (e.g., sun:planet::nucleus:electron) and automatically
expand it to a larger analogy ($m > 2$, e.g., Table~\ref{tab:output}).
That is, it would automatically search the corpus for new
terms to add to the analogy.
The next step would be to give the computer only the topic
of the source domain (e.g., solar system) and the topic of
the target domain (e.g., atomic structure), and let it work
out the rest on its own. This might be possible by combining
ideas from LRME with ideas from coupled clustering
\shortcite{marx02} and CorMet \shortcite{mason04}.
It seems that analogy-making is triggered in people when we
encounter a problem \shortcite{holyoak95}. The problem defines
the target for us, and we immediately start searching for
a source. Analogical mapping enables us to transfer our knowledge
of the source to the target, hopefully leading to a solution to
the problem. This suggests that the input to the ideal
analogical mapping algorithm would be simply a statement
that there is a problem (e.g., What is the structure of the
atom?). Ultimately, the computer might find the problems on its
own as well. The only input would be a large corpus.
The algorithms we have considered here all perform exhaustive
search of the set of possible mappings $P(A,B)$. This is
acceptable when the sets are small, as they are here, but
it will be problematic for larger problems. In future work,
it will be necessary to use heuristic search algorithms
instead of exhaustive search.
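One family of heuristic alternatives is local search. The sketch below, which is not part of our experiments, is hill climbing over pairwise swaps of images with random restarts; it evaluates far fewer than $m!$ mappings, at the cost of possibly missing the global optimum.

```python
import random
from itertools import combinations

def hill_climb(A, B, score, restarts=10, seed=0):
    """Greedy local search over bijections A -> B: repeatedly apply any
    pairwise swap of images that improves the mapping score, restarting
    from random bijections to escape poor starting points."""
    rng = random.Random(seed)
    best_M, best_s = None, float("-inf")
    for _ in range(restarts):
        images = list(B)
        rng.shuffle(images)
        cur = score(dict(zip(A, images)))
        improved = True
        while improved:
            improved = False
            for i, j in combinations(range(len(images)), 2):
                images[i], images[j] = images[j], images[i]
                s = score(dict(zip(A, images)))
                if s > cur:
                    cur, improved = s, True
                else:
                    images[i], images[j] = images[j], images[i]  # undo
        if cur > best_s:
            best_M, best_s = dict(zip(A, images)), cur
    return best_M
```

Whether such a heuristic preserves the benefit of total coherence on large problems is an empirical question for future work.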
It takes almost 18 hours for LRME to process the twenty
mapping problems (Section~\ref{sec:lrme}). With better
hardware and some changes to the software, this time
could be significantly reduced. For even greater speed, the
algorithm could run continuously, building a large
database of vector representations of term pairs,
so that it is ready to create mappings as soon
as a user requests them. This is similar to the vision
of Banko and Etzioni \citeyear{banko07}.
LRME, like LRA and LSA \shortcite{landauer97}, uses a truncated singular
value decomposition (SVD) to smooth the matrix. Many other algorithms
have been proposed for smoothing matrices. In our past work
with LRA \shortcite{turney06}, we experimented with Nonnegative
Matrix Factorization (NMF) \shortcite{lee99}, Probabilistic
Latent Semantic Analysis (PLSA) \shortcite{hofmann99},
Iterative Scaling (IS) \shortcite{ando00}, and Kernel Principal
Components Analysis (KPCA) \cite{scholkopf97}. We had some
interesting results with small matrices (around 1,000 $\times$ 2,000),
but none of the algorithms seemed substantially better than truncated
SVD, and none of them scaled up to the matrix sizes that we have
here (1,662 $\times$ 33,240). However, we believe that SVD is not
unique, and future work is likely to discover a
smoothing algorithm that is more efficient and effective than SVD.
The results in Section~\ref{subsec:experiments} do not show
a significant benefit from SVD. Table~\ref{tab:lrme-variations}
hints that PPMIC \shortcite{bullinaria07} is more important than SVD.
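Truncated SVD smoothing keeps only the top-$k$ singular components of the pair-pattern matrix. A minimal NumPy sketch (the random matrix is a small stand-in for the real 1,662 $\times$ 33,240 frequency matrix):

```python
import numpy as np

def truncated_svd_smooth(X, k):
    """Rank-k smoothing: project X onto its top-k singular components."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
X = rng.random((20, 50))            # stand-in for a pair-pattern matrix
X5 = truncated_svd_smooth(X, k=5)   # same shape as X, but rank 5
```

By the Eckart-Young theorem, this rank-$k$ reconstruction is the best such approximation in the Frobenius norm, which is the sense in which the truncation "smooths" the raw frequencies.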
LRME extracts knowledge from many fragments of text. In
Section~\ref{subsec:algorithm}, we noted that we found an average
of 1,180 phrases per pair. The information from these 1,180 phrases
is combined in a vector, to represent the semantic relation for
a pair. This is quite different from relation extraction in (for
example) the Automatic Content Extraction (ACE)
Evaluation.\footnote{ACE is an annual event that began in 1999.
Relation Detection and Characterization (RDC) was introduced to ACE
in 2001. For more information, see http://www.nist.gov/speech/tests/ace/.}
The task in ACE is to identify and label a semantic relation in
a single sentence. Semantic role labeling also involves labeling
a single sentence \shortcite{gildea02}.
The contrast between LRME and ACE is analogous to the distinction
in cognitive psychology between semantic and episodic
memory. Episodic memory is memory of a specific event in one's personal past,
whereas semantic memory is memory of basic facts and concepts,
unrelated to any specific event in the past. LRME extracts relational
information that is independent of any specific sentence, like semantic
memory. ACE is concerned with extracting the relation in a specific
sentence, like episodic memory. In cognition, episodic memory and
semantic memory work together synergistically. When we experience
an event, we use our semantic memory to interpret the event and
form a new episodic memory, but semantic memory is itself constructed
from our past experiences, our accumulated episodic memories. This
suggests that there should be a synergy from combining LRME-like
semantic information extraction algorithms with ACE-like
episodic information extraction algorithms.
\section{Conclusion}
\label{sec:conclusion}
Analogy is the core of cognition. We understand the present
by analogy to the past. We predict the future by analogy
to the past and the present. We solve problems by searching
for analogous situations \shortcite{holyoak95}. Our daily language
is saturated with metaphor \shortcite{lakoff80}, and metaphor
is based on analogy \shortcite{gentner01}. To understand
human language, to solve human problems, to work with humans,
computers must be able to make analogical mappings.
Our best theory of analogy-making is Structure Mapping Theory
\shortcite{gentner83}, but the Structure Mapping Engine
\shortcite{falkenhainer89} puts too much of the burden
of analogy-making on its human users \shortcite{chalmers92}.
LRME is an attempt to shift some of that burden onto the
computer, while remaining consistent with the general principles
of SMT.
We have shown that LRME is able to solve bijective analogical
mapping problems with human-level performance. Attributional
mapping algorithms (at least, those we have tried so far) are
not able to reach this level. This supports SMT, which claims
that relations are more important than attributes when making
analogical mappings.
There is still much research to be done. LRME takes some of
the load off the human user, but formulating the input to LRME
is not easy. This paper is an incremental step
towards a future in which computers can make surprising and
useful analogies with minimal human assistance.
\acks{Thanks to my colleagues at the Institute for Information
Technology for participating in the experiment in
Section~\ref{sec:problems}. Thanks to Charles Clarke and
Egidio Terra for their corpus. Thanks to Stefan B{\"u}ttcher
for making Wumpus available and giving me advice on its use.
Thanks to Doug Rohde for making SVDLIBC available. Thanks to
the WordNet team at Princeton University for WordNet, Ted Pedersen
for the WordNet::Similarity Perl package, and Jason Rennie for
the WordNet::QueryData Perl package. Thanks to the LSA team
at the University of Colorado at Boulder for the use of their
online demonstration of LSA. Thanks to Deniz Yuret, Andr{\'e}
Vellino, Dedre Gentner, Vivi Nastase, Yves Lepage, Diarmuid {\'O}
S{\'e}aghdha, Roxana Girju, Chris Drummond, Howard Johnson, Stan
Szpakowicz, and the anonymous reviewers of {\em JAIR} for their helpful
comments and suggestions.}
\section*{Acknowledgements}
I would like to thank R.M. Green for many useful conversations during the preparation of this article.
\bibliographystyle{plain}
\section{Introduction}
Inter- and intra-molecular forces are key to the stability of DNA and to
biological processes, {\it e.g.}, transcription, replication, slippage,
etc. \cite{albert,israel}. Until recently, these forces could be probed only
through indirect physical and thermodynamic measurements such as
crystallography, light scattering, and nuclear magnetic resonance spectroscopy
\cite{Wartel_Phys.Rep85}.
Single molecule force spectroscopy (SMFS) experiments have directly measured
these forces and provided unexpected insights into the strength of the forces
driving these biological processes as well as determined various
interactions responsible for the mechanical stability of DNA structures
\cite{Smith_Science92,cluzel,Lee_Science94,kumarphys}.
With the increasing number of experiments and insights gathered so far, it
has become clear that the measured molecular interactions depend not only
on the magnitude of the applied force, but also on how and where the
force is applied \cite{kumarphys,Bockelmann,Bock,Strunge,Irina,
prentiss1,hatch,Cludia,gaub}.
A major concern now is to understand whether all these interactions contribute
at the same moment or have different lifetimes. In order to understand
this, a force has been applied perpendicular to the helix direction (DNA
unzipping) and along the helix direction (rupture and slippage) as shown
in Fig. 1 \cite{kumarphys,Bockelmann, Bock,Strunge,Irina, prentiss1,hatch,Cludia,
gaub,cocco}. In the case of unzipping of double-stranded DNA (dsDNA), the critical
force is found to be independent of the length of the DNA and of the loading rate
\cite{Bockelmann, Bock}. This can be understood theoretically: at the
fork (Fig. 1b), the applied force breaks only one base pair at a time,
and hence the critical force remains independent of the loading rate and length.
However, when a force (up to 65 pN)
is applied along the helix direction (shear force), the length of the dsDNA
increases and the force-extension ($f-x$) curve can be described
by the worm-like chain (WLC) model \cite{wlc}. In the high-force regime ($>65$
pN), the dsDNA can be overstretched to about 1.7 times its B-form contour
length, and a phase transition occurs from the B-form to
a stretched S-form \cite{Rief,smith,Morfill}. Recently, van Mameren {\it et al}.
studied DNA stretching with and without DNA-binding ligands and
demonstrated that overstretching comprises a gradual conversion from dsDNA
to ssDNA, which should be interpreted in terms of force-induced DNA
melting \cite{mameren}.
\begin{figure}[t]
\includegraphics[width=2.in]{Fig1.eps}
\caption{Schematic representation of dsDNA: (a) dsDNA in zipped form;
(b) Unzipping of dsDNA by the force ($f$) applied at one end $(5'-3')$;
(c and d) Shear force along the chain applied at the opposite ends
($3'-3'$ or $5'-5'$) of the dsDNA.}
\label{fig-1}
\end{figure}
For a short dsDNA, as the applied shear force increases, the dsDNA separates
into two single strands at some critical force. This phenomenon has been
identified as rupture \cite{Lee_Science94, Strunge}. The unbinding force strongly depends on the
pulling end and is much larger than the unzipping force \cite{Cludia,Lee_Science94,Strunge,lavery}.
Neher and Gerland studied the dynamics of dissociation of the two strands and
derived an expression for the critical force \cite{nehar}. Modeling the backbone and
base-pairing energies as harmonic oscillators in a ladder model of dsDNA of
length $L$, de Gennes \cite{degennes} proposed that the maximum force required for rupture is
\begin{equation}
f_c= 2 f_1 (\chi^{-1} \tanh(\chi \frac{L}{2})+1).
\end{equation}
Here, $f_1$ is the force required to separate a single base pair, which
is the same for all pairs of a homo-sequence, and ${\chi^{-1} = \sqrt{Q/2R}}$
is the de Gennes characteristic length over which the differential force is distributed.
$Q$ and $R$ are the spring constants characterizing the stretching of the backbone and
of the hydrogen bonds, respectively.
Recently, Danilowicz {\it et al.} \cite{hatch}
systematically studied DNA rupture by varying the length of the
dsDNA. The critical shear force is found to increase linearly
up to a certain length and then approach an asymptotic value ($\approx 62$ pN),
in good agreement with the de Gennes prediction.
It was argued that the covalent (backbone) bonds and
the hydrogen bonds involved in base pairing are stretched under the applied
force, and that the differential force approaches zero over the length
$\chi^{-1}$ as one moves in from either end.
However, no experimental effort has been made to study the effect of the shearing force
on the stretching of covalent bonds and hydrogen bonds inside the characteristic length
$\chi^{-1}$. Moreover, in the de Gennes model \cite{degennes}, as well as in the
subsequently improved model of Chakrabarti and Nelson \cite{nelson}, the effect of thermal
fluctuations is ignored, whereas all rupture experiments are performed at finite
temperature. The aim of this manuscript is to study the effect of temperature ($T$) on
rupture, and the consequences of the differential force for the distribution of
extensions of the bond lengths and hydrogen bonds near rupture.
We use Langevin dynamics (LD) simulations to investigate mechanical and
physical properties related to the rupture of DNA
\cite{Allen,Smith,Kouza,MSLi_BJ07}. Since the rupture time is
of the order of milliseconds to seconds,
an atomistic simulation of a long chain in solvent is computationally
prohibitive \cite{netz,pm}. We have therefore used a coarse-grained model
\cite{kumarphys,Kouza,MSLi_BJ07,janke} of a flexible polymer chain
to model the DNA, which allows us to study larger system sizes and events on
longer time scales. A chain in the model consists of beads connected by
effective bonds modeled as stiff springs; each effective bond
represents several chemical bonds ({\it e.g.}, the sugar-phosphate unit) along
the chain backbone.
The energy of the model system is given by
\begin{eqnarray}
& & E = {\sum_{l=1}^2\sum_{j=1}^N}k(u_{j+1,j}^{(l)}-d_0)^2
+{\sum_{l=1}^2\sum_{i=1}^{N-2}\sum_{j>i+1}^N}4\left(\frac{C}{{u_{i,j}^{(l)}}^{12}}\right) \nonumber \\
& & + {\sum_{i=1}^N\sum_{j=1}^N}4\left(\frac{C}{(|\vec u_i^{(1)}-\vec u_j^{(2)}|)^{12}}-
\frac{A}{(|\vec u_i^{(1)}-\vec u_j^{(2)}|)^6}\delta_{ij}\right),
\end{eqnarray}
where $N$ is the number of beads in each strand and $\vec u_i^{(l)}$ represents
the position of the $i^{th}$ bead on the $l^{th}$ strand; $l=1$ ($l=2$)
corresponds to the first (complementary) strand of the dsDNA.
The distance between intra-strand beads, $u_{i,j}^{(l)}$, is defined as
$|\vec u_i^{(l)}-\vec u_j^{(l)}|$.
The harmonic (first) term with spring constant $k$ (=100) couples adjacent beads
along the two strands.
The second term accounts for the excluded-volume effect, {\it i.e.}, two beads
cannot occupy the same space \cite{book}.
The third term, a Lennard-Jones (LJ) potential, accounts for the
mutual interaction between the two strands.
Its repulsive part (of the same form as the second term of Eq. 2)
prevents the two strands from overlapping. Here, we set $C = 1$ and $A=1$.
Its attractive part corresponds to base pairing between the
two strands. The base-pairing interaction is restricted to native contacts
($\delta_{ij}=1$) only, {\it i.e.}, the $i^{th}$ base of the first strand pairs only with
the $i^{th}$ base of the second strand, as shown in Fig. 1a. This is similar to the
Go model \cite{go}.
The parameter $d_0$ ($=1.12$) is the equilibrium distance of
the harmonic potential, which is close to the
equilibrium position of the average LJ potential. In Eq. 2,
we use dimensionless distance and energy parameters. The major advantage of
this model is that the ground-state energy of the system is known \cite{go}.
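For concreteness, Eq. 2 can be transcribed term by term into a brute-force energy evaluation (a sketch for checking the model energy on arbitrary strand coordinates, not the production simulation code; the array-based representation is our own):

```python
import numpy as np

K, D0, C, A = 100.0, 1.12, 1.0, 1.0   # parameters from the text

def energy(s1, s2):
    """Brute-force evaluation of Eq. 2 for strands s1, s2 of shape (N, 3)."""
    E = 0.0
    for s in (s1, s2):
        bonds = np.linalg.norm(np.diff(s, axis=0), axis=1)
        E += np.sum(K * (bonds - D0) ** 2)             # harmonic backbone
        for i in range(len(s) - 2):                    # intra-strand excluded volume
            for j in range(i + 2, len(s)):
                E += 4 * C / np.linalg.norm(s[i] - s[j]) ** 12
    for i in range(len(s1)):                           # inter-strand LJ term
        for j in range(len(s2)):
            r = np.linalg.norm(s1[i] - s2[j])
            native = A if i == j else 0.0              # Go-like delta_ij contacts
            E += 4 * (C / r ** 12 - native / r ** 6)
    return E
```

Stretching any backbone bond away from $d_0$ raises the energy, as the harmonic term dominates; only native ($i=j$) inter-strand pairs feel the attractive well.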
The equation of motion is obtained from the following Langevin equation
\cite{Allen,Smith,MSLi_BJ07}
\begin{equation}
m\frac{d^2r}{dt^2} = -{\zeta}\frac{dr}{dt}+F_c+\Gamma,
\end{equation}
where $m$ and $\zeta$ are the mass of a bead and the friction
coefficient, respectively. Here, $F_c$ is defined as $-\frac{dE}{dr}$ and
the random force $\Gamma$ is a white noise \cite{Smith},
i.e., $<{\Gamma(t)\Gamma(t')}>=2\zeta T\delta(t-t')$.
The choice of this dynamics keeps $T$ constant throughout the simulation
for a given $f$. The equation of motion is integrated by using the $6^{th}$ order
predictor-corrector algorithm with time step $\delta t$=0.025 \cite{Smith}.
The results are averaged over many trajectories. Equilibration
has been checked by monitoring the stability of the data against runs at least ten
times longer. We have used $2\times10^9$ time steps, of which the
first $5\times 10^8$ steps were excluded from the averaging.
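The structure of Eq. 3 can be illustrated with a simple one-dimensional integrator (the actual simulation uses a sixth-order predictor-corrector; the harmonic test potential, Euler-type update, and parameter values below are illustrative assumptions):

```python
import numpy as np

def langevin_1d(steps, dt=0.025, m=1.0, zeta=1.0, T=0.2, k=1.0, seed=0):
    """Euler-type integration of m x'' = -zeta x' + F_c + Gamma, F_c = -k x."""
    rng = np.random.default_rng(seed)
    # white noise with <Gamma(t) Gamma(t')> = 2 zeta T delta(t - t')
    noise = np.sqrt(2.0 * zeta * T / dt) * rng.standard_normal(steps)
    x, v = 1.0, 0.0
    xs = np.empty(steps)
    for n in range(steps):
        v += dt * (-zeta * v - k * x + noise[n]) / m
        x += dt * v
        xs[n] = x
    return xs

traj = langevin_1d(200_000)
# equipartition check: k <x^2> should approach T once equilibrated
print((traj[50_000:] ** 2).mean())
```

The fluctuation-dissipation scaling of the noise amplitude is what keeps $T$ constant throughout the run, as stated in the text.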
\begin{figure}[t]
\includegraphics[width=3.4in]{Fig2.eps}
\caption{(a) Force {\it vs} extension curves for different chain lengths.
Arrows indicate the maximum force, at which the number of
contacts drops to zero.
For the sake of comparison, we have normalized the extension
by the contour length (at $f = 0$).
(b) Variation of the rupture force with length. The solid
line corresponds to a fit of Eq. 1.
Solid circles represent the values obtained from the simulation.
Solid circles represent the value obtained through the simulation.
}
\label{fig-2}
\vspace {0.5cm}
\end{figure}
In the constant-force ensemble, we add an energy $-\vec{f}\cdot\vec{x}$ to the
total energy
of the system given by Eq. 2. We calculate the reaction coordinate
$x$ (extension) for different values of $f$. The $f-x$ curves
(Fig. 2a) show the entropic response at low forces
and remain qualitatively similar to those seen in experiments
\cite{Lee_Science94,Strunge,Irina,gaub}.
We identify
the rupture force as the maximum force, at which the number of intact base pairs
suddenly drops to zero. In Fig. 2b, we show the
rupture force as a function of the chain length at low temperature.
It is evident from this
plot that the rupture force approaches an asymptotic value for chain
lengths greater than 20, in accordance with the experiment \cite{hatch}.
We expand the LJ potential given
in Eq. 2 around its equilibrium value. The coefficient of the second-order
(harmonic) term of this expansion gives the elastic constant
of the base pairing. The de Gennes characteristic length \cite{degennes}
for the present
model is estimated to be $\approx 10$. Substituting $f_1$ ($=1$) and
the above value of $\chi^{-1}$ into Eq. 1, we obtain
$f_c$ for a given length of dsDNA, shown by the solid line in Fig. 2b.
One can notice nice agreement between the simulation and the value predicted
by Eq. 1 \cite{degennes}.
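With $f_1 = 1$ and $\chi^{-1} \approx 10$ in reduced units, Eq. 1 reduces to a one-line function that reproduces both the initial linear rise ($f_c \approx f_1(L+2)$ for $\chi L \ll 1$) and the asymptote $2f_1(\chi^{-1}+1) = 22$; a sketch of the solid-line fit of Fig. 2b:

```python
import math

def rupture_force(L, f1=1.0, chi_inv=10.0):
    """Eq. 1: f_c = 2 f1 (chi^{-1} tanh(chi L / 2) + 1), reduced units."""
    return 2.0 * f1 * (chi_inv * math.tanh(L / (2.0 * chi_inv)) + 1.0)

for L in (4, 10, 20, 40, 80):
    print(L, rupture_force(L))
```

The crossover between the two regimes occurs around $L \sim \chi^{-1}$, consistent with the simulation data saturating beyond chain lengths of about 20.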
\begin{figure}[t]
\includegraphics[width=3.4in]{Fig3.eps}
\caption{(a) Variation of the extension of the hydrogen-bond length ($\Delta_h$)
along the chain at three different forces.
(b) Variation of the extension of the covalent-bond
length ($\Delta_c$) along
the chain. Open and solid symbols correspond to one
strand and its complementary strand, respectively.
}
\label{fig-3}
\end{figure}
One of the important findings of the present simulation is the distribution of the
stretching of the hydrogen bonds ($\Delta_h$) and of the extension of the covalent bonds ($\Delta_c$)
over a wide range of forces below rupture, which is experimentally difficult to obtain. In Fig. 3a,
we depict the variation of $\Delta_h$ with base position for a chain of length 40.
One can observe that the hydrogen bonds at the extreme ends (up to
$\approx 10$ bases) are stretched, whereas the bonds in the middle (bases
$\approx 10$--$30$, beyond the de Gennes length) remain unstretched, indicating
that the differential shear force approaches zero in this region. In Fig. 3b,
we show the variation of $\Delta_c$
with base position. All curves have three distinctly different regions.
Bonds near the pulling end (say, the 5'-end) are stretched the most, and the
extension decreases gradually along the chain. Approaching the other end
(the 3'-end), the slope changes and the
extension is considerably smaller than in the middle. It should be noted
that the 3'-end is adjacent to the 5'-end of the other strand, where a similar
force is applied. Since the dsDNA is in the zipped state, the force applied
at the 5'-end
of one strand also pulls the other strand in the opposite direction,
which causes the relatively slower increase.
\begin{figure}[t]
\includegraphics[width=3.4in]{Fig4.eps}
\caption{(a) Variation of the rupture force with chain length at different $T$.
(b) DNA rupture force--temperature diagram for chain
length 32.
}
\label{fig-4}
\end{figure}
In model studies, either the temperature is set to zero \cite{degennes} or thermal
fluctuations are ignored \cite{nelson}; as a result,
the rupture force defined in Eq. 1 is independent of temperature.
However, rupture experiments are usually performed at room temperature
\cite{Strunge,hatch}; it is therefore desirable to understand the role of entropy, which
may become significant at higher $T$. The thermodynamics of force-induced
melting can be obtained from the following relation \cite{rouzina,singh}:
\begin{equation}
-fx= \Delta H - T\Delta S,
\end{equation}
where $\Delta H$ is the enthalpy change and $\Delta S$ is the entropy change of the ruptured chains. Setting
$x$ equal to unity and replacing $\Delta H$ by the value of
the rupture force at $T= 0$, Eq. 4 can be written as
\begin{equation}
-f = f_c - T\Delta S,
\end{equation}
where $f_c$ is given by Eq. 1. In the thermodynamic limit, $\Delta S$ may be
estimated analytically \cite{book} or obtained from experiments \cite{santalucia};
for finite chain lengths, however, one has to resort to numerical
techniques.
It is possible to study the effect of temperature on the rupture force
in the present setup. In Fig. 4a, we show the dependence of the rupture force on the
length at different temperatures. The qualitative nature of the curves
obtained at different $T$ remains similar to the de Gennes plot (Fig. 2b),
with a shift showing that the rupture force decreases with $T$. From Fig. 4a,
one can notice that for chain length 32, the rupture force has approached
its asymptotic value at all $T$. In Fig. 4b, we depict the force-temperature
diagram for DNA rupture for chain length 32. A linear dependence on
temperature can be noticed, in accordance with Eq. 5.
Rupture experiments are usually performed well below the melting
temperature, so that the DNA is in the zipped state.
Therefore, in Fig. 4b, the maximum temperature has been set slightly below the
melting temperature, $T_m = 0.23$ at $f = 0$.
In this paper, we have studied the effect of shear force and temperature
on the rupture of dsDNA. We show that the rupture force increases
linearly with the length of the DNA and then approaches an asymptotic
value. In lattice models, the bond length is fixed (stiff),
so neither covalent nor hydrogen bonds can be stretched; as
a result, the rupture force increases linearly with length \cite{singh}
in the lattice model, or in any model in which the bond length is held
constant.
Our simulations confirm that, as predicted by de Gennes \cite{degennes},
one does not gain strength by increasing the length of the paired sequence;
interestingly, a recent experiment supports this as well \cite{hatch}.
The distributions of extensions of the
hydrogen bonds and covalent bonds for different forces
are shown in Fig. 3. Fig. 3a clearly shows that the stretching
of the hydrogen bonds is limited to within the de Gennes length,
consistent with the strain profile obtained by Chakrabarti and Nelson
\cite{nelson}; beyond this length the differential shear force approaches
zero, and as a result the hydrogen bonds are not stretched.
We also find that the qualitative dependence of the rupture
force on length remains the same at different temperatures, apart from a shift.
The rupture force decreases with temperature (Fig. 4b), as predicted
by Eq. 5.
Our study shows that the de Gennes length is independent of the
applied force. The most surprising finding of the present simulation is
revealed in Fig. 3b, which shows the variation of the extension of the
covalent bonds along the chain for three different forces ($f=10, 14, 18$).
In all these cases the differential
force approaches zero, and there is no relative increase in the bond
length beyond the de Gennes length (Fig. 3b). However, unlike
the hydrogen bonds, the covalent bonds show a net extension
that depends linearly on the applied
force. Nuclear magnetic resonance experiments \cite{nmr} or atomistic
simulations \cite{pm} should be able to observe this.
It may be noted that the present simulation is carried out in reduced units.
It is nevertheless possible to extract a rough estimate of the rupture force in
real units. The free energy per base pair (including hydrogen bonding and base
stacking) of G-C is $-1.4$ kcal/mol, of which stacking probably accounts for about
half \cite{hydrogen_bond}. Since A-T base pairing involves only two hydrogen
bonds, one can take approximately two-thirds of the
G-C free energy. To fix the temperature scale,
one can use DNA melting data, where both stacking and hydrogen bonding
contribute. In rupture, by contrast, we assume that only hydrogen
bonds break, so stacking does not contribute significantly. In
Ref. \cite{hatch}, hetero-sequence (50\% AT and 50\% GC) chains of different lengths
were considered. Therefore, we take approximately $-0.6$ kcal/mol per base pair for
the zipped conformation and equate it with the completely unzipped state. The
force required for rupture is then found to be approximately 3.5 pN per
base pair, which is close to the value used in Ref. \cite{hatch}. Thus, if one scales
the $y$-axis of Fig. 2b by 3.5 pN, our results are also in quantitative
agreement with the experiment \cite{hatch}.
We thank D. Giri for many helpful discussions on the subject. Financial
support from the DST, CSIR, India and the MSI, Poland (grant No.
202-204-234) is gratefully acknowledged.
\section{Introduction}
Advancements in \textit{in vivo} neuroimaging have resulted in large volumes of clinically acquired, high-dimensional MRI datasets. Quantitative analysis of multimodal imaging data in clinical settings is essential for assessing tumor burden and treatment response in an objective and noninvasive fashion. The scientific community has sought to capitalize on recent advancements of artificial intelligence (AI)-assisted approaches in neuro-oncology~\cite{rudie2019emerging} to develop AI-driven software packages~\cite{gibson2018niftynet,beers2021deepneuro,davatzikos2018cancer} for automating neuro-oncology workflows.
However, the distillation of high-dimensional and multimodal imaging information into meaningful quantitative information remains an ongoing pursuit~\cite{chung2021cancer}. A major challenge is that AI-driven tools typically require manually curated, preprocessed datasets comprising scans of specific sequence(s). Manual curation and preparation of imaging data are time-consuming and error-prone for multiple reasons, including non-standardized naming conventions of scans across different manufacturers or acquisition protocols~\cite{ooijen2019quality}, same series descriptions of scans irrespective of acquisition type, and missing metadata~\cite{hirsch2015we}. This challenge has been magnified by the growing number of diverse and complementary image acquisition protocols that make parsing these datasets using automated methods or heuristics extremely difficult. Nonetheless, limited efforts have been invested in automated tools for scan-type classification and data curation~\cite{remedios2018classifying,van2021deepdicomsort}. Importantly, these have not been integrated into existing software tools. These practical challenges have limited the widespread adoption of existing software packages for heterogeneous clinical imaging data.
To centralize these efforts and expedite the translation of state-of-the-art AI models from research to clinical practice, we have developed an end-to-end AI-driven framework called Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO), which classifies MRI sequences using an ensemble of natural language processing (NLP) and convolutional neural network (CNN) models, preprocesses the data in a reproducible manner, and segments tumor tissue subtypes using CNNs, enabling the extraction of radiomic features. Additionally, I3CR-WANO is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists. In this study, we implemented this framework for low- and high-grade gliomas using pre-contrast T1-weighted, (T1WI), post-contrast T1WI (Gd-T1WI), T2-weighted (T2WI), and fluid-attenuated inversion recovery (FLAIR) sequences. All framework components were packaged as Docker containers to ensure reproducibility across platforms and to facilitate dissemination and deployment. For greater flexibility, the core components of the framework were implemented as independent modules instead of as a single monolithic structure.
\section{Materials and methods}
\subsection{End-to-end framework}
I3CR-WANO (Figure~\ref*{fig:fig1}A) consists of the following stages: I) image curation and preprocessing, II) segmentation, III) expert refinement of the tumor mask and segmentation evaluation, and IV) post-processing and visualization of the segmentation mask.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{figures/Picture1.png}
\caption{(A) End-to-end processing framework of I3CR-WANO consisting of the following stages: I) data curation and image pre-processing, II) segmentation, III) expert refinement of tumor mask and segmentation evaluation, and IV) post-processing of the tumor mask. (B) Outputs of framework at different steps: unprocessed pre- and post-contrast T1-weighted (T1WI, Gd-T1WI), T2-weighted (T2WI), and FLAIR sequences (step 1), same sequences after pre-processing (step 3), tumor segmentation predicted by convolutional neural network (step 4), and tumor segmentation after refinement (step 5). White arrows in tumor segmentation mask image point to errors, which were corrected during refinement.}
\label{fig:fig1}
\end{figure}
\subsubsection{Curation and pre-processing} \label{sec:curation}
The first step in the framework takes MRI data in the Digital Imaging and Communications in Medicine (DICOM) format as input and identifies the sequences that can be used in downstream segmentation and feature extraction tasks. The scan-type classifier adopts a cascade architecture. The first stage of the cascade is based on an NLP classifier (Classifier1)~\cite{chakrabarty2020preprocessing}, whereas the second stage is based on a CNN classifier (Classifier2)~\cite{van2021deepdicomsort}. For both classifiers, previously published pretrained models were used~\cite{van2021deepdicomsort,chakrabarty2020preprocessing}. Classifier1 uses information regarding the DICOM series description (i.e., tag [0008, 103E]) and the number of instances per series to classify every scan into one of two classes: segmentable or non-segmentable. Subsequently, Classifier2 performs a more granular classification of these potentially segmentable scans into T1WI, Gd-T1WI, T2WI, FLAIR, and non-segmentable classes. Additionally, the orientation of each scan (i.e., axial, coronal, sagittal) is determined by leveraging the DICOM MR acquisition type (tag [0018,0023]) and image orientation (patient) (tag [0020,0037]) information.
In the event of multiple occurrences of T1WI, Gd-T1WI, T2WI, or FLAIR sequences, we prioritize i) axial scans over sagittal and coronal scans and ii) scans with a higher number of instances. Based on the absence of certain DICOM tags or sequence types, a particular scan or an entire session is excluded from the subsequent processing (Supplementary~\ref*{supp_methods_exclusions}). In all other cases, the curation process results in a maximum of four scans (one each for T1WI, Gd-T1WI, T2WI, and FLAIR sequences) that are used for downstream processing (Figure~\ref*{fig:fig1}B, “scans after step 1”).
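The prioritization rule from this step (axial orientation first, then the number of instances) can be sketched as a sort key; the dictionary records below are hypothetical stand-ins for the parsed DICOM metadata, not the framework's actual data structures:

```python
def pick_best_scan(candidates):
    """Prefer axial scans; break ties by the number of instances."""
    return max(
        candidates,
        key=lambda s: (s["orientation"] == "axial", s["num_instances"]),
    )

scans = [
    {"series": "t2_sag",     "orientation": "sagittal", "num_instances": 240},
    {"series": "t2_ax_thin", "orientation": "axial",    "num_instances": 160},
    {"series": "t2_ax",      "orientation": "axial",    "num_instances": 40},
]
best = pick_best_scan(scans)   # axial beats sagittal despite fewer instances
```

Applying this per sequence type yields at most one scan each for T1WI, Gd-T1WI, T2WI, and FLAIR, as described above.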
I3CR-WANO comprises the following preprocessing functionalities: registration, N4-bias correction, skull-stripping, and intensity normalization. During registration, for every session, the scan with the highest number of instances among those identified in the previous step is selected as the target scan for co-registration. Then, all other scans are rigidly co-registered to the target scan, followed by affine registration to a common anatomical atlas~\cite{rohlfing2010sri24}. The transformation matrix from patient-space to atlas-space (patient2atlasmat) is stored and later used as described in Section~\ref*{sec:postproc}. For all registration steps, we used FMRIB’s Linear Image Registration Tool (FLIRT)~\cite{jenkinson2002improved,jenkinson2001global}. Next, we perform N4 bias field correction and skull-stripping of the registered sequences using the Robust Brain Extraction (ROBEX)~\cite{iglesias2011robust} tool (Figure~\ref*{fig:fig1}B, “scans after step 3”). This is followed by an image intensity normalization step, where intensities within the brain are normalized to zero mean and unit variance after excluding intensities below the 5th and above the 95th percentile.
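The intensity normalization step can be sketched in NumPy under one plausible reading of the description: the mean and standard deviation are computed on the 5th-95th percentile core of the in-brain intensities and then applied to all in-brain voxels (the function name and boolean-mask interface are our own assumptions):

```python
import numpy as np

def normalize_intensities(volume, brain_mask):
    """Z-score in-brain voxels using mean/std computed on the
    5th-95th percentile core of the in-brain intensities."""
    vox = volume[brain_mask].astype(np.float64)
    lo, hi = np.percentile(vox, [5, 95])
    core = vox[(vox >= lo) & (vox <= hi)]
    out = np.zeros(volume.shape, dtype=np.float64)
    out[brain_mask] = (vox - core.mean()) / core.std()
    return out
```

Excluding the intensity tails makes the statistics robust to the bright outliers (e.g., residual vessels) that would otherwise skew a plain z-score.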
\subsubsection{Segmentation} \label{sec:seg}
In this step, the preprocessed scans are segmented using pretrained segmentation models (Supplementary~\ref*{supp_methods_seg_pretrain}) to produce a multiclass tumor segmentation mask (Figure~\ref*{fig:fig1}B, “automatic segmentation”). The mask comprises the edema (ED), non-enhancing/necrotic tumor core (NC), and enhancing tumor (ET) classes. Additional outputs include the tumor core (TC) class, which is created by combining the ET and NC classes, and the whole tumor (WT) class, which is constructed by combining all classes.
To render our framework robust to missing sequences, we have trained segmentation models on different combinatorial subsets of sequences (e.g., Gd-T1WI+T2WI, Gd-T1WI+FLAIR, only Gd-T1WI, etc.). Depending on the available sequences, the segmentation module produces a multi-class segmentation mask comprising NC, ET, ED classes (if at least a Gd-T1WI is available), a binary WT segmentation mask (in the absence of Gd-T1WI but presence of T2WI and/or FLAIR), or no mask (in the absence of Gd-T1WI, T2WI, and FLAIR).
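The fallback behavior above can be sketched as a small dispatch function (the mode names and set-based interface are illustrative assumptions, not the framework's actual API):

```python
def choose_segmentation_mode(available):
    """Pick the output type from the set of available sequence names."""
    if "Gd-T1WI" in available:
        return "multiclass"          # NC + ET + ED mask
    if {"T2WI", "FLAIR"} & available:
        return "whole-tumor"         # binary WT mask only
    return "none"                    # no segmentation possible
```

This mirrors the three cases in the text: multi-class output whenever Gd-T1WI is present, a binary whole-tumor mask when only T2WI and/or FLAIR are available, and no mask otherwise.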
\subsubsection{Expert refinement and segmentation evaluation}
Automated segmentation models are prone to errors~\cite{baid2021rsna}, such as occasional erroneous labeling of vessels within the peritumoral edematous area, periventricular white matter hyperintensities, choroid plexus, and areas of Gd-T1WI bright blood products. To address this, we have adopted an expert-in-the-loop approach, where the predicted segmentation mask is optionally sent to radiologists for refinement (Figure~\ref*{fig:fig1}B, “Expert-refined segmentation”). Subsequently, the refined mask is compared with the predicted mask to evaluate the performance of the segmentation model.
\subsubsection{Post-processing} \label{sec:postproc}
Once the segmentation mask is created, the mask is warped back to the patient space by inverting the patient2atlasmat transformation matrix generated in the registration step and applying it to the tumor mask using nearest-neighbor interpolation. Subsequently, this mask is converted to a DICOM Segmentation image object format using the itkimage2segimage command from the dcmqi~\cite{herz2017dcmqi} tool. The mask can also be used for downstream processing such as the extraction of quantitative features from the tumor mask. This is supported by our framework using the PyRadiomics~\cite{van2017computational} tool, which can calculate first-order statistics, 3D shape features, and texture features for every combination of tumor class and input sequence (Supplementary~\ref*{supp_methods_radiomics}).
\subsection{Framework implementation and distribution}
The source code, detailed documentation, pre-trained AI models as well as a live demonstration of I3CR-WANO are made publicly available for non-commercial use at \url{https://github.com/satrajitgithub/NRG_AI_NeuroOnco_preproc} and \url{https://github.com/satrajitgithub/NRG_AI_NeuroOnco_segment}. In addition, to ensure portability and easy deployment, the framework has been packaged into Docker images available from DockerHub (\url{https://hub.docker.com/r/satrajit2012/nrg_ai_neuroonco_preproc} and \url{https://hub.docker.com/r/satrajit2012/nrg_ai_neuroonco_segment}).
The segmentation Docker image has been implemented using the NVIDIA GPU CLOUD runtime to leverage NVIDIA graphics processing units (GPUs) during test-time inference. Besides Docker usage through the command-line interface, we also provide a visual interface of the framework through its integration with the open-source Extensible Neuroimaging Archive Toolkit (XNAT) informatics platform~\cite{marcus2007extensible}. XNAT equips the user with advanced functionalities, such as launching a Docker on a batch of sessions (batch mode) or launching a sequence of Docker commands (command orchestration). Additionally, through its integrated Open Health Imaging Foundation (OHIF) Viewer~\cite{doran2022integrating}, XNAT provides the user with a powerful set of image visualization and annotation tools, including the ability to natively refine tumor annotations, perform measurements, and save those contours and segmentation objects back into XNAT. Documentation for command-line Docker usage as well as XNAT usage is available at \url{https://github.com/satrajitgithub/NRG_AI_NeuroOnco_preproc/tree/master/documentation}.
\subsection{Application on glioma datasets}
I3CR-WANO was validated on two independent clinical datasets (Supplementary Table~\ref*{supp_table_dataset}) acquired from the retrospective health records of the Washington University School of Medicine (WUSM; n = 384; median age 56 years, range 44 – 66 years, 154 females, 230 males) and the M.D. Anderson Cancer Center (MDA; n = 30; median age 58 years, range 44 – 65 years, 15 females, 15 males). Data collected from WUSM and MDA were obtained with Institutional Review Board (IRB) approval and met the criteria for the general waiver of consent and waiver of Health Insurance Portability and Accountability Act (HIPAA) authorization. For both datasets, the only inclusion criterion was pathologically confirmed glioma (grade II-IV) from preoperative patients with no prior resection. To ensure the broad applicability of the framework to heterogeneous clinical data, no exclusions were made based on the image acquisition parameters, image quality, or glioma grade. The segmentation models used in the framework were pre-trained on the Brain Tumor Segmentation Challenge (BraTS) 2021~\cite{baid2021rsna} dataset, publicly available from the Synapse platform (\url{https://www.synapse.org/#!Synapse:syn27046444/wiki/616992}).
\subsection{Statistical analyses}
The performance of the scan-type classifier was quantified using the overall accuracy, F1 score for each class, and confusion matrix showing the error distribution across different classes. Failures during preprocessing were identified through visual inspection. Segmentation performance was assessed using the Dice Similarity Coefficient (DSC) metric for the WT, TC, and ET classes. For the BraTS 2021 dataset, the DSC was calculated between the predicted segmentations and the provided expert-annotated ground truths. For both the WUSM and MDA datasets, the predicted tumor segmentations from the framework were refined by experts, and these expert-refined tumor masks were used as surrogate ground truths. Differences in DSC between groups with and without certain sequences were calculated using the Welch’s t-test. For all statistical tests, the threshold for statistical significance was set at $P < .05$.
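For reference, the DSC is the standard overlap metric between two binary masks; a minimal sketch over flattened 0/1 label arrays:

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient 2|P ∩ T| / (|P| + |T|) for two binary
    masks given as flat sequences of 0/1 labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    if size == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * inter / size
```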
\section{Results}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{figures/Picture2.png}
\caption{(A), (C) Confusion matrix and F1-scores demonstrating the performance of the scan-type classifier on the WUSM and MDA datasets, respectively. (B), (D) Dispersion of Dice Similarity Coefficient values, stratified by tumor grade, showing the agreement in terms of overlap between the tumor segmentation masks predicted by convolutional neural network and the masks after refinement by a radiologist for WUSM and MDA datasets, respectively. NS = non-segmentable, WT = whole tumor, TC = tumor core, ET = enhancing tumor, WUSM = Washington University School of Medicine, MDA = M.D. Anderson Cancer Center.}
\label{fig:fig2}
\end{figure}
\subsection{Scan-type classification and pre-processing}
The scan-type classifier yielded high overall accuracy (99.61\%, 8835/8870 scans) across the five classes (Figure~\ref*{fig:fig2}A). Precision values were above 0.99 for all classes except T1WI. The precision for T1WI (0.97, 557/572 scans) showed a minor drop because 12 non-segmentable and 3 FLAIR scans were misclassified as T1WI. Of the 12 non-segmentable scans, 10 were determined to be non-brain MR scans (i.e., spinal). In terms of recall, the classifier yielded values above 0.99 for all classes except FLAIR, which had a slightly lower recall of 0.98, because six scans were misclassified. Of the six FLAIR scans, three were misclassified as T1WI. These were all sagittal scans with a very low axial resolution. Overall, of the 384 WUSM sessions, the scan-type classifier identified all possible segmentable sequences for 380 sessions. For these 380 sessions, there were no failures in terms of preprocessing. Hence, they were used for the subsequent segmentation and radiomic feature extraction.
For the MDA dataset, the classifier yielded a very high overall accuracy (99.84\%, 643/644 scans) across five classes, with only a single sagittal FLAIR scan with low axial resolution being misclassified as T1WI. Overall, of the 30 MDA sessions, the scan-type classifier could identify all possible segmentable sequences for all sessions, and there were no failures in terms of preprocessing. Hence, all the sessions were used for subsequent segmentation and radiomic feature extraction.
\subsection{Segmentation}
The segmentation results varied depending on the input sequences, with an overall deterioration in performance with increasing number of missing sequences, especially for ET in the absence of Gd-T1WI (Supplementary~\ref*{supp_results_segmentation}). On the WUSM data, the segmentation models yielded high mean DSC for WT (0.882±0.244), TC (0.75±0.334), and ET (0.91±0.24). For all tumor classes, the mean DSCs for WHO grade IV (WT:0.901, TC:0.869, ET:0.879) were higher than those for WHO grade II (WT:0.87, TC:0.82) and grade III (WT:0.852, TC:0.778, ET:0.832) (Figure~\ref*{fig:fig2}B).
On the MDA data, a high mean DSC was obtained for WT (0.977±0.04), TC (0.984±0.028), and ET (0.899±0.097). Overall, the model generalized well for both the external datasets.
\section{Discussion}
In this paper, we proposed I3CR-WANO, an AI-driven framework for curation, pre-processing, segmentation, and radiomic feature extraction in neuro-oncology MR studies. Through its end-to-end operation, the framework transforms unstructured DICOM MR data into quantitative 3D measurements of tumors, which can be directly used for predicting treatment response and overall survival. I3CR-WANO was validated in 414 patient cases acquired from two different clinical sites, with good overall performance in all facets of processing. The different AI models used for scan-type classification and segmentation generalized well on unseen data. The source code, dockers, and all pre-trained models of this study have been made publicly available.
In recent years, AI-assisted tools such as NiftyNet~\cite{gibson2018niftynet}, DeepNeuro~\cite{beers2021deepneuro}, and the cancer imaging phenomics toolkit (CaPTk)~\cite{davatzikos2018cancer} have been proposed for automating neuro-oncology workflows in clinical practice. These tools typically depend on carefully curated datasets; however, they do not address the problem of scan-type classification or data curation within their operations and instead rely on manual interaction, which is often the most time-consuming step in an AI workflow~\cite{montagnon2020deep}. In contrast, I3CR-WANO provides a more holistic solution, from data curation to radiomic feature extraction, and completely obviates the need for any intermediate manual interaction. Thus, it greatly facilitates the generation of datasets required for the development and validation of models supporting quantitative tumor measurements. Additionally, the framework’s modular structure includes the necessary commonalities in upstream pipelining that allow cascading with a wide array of downstream applications (e.g., the curation and pre-processing modules are not application-specific).
The proposed framework can streamline clinical workflows and support decision making by automating tumor segmentation and characterization. In this emerging era of precision diagnostics, the quantitative volumetric tumor measurements extracted from this framework can drive personalized treatment planning and response assessment (e.g., Response Assessment in Neuro-Oncology [RANO] criteria~\cite{van2011response}). The generated segmentation masks can be used to track tumors longitudinally and quantitatively assess their growth. In a research setting, it can significantly reduce the latency of data curation, thus expediting model prototyping and facilitating the creation of standardized large-scale neuro-oncology datasets for multi-institutional collaborations~\cite{baid2021rsna,baheti2021brain} that attempt to establish public benchmarks for various aspects of quantitative tumor analysis.
This study has certain limitations that merit discussion. First, the CNN-based scan-type classifier is currently pre-trained only on axial scans and shows a minor performance drop, particularly on coronal or sagittal FLAIR scans with very low off-plane resolution. Second, segmentation models are currently trained on preoperative glioma cases and cannot be used on postoperative images. However, the current curation and pre-processing modules of the framework are applicable to any MRI study, irrespective of the pathology or treatment status. Moreover, owing to the modular nature of the framework, both limitations can be addressed by using more advanced containerized models that can be simply used as drop-in replacements for the current models. This flexibility can also enable the extension of this framework to multiple tumor types by integrating tumor classification models~\cite{chakrabarty2021mri} and cascading it with segmentation, radiomics, and quantitative report-generation utilities tailored for specific tumor types.
In conclusion, we developed I3CR-WANO, an AI-driven framework that transforms raw MRI DICOM data of patients with high- and low-grade gliomas to quantitative tumor measurements through systematic data curation, processing, tumor segmentation, and radiomic feature extraction, without the requirement of any manual intervention. This work can streamline clinical workflows and support clinical decision-making by automating tumor segmentation and characterization as well as help in curating large-scale neuro-oncology datasets.
\bibliographystyle{ama}
\section{Introduction}
\textit{Introduction.---} The power of classical statistical mechanics is rooted
in the ergodic hypothesis, but in closed quantum many-body systems,
how ``memories'' are forgotten on a realistic time scale \cite{Luca2016,Mori2018,BORGONOVI2016,Gogolin2016}
--- how steady states and thermal behavior at later times emerge
dynamically \cite{Deutsch1991,Srednicki1994,Rigol2008} ---
remains an actively investigated topic \cite{Huse2015,Deutsch2018,Abanin2019,Anatoli2011}.
Recently, there has been a surge of theoretical interest in problems of non-equilibrium quantum dynamics,
thanks in part to significant progress in experimental techniques that has made the
dynamics of quantum systems accessible
\cite{Eisert2015,Trotzky2012,Roos2014,Monroe2014,Neill2016,Lukin2017a,Lukin2017b,Monroe2017,Greiner2016,Greiner2018,Tang2018}.
In many cases, however,
particularly in interacting systems, directly accessing
such dynamics remains technically challenging due to the growing
amount of correlations generated over time \cite{Lieb1972,Nachtergaele2006}.
From an entanglement point of view, these correlations are a
consequence of entangled quasiparticle pairs being constantly
generated and propagating into different parts of the system
\cite{Lieb1972,Nachtergaele2006,Cardy2005,Cardy2006,Calabrese_2016,Hastings2010}. The
dynamics of these quasiparticles have been shown to reflect the
underlying nature of their hosting systems, e.g., ballistic in
thermalizing systems \cite{Cardy2005,Calabrese2006,Huse2013} versus logarithmic in localized systems
\cite{Bardarson2012,Iglo2012,Burrell2013,Abanin2013}. In
many of these examples, propagation of entanglement also spreads
conserved quantities which can serve as information carrier
\cite{Lieb1972,Qi2018,Alba2017,Nahum2017}. An important aspect to
understanding quantum dynamics and the emergence of equilibration is
therefore to understand the dynamics of quantum entanglement \cite{Abanin2019},
even in systems without identifiable quasiparticle content \cite{Huse2013,Lauchli2008,Pal2018,HongLiu2014,Casini2016}.
In this context, entanglement dynamics is also connected with information loss and scrambling
\cite{Hosur2016,Swingle2016,ZWLiu2018,Keyserlingk2018,Zhui2017,Asplund2015}.
In equilibrium condensed matter systems, entanglement-based analysis has already proved to be a profitable tool
as a diagnostic of strong correlations, from the presence of topological order to the onset of
quantum criticality \cite{Laflorencie2016}. Indeed, the scaling of
entanglement entropy characterizes the quantum statistics of
quasiparticles \cite{Levin2006,Kitaev2006}, and entanglement spectrum
holds a direct relation between bulk and edge physics
\cite{Haldane2008}, both of which highlight the wealth of information
encoded in entanglement. While the entanglement entropy and entanglement
spectrum are important measures of quantum information, the entanglement
Hamiltonian (EH) is a more fundamental object. The EH is a sum of a
local ``energy'' density $\mathcal{H}(x)$ weighted by a local inverse entanglement
temperature $\beta(x)$: $H_E= \int dx\, \beta(x) \mathcal{H}(x)$.
The relationship between EH and reduced density matrix of a subsystem
(A), $\rho_A=e^{-H_E}$, implies that $\rho_A$ can be interpreted as a
canonical ensemble with energy density $\mathcal{H}(x)$ in local thermal
equilibrium at temperature $\beta^{-1}(x)$. Therefore, knowledge of the EH
could offer an alternative picture of how subsystem A behaves by
appealing to our intuition of thermodynamics. However, even for
static systems, precise knowledge of their EH is rare. The only
exact result for the EH known to date pertains to
integrable systems described by (1+1)-dimensional conformal field
theory (CFT) \cite{Bisognano1975,SRyu2016}, for which the local
inverse temperature $\beta(x)$ follows a spatially arch-like envelope
function. Recently, numerical efforts have attempted to
obtain the EH in static interacting systems using various methods
\cite{Assaad2018,Dalmonte2018,WZhu2018}, and have shed some light on
this technically challenging problem. As for time-evolving systems,
although results for non-interacting cases have been obtained
\cite{Tonni2016,Xueda2018},
the quantitative role of the EH in strongly-correlated systems remains unexplored,
and it is far from obvious what the time dependence of the EH should be.
In this work, we study the EH in the
quench dynamics of the Bose-Hubbard model, a prototypical non-integrable
system, based on time-dependent density-matrix renormalization group
(t-DMRG) approach \cite{White1992,White2004}. With the help of a
recently developed numerical scheme \cite{WZhu2018}, we are able to
track the time dependence of the EH in real time. Our main findings
are that: 1) a current operator emerges in the EH before the system
reaches equilibration, reflecting the propagation of entanglement
carried by particle flow; 2) in the long-time limit, the EH becomes
nearly stationary and demonstrates features of equilibration; 3) the
long-time steady state exhibits a spatially independent
entanglement temperature, signaling that the subsystem has become locally thermal.
All of the above results are corroborated by CFT.
These findings imply that the EH can be used to effectively
investigate the emergence of subsystem equilibration under the unitary dynamics of the full system,
which sets up a valuable paradigm for exploring entanglement dynamics out-of-equilibrium.
\textit{Preliminary.---}
We begin by discussing the salient features of the EH dynamics
after a quantum quench, in the framework of 1+1D CFT.
We consider a 1D chain with finite length $L$ defined on $x \in [0, \, L]$,
and the subsystem $A$ under consideration is chosen as $[0, l]$.
At time $t = 0$, we start from an initial state with short-range
entanglement, which may be considered as the ground state of a gapped Hamiltonian.
At $t>0$ we evolve it with a CFT Hamiltonian $H_{\text{CFT}}=\int dx \mathcal{H}(x)$.
We consider the case where the time scale $t$ is smaller than the total length $L$,
such that the other boundary at $x=L$ can be safely neglected.
Based on conformal mappings, we obtain the exact form of the EH (see the Supplementary Materials for details \cite{sm}).
Importantly, we find that in the long-time limit, the EH of subsystem A is the sum of
$\mathcal{H}(x)$ weighted by a spatially dependent finite temperature $\beta^{-1}(x)$,
indicating that the reduced density matrix $\rho_A(t)$ takes the form of
a thermal ensemble.
To be specific,
in the long time limit $t\gg l$, one obtains the EH $H_E=\int dx \beta(x) \mathcal{H}(x)$,
with the envelope function \cite{sm}
\begin{align}
&\beta(x)= 2\beta_0 \cdot\frac{\sinh(\pi(l+x)/\beta_0)\sinh(\pi(l-x)/\beta_0)}{\sinh(2\pi l/\beta_0)}, (t\gg l). \label{eq:cft2}
\end{align}
Here $\beta_0$ characterizes the correlation length of the gapped pre-quench state \cite{Calabrese_2016},
and it also quantifies the effective ``temperature'' associated with the energy density of the pre-quench state \cite{sm}.
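As a numerical sanity check (our own sketch, not part of the original analysis), the envelope function above vanishes at the entanglement cut $x = l$ and, for $l \gg \beta_0$, flattens to the constant value $\beta_0$ deep inside the subsystem:

```python
import math

def beta_envelope(x, l, beta0):
    """Long-time (t >> l) entanglement-temperature envelope beta(x):
    2*beta0 * sinh(pi(l+x)/beta0) * sinh(pi(l-x)/beta0) / sinh(2*pi*l/beta0)."""
    num = math.sinh(math.pi * (l + x) / beta0) * math.sinh(math.pi * (l - x) / beta0)
    return 2.0 * beta0 * num / math.sinh(2.0 * math.pi * l / beta0)
```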
In addition, as notable byproducts, CFT also gives time dependence of
entanglement entropy to the leading order
\cite{Cardy2005,Cardy2006,sm}:
\begin{eqnarray}
S(t)=\begin{cases}
\frac{\pi c}{3 \beta_0}t,\,\,\,\,\,\quad t<l\\
\frac{\pi c}{3 \beta_0}l, \,\,\,\,\,\quad t>l
\end{cases}, \label{eq:cft_ee}
\end{eqnarray}
where $c$ is the central charge of the underlying CFT. That is, the
entanglement entropy grows linearly in time until it saturates at a
value obeying the volume law \cite{sm}.
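As an illustrative sketch (ours, not the paper's code), the leading-order entropy grows linearly at the rate $\pi c/(3\beta_0)$ and saturates continuously at $t=l$ to the volume-law value $S_\ell = \frac{\pi c}{3\beta_0}\ell$ quoted in the inset of Fig. \ref{fig:EE_BH}(b):

```python
import math

def entanglement_entropy(t, l, c, beta0):
    """Leading-order CFT entropy after a quench: linear growth at rate
    pi*c/(3*beta0) for t < l, saturating at the volume-law value for t > l."""
    rate = math.pi * c / (3.0 * beta0)
    return rate * min(t, l)
```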
\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{EE_time.eps}
\caption{\textbf{Dynamics of the entanglement entropy.}
(a) Time-evolution of entanglement entropy by quenching from various $U^{\mathbf{i}}$ to $U^{\mathbf{f}}=3.3$.
(b) Effective temperature $\beta_0$ as a function of $E^{\mathbf{quench}}-E_0$,
where $E_0$ is the lowest energy of post-quench Hamiltonian $\hat H(U^{\mathbf f})$ and
$E^{\mathbf{quench}}= \langle \Psi(t=0)|H(U^{\mathbf f})|\Psi(t=0) \rangle$.
The black line is the best fit to $\beta_0\propto (E^{\mathbf{quench}}-E_0)^\alpha,\alpha=-0.641 \pm 0.012$.
Inset: Linear scaling of $S_{\ell}=\frac{\pi c}{3\beta_0}\ell$ to the length of the subsystem $\ell$.
} \label{fig:EE_BH}
\end{figure}
\begin{figure*}[t]
\includegraphics[width=0.95\textwidth]{EH_time.eps}
\caption{\textbf{Dynamics of the EH.}
(a) Spectrum of correlation matrix $G_{ab}(t)$.
The lowest and second-lowest eigenvalues cross at $t_0\approx 1.65$ (inset).
The shaded area shows the short time regime $t<t_0$.
The parameters of the EH (see Eq. \ref{eq:EH}) as a function of time: (b) interaction strength $U_n(t)$,
(c) real part of couplings $Re J_{n,n+1}(t)$, (d) relative phase of couplings $\Phi_{n,n+1}(t)=\arg J_{n,n+1}$,
where $n$ labels spatial lattice sites.
Here we quench the Bose-Hubbard model (Eq. \ref{eq:BH}) from $U^{\mathbf{i}}=5.0$ to $U^{\mathbf{f}}=3.3$.
The total system size $L=48$ and the typical subsystem length is $\ell=9$.
Different symbols label local coupling and interaction strengths.
The brown dashed line is a guide to the eye.
Inset of (b): cartoon of the one-dimensional chain and the entanglement bipartition.
} \label{fig:EH_BH}
\end{figure*}
\textit{Model and Method.---}
We now turn to a paradigmatic
non-integrable model, the one-dimensional Bose-Hubbard model, which
has been experimentally realized with ultracold gases in deep
optical lattices \cite{Cazalilla2011},
\begin{eqnarray}\label{eq:BH}
\hat H= -J\sum_{i} (b^\dagger_i b_{i+1} + h.c.) + \frac{U}{2}\sum_i n_i(n_i-1),
\end{eqnarray}
where $b^\dagger_i (b_i)$ is the boson creation (annihilation)
operator and $n_i=b^\dagger_i b_i$ is the on-site density operator.
Throughout this work, we consider a uniform Hamiltonian density, i.e.
the physical coupling $J$ (set to $J=1$) and interaction $U$ are
spatially independent. In the equilibrium case, at fixed filling
$\langle n_i\rangle=1$, a critical value $U_c\approx 3.38$
\cite{Fisher1989,Kuhner1998} separates a Mott insulating phase
($U>U_c$) from a superfluid phase ($U<U_c$), the latter described by
an effective Luttinger liquid theory with $c=1$. Below we set the
initial state in the Mott phase as the ground state of
$H$ with the pre-quench interaction $U^{\mathbf i} > U_c$,
and investigate its quench dynamics under
$H$ with the post-quench interaction $U^{\mathbf f} < U_c$.
\newcommand{\mathcal{U}}{\mathcal{U}}
To simulate the unitary time evolution
$|\Psi(t)\rangle = \mathcal{U}(t)|\Psi(t=0)\rangle$, we use the time-dependent
density-matrix renormalization group (t-DMRG)
\cite{White1992,White2004}. We apply a second-order Trotter
decomposition of the short-time propagator
$\mathcal{U}(\Delta t)=\exp(-i \Delta t \hat H)$ into a product of terms, each of which
acts only on two nearest-neighbor sites. We use a bond dimension of up to
$5120$, which guarantees that the neglected weight in the Schmidt
decomposition at each time step is less than $10^{-6}$. Once the
$|\Psi(t)\rangle$ is computed, we partition the one-dimensional chain
of length $L$ into two segments, $\ell$ and $L-\ell$, and calculate
the subsystem reduced density matrix,
$\rho_{\ell}(t)=Tr_{L-\ell} |\Psi(t)\rangle \langle \Psi(t)|$.
The entanglement Hamiltonian is formally defined as
$\rho_A(t)=\exp(-\hat H_E)$, but it is technically challenging to
extract $\hat H_E$ through this definition because the transformation
$\hat H_E(t)=-\ln \rho_A(t)$ is non-linear. Very recently, a generic
scheme to obtain the operator form of EH has been proposed in
Ref. \cite{WZhu2018}, which we briefly outline here. The starting
point is to define a set of basis operators $\hat L_a$, which we take
as the boson hopping operator $b^\dagger_i b_j$ and density
interaction operator $n_i(n_i-1)$ according to the form of the
physical Hamiltonian. These operators define the variational space in
which we search for the ``best'' EH in the form
$H_E = \sum_a w_a \hat L_a$, where $w_a$ are parameters coupled to operators $\hat L_a$.
Practically, the variational scheme is equivalent to
solving the eigenvalue problem of the correlation matrix
$G_{ab}=\langle \xi| \hat{L}_a \hat{L}_b |\xi \rangle- \langle \xi|\hat{L}_a |\xi\rangle \langle \xi|\hat{L}_b |\xi\rangle$ \cite{XLQi2017,WZhu2018},
where $|\xi\rangle$ is a reference state chosen here as one eigenstate of $\rho_A$.
The lowest eigenvalue of $G_{ab}$, i.e. $g_0$,
minimizes the variance $\langle \xi | H_E^2 | \xi\rangle - \langle \xi|H_E|\xi\rangle^2$,
which can be interpreted as the ``fluctuation'' of ``Hamiltonian'' $H_E =\sum_a w_a L_a$ under $|\xi\rangle$.
The eigenvector associated with $g_0$ yields the estimate of $\{w_a\}$.
It has been confirmed \cite{WZhu2018} that, in the static case,
this numerical recipe gives a reliable EH that faithfully captures
all features of the reduced density matrix.
In this work, we generalize and formulate this scheme using
matrix-product state ansatz, which is amenable to simulating the time
evolution of the EH within the t-DMRG approach, and works well for
larger system sizes compared to exact diagonalization.
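The core of the variational step is an ordinary symmetric eigenvalue problem for $G_{ab}$. As a toy illustration with only two basis operators (a closed-form $2\times 2$ diagonalization; purely pedagogical, not the t-DMRG implementation):

```python
def lowest_eigvec_2x2(g11, g12, g22):
    """Lowest eigenpair of a real symmetric 2x2 correlation matrix G.
    The lowest eigenvalue g0 is the minimal variance of the trial EH,
    and its eigenvector gives the variational weights (w1, w2)."""
    tr = g11 + g22
    disc = ((g11 - g22) ** 2 / 4.0 + g12 * g12) ** 0.5
    g0 = tr / 2.0 - disc
    if abs(g12) > 1e-15:
        w = (g0 - g22, g12)  # satisfies (G - g0*I) w = 0
    else:
        w = (1.0, 0.0) if g11 <= g22 else (0.0, 1.0)
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    return g0, (w[0] / norm, w[1] / norm)
```

For the full basis of hopping and interaction operators, the same diagonalization is simply carried out for the larger matrix $G_{ab}$.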
\textit{Entanglement entropy.---}
We compute the time-dependent entanglement entropy and compare with the CFT results obtained earlier.
Fig. \ref{fig:EE_BH}(a) shows the time evolution of the entanglement entropy for various initial conditions $U^{\mathbf i}$.
For all cases, $S_{\ell}(t)$ shows two temporal regimes:
At short times $t<t_*$, the entropy shows a linear rise,
until it bends over to an almost flat plateau.
The linear increase can be accounted for by the ``ballistic'' propagation of entanglement.
At long times $t>t_*$, the entropy saturates to its steady-state value.
As shown in inset of Fig. \ref{fig:EE_BH}(b),
the saturation value of the entropy depends linearly on the block length,
clearly exhibiting ``volume-law'' scaling.
In particular, based on the relationship of Eq. \ref{eq:cft_ee},
we can extract the pre-quench entanglement temperature $\beta_0$ (or correlation length of the initial state).
In Fig. \ref{fig:EE_BH}(b),
we show the dependence of the effective entanglement temperature $\beta_0$ on the post-quench energy above the ground state, $E^{\mathbf{quench}}-E_0$,
where $E^{{\mathbf{quench}}}$ is the energy of the pre-quench state in the post-quench Hamiltonian, and $E_0$ is the post-quench ground state energy.
It is clear that $\beta_0$ monotonically decreases with $E^{\mathbf{quench}}-E_0$.
Our best fitting gives the scaling $\beta_0\propto (E^{\mathbf{quench}}-E_0)^\alpha,\alpha\approx -0.641 \pm 0.012$.
This reflects the fact that a higher initial energy translates into a higher effective temperature.
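The exponent quoted above follows from a power-law fit; a generic sketch of such a fit by ordinary least squares in log-log space (illustrative, not the authors' fitting code):

```python
import math

def fit_power_law(xs, ys):
    """Fit y = A * x**alpha by least squares on (log x, log y);
    returns (A, alpha)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((u - mx) ** 2 for u in lx)
    sxy = sum((u - mx) * (v - my) for u, v in zip(lx, ly))
    alpha = sxy / sxx
    return math.exp(my - alpha * mx), alpha
```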
\textit{Entanglement Hamiltonian.---}
Next we discuss the time evolution of the EH.
Here we assume that the EH has the following form (for a detailed discussion see \cite{sm}):
\begin{equation}\label{eq:EH}
H_E(t)= -\sum_{i} (J_{i,i+1}(t)b^\dagger_i b_{i+1} + h.c.) +\sum_i \frac{U_i(t)}{2}n_i(n_i-1)
\end{equation}
We map out the EH at each time step by using the scheme described in the method section \cite{WZhu2018}.
Fig. \ref{fig:EH_BH}(a) shows the spectrum of correlation matrix as a function of time.
Interestingly, a level crossing is found between the lowest and
second-lowest eigenvalues around $t_0\approx 1.65$ (inset of Fig. \ref{fig:EH_BH}(a)).
After this crossing time,
the lowest eigenvalue $g_0$ monotonically decreases, which implies that the trial EH
works better in the time regime $t>t_0$. Next we focus on the $t>t_0$ regime
and discuss the salient features of the EH.
Fig. \ref{fig:EH_BH}(b-c) shows the time evolution of the interaction strength $U_i(t)$ and
the real part of the coupling strength $Re J_{i,i+1}(t)$ after a global quench.
First, both $J$ and $U$ show sizable oscillations at early times $t<t_0$, and
the oscillations gradually damp out at later times (as indicated by the dashed envelope curve).
In particular, in the long-time limit $t>t_*$,
all coupling strengths approach nearly stationary values.
Physically, this suggests that the subsystem has equilibrated to a steady state.
Second, before equilibration is reached,
the imaginary part of the boson hopping strength is found to be nonzero.
To show this, we define the phase angle $\Phi_{i,i+1}=\arg J_{i,i+1} =\tan^{-1}\frac{Im J_{i,i+1}}{Re J_{i,i+1}}$,
which directly relates to the imaginary part of the coupling strength through $Im J_{i,i+1}(t)= |J_{i,i+1}|\sin \Phi_{i,i+1}$.
In Fig. \ref{fig:EH_BH}(d), $\Phi_{i,i+1}(t>0)$ shows oscillatory behavior due to the non-equilibrium dynamics.
For comparison, in the static case we have $\Phi_{i,i+1}(t=0)= 0$.
Since $Im J_{i,i+1}$ couples directly to the current operator $\hat J_c=i[H,x]=i\sum_n (b^\dagger_n b_{n+1}-b^\dagger_{n+1} b_n)$ (we set $e=\hbar=1$),
this implies that time-reversal symmetry is broken and a non-vanishing particle current emerges during the time evolution.
The emergent current flow reflects quasiparticle propagation, which is consistent with
the picture that quasiparticles serve as entanglement information carriers \cite{Cardy2005}.
The inset of Fig. \ref{fig:EH_BH}(d) singles out one typical evolution ($\Phi_{2,3}$).
It signals that the current first flows from the entanglement cut into the bulk ($\Phi_{2,3}> 0$),
then reverses direction ($\Phi_{2,3}<0$), and finally decays to zero at long times.
This again demonstrates the transport of quasiparticles.
At long times, the imaginary part tends to vanish, with only small fluctuations around zero,
suggesting that the subsystem has reached equilibrium and net particle flow is absent.
The appearance of current in the EH allows us to conclude that
information spreading originates in the propagation of quasiparticles
between the two bipartition constituents \cite{Cardy2005}.
Third, as shown in Fig. \ref{fig:EH_BH}(b-c),
in the long-time limit $t>t_*$ the local coupling and interaction strengths at different spatial locations tend to converge to the same value,
indicating that the EH is spatially uniform away from the entanglement cut.
To further study the spatial dependence of the EH at the long-time limit,
we plot the time-averaged local coupling strengths as a function of distance to the cut in Fig. \ref{fig:EH_BH_scaling}.
In Fig. \ref{fig:EH_BH_scaling}(a), we show the spatial dependence of the local interaction strength $U_n/U_1$ at long times.
In particular, the local strengths in the long-time limit are nearly uniform away from the entanglement cut ($x\ll\ell$).
Crucially, this spatial dependence shows excellent agreement with the CFT prediction Eq. (\ref{eq:cft2}).
Moreover, we demonstrate that the residual fluctuations near the entanglement cut $x\sim \ell$
can be interpreted as a finite-temperature effect.
In Fig. \ref{fig:EH_BH_scaling}(b), we show that
by increasing temperature (through changing quenching parameters as discussed in Fig. \ref{fig:EE_BH}(b)),
the spatial profile of the local strengths becomes flatter near the entanglement cut $x\sim \ell$.
The consistency with the CFT Eq. (\ref{eq:cft2}) indicates that
local strengths should be completely flat (shown by dashed line) at infinite temperature,
which is also supported by our numerical results (inset of Fig. \ref{fig:EH_BH_scaling}(b)).
Physically, the spatial dependence of the local coupling strengths
in the EH can be interpreted as a local entanglement temperature $\beta^{-1}(x)$:
$\rho_A=\exp(-\int dx\,\beta(x)H_E(x))$
resembles a physical system equilibrated at a local temperature $\beta^{-1}(x)$ that depends on the distance from the ``heat source'', namely subsystem B.
From this point of view, it is appealing that a spatially independent $\beta(x)$
signals that local equilibration has been reached.
\begin{figure}
\includegraphics[width=0.20\textwidth]{EH_scaling_1.eps}
\includegraphics[width=0.27\textwidth]{EH_scaling_2.eps}
\caption{\textbf{Spatial dependence of the EH.}
(a) Local coupling strengths in the long-time limit (red diamonds).
The red line is a fit to Eq. (\ref{eq:cft2}). (b)
Spatial dependence of local coupling strengths for various quenching parameters:
$U^{\mathbf i}=4.0,U^{\mathbf f}=3.3$ (black squares),
$U^{\mathbf i}=4.5,U^{\mathbf f}=3.3$ (blue circles),
$U^{\mathbf i}=5.0,U^{\mathbf f}=3.3$ (red diamonds) and
$U^{\mathbf i}=5.5,U^{\mathbf f}=3.3$ (green triangles).
The solid lines show the best fits to the envelope function Eq. (\ref{eq:cft2})
with various pre-quench temperatures $\beta_0$.
Inset: Interaction strength scaling to infinite temperature.
} \label{fig:EH_BH_scaling}
\end{figure}
\textit{Summary and Discussion.---}
We have addressed the out-of-equilibrium dynamics of strongly-correlated systems
from the point of view of entanglement Hamiltonian.
By tracking the time evolution of the entanglement Hamiltonian, we were able to identify clear signatures
of entanglement propagation and information scrambling.
We demonstrated that the entanglement Hamiltonian involves an emergent current operator,
which drives the quasiparticle propagation towards equilibration.
In the long-time limit the entanglement Hamiltonian becomes stationary.
In particular, the spatially resolved entanglement temperature follows the universal form
predicted by conformal field theory,
indicating that the subsystem indeed reaches equilibrium away from the entanglement cut.
Our results show that the entanglement Hamiltonian provides fundamental insight into
the non-equilibrium dynamics of quantum many-body systems.
In closing, we would like to make several remarks.
Although the limited system sizes prevent comparison over
a large range of subsystem sizes, we have confirmed that
the features of the entanglement Hamiltonian, together with the underlying scaling behavior,
are robust for all system sizes we can reach \cite{sm}.
Moreover, we have numerically investigated a variety of one-dimensional systems of different kinds \cite{sm}.
These studies suggest that our results have implications well beyond the specific model.
Lastly, our findings open up several avenues for future investigation.
For instance, applying these tools for characterizing the presence of
equilibration could be powerful in studying many-body localization \cite{Huse2015,Alet2018,Abanin2019},
where one of the key features is the suppression of entanglement spreading.
In addition, taking into account the recent proposal in synthetic quantum systems \cite{Zoller2018},
the dynamics of constructed entanglement Hamiltonian may be valuable for future experiments.
\textit{Note Added---}
At the final stage of preparing this
manuscript, we became aware of a work on entanglement Hamiltonian in non-interacting systems \cite{Tonni2019}.
\textit{Acknowledgments.---}
W.Z. thanks Beni Yoshida for fruitful discussion.
This work was supported by the start-up funding at Westlake University.
Work at Argonne was supported by ANL LDRD Proj. 1007112.
X.W. is supported by the Gordon and Betty Moore Foundation's EPiQS initiative through Grant No. GBMF4303 at MIT.
Research at Perimeter Institute (YCH) is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.
\bibliographystyle{apsrev}
\subsection{Basic notions}
\label{subsection:basic notions}
Let $M$ be a $2d$-dimensional manifold endowed with a symplectic structure, i.e. a closed and nondegenerate 2-form $\omega$. The pair $(M,\omega)$ is called a symplectic manifold which is also a volume manifold by Liouville's theorem. Let $\mu$ be the so-called Lebesgue measure associated to the volume form $\omega^d=\omega\land\dots\land\omega$.
A diffeomorphism $g\colon(M,\omega)\to(N,\omega')$ between two symplectic manifolds is called a symplectomorphism if $g^*\omega'=\omega$. The action of a diffeomorphism on a 2-form is given by the pull-back $(g^*\omega')(X,Y)=\omega'(g_*X,g_*Y)$. Here $X$ and $Y$ are vector fields on $M$ and the push-forward $g_*X=Dg\,X$ is a vector field on $N$. Notice that a symplectomorphism $g\colon M\to M$ preserves the Lebesgue measure $\mu$ since $g^*\omega^d=\omega^d$.
For any smooth Hamiltonian function $H\colon M\to{\mathbb{R}}$ there is a corresponding Hamiltonian vector field $X_H\colon M\to TM$ determined by $\iota_{X_H}\omega=dH$ being exact, where $\iota_v\omega=\omega(v,\cdot)$ is a 1-form. Notice that $H$ is $C^s$ iff $X_H$ is $C^{s-1}$. The Hamiltonian vector field generates the Hamiltonian flow, a smooth 1-parameter group of symplectomorphisms $\varphi^{t}_{H}$ on $M$ satisfying $\frac{d}{dt}{\varphi^{t}_{H}}=X_{H}\circ\varphi^{t}_{H}$ and $\varphi^0_H=\rm{id}$. Since $dH(X_H)=\omega(X_H,X_H)=0$, $X_H$ is tangent to the energy level sets $H^{-1}(e)$.
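For illustration (a standard textbook example, included here only for concreteness), take $d=1$ and $H(q,p)=\frac12(q^2+p^2)$ on $({\mathbb{R}}^2,dq\land dp)$. Then $\iota_{X_H}(dq\land dp)=q\,dq+p\,dp$ forces
$$
X_H=p\,\frac{\partial}{\partial q}-q\,\frac{\partial}{\partial p},
$$
whose flow $\varphi_H^t$ is the rotation by angle $-t$; it preserves $dq\land dp$ and is tangent to the level circles $H^{-1}(e)$.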
In addition, the Hamiltonian flow is globally defined with respect to time because $H|_{\partial M}$ is constant or, equivalently, $X_H$ is tangent to $\partial M$.
If $v\in T_xH^{-1}(e)$, i.e. $dH(v)(x)=\omega(X_H,v)(x)=0$, then its push-forward by $\varphi_H^t$ is again tangent to $H^{-1}(e)$ on $\varphi_H^t(x)$ since
$$
dH(D\varphi_H^t\,v)(\varphi_H^t(x))=\omega(X_H,D\varphi_H^t\,v)(\varphi_H^t(x))={\varphi_H^t}^*\omega(X_H,v)(x)=0.
$$
We consider also the tangent flow $D\varphi^{t}_{H}:TM\to{TM}$ that satisfies the linear variational equation (the linearized differential equation)
$$
\frac{d}{dt}{D\varphi^{t}_{H}}=DX_{H}(\varphi^{t}_{H})\, D\varphi^{t}_{H}
$$
with $DX_H\colon M\to TTM$.
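For instance, for the linear model $H(q,p)=qp$ on $({\mathbb{R}}^2,dq\land dp)$ (a toy example, included for illustration), we have $X_H(q,p)=(q,-p)$ and $DX_H=\operatorname{diag}(1,-1)$ is constant, so the variational equation integrates to
$$
D\varphi^{t}_{H}=\left[\begin{matrix}e^{t}&0\\0&e^{-t}\end{matrix}\right]:
$$
tangent vectors along the $q$-axis expand at rate $1$ and those along the $p$-axis contract at rate $-1$, anticipating the Lyapunov exponents introduced below.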
We say that $x$ is a \emph{regular} point if $dH(x)\not=0$ ($x$ is not critical). We denote the set of regular points by $\mathcal{R}(H)$ and the set of critical points by $\operatorname{Crit}(H)$.
We call $H^{-1}(e)$ a regular energy level of $H$ if $H^{-1}(e)\cap\operatorname{Crit}(H)=\emptyset$.
A regular energy surface is a connected component of a regular energy level.
Given any regular energy level or surface $\mathcal{E}$, we induce a volume form $\omega_{\mathcal{E}}$ on the $(2d-1)$-dimensional manifold $\mathcal{E}$ in the following way. For each $x\in {\mathcal E}$,
$$
\omega_{\mathcal{E}}(x)=\iota_Y \omega^{d}(x)
\quad\text{on $T_x{\mathcal E}$}
$$
defines a non-degenerate $(2d-1)$-form if $Y\in T_xM$ satisfies $dH(Y)(x)=1$.
Notice that this definition does not depend on $Y$ (up to normalization) as long as it is transversal to ${\mathcal E}$ at $x$.
Moreover, $dH(D\varphi_H^t\,Y)(\varphi_H^t(x))=d(H\circ\varphi_H^t) (Y)(x)=1$. Thus, $\omega_{\mathcal{E}}$ is $\varphi_H^t$-invariant, and the measure $\mu_{\mathcal{E}}$ induced by $\omega_{\mathcal{E}}$ is again invariant. In order to obtain finite measures, we need to consider compact energy levels.
On the manifold $M$ we also fix any Riemannian structure which induces a norm $\|\cdot\|$ on the fibers $T_xM$. We will use the standard norm of a bounded linear map $A$ given by $\|A\|=\sup_{\|v\|=1}\|A\,v\|$.
The symplectic structure guarantees by Darboux theorem the existence of an atlas $\{h_j\colon U_j\to{\mathbb{R}}^{2d}\}$ satisfying $h_j^*\omega_0=\omega$ with
\begin{equation}\label{canonical symplectic form}
\omega_0=\sum_{i=1}^d dy_i\land dy_{d+i}.
\end{equation}
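In Darboux coordinates the Hamiltonian vector field takes the familiar canonical form: from $\iota_{X_H}\omega_0=dH$ one checks directly that
$$
X_H=\sum_{i=1}^d\left(\frac{\partial H}{\partial y_{d+i}}\,\frac{\partial}{\partial y_i}-\frac{\partial H}{\partial y_i}\,\frac{\partial}{\partial y_{d+i}}\right).
$$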
On the other hand, when dealing with volume manifolds $(N,\Omega)$ of dimension $p$, Moser's theorem \cite{Moser} gives an atlas $\{h_j\colon U_j\to{\mathbb{R}}^{p}\}$ such that $h_j^*(dy_1\land\dots\land dy_p)=\Omega$.
\begin{subsection}{Oseledets' theorem for 4-dim Hamiltonian systems}
Unless otherwise indicated, for the rest of this paper we fix a $4$-dimensional compact symplectic manifold $(M,\omega)$.
Take $H\in C^2(M,{\mathbb{R}})$. Since the time-1 map of any tangent flow derived from a Hamiltonian vector field is measure preserving, we obtain a version of Oseledets' theorem~\cite{O} for Hamiltonian systems.
Given $\mu$-a.e. point $x\in{M}$ we have two possible splittings:
\begin{enumerate}
\item \label{case 1 OS}
$T_{x}M=E_{x}$ with $E_{x}$ $4$-dimensional and
$$
\underset{t\to{\pm{\infty}}}{\lim}\frac{1}{t}\log{\|D\varphi^{t}_{H}(x)\, v\|}=0,
\qquad
v\in E_{x}.
$$
\item \label{case 2 OS}
$T_{x}M=E^{+}_{x}\oplus E^{-}_{x}\oplus{E^{0}_{x}}\oplus{\mathbb{R}X_{H}(x)}$, where $\mathbb{R}X_{H}(x)$ denotes the vector field direction, each one of these subspaces being $1$-dimensional and
\begin{itemize}
\item
$\lim\limits_{t\to\pm\infty}
\frac{1}{t}\log{\|D\varphi^{t}_{H}(x)|_{{E^{0}_{x}}\oplus{\mathbb{R}X_{H}(x)}}\|}=0$;
\item
$\lambda^{+}(H,x)=\lim\limits_{t\to\pm\infty} \frac{1}{t}\log{\|D\varphi^{t}_{H}(x)|_{E^{+}_{x}}\|}>0$;
\item
$\lambda^{-}(H,x)=\lim\limits_{t\to\pm\infty} \frac{1}{t}\log{\|D\varphi^{t}_{H}(x)|_{E^{-}_{x}}\|} = -\lambda^{+}(H,x)$.
\end{itemize}
\end{enumerate}
Moreover,
\begin{equation}\label{angle}
\lim_{t\to{\pm{\infty}}}\frac{1}{t}\log\det D\varphi^{t}_{H}(x)=\sum_{i\in\{+,-\}}\lambda^{i}(H,x)\operatorname{dim}(E^{i}_{x})=0
\end{equation}
and
\begin{equation}\label{angle2}
\lim_{t\to\pm\infty}\frac1t\log \sin\alpha_t = 0
\end{equation}
where $\alpha_t$ is the angle at time $t$ between any subspaces of the splitting.
The splitting of the tangent bundle is called \emph{Oseledets splitting} and the real numbers $\lambda^{\pm}(H,x)$ are called the \emph{Lyapunov exponents}. In the case \eqref{case 1 OS} we say that the Oseledets splitting is trivial. The full measure set of the \emph{Oseledets points} is denoted by $\mathcal{O}(H)$.
The vector field direction $\mathbb{R}X_{H}(x)$ is trivially an Oseledets direction with zero Lyapunov exponent.
\end{subsection}
\begin{subsection}{The transversal linear Poincar\'{e} flow of a Hamiltonian}
\label{subsection:transversal linear Poincare flow}
For each $x\in\mathcal{R}$ (we omit $H$ when there is no ambiguity) take the orthogonal splitting $T_xM={\mathbb{R}} X_H(x)\oplus N_x$, where $N_x=({\mathbb{R}} X_H(x))^\perp$ is the normal fiber at $x$.
Consider the automorphism of vector bundles
\begin{equation}
\begin{split}
D\varphi^{t}_{H}\colon T_{\mathcal{R}}M & \to T_{\mathcal{R}}M \\
(x,v) & \mapsto (\varphi^{t}_{H}(x),D\varphi_{H}^{t}(x)\, v).
\end{split}
\end{equation}
Of course, in general, the subbundle $N_{\mathcal{R}}$ is not $D\varphi_{H}^{t}$-invariant. So we consider the $D\varphi^{t}_{H}$-invariant quotient space $\widetilde{N}_{\mathcal{R}}=T_{\mathcal{R}}M / \mathbb{R}X_{H}(\mathcal{R})$, together with the isomorphism (in fact an isometry) $\phi_{1}\colon N_{\mathcal{R}}\to \widetilde{N}_{\mathcal{R}}$.
The unique map
$$
P_{H}^{t}\colon N_{\mathcal{R}}\to N_{\mathcal{R}}
$$
such that $\phi_{1}\circ P_{H}^{t}=D\varphi^{t}_{H}\circ\phi_{1}$ is called the \emph{linear Poincar\'{e} flow} for $H$. Denoting by $\Pi_{x}\colon T_xM\to N_x$ the canonical orthogonal projection, the linear map $P^{t}_{H}(x)\colon N_{x}\to N_{\varphi^{t}_{H}(x)}$ is
$$
P^{t}_{H}(x)\, v=\Pi_{\varphi^{t}_{H}(x)}\circ D\varphi^{t}_{H}(x)\, v.
$$
We now consider
$$
{\mathcal N}_x=N_x\cap T_xH^{-1}(e),
$$
where $T_xH^{-1}(e)=\ker dH(x)$ is the tangent space to the energy level set with $e=H(x)$.
Thus, ${\mathcal N}_{\mathcal R}$ is invariant under $P^{t}_{H}$.
So we define the map
$$
\Phi_{H}^{t}\colon\mathcal{N}_{\mathcal{R}}\to\mathcal{N}_{\mathcal{R}},
\qquad
\Phi_{H}^{t}=P^{t}_{H}|_{{\mathcal N}_{\mathcal R}},
$$
called the \emph{transversal linear Poincar\'{e} flow} for $H$ such that
$$
\Phi^{t}_{H}(x)\colon \mathcal{N}_{x}\to \mathcal{N}_{\varphi^{t}_{H}(x)},
\quad
\Phi^{t}_{H}(x)\, v=\Pi_{\varphi^{t}_{H}(x)}\circ D\varphi^{t}_{H}(x)\, v
$$
is a linear symplectomorphism for the symplectic form induced on ${\mathcal N}_{\mathcal R}$ by $\omega$.
If $x\in\mathcal{R}\cap\mathcal{O}$ and $\lambda^+(x)>0$, the Oseledets splitting on $T_{x}M$ induces a $\Phi^{t}_{H}(x)$-invariant splitting $\mathcal{N}_{x}=\mathcal{N}^{+}_{x}\oplus \mathcal{N}^{-}_{x}$ where $\mathcal{N}^{\pm}_{x}=\Pi_{x}(E^{\pm}_{x})$.
\end{subsection}
\subsection{Lyapunov exponents}
Our next lemma makes explicit that the dynamics of $D\varphi^{t}_{H}$ and $\Phi_{H}^{t}$ are coherent, so that the Lyapunov exponents in both cases are related.
\begin{lemma}\label{equal}
Given $x\in\mathcal{R}\cap\mathcal{O}$, the Lyapunov exponents of the $\Phi_{H}^{t}$-invariant decomposition are equal to the ones of the $D\varphi^{t}_{H}$-invariant decomposition.
\end{lemma}
\begin{proof}
If the Oseledets' splitting is trivial there is nothing to prove.
Otherwise, let
$$
n^+=\alpha X_H(x)+v^+\in{\mathcal{N}^{+}_{x}}
$$
with $v^+\in E_x^+$ and $\alpha\in{\mathbb{R}}$.
We want to study the asymptotic behavior of $\|\Phi_H^t(x)\, n^+\|$.
From the following two equalities
\begin{itemize}
\item
$
\Pi_{\varphi_{H}^{t}(x)}D\varphi_{H}^{t}(x)\, X_H(x)=
\Pi_{\varphi_{H}^{t}(x)} X_H\circ\varphi_{H}^{t}(x)=0$,
\item
$\| \Pi_{\varphi_{H}^{t}(x)}D\varphi_{H}^{t}(x)\, v^+\|=
\sin(\theta_t)\|D\varphi_{H}^{t}(x)\, v^+\|$,
\end{itemize}
we get
$$
\lim_{t\to{\pm{\infty}}}\frac{1}{t}\log{\|\Phi^{t}_{H}(x)\, n^{+}\|}=\lim_{t\to{\pm{\infty}}}\frac{1}{t}
\log \left[\sin(\theta_{t})\|{D\varphi_{H}^{t}(x)}\, v^{+}\|\right],
$$
where $\theta_{t}$ is the angle between $X_{H}\circ\varphi_{H}^{t}(x)$ and $E^{+}_{\varphi^{t}_{H}(x)}$.
By \eqref{angle2}, we obtain
\begin{equation*}
\begin{split}
\lim_{t\to{\pm{\infty}}}\frac{1}{t}
\log \left[\sin(\theta_{t})\|{D\varphi_{H}^{t}(x)}\, v^{+}\|\right]
&=
\lim_{t\to{\pm{\infty}}}\frac{1}{t}\log \|D\varphi_{H}^{t}(x)\, v^{+}\| \\
&=
\lambda^{+}(H,x).
\end{split}
\end{equation*}
We proceed analogously for $\mathcal{N}^{-}_{x}$.
\end{proof}
\medskip
Below we state the Oseledets theorem for the transversal linear Poincar\'{e} flow.
\begin{theorem}
Let $H\in C^2(M,{\mathbb{R}})$. For $\mu$-a.e. $x\in{M}$
there exists the \emph{upper Lyapunov exponent}
$$
\lambda^{+}(H,x)=\underset{t\to{+\infty}}{\lim}\frac{1}{t}\log\|\Phi_{H}^{t}(x)\| \geq0
$$
and $x\mapsto \lambda^+(H,x)$ is measurable. For $\mu$-a.e. $x$ with $\lambda^{+}(H,x)>0$, there is a splitting $\mathcal{N}_{x}=\mathcal{N}_{x}^{+}\oplus{\mathcal{N}_{x}^{-}}$ which varies measurably with $x$ such that:
$$
\lim_{t\to\pm\infty}\frac{1}{t}\log\|\Phi_{H}^{t}(x)\, v\| =
\begin{cases}
\lambda^{+}(H,x), & {v}\in{\mathcal{N}_{x}^{+}}\setminus\{{0}\} \\
-\lambda^{+}(H,x), & {v}\in{\mathcal{N}_{x}^{-}}\setminus\{{0}\} \\
\pm \lambda^{+}(H,x), & {v}\notin{\mathcal{N}_{x}^{+}} \cup \mathcal{N}_{x}^{-}
\end{cases}
$$
\end{theorem}
\begin{subsection}{Hyperbolic structure}
\label{section:hyperb}
Let $H\in C^2(M,{\mathbb{R}})$.
Given any compact and $\varphi^{t}_{H}$-invariant set $\Lambda\subset H^{-1}(e)$, we say that $\Lambda$ is a \emph{hyperbolic set} for $\varphi_H^t$ if there exist $m\in{\mathbb{N}}$ and a $D\varphi_{H}^{t}$-invariant splitting $T_\Lambda H^{-1}(e)=E^{+}\oplus E^{-}\oplus E$ such that for all $x\in\Lambda$ we have:
\begin{itemize}
\item $\|D\varphi^{m}_{H}(x)|_{E^{-}_{x}}\|\leq\frac{1}{2}$ (uniform contraction),
\item $\|D\varphi^{-m}_{H}(x)|_{E^{+}_{x}}\|\leq\frac{1}{2}$ (uniform expansion),
\item $E$ contains the directions of the vector field and of the gradient of $H$.
\end{itemize}
If $\Lambda$ is a regular energy surface, then $\varphi_H^t|_{\Lambda}$ is said to be {\em Anosov} (for simplicity, we often say that $\Lambda$ is Anosov).
Notice that there are no minimal hyperbolic sets larger than energy level sets.
Similarly, we can define a hyperbolic structure for the transversal linear Poincar\'{e} flow $\Phi_H^t$. We say that $\Lambda$ is hyperbolic for $\Phi_H^t$ if $\Phi_H^t|_\Lambda$ is a hyperbolic vector bundle automorphism.
The next lemma relates the hyperbolicity for $\Phi_H^t$ with the hyperbolicity for $\varphi_H^t$. It is an immediate consequence of a result by Doering~\cite{D} for the linear Poincar\'e flow extended to our Hamiltonian setting and the transversal linear Poincar\'e flow.
\begin{lemma}\label{hyperbolic}
Let $\Lambda$ be a $\varphi_{H}^{t}$-invariant and compact set. Then
$\Lambda$ is hyperbolic for $\varphi_{H}^{t}$ iff $\Lambda$ is hyperbolic for $\Phi_H^t$.
\end{lemma}
We end this section with a well-known result about the measure of hyperbolic sets for $C^2$ (or, more generally, $C^{1+}$) dynamical systems, proved by Bowen~\cite{Bowen}, Bochi-Viana~\cite{BV3} and Bessa~\cite{Be} in several contexts.
Here, following~\cite{Be}, it is stated for Hamiltonian functions, which requires a higher degree of differentiability.
\begin{lemma}\label{Bowen}
Let $H\in C^3(M,{\mathbb{R}})$ and let ${\mathcal E}$ be a regular energy surface.
If $\Lambda\subset{\mathcal E}$ is hyperbolic, then $\mu_{\mathcal{E}}(\Lambda)=0$ or $\Lambda={\mathcal E}$ (i.e. Anosov).
\end{lemma}
\end{subsection}
\subsection{Dominated splitting}
\label{section:hyperb2}
We now study a weaker form of hyperbolicity.
\begin{definition}
Let $\Lambda\subset{M}$ be a $\varphi^{t}_{H}$-invariant set and $m\in{\mathbb{N}}$. A continuous $\Phi^{t}_{H}$-invariant splitting of the bundle $\mathcal{N}_\Lambda=\mathcal{N}^{-}_\Lambda\oplus\mathcal{N}^{+}_\Lambda$ is an \emph{$m$-dominated splitting} for the transversal linear Poincar\'{e} flow if
\begin{equation}\label{dd}
\frac{\|\Phi^{m}_{H}(x)|\mathcal{N}^{-}_{x}\|}{\|\Phi^{m}_{H}(x)|\mathcal{N}^{+}_{x}\|}\leq{\frac{1}{2}},
\qquad
x\in\Lambda.
\end{equation}
We shall call $\mathcal{N}_\Lambda=\mathcal{N}^{-}_\Lambda\oplus\mathcal{N}^{+}_\Lambda$ a \emph{dominated splitting} if it is $m$-dominated for some $m\in{\mathbb{N}}$.
\end{definition}
If $\Lambda$ has a dominated splitting, then we may extend the splitting to its closure, except at critical points. Moreover, the angle between $\mathcal{N}^{-}$ and $\mathcal{N}^{+}$ is bounded away from zero on $\Lambda$.
Due to our low dimensional assumption, the decomposition is unique. For more details about dominated splitting see~\cite{BDV}.
The above definition of dominated splitting is equivalent to the existence of $C>0$ and $0<\theta<1$ so that
\begin{equation}\label{dd2}
\frac{\|\Phi^{t}_{H}(x)|\mathcal{N}^{-}_{x}\|}{\|\Phi^{t}_{H}(x)|\mathcal{N}^{+}_{x}\|}\leq C\theta^t,
\qquad
x\in\Lambda,
\quad
t\geq0.
\end{equation}
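One direction of this equivalence follows by iterating \eqref{dd} (a standard estimate, sketched here for completeness). Write $t=km+r$ with $0\leq r<m$. Since the bundles $\mathcal{N}^{\pm}$ are one-dimensional, the norms multiply along the orbit, so
$$
\frac{\|\Phi^{t}_{H}(x)|\mathcal{N}^{-}_{x}\|}{\|\Phi^{t}_{H}(x)|\mathcal{N}^{+}_{x}\|}\leq C_0\left(\frac{1}{2}\right)^{k}\leq 2C_0\left(2^{-1/m}\right)^{t},
$$
where $C_0<\infty$ bounds the corresponding ratio over the intermediate times $0\leq r<m$. Hence \eqref{dd2} holds with $C=2C_0$ and $\theta=2^{-1/m}$.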
The proof of the next lemma hints at the fact that the $4$-dimensional setting is crucial for obtaining hyperbolicity from the dominated splitting structure.
\begin{lemma}\label{hyperbolic2}
Let $H\in C^2(M,{\mathbb{R}})$ and let ${\mathcal E}$ be a regular energy surface.
If $\Lambda\subset {\mathcal E}$ has a dominated splitting for $\Phi_H^t$, then $\overline\Lambda$ is hyperbolic.
\end{lemma}
\begin{proof}
Since $\mathcal{E}$ is compact and regular, it stays at a fixed distance from the critical points; hence there is $K>1$ such that
$$
\frac1K \leq \|X_H(x)\| \leq K,
\qquad
x\in {\mathcal E}.
$$
On the other hand, because $X_H$ is volume-preserving on the $3$-dimensional submanifold ${\mathcal E}$, we get
\begin{equation}\label{eq cons X}
\sin(\gamma_0)\,\|X_H(x)\| = \sin (\gamma_t)\,\|X_H\circ\varphi_H^t(x)\|\,
\|\Phi_H^t(x)|_{{\mathcal N}_x^+}\|\,\|\Phi_H^t(x)|_{{\mathcal N}_x^-}\|.
\end{equation}
Here $\gamma_t$ is the angle between the subspaces ${\mathcal N}^-$ and ${\mathcal N}^+$ at $\varphi_H^t(x)$, which is bounded from below by some $\beta>0$ for any $x\in\overline\Lambda$.
We can now rewrite \eqref{eq cons X} as
\begin{equation*}
\begin{split}
\|\Phi_H^t(x)|_{{\mathcal N}_x^-}\|^2
& =
\frac{\sin(\gamma_0)}{\sin(\gamma_t)}
\frac{\|X_H(x)\|}{\|X_H\circ\varphi_H^t(x)\|}
\frac{\|\Phi_H^t(x)|_{{\mathcal N}_x^-}\|}{\|\Phi_H^t(x)|_{{\mathcal N}_x^+}\|} \\
&\leq
K^2 \frac{\sin(\gamma_0)}{\sin(\beta)} C\theta^t,
\end{split}
\end{equation*}
where we also have used \eqref{dd2}. Thus we have uniform contraction on ${\mathcal N}_x^-$.
The above procedure can be adapted for ${\mathcal N}_x^+$ to find uniform expansion, hence $\overline\Lambda$ is hyperbolic for $\Phi_H^t$. Lemma~\ref{hyperbolic} concludes the proof.
\end{proof}
Combining Lemmas \ref{Bowen} and \ref{hyperbolic2} we get the following.
\begin{proposition}\label{Bowen2}
Let $H\in C^3(M,{\mathbb{R}})$ and let ${\mathcal E}$ be a regular energy surface.
If $\Lambda\subset{\mathcal E}$ has a dominated splitting for $\Phi_H^t$, then $\mu_{{\mathcal E}}(\Lambda)=0$ or ${\mathcal E}$ is Anosov.
\end{proposition}
In particular, there is a $C^2$-dense set of $C^2$-Hamiltonians for which the above holds.
\begin{remark}
It is an open problem to decide whether for every $H\in C^3(M,{\mathbb{R}})$ the following holds: an invariant set $\Lambda$ containing critical points of $H$ and admitting a dominated splitting can only be of zero measure or Anosov.
\end{remark}
\end{section}
\begin{section}{Proof of the main theorems}
\label{section:Proof of the main theorems}
\subsection{Integrated Lyapunov exponent}
Let $H\in C^{2}(M,\mathbb{R})$. We take any measurable $\varphi^{t}_{H}$-invariant subset $\Gamma$ of $M$ and we define the integrated upper Lyapunov exponent over $\Gamma$ by
\begin{equation}\label{def LE}
\operatorname{LE}(H,\Gamma)=\int_{\Gamma}\lambda^{+}(H,x)\,d\mu(x).
\end{equation}
The sequence
$$
a_n(H)=\int_\Gamma\log\|\Phi^{n}_{H}(x)\|\,d\mu(x)
$$
is subadditive ($a_{n+m}\leq a_n+a_m$), hence $\lim\frac{a_n(H)}n=\inf\frac{a_n(H)}n$.
That is,
\begin{equation}\label{infimum}
\operatorname{LE}(H,\Gamma)=\inf_{n\geq{1}}\frac{1}{n}\int_{\Gamma}\log\|\Phi^{n}_{H}(x)\|\,d\mu(x).
\end{equation}
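The standard Fekete argument behind \eqref{infimum} goes as follows: given $m\geq1$, write $n=km+r$ with $0\leq r<m$; subadditivity then gives
$$
\frac{a_n(H)}{n}\leq\frac{k\,a_m(H)+a_r(H)}{n}\longrightarrow\frac{a_m(H)}{m}
\quad(n\to\infty),
$$
so that $\limsup_{n} a_n(H)/n\leq\inf_{m} a_m(H)/m\leq\liminf_{n} a_n(H)/n$.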
Since $H\mapsto\frac{1}{n}\int_{\Gamma}\log\|\Phi^{n}_{H}(x)\|d\mu(x)$ is continuous for each $n$, we conclude that $\operatorname{LE}(\cdot,\Gamma)$ is upper semicontinuous among $C^2$ Hamiltonians having a common invariant set $\Gamma$.
\subsection{Decay of Lyapunov exponent}
For a given Hamiltonian $H\in C^2(M,{\mathbb{R}})$ and $m\in{\mathbb{N}}$, we define the open set
$$
\Gamma_{m}(H)=M\setminus D_{m}(H),
$$
where $D_{m}(H)$ is the invariant set admitting an $m$-dominated splitting for $\Phi_H^t$.
This means that $\Gamma_{m}(H)$ is the set of points without an $m$-dominated splitting.
Furthermore, there exists $\tilde{m}\in\mathbb{N}$ such that for all $m'\geq\tilde{m}$ we have $\Gamma_{m'}(H)\subset\Gamma_m(H)$.
On the other hand, if $H'=H$ on $D_m(H)$, then $\Gamma_m(H')\subset\Gamma_m(H)$. The equivalent relations for $D_m(H)$ are immediate.
The next proposition is fundamental because it allows us to make the integrated Lyapunov exponent decay over a full measure subset of $\Gamma_{m}(H)$.
\begin{proposition}\label{main}
Let $H\in C^{s+1}(M,\mathbb{R})$ with $s\geq2$ or $s=\infty$, and $\epsilon,\delta>0$. Then there exists $m\in{\mathbb{N}}$ and $\widetilde{H}\in C^s(M,{\mathbb{R}})$, $\epsilon$-$C^{2}$-close to $H$, such that $\widetilde H=H$ on $D_m(H)$ and
\begin{equation}\label{dec LE}
\operatorname{LE}(\widetilde H,\Gamma_{m}(H))<\delta.
\end{equation}
\end{proposition}
We assume that $\operatorname{LE}(H,\Gamma_m(H))>0$,
otherwise the claim holds trivially.
We postpone the proof of this proposition to section~\ref{end} and first complete the proofs of our main results.
\subsection{Proof of Theorem~\ref{teorema22}}
Here we look at the product set
$$
\mathcal M=M\times C^2(M,{\mathbb{R}})
$$
endowed with the standard product topology.
Given a point $p$ on the manifold $M$, we denote by ${\mathcal E}_p(H)$ the energy surface in $H^{-1}(H(p))$ passing through $p$.
The subset
$$
A=\{(p,H)\in \mathcal M \colon {\mathcal E}_p(H)\text{ is an Anosov regular energy surface}\}
$$
is open by structural stability of Anosov systems.
Moreover, for each $(p,H)\in A$ there is a tubular neighbourhood of ${\mathcal E}_p(H)$ in $M$, consisting of regular energy surfaces supporting Anosov flows.
On the complement of the closure of $A$, denoted by
$$
B=\mathcal M \setminus \overline A,
$$
there is a continuous positive function
$$
\eta\colon B\to {\mathbb{R}}^+
$$
such that, for $(p,H)\in B$, ${\mathcal V}_{p,H}$ is the connected component of
$$
\{x\in M\colon |H(x)-H(p)|<\eta(p,H)\}
$$
containing $p$ and made entirely of non-Anosov energy surfaces.
Now, for each $k\in{\mathbb{N}}$ write
$$
A_k=\left\{(p,H)\in B \colon
\operatorname{LE}(H,{\mathcal V}_{p,H}) <\frac1k\right\}.
$$
This is an open set because the function in its definition is upper semicontinuous.
\begin{lemma}
$A_k$ is dense in $B$.
\end{lemma}
\begin{proof}
Let $(p,H')\in B$. We want to find an arbitrarily close pair $(p,H)$ in $A_k$.
Notice that we will not need to approximate on the first component, the point on the manifold, but only on the Hamiltonian.
Denote the set of $C^3$ Morse functions on $M$ by $K$. Since Morse functions are $C^2$-dense and have a finite number of critical points, it is sufficient to prove the claim by restricting to $(M\times K)\cap B$.
Moreover, small perturbations of a Hamiltonian in $K$ will still have regular energy surfaces through $p$. Therefore, there is a subset $D\subset M\times K$, dense in $B$, such that ${\mathcal E}_p(H)$ is regular and away from Anosov for $(p,H)\in D$. This means that in fact we only need to show the claim for $D$.
Let $(p,\widehat H)\in D$, and $\varepsilon>0$ such that $(p, H) \in B$ for any $H$ that is $\varepsilon$-$C^2$-close to $\widehat H$.
Proposition~\ref{main} guarantees that for all $\delta>0$ we can find $H\in C^2(M,{\mathbb{R}})$ which is $\varepsilon$-$C^2$-close to $\widehat H$ and satisfies
$H=\widehat H$ on $D_m(\widehat H)$ (hence $\Gamma_m(H)\subset\Gamma_m(\widehat H)$) and
$$
\operatorname{LE}(H,\Gamma_m(\widehat H))<\delta.
$$
Notice that $\mu(\Gamma_m(H)\cap{\mathcal V}_{p,H})=\mu({\mathcal V}_{p,H})$ for all $m\in{\mathbb{N}}$.
Otherwise, if there was an energy surface ${\mathcal E}\subset{\mathcal V}_{p,H}$ and $m\in{\mathbb{N}}$ such that
$\mu_{{\mathcal E}}(D_m(H)\cap{\mathcal E})>0$, by Proposition~\ref{Bowen2} it would be Anosov, thus contradicting that $(p,H)\in B$.
Therefore, since the upper Lyapunov exponent is non-negative,
\begin{equation}
\begin{split}
\operatorname{LE}(H,{\mathcal V}_{p,H})
&=
\operatorname{LE}(H,\Gamma_m(H)\cap{\mathcal V}_{p,H}) \\
& \leq
\operatorname{LE}(H,\Gamma_m(\widehat H))<\delta.
\end{split}
\end{equation}
The choice $\delta=1/k$ yields $(p,H)\in A_k$.
\end{proof}
From the above, $A\cup A_k$ is open and dense.
Finally,
\begin{equation*}
\begin{split}
\mathfrak{A}
&=
\bigcap_{k\in{\mathbb{N}}}(A\cup A_k) =
A\cup \bigcap_{k\in{\mathbb{N}}}A_k \\
&=
A\cup \left\{(p,H)\in B \colon \int_{{\mathcal V}_{p,H}}\lambda^+(H,x)\,d\mu(x)=0\right\}
\end{split}
\end{equation*}
is residual.
By Proposition A.7 of~\cite{BF}, we can thus write
$$
\mathfrak{A}=\bigcup_{H\in\mathfrak{R}}\mathfrak{M}_H\times \{H\},
$$
where $\mathfrak{R}$ is $C^2$-residual in $C^2(M,{\mathbb{R}})$ and, for each $H\in\mathfrak{R}$, $\mathfrak{M}_H$ is a residual subset of $M$, having the following property:
if $H\in\mathfrak{R}$ and $p\in\mathfrak{M}_H$, then ${\mathcal E}_p(H)$ is Anosov or
$$
\int\!\!\int\lambda^+\,d\mu_{{\mathcal E}}\,dH=0.
$$
The latter implies that $dH$-a.e. the Lyapunov exponents on each energy surface ${\mathcal E}$ in ${\mathcal V}$ are $\mu_{{\mathcal E}}$-a.e. equal to zero.
Recall that we can split the measure $\mu$ into $\mu_{\mathcal E}$ on the energy surfaces and $dH$ corresponding to the $1$-form transversal to ${\mathcal E}$.
Therefore, for a $C^2$-generic $H$, the above dichotomy holds in a neighbourhood of every generic point in $M$, hence everywhere in the manifold. This completes the proof of Theorem~\ref{teorema22}.
\subsection{Proof of Theorem~\ref{teorema2}}\label{section: proof of thm 2}
It is enough to show that we can arbitrarily $C^2$-approximate any $H\in C^\infty(M,{\mathbb{R}})$ by $H'\in C^2(M,{\mathbb{R}})$ satisfying
$$
\operatorname{LE}(H',Z)=0
$$
for some set $Z$ (to be determined) without domination, and whose complement $\pmod 0$ is dominated.
We use an inductive scheme built on~\eqref{dec LE} and the fact that $\operatorname{LE}(\cdot,\Gamma)$ is an upper semicontinuous function among Hamiltonians having a common invariant set $\Gamma$, to define a convenient sequence $H_n\in C^\infty(M,{\mathbb{R}})$ with $C^2$-limit $H'$.
Choose a sequence $\epsilon_n\leq\epsilon_0 2^{-n}$ (to be further specified later) for some $\epsilon_0>0$.
By Proposition~\ref{main} we construct the sequence of Hamiltonians $H_n$ in the following way:
\begin{enumerate}
\item
$H_0=H$,
\item
$H_n$ and $H_{n-1}$ are $\epsilon_n$-$C^2$-close,
\item
$H_n=H_{n-1}$ on $D_{m_n}(H_{n-1})$,
\item
$\operatorname{LE}(H_n,\Gamma_{m_n}(H_{n-1}))\leq 2^{-n}$.
\end{enumerate}
That is, each term $H_n$ of the sequence is the perturbation of the previous one $H_{n-1}$ as given by Proposition~\ref{main}.
Then, the $C^2$-limit $H'$ exists and is $\epsilon_n$-$C^2$-close to any $H_n$.
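Indeed, a routine telescoping estimate justifies this: for $n'>n$,
$$
\|H_{n'}-H_{n}\|_{C^2}\leq\sum_{k=n+1}^{n'}\epsilon_k\leq\sum_{k>n}\epsilon_0\,2^{-k}=\epsilon_0\,2^{-n},
$$
where $\|\cdot\|_{C^2}$ denotes the $C^2$ distance; hence $(H_n)_n$ is a Cauchy sequence in the $C^2$ topology and its limit $H'$ is $\epsilon_0 2^{-n}$-$C^2$-close to each $H_n$.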
For each $n$ and an invariant set $\Gamma$ for $H_n$, because $\operatorname{LE}(\cdot,\Gamma)$ is upper semicontinuous, for any $\theta>0$ we can find $\eta_n>0$ such that
$$
\operatorname{LE}(H_*,\Gamma) \leq (1+\theta)\,\operatorname{LE}(H_n,\Gamma)
$$
as long as $H_n$ and $H_*$ are $\eta_n$-$C^2$-close and have the common invariant set $\Gamma$.
Impose now additionally that $\epsilon_n<\eta_n$. So, for any $n$,
\begin{equation*}
\begin{split}
\operatorname{LE}(H',\cap_i\Gamma_{m_i}(H_{i-1}))
&\leq
\operatorname{LE}(H',\Gamma_{m_n}(H_{n-1})) \\
&\leq
(1+\theta) \operatorname{LE}(H_n,\Gamma_{m_n}(H_{n-1})) \\
&\leq
(1+\theta) 2^{-n}.
\end{split}
\end{equation*}
Therefore, $\operatorname{LE}(H',\cap_{i}\Gamma_{m_i}(H_{i-1}))=0$ and the Lyapunov exponents vanish on
$$
Z=\bigcap\limits_{i\in{\mathbb{N}}}\Gamma_{m_i}(H_{i-1})\pmod 0.
$$
Consider an increasing subsequence $m_{n_k}$.
The complementary set of $\cap_{i}\Gamma_{m_{n_i}}(H_{n_i-1})$ is
$$
D=\bigcup\limits_{i\in{\mathbb{N}}}D_{i},
\quad\text{where}\quad
D_{i}=D_{m_{n_i}}(H_{n_i-1}).
$$
By the inductive scheme above, $D_i\subset D_{i+1}$ and $H'=H_{n_i}$ on $D_i$.
So, $H'$ has an $m_{n_i}$-dominated splitting on $D_{i}$.
Finally, we would like to explain why, unfortunately, the strategy in \cite{BV2} to obtain \emph{residual} instead of \emph{dense} in the hypothesis of Theorem \ref{teorema2} does not apply in our case.
We start with a $C^2$ Hamiltonian which is a continuity point of the upper semicontinuous function $H\mapsto \operatorname{LE}(H,M)$ (it is well known that the set of continuity points is residual) and define the \emph{jump} (see \cite{BV2}, p. 1467) by $\operatorname{LE}(H,\Gamma_{\infty}(H))$, where $\Gamma_{\infty}(H)=\cap_{m}\Gamma_{m}(H)$.
A continuity point means a zero jump, so that $\lambda^{+}(H,x)=0$ for a.e. $x\in \Gamma_{\infty}(H)$ or else $\Gamma_{\infty}(H)$ has zero measure.
Now, in order to estimate a lower bound for the jump, we will need to perturb the original Hamiltonian $H$ as done in section \ref{perturbations}.
But Theorem \ref{robinson} becomes useless if
$H$ is $C^2$, because the conjugacy symplectomorphism will only be $C^1$.
Finally, we should note that $C^3(M,\mathbb{R})$ equipped with
the $C^2$-topology is not a Baire space, thus residual sets can be
meaningless.
\end{section}
\begin{section}{Perturbing the Hamiltonian}\label{perturbations}
\begin{subsection}{A symplectic straightening-out lemma}
Here we present an improved version of a lemma by Robinson~\cite{R2} that provides us with symplectic flowbox coordinates useful to perform local perturbations to our original Hamiltonian.
Consider the canonical symplectic form on ${\mathbb{R}}^{2d}$ given by $\omega_0$ as in \eqref{canonical symplectic form}.
The Hamiltonian vector field of any smooth $H\colon{\mathbb{R}}^{2d}\to{\mathbb{R}}$ is then
$$
X_H=\left[\begin{matrix}0&I\\-I&0\end{matrix}\right]\nabla H,
$$
where $I$ is the $d\times d$ identity matrix.
Let the Hamiltonian function $H_0\colon{\mathbb{R}}^{2d}\to{\mathbb{R}}$ be given by $y\mapsto y_{d+1}$, so that
$$
X_{H_0}=\frac{\partial}{\partial y_1}.
$$
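As a quick numerical sanity check of this formula (illustrative only, not part of the argument; the dimension $d=2$ and the gradient routine are our choices), the following Python sketch assembles the block matrix $\left[\begin{smallmatrix}0&I\\-I&0\end{smallmatrix}\right]$ and verifies that for $H_0(y)=y_{d+1}$ the Hamiltonian vector field is indeed $\partial/\partial y_1$.

```python
import numpy as np

d = 2  # half-dimension; phase space is R^(2d)

# Block matrix J = [[0, I], [-I, 0]] for the pairing (y_1..y_d ; y_{d+1}..y_{2d})
J = np.block([[np.zeros((d, d)), np.eye(d)],
              [-np.eye(d), np.zeros((d, d))]])

def grad_H0(y):
    # Gradient of H0(y) = y_{d+1}
    g = np.zeros(2 * d)
    g[d] = 1.0          # dH0/dy_{d+1} = 1
    return g

def X_H(y, grad):
    # Hamiltonian vector field X_H = J * grad H
    return J @ grad(y)

y = np.array([0.3, -1.2, 0.7, 2.0])
v = X_H(y, grad_H0)
# For H0(y) = y_{d+1}, the field is the unit vector in the y_1 direction.
print(v)  # -> [1. 0. 0. 0.]
```

The same helper can be reused with any gradient routine (e.g. finite differences) to integrate $X_H$ numerically.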
\begin{theorem}[Symplectic flowbox coordinates]\label{robinson}
Let $(M^{2d},\omega)$ be a $C^s$ symplectic manifold, $H\in C^s(M,{\mathbb{R}})$ a Hamiltonian with $s\geq2$ or $s=\infty$, and $x\in M$. If $x\in\mathcal{R}(H)$, then there exist a neighborhood $U\subset M$ of $x$ and a local $C^{s-1}$-symplectomorphism $g\colon (U,\omega)\to({\mathbb{R}}^{2d},\omega_0)$ such that $H=H_0\circ g$ on $U$.
\end{theorem}
\begin{proof}
Fix $e=H(x)$.
Choose any $C^s$ function $G\colon M\to{\mathbb{R}}$ such that $G(x)=0$ and
\begin{equation}\label{transversality}
\omega(X_H,X_G)(x)\not=0.
\end{equation}
This defines a transversal $\Sigma$ to $X_H$ at $x$ in the following way.
If $U\subset M$ is a small enough neighborhood of $x$ in $M$ ($U$ will always be allowed to remain as small as needed), then
$$
\Sigma=G^{-1}(0) \cap U
$$
is a $C^s$ regular connected submanifold of dimension $2d-1$.
Notice also that \eqref{transversality} holds in $U$.
Locally there is a $C^s$ regular $(2d-2)$-dimensional hypersurface of $H^{-1}(e)$ where $H$ and $G$ are both constant: $\Sigma_e=\Sigma\cap H^{-1}(e)$.
Notice that for $m\in\Sigma_e$
\begin{equation}
\begin{split}
T_m\Sigma_e
&=\{v\in T_mM\colon dH(v)(m)=dG(v)(m)=0\} \\
&=\ker(\iota_{X_H}\omega(m)) \cap \ker(\iota_{X_G}\omega(m)).
\end{split}
\end{equation}
Since $\omega(X_H,X_G)\not=0$, we have $X_G(m),X_H(m)\not\in T_m\Sigma_e$ and
$$
T_mM=T_m\Sigma_e\oplus{\mathbb{R}} X_H(m)\oplus{\mathbb{R}} X_G(m).
$$
Now, consider the closed 2-form $\omega_e=\omega|_{\Sigma_e}$ defined on $T\Sigma_e\times T\Sigma_e$.
To show that $(\Sigma_e,\omega_e)$ is a $C^s$ symplectic manifold it is enough to check that $\omega_e$ is non-degenerate.
So, suppose there is $v\in T_m\Sigma_e$ such that $\omega_e(w,v)=0$ for any $w\in T_m\Sigma_e$.
As in addition $\omega(X_H,v)(m)=\omega(X_G,v)(m)=0$ for $m\in\Sigma_e$, the non-degeneracy of $\omega$ forces $v=0$. Thus, $\omega_e$ is non-degenerate.
So, Darboux's theorem assures us the existence of a local diffeomorphism $h\colon\Sigma_e\to{\mathbb{R}}^{2d-2}$ such that
\begin{equation}\label{Darboux}
h^*\omega_0'=\omega_e
\quad\text{where}\quad
\omega_0'=
\sum_{i=2}^{d}dy_i\land dy_{d+i}.
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm,height=6cm]{flow.eps}
\caption{The symplectic flowbox.}\label{f2}
\end{center}
\end{figure}
The next step is to extend the above symplectic coordinates from $\Sigma_e$ to $U$. For this purpose we use the parametrization by the flows $\varphi_H^t$ and $\phi^t$ generated by $X_H$ and $Y:=\omega(X_H,X_G)^{-1}X_G$, respectively. The time reparametrization in the definition of $Y$ is needed to normalize the pull-back of the form, as will become clear later.
The transversality condition \eqref{transversality} is again used in solving the equation $G\circ \varphi_H^{\tau(m)}(m)=0$, $m\in U$, with respect to a function $\tau\colon U\to{\mathbb{R}}$. That is, we want to find $\tau$ and $U$ such that $\varphi_H^{\tau(m)}(m)\in\Sigma$ for each $m\in U$.
By the implicit function theorem, since $G\circ \varphi_H^{0}(m)=0$ and
$$
\frac{d}{dt}G\circ\varphi_H^t(m)\vert_{t=0}=
dG(X_H)(m)=
\omega(X_G,X_H)(m)\not=0,
$$
there exist a neighborhood $U$ and a unique $\tau\in C^{s-1}(U,{\mathbb{R}})$ as required.
Moreover, $\phi^t$ preserves the level sets of $G$ as ${\mathcal L}_YG=\omega(X_G,Y)=0$, and
$$
{\mathcal L}_YH=\frac{d}{dt}H\circ\phi^t(m)=
\omega(X_H,Y)\circ\phi^t(m)=1.
$$
Thus, $H\circ\phi^t(m)=H(m)+t$ and in particular $H\circ\phi^{e-H(m)}(m)=e$ meaning that $\phi^{e-H(m)}(m)\in H^{-1}(e)$ for $m\in U$.
So, we define the map $g\colon U\to{\mathbb{R}}^{2d}$ given by
$$
g(m)=(-\tau(m),h_1\circ\phi^{e-H(m)}\circ\varphi_H^{\tau(m)}(m),H(m),h_2\circ\phi^{e-H(m)}\circ\varphi_H^{\tau(m)}(m)),
$$
where $h=(h_1,h_2)$ as in \eqref{Darboux} and $h_i\colon\Sigma_e\to{\mathbb{R}}^{d-1}$.
In particular, $H_0\circ g=H$.
It remains to prove that $g$ is a $C^{s-1}$-symplectomorphism.
It follows that $g$ is $C^{s-1}$ and it has a $C^{s-1}$ inverse $g^{-1}\colon g(U)\to U$ given by
$$
g^{-1}(y)=\varphi_H^{y_1}\circ\phi^{y_{d+1}-e}\circ h^{-1}(\widehat y),
$$
where $\widehat y=(y_2,\dots,y_d,y_{d+2},\dots,y_{2d})$.
In addition, for $y\in g(U)$,
\begin{equation}
\begin{split}
g^{-1}_*\, X_{H_0}(y) &=
\dot\varphi_H^{y_1}\circ\phi^{y_{d+1}-e}\circ h^{-1}(\widehat y) \\
&=
X_H\circ\varphi_H^{y_1}\circ\phi^{y_{d+1}-e}\circ h^{-1}(\widehat y) \\
&=
X_H\circ g^{-1}(y).
\end{split}
\end{equation}
Hence, $g_*X_{H}= X_{H_0}$. Similarly, we can show that $g_*Y=\pde{}{y_{d+1}}$ when restricted to $\Sigma$.
Notice that on $g(\Sigma_e)$ we have $g^{-1}_*\pde{}{y_j} =h^{-1}_*\pde{}{y_j}$ for $j\not\in\{1,d+1\}$. Furthermore, taking in addition $k\not\in\{1,d+1\}$,
\begin{equation*}
\begin{split}
({g^{-1}}^*\omega)\left(\pde{}{y_j},\pde{}{y_k}\right)
&=
({h^{-1}}^*\omega)\left(\pde{}{y_j},\pde{}{y_k}\right)
=\omega_0\left(\pde{}{y_j},\pde{}{y_k}\right), \\
({g^{-1}}^*\omega)\left(\pde{}{y_1},\pde{}{y_{d+1}}\right)
&=
\omega(X_H,Y)\circ g^{-1} =1.
\end{split}
\end{equation*}
Since $Dh^{-1}\pde{}{y_j}\in T\Sigma_e$, and $H$ and $G$ are constant on $\Sigma_e$,
$$
({g^{-1}}^*\omega)\left(\pde{}{y_1},\pde{}{y_j}\right)=
\omega\left(X_H,Dh^{-1}\pde{}{y_j}\right) =
dH\left(Dh^{-1}\pde{}{y_j}\right)=0
$$
and analogously $({g^{-1}}^*\omega)\left(\pde{}{y_{d+1}},\pde{}{y_j}\right)=0$. Therefore ${g^{-1}}^*\omega$ has to be the canonical 2-form, i.e. $g^*\omega_0=\omega$ on $\Sigma_e$.
Now, we show that $g^*\omega_0=\omega$ also holds on $\Sigma$.
Using Cartan's formula for the Lie derivative ${\mathcal L}_v=\iota_v d+d\iota_v$ with respect to a vector field $v$ and the identities $df^*=f^*d$ and $f^*\iota_v\omega=\iota_{f_*^{-1}v}f^*\omega$, then
$$
{\mathcal L}_{Y}g^*\omega_0=g^*d\iota_{\partial/\partial y_{d+1}}\omega_0=g^*d^2(-y_1)=0.
$$
As we also have ${\mathcal L}_{X_G}\omega=0$ and ${\mathcal L}_{Y}\omega=0$, the forms $g^*\omega_0$ and $\omega$ are constant and coincide along the flow of $Y$ passing through $\Sigma_e$, i.e. on $\Sigma$.
In order to see that we can have $g^*\omega_0=\omega$ on all of $U$, we compute
$$
{\mathcal L}_{X_H}g^*\omega_0=d \iota_{X_{H_0}}\omega_0 = d(dH_0)=0.
$$
Recall that ${\mathcal L}_{X_H}\omega=0$. So, $g^*\omega_0=\omega$ along the flow of $X_H$ through $\Sigma$, thus on all $U$. This concludes the proof that $g$ is a symplectomorphism.
\end{proof}
\subsection{Hamiltonian local perturbation}
In the next lemma we introduce the main tool to perturb $2d=4$-dimensional Hamiltonians. We will then be able to perturb the transversal linear Poincar\'{e} flow in order to rotate its action by a small angle. As we shall see later, that is all we need to interchange $\mathcal{N}^{+}$ with $\mathcal{N}^{-}$ using the lack of dominance.
For functions on ${\mathbb{R}}^4$ consider the $C^k$-norm, with $k\geq0$ integer,
$$
\|f\|_{C^k}= \sup_{y} \max_{0\leq|\sigma|\leq k}
\left |\frac{\partial^{|\sigma|} f(y)}{\partial^{\sigma_1}y_1\dots\partial^{\sigma_4}y_4}\right|,
$$
where $\sigma=(\sigma_1,\dots,\sigma_4)\in {\mathbb{N}}_0^4$ with $|\sigma|=\sum_i \sigma_i$.
Define the ``tube''
$$
V_{a,b,c}=\{(y_1,y_2,y_3,y_4)\in{\mathbb{R}}^4\colon a<y_1<b, \sqrt{y_2^2+y_4^2} < c, |y_3|<c \}.
$$
Moreover, take the $2$-dim plane
$
\Sigma_{0}=\{(0,y_2,0,y_4)\in{\mathbb{R}}^4\}
$
and the orthogonal projection $\pi_{0}\colon{\mathbb{R}}^4\to\Sigma_{0}$.
Notice that the transversal linear Poincar\'e flow of $H_0(y)=y_3$ on $\Sigma_0$ is given by $\Phi_{H_0}^t(0,y_2,0,y_4)=\pi_0$.
In the following we fix a universal $0<\varrho<1$.
\begin{lemma}\label{basic}
Given $0<\nu<1$ and $\epsilon>0$, there exists $\alpha_0>0$ such that, for every $0<r<1$ and $0<\alpha\leq\alpha_0$, we can find $H\in C^\infty({\mathbb{R}}^4,{\mathbb{R}})$ satisfying
\begin{itemize}
\item
$H=H_0$ outside $V_{0,\varrho,r}$,
\item
$\|H-H_0\|_{C^2}<\epsilon$,
\item
$DX_H(y)=0$ for $y\in\{0,\varrho\}\times{\mathbb{R}}^3$ and
\item
$\Phi_H^1(0,y_2,0,y_4):=(\pi_0 D\varphi_H^1)(0,y_2,0,y_4)=R_{\alpha}$ on $\Sigma_0$ with $\sqrt{y_2^2+y_4^2}<r\nu$, where
$$
R_{\alpha}=
\begin{bmatrix}
0&0&0&0\\
0&\cos\alpha &0& -\sin\alpha \\
0&0&0&0\\
0&\sin\alpha &0& \cos\alpha
\end{bmatrix}.
$$
\end{itemize}
\end{lemma}
\begin{proof}
Consider the Hamiltonian flow $\varphi_{H_0}^t(y)=(y_1+t,y_2,y_3,y_4)$.
We want to $\epsilon$-$C^2$-perturb $H_0$ to get a Hamiltonian flow that rotates on the $(y_2,y_4)$-plane while the orbit is inside $V_{\xi,\xi',r\nu}$ for some fixed universal constants $0<\xi<\xi'<\varrho<1$. Outside the slightly larger tube $V_{0,\varrho,r}$ we impose no perturbation.
In order to construct a $C^\infty$ perturbation with these properties, we need to consider three bump functions.
It is possible to find $C^\infty$ maps $\ell\colon{\mathbb{R}}\to{\mathbb{R}}$ along the time direction and $\widetilde\ell\colon{\mathbb{R}}\to{\mathbb{R}}$ on the $y_3$-direction satisfying
$$
\ell(y_1)=
\begin{cases}
\ell_0, & y_1 \in [\xi,\xi'] \\
0, & y_1 \not\in (0,\varrho),
\end{cases}
\quad\quad
\widetilde\ell(y_3)=
\begin{cases}
1, & |y_3|\leq r\nu \\
0, & |y_3|\geq r,
\end{cases}
$$
$\ell_0>0$, $\int_0^1\ell=1$,
the norms $\|\ell\|_{C^0},\|\ell'\|_{C^0},\|\ell''\|_{C^0}$, $\|\widetilde\ell\|_{C^0}$ all bounded from above by a constant (recall that $\xi$, $\xi'$ and $\varrho$ are seen as universal),
$\|\widetilde\ell'\|_{C^0}\leq \frac2{(1-\nu) r}$ and
$\|\widetilde\ell''\|_{C^0}\leq \frac {4}{[(1-\nu) r]^2}$.
Similarly, get a $C^\infty$ map $\phi\colon{\mathbb{R}}^+_0\to{\mathbb{R}}$ for the plane $(y_2,y_4)$ such that
$$
\phi(\rho)=
\begin{cases}
\frac{\rho^2}2, & \rho \leq r\nu \\
0, & \rho \geq r,
\end{cases}
$$
$\|\phi\|_{C^0}\leq (r\nu)^2$, $\|\phi'\|_{C^0}\leq \frac{2r\nu^2}{1-\nu}$ and $\|\phi''\|_{C^0}\leq \left(\frac{2\nu}{1-\nu}\right)^2$.
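For concreteness, one standard way to realize such bump functions is via the $C^\infty$ germ $e^{-1/x}$; this is a textbook construction, not the authors' specific choice, since the proof only needs existence with the stated bounds. The sketch below builds $\widetilde\ell$ this way and checks its plateau and support numerically.

```python
import numpy as np

def f(x):
    # exp(-1/x) for x > 0, extended by 0: the standard C-infinity germ
    return np.where(x > 0, np.exp(-1.0 / np.maximum(x, 1e-12)), 0.0)

def step(x):
    # Smooth step: 0 for x <= 0, 1 for x >= 1, C-infinity in between
    return f(x) / (f(x) + f(1.0 - x))

def ell_tilde(y3, r, nu):
    # 1 for |y3| <= r*nu, 0 for |y3| >= r, smooth in between
    return step((r - np.abs(y3)) / (r * (1.0 - nu)))

r, nu = 0.5, 0.6
assert np.isclose(ell_tilde(0.0, r, nu), 1.0)   # on the plateau
assert np.isclose(ell_tilde(r, r, nu), 0.0)     # outside the support
```

The maps $\ell$ and $\phi$ can be assembled from the same `step` helper; the derivative bounds quoted above then follow from the chain rule applied to the rescaling by $r(1-\nu)$.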
Now, we construct the perturbed Hamiltonian
\begin{equation}\label{pert H}
H(y)=H_0(y) - \alpha \ell(y_1)\, \widetilde\ell(y_3)\, \phi(\rho),
\end{equation}
where $\rho=\sqrt{y_2^2+y_4^2}$. Clearly, it is equal to $H_0$ outside $V_{0,\varrho,r}$.
Hence, for $y\in V_{0,1,r\nu}$,
\begin{equation}\label{nabla H}
\nabla H(y)=\left(
-\alpha\ell'(y_1)\phi(\rho),
-\alpha\, y_2\ell'(y_1),
1,
-\alpha\, y_4\ell'(y_1)
\right).
\end{equation}
So, on this domain, $X_H$ generates the flow
\begin{equation}
\begin{split}
\varphi_H^t(y)= & \left(
y_1+t,
\rho \cos\left(\theta+\alpha\,\int_0^t\ell(y_1+s)ds \right), \right.\\
&
y_3+\alpha\,\phi(\rho)\,[\ell(y_1+t)-\ell(y_1)], \\
&\left.
\rho \sin\left(\theta+\alpha\,\int_0^t\ell(y_1+s)ds\right)
\right),
\end{split}
\end{equation}
where $\theta=\arctan(y_4/y_2)$.
Notice that $\frac d{dt}\rho^2=0$ so that $\rho$ is $\varphi_H^t$-invariant. That is, on the $(y_2,y_4)$-plane the motion consists of a rotation.
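On the inner plateau, where $\ell\equiv\ell_0$, $\widetilde\ell\equiv1$ and $\phi(\rho)=\rho^2/2$, Hamilton's equations for the pairing $(y_1,y_3),(y_2,y_4)$ reduce to $\dot y_1=1$, $\dot y_3=0$ and $(\dot y_2,\dot y_4)=\alpha\ell_0(-y_4,y_2)$. The following numerical sketch (our illustration, with arbitrary values of $\alpha$ and $\ell_0$) integrates these equations with RK4 and confirms that $\rho$ is invariant while the angle advances by $\alpha\ell_0 t$.

```python
import numpy as np

alpha, ell0 = 0.1, 1.0   # illustrative rotation strength and plateau value of ell

def rhs(y):
    # Plateau Hamiltonian H = y3 - alpha*ell0*(y2^2 + y4^2)/2:
    # y1' = dH/dy3, y2' = dH/dy4, y3' = -dH/dy1, y4' = -dH/dy2
    y1, y2, y3, y4 = y
    return np.array([1.0, -alpha * ell0 * y4, 0.0, alpha * ell0 * y2])

def rk4(y, T, n=1000):
    h = T / n
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

y0 = np.array([0.0, 0.2, 0.0, 0.0])        # theta = 0, rho = 0.2
yT = rk4(y0, T=1.0)
rho0, rhoT = np.hypot(y0[1], y0[3]), np.hypot(yT[1], yT[3])
theta = np.arctan2(yT[3], yT[1])
print(rhoT - rho0, theta - alpha * ell0)   # both close to 0
```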
Furthermore, by fixing $y_3=0$,
$|y_3(t)|\leq \alpha \,|\frac{\rho^2}2|\,|\ell(t)| \leq r\nu$
if $\alpha\leq 2(\|\ell\|_{C^0}r\nu)^{-1}$.
Now, if $\rho<r\nu$,
$$
\varphi_H^1(0,y_2,0,y_4)=(1,\rho\cos(\theta+\alpha),0,\rho\sin(\theta+\alpha))
$$
and $(\pi_0D\varphi_H^1)(0,y_2,0,y_4)\,v=R_\alpha \, v$, $v\in \Sigma_0$.
Finally, we need to estimate the $C^2$-norm of the perturbation.
From \eqref{pert H} and \eqref{nabla H} we get
\begin{equation}
\|H-H_0\|_{C^1} \ll \alpha r \nu (1-\nu)^{-1},
\end{equation}
where we are using the notation $A\ll B$ to mean that there is a constant $C>0$ such that $A\leq CB$.
The second order derivatives are
\begin{equation}
\begin{split}
\frac{\partial^2H}{\partial y_1^2} &=-\alpha \ell''(y_1)\widetilde\ell(y_3)\phi(\rho) \\
\frac{\partial^2H}{\partial y_1\partial y_3} &=-\alpha \ell'(y_1)\widetilde\ell'(y_3)\phi(\rho) \\
\frac{\partial^2H}{\partial y_2\partial y_4}& =-\alpha \frac{y_2y_4}{\rho^2} \ell(y_1) \widetilde\ell(y_3) \left(\phi''(\rho)-\phi'(\rho)\rho^{-1}\right) \\
\frac{\partial^2H}{\partial y_1\partial y_j} &=-\alpha y_j \rho^{-1}\ell'(y_1)\widetilde\ell(y_3)\phi'(\rho) \\
\frac{\partial^2H}{\partial y_j^2} &=-\alpha \ell(y_1)\widetilde\ell(y_3)\left[\phi''(\rho)y_j^2\rho^{-2}+\phi'(\rho)\rho^{-1}-\phi'(\rho)y_j^2\rho^{-3}\right]\\
\frac{\partial^2H}{\partial y_3\partial y_j} &=-\alpha \ell(y_1)\widetilde\ell'(y_3)\phi'(\rho)y_j\rho^{-1} \\
\frac{\partial^2H}{\partial y_3^2} &=-\alpha \ell(y_1)\widetilde\ell''(y_3)\phi(\rho)
\end{split}
\end{equation}
where $j=2,4$. So, $DX_H=0$ if $y_1\leq 0$ or $y_1\geq\varrho$, and
\begin{equation}
\begin{split}
\|D^2(H-H_0)\|_{C^0} & \ll \alpha (1-\nu)^{-2}.
\end{split}
\end{equation}
Hence, there is $\alpha_0\ll \epsilon (1-\nu)^ 2$ such that $\|H-H_0\|_{C^2}<\epsilon$ for all $0<\alpha\leq\alpha_0$.
\end{proof}
\begin{remark}
It is not possible to find $\alpha$ as above if we require $C^3$-closeness. This can easily be seen in the proof by computing the third order derivatives. E.g. $\frac{\partial^3}{\partial y_2^3}H$ contains the term $\alpha \ell(y_1)\widetilde\ell(y_3)y_2^3 \rho^{-3} \phi'''(\rho)$ that cannot be controlled by a bound of smaller order than $\alpha r^{-1}$.
\end{remark}
\end{subsection}
\begin{subsection}{Realizing Hamiltonian systems}\label{realizable}
In this section we define the central objects for the proof of Proposition~\ref{main}, the achievable or \emph{realizable} linear flows. These will be constructed by perturbations of $\Phi_H^t$.
We start with a point $x\in\mathcal{O}(H)$ with lack of hyperbolic behavior and mix the directions $\mathcal{N}_{x}^{+}$ and $\mathcal{N}_{x}^{-}$ to cause the decay of the upper Lyapunov exponent.
In fact we are interested in ``a lot'' of points (related to the Lebesgue measure on transversal sections). Therefore, we perturb the Hamiltonian to make sure that ``many'' points $y$ near $x$ have $\Phi_H^t(y)$ close to $\Phi_H^t(x)$. For this reason we must be very careful in our procedure.
Consider a Darboux atlas $\{h_{j}\colon U_{j}\to \mathbb{R}^{4}\}_{j\in\{1,...,\ell\}}$.
For each $x\in\mathcal{R}(H)$ choose $j$ such that $x\in U_{j}$,
and take the $3$-dimensional normal section $\mathfrak{N}_{x}$ to the flow.
In the sequel we abuse notation to write $\mathfrak{N}_{x}$ for $h_j(\mathfrak{N}_{x}\cap U_j)$, so that we work in ${\mathbb{R}}^{4}$ instead of $M$.
Furthermore, denote by $B(x,r)=\{(u,v,w)\in {\mathbb{R}}^3\colon\sqrt{u^2+v^2}<r,|w|<r\}$ the open ball in $\mathfrak{N}_{x}$ about $x$ with small enough radius $r$.
We estimate the distance between linear maps on tangent fibers at different base points by using the atlas and translating the objects to the origin in ${\mathbb{R}}^4$.
That is, $\|A_1^t-A_2^t\|$ for linear flows $A_i^t\colon T_{x_i}M\to T_{\varphi_H^t(x_i)}M$, is given by
$$
\|Dh_{j_{1,t}}(\varphi_H^t(x_1))A_1^t(Dh_{j_{1,0}}(x_1))^{-1}-Dh_{j_{2,t}}(\varphi_H^t(x_2))A_2^t(Dh_{j_{2,0}}(x_2))^{-1}\|,
$$
where $j_{i,t}$ is the index of the chart corresponding to $\varphi_H^t(x_i)$.
Consider the standard Poincar\'{e} map
$$
\mathcal{P}_{H}^{t}(x)\colon U \to {\mathfrak{N}_{\varphi_{H}^{t}(x)}},
$$
where $U\subset \mathfrak{N}_{x}$ is chosen sufficiently small. Given $T>0$, the self-disjoint set
$$
\mathcal{F}_{H}^{T}(x,U)=\left\{\mathcal{P}_{H}^{t}(x)\,y\in M\colon y\in{U},t\in[0,T]\right\},
$$
is called a \emph{$T$-length flowbox} at $x$ associated to the Hamiltonian $H$.
There is a natural way to define a measure $\overline{\mu}$ in the transversal sections by considering the invariant volume form $\iota_{X_H}\omega^d$.
We easily obtain an estimate on the time evolution of the measure of transversal sets:
for $\nu,t>0$ there is $r>0$ such that for any measurable $A\subset B(x,r)$ we have
\begin{equation}\label{time ev trans measure}
\left|\tra\mu(A) - \alpha(t)\, \tra\mu(\mathcal P_H^t(x)\,A)
\right| <\nu,
\end{equation}
where
$$
\alpha(t)=\frac{\|X_H(\varphi_H^t(x))\|}{\|X_H(x)\|}.
$$
\begin{definition}\label{rlf}
Fix a Hamiltonian $H\in C^{s+1}(M,{\mathbb{R}})$, $s\geq2$ or $s=\infty$, $T,\epsilon>0$, $0<\kappa<1$ and a non-periodic point $x\in M$ (or with period larger than $T$). The flow $L$ of symplectic linear maps:
$$
L^t(x)\colon {\mathcal N}_x\to{\mathcal N}_{\varphi_H^t(x)},
\quad
0\leq t\leq T,
$$
is {\em $(\epsilon,\kappa)$-realizable of length $T$ at $x$} if the following holds:
For $\gamma>0$ there is ${r}>0$ such that for any open set $U\subset{B(x,r)}\subset \mathfrak{N}_{x}$ we can find
\begin{enumerate}
\item \label{rlf 0}
$K\subset{U}$ with $\tra\mu(U\setminus K)\leq \kappa\, \tra\mu(U)$, and
\item \label{rlf 0a}
$\widetilde H\in C^s(M,{\mathbb{R}})$ $\epsilon$-$C^{2}$-close to $H$, verifying
\begin{enumerate}
\item \label{rlf 1}
$H=\widetilde H$ outside $\mathcal{F}_{H}^{T}(x,U)$,
\item \label{rlf 2}
$DX_{H}(y)=DX_{\widetilde{H}}(y)$ for $y\in U\cup\mathcal{P}_{H}^{T}(x)\,U$, and
\item \label{rlf 3}
$\|\Phi^{T}_{\widetilde{H}}(y)-L^T(x)\|<\gamma$ for all $y\in{K}$.
\end{enumerate}
\end{enumerate}
\end{definition}
Let us add a few words about this definition: \eqref{rlf 1} and \eqref{rlf 2} guarantee that the support of the perturbation is restricted to the flowbox and it $C^1$ ``glues'' to its complement; \eqref{rlf 3} says that a large percentage of points (given numerically by \eqref{rlf 0}) have the transversal linear Poincar\'{e} flow of $\widetilde H$ (as in \eqref{rlf 0a}) very close to the abstract linear action of the central point $x$ along the orbit.
Notice that the realizability is with respect to the $C^{2}$ topology.
\begin{remark}\label{vitali}
Using Vitali covering arguments we may replace any open set $U$ of Definition {\rm \ref{rlf}} by open balls. That turns out to be very useful because the basic perturbation Lemma {\rm\ref{basic}} works for balls.
\end{remark}
It is an immediate consequence of the definition that the transversal linear Poincar\'{e} flow of $H$ is itself a realizable linear flow.
In addition, the concatenation of two realizable linear flows is still a realizable linear flow as it is shown in the following lemma.
\begin{lemma}\label{trivial}
Let $H\in C^{s+1}(M,{\mathbb{R}})$, $s\geq2$ or $s=\infty$, and $x\in{M}$ non-periodic.
If $L_1$ is $(\epsilon,\kappa_1)$-realizable of length $T_1$ at $x$ and
$L_2$ is $(\epsilon,\kappa_2)$-realizable of length $T_2$ at $\varphi_{H}^{T_1}(x)$ so that $\kappa=\kappa_1+\kappa_2<1$, then the concatenated linear flow
$$
L^t(x)=
\begin{cases}
L_1^{t}(x), & 0\leq t\leq T_1 \\
L_2^{t-T_1}(\varphi_H^{T_1}(x))\, L_1^{T_1}(x), & T_1<t\leq T_1+T_2
\end{cases}
$$
is $(\epsilon,\kappa)$-realizable of length $T_1+T_2$ at $x$.
\end{lemma}
\begin{remark}\label{remark concat}
Notice that concatenation of realizable flows worsens $\kappa$.
\end{remark}
\begin{proof}
For $\gamma>0$, take $r_1, r_2, K_1, K_2, \widetilde H_1, \widetilde H_2$ to be the corresponding objects in Definition~\ref{rlf} applied to $L_1$ and $L_2$.
We want to find the corresponding ones $r,K,\widetilde H$ for $L$ satisfying the properties of realizable flows. Let $x_2=\varphi_H^{T_1}(x)$.
\begin{itemize}
\item
First, choose $r\leq r_1$ such that
$$
U_2:=\mathcal P_H^{T_1}(x)\, U\subset B(x_2,r_2)
$$
with $U=B(x,r)$.
\item
Now, we construct $\widetilde H$ as
$$
\widetilde H =
\begin{cases}
\widetilde H_1 &\text{on } \mathcal F_H^{T_1}(x,U) \\
\widetilde H_2 &\text{on } \mathcal F_H^{T_2}(x_2,U_2) \\
H & \text{otherwise.}
\end{cases}
$$
Notice that $\mathcal F_H^{T_1+T_2}(x,U) = \mathcal F_H^{T_1}(x,U) \cup \mathcal F_H^{T_2}(x_2,U_2)$.
\item
Consider $K=K_1\cap \mathcal P_H^{-T_1}(x)\,(K_2 \cap U_2)$.
Hence,
\begin{equation*}
\begin{split}
\tra\mu(U\setminus K) & \leq
\tra\mu(U\setminus K_1) +\tra\mu(U\setminus \mathcal P_H^{-T_1}(x)\,(K_2 \cap U_2)) \\
&\leq
(\kappa_1+1)\,\tra\mu(U) -\tra\mu(\mathcal P_H^{-T_1}(x)\,(K_2 \cap U_2)).
\end{split}
\end{equation*}
Now, by \eqref{time ev trans measure} applied to $A=\mathcal P_H^{-T_1}(x)\,(K_2 \cap U_2)$ we know that
\begin{equation*}
\begin{split}
\tra\mu(\mathcal P_H^{-T_1}(x)\,(K_2 \cap U_2)) & \geq
\alpha(T_1)\,\tra\mu(K_2\cap U_2) \\
& =
\alpha(T_1)\,[\tra\mu(U_2)-\tra\mu(U_2\setminus K_2)] \\
& \geq
\alpha(T_1)(1-\kappa_2)\,\tra\mu(U_2).
\end{split}
\end{equation*}
On the other hand, using \eqref{time ev trans measure} for $A=U$, $\tra\mu(U_2)\geq \alpha(T_1)^{-1}\tra\mu(U)$.
Combining all the above estimates we get
$$
\tra\mu(U\setminus K) \leq (\kappa_1+\kappa_2)\,\tra\mu(U).
$$
\item
The choice of $\widetilde H$ yields that $DX_H=DX_{\widetilde H}$ on $U$ because that is true for $\widetilde H_1$. The same on $\mathcal P_H^{T_1+T_2}(x)\,U$ related to $\widetilde H_2$.
\item
In order to check that $\widetilde H$ is $C^s$ it is enough to look at $U_2$. This holds for the same reason as in the previous item.
\item
Finally, there is $C>0$ verifying for $y\in K$ and writing $y_2=\mathcal P_H^{T_1}(x)\,y$,
\begin{equation*}
\begin{split}
\|\Phi_{\widetilde H}^{T_1+T_2}(y)-L^{T_1+T_2}(x)\|
\leq &
\|\Phi_{\widetilde H}^{T_2}(y_2)[\Phi_{\widetilde H}^{T_1}(y)-L^{T_1}(x)]\| \\
&
+\|[\Phi_{\widetilde H}^{T_2}(y_2)- L^{T_2}(x_2)]L^{T_1}(x)\| \\
< &
C\gamma.
\end{split}
\end{equation*}
\end{itemize}
\end{proof}
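The concatenation formula above is just piecewise composition of linear maps. As a toy illustration (planar rotations standing in for the transversal linear Poincar\'e flow; none of this is specific to the paper), one can check the formula numerically:

```python
import numpy as np

def concatenate(L1, T1, L2):
    """Concatenate two time-indexed linear flows as in the lemma:
    L(t) = L1(t) for t <= T1, and L2(t - T1) @ L1(T1) afterwards."""
    def L(t):
        if t <= T1:
            return L1(t)
        return L2(t - T1) @ L1(T1)
    return L

def rot(w):
    # Planar rotation flow: t -> rotation by angle w*t
    return lambda t: np.array([[np.cos(w*t), -np.sin(w*t)],
                               [np.sin(w*t),  np.cos(w*t)]])

L = concatenate(rot(0.3), T1=2.0, L2=rot(0.5))
# Rotations compose additively: total angle at t = 3 is 0.3*2 + 0.5*1 = 1.1
expected = np.array([[np.cos(1.1), -np.sin(1.1)],
                     [np.sin(1.1),  np.cos(1.1)]])
assert np.allclose(L(3.0), expected)
```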
The next lemma is the basic mechanism to perform perturbations in time length $1$, for which we use Lemma~\ref{basic} to realize the map $\Phi_{H}^{t}(x)\circ R_{\alpha}$ (the rotation $R_\alpha$ is defined in a canonical basis of $\mathcal{N}_x$ by the matrix given in Lemma~\ref{basic}).
We will not need realizable flows of length greater than $1$, since we can concatenate them (keeping in mind Remark \ref{remark concat}). Each length-$1$ piece contributes a rotation by the same angle $\alpha$, independently of $x$, as shown below.
\begin{lemma}\label{rlf1}
Let $H\in C^{s+1}(M,{\mathbb{R}})$, $s\geq2$ or $s=\infty$, $\epsilon>0$ and $0<\kappa<1$.
Then, there exists $\alpha_0=\alpha_0(H,\epsilon,\kappa)>0$ such that for any non-periodic point $x\in{M}$ (or with period larger than $1$) and $0<\alpha\leq\alpha_0$, the linear flow $\Phi_{H}^{t}(x)\circ R_{\alpha}\colon {\mathcal N}_x\to{\mathcal N}_{\varphi_H^t(x)}$ is $(\epsilon,\kappa)$-realizable of length $1$ at $x$.
\end{lemma}
\begin{proof}
Let $\gamma>0$.
We start by choosing $r>0$ sufficiently small such that:
\begin{itemize}
\item
$B(x,r)$ is inside the neighborhood given by Lemma~\ref{robinson}. Notice that by taking the transversal section $\Sigma=B(x,r)$ in the proof of the lemma, this neighborhood can be extended along its orbit to an open set $A$ containing $F_H^1(x,r)$, where $F_H^\tau(x,r)=\bigcup_{0\leq t\leq\tau}\varphi_H^t(B(x,r))$.
So, a $C^s$-symplectomorphism $g\colon A\to{\mathbb{R}}^4$ exists satisfying: $g(B(x,r))$ is an orthogonal section to $X_{H_0}$ at $g(x)=0$, $H=H_0\circ g$, and all norms of the derivatives are bounded.
Moreover, the derivatives of $g$ and $g^{-1}$ are of order $r$-close to the identity tangent map $I$ on local coordinates;
\item
$\mathcal{F}_H^1(x,B(x,r))$ is not self-intersecting;
\item
$F_H^\varrho(x,r)\subset \mathcal{F}_H^1(x,B(x,r))$ (recall that $0<\varrho<1$ is a fixed constant introduced before Lemma~\ref{basic}).
\end{itemize}
Let $U=B(x',r')\subset B(x,r)$, $\widehat\epsilon>0$ and $0<\widehat\kappa<1$. Define $g_{x'}=g-g(x')$ and $\widehat U=g_{x'}(U)$. For $r$ small we find $r_1,r_2>0$ such that $B(0,r_1)\subset \widehat U\subset B(0,r_2)$ and $r_2/r_1=[(1-\widehat\kappa)^{-1/3}+1]/2>1$.
Setting $\nu=[1+(1-\widehat\kappa)^{1/3}]/2<1$, Lemma~\ref{basic} gives us that there is $\alpha_0=\alpha_0(H,\widehat\epsilon,\widehat\kappa)>0$ such that for any $0<\alpha\leq\alpha_0$ and using the radius $r_1$ we have that $\Phi_{H_0}^t(0)\circ R_{\alpha}$ is $(\widehat\epsilon,\widehat\kappa)$-realizable of length-$1$ at the origin.
Take the corresponding objects in the definition, $\widehat K\subset\widehat U$ and $\widehat H$, such that $\widehat K =B(0,r_1\nu)$ with $\overline{\mu}(\widehat{K})= \pi (r_1\nu)^3$ and $\|\widehat H-H_0\|_{C^2}<\widehat\epsilon$. Then, $\overline{\mu}(\widehat{K}) \geq (1-\widehat\kappa)\overline{\mu}(\widehat{U})$.
Define $K=g_{x'}(\widehat K)\subset U$ and $\widetilde H=\widehat H\circ g_{x'}$. If $\widehat\epsilon$ and $\widehat\kappa$ are small enough (depending on $\epsilon$, $\kappa$ and the norms of the derivatives of $g$), we get that Definition~\ref{rlf}~(1) is satisfied and
$$
\|\widetilde H-H\|_{C^2}=
\|(\widehat H-H_0)\circ g_{x'}\|_{C^2}
\ll \widehat\epsilon <\epsilon.
$$
We use the same notation $\ll$ as in the proof of Lemma~\ref{basic}.
By construction it is simple to check that Definition~\ref{rlf}~(2a) and (2b) also hold.
As discussed before, Lemma~\ref{robinson} determines the existence of a neighborhood at each regular point of $M$ and a $C^{s}$-symplectomorphism straightening the flow.
By compactness of $M$ the derivatives of the symplectomorphism up to order $s$ are uniformly bounded on small length-$1$ flowboxes. For this reason $\alpha_{0}$ given above (depending on $\widehat\epsilon$ and $\widehat\kappa$) was chosen to be independent of $x\in M$.
It remains to check (c) in Definition~\ref{rlf}. This will require further restrictions on $r$, depending on $\gamma$.
By definition, the time-$1$ transversal linear Poincar\'e flow on $\mathcal{N}_y\subset T_yH^{-1}(H(y))$ is
$$
\Phi_{\widetilde H}^1(y) =
\Pi_{\varphi_{\widetilde H}^1(y)} \, Dg^{-1}(\widehat y) \,
D\varphi_{\widehat H}^1(g(y))\, Dg (y)
$$
for $y\in K$, and in $x$ yields
$$
\Phi_H^1(x)\circ R_\alpha =
\Pi_{\varphi_H^1(x)} \, Dg^{-1}(\widehat x) \, Dg(x) \,R_\alpha,
$$
where $\widehat x=\varphi_{H_0}^1\circ g(x)=(1,0,0,0)$ and $\widehat y=\varphi_{\widehat H}^1\circ g(y)$ are of order $r$-close.
Notice that $\|\Pi_{\varphi_{\widetilde H}^1(y)}-\Pi_{\varphi_H^1(x)}\| \ll r$ and
$$
\|Dg^{-1}(\widehat y)-
Dg^{-1}(\widehat x)\|
\ll r.
$$
Therefore,
$
\|\Phi_{\widetilde H}^1(y)-\Phi_H^1(x)\circ R_\alpha\| \ll
r+ \|\Upsilon\|,
$
where
$$
\Upsilon = \Pi_{\varphi_H^1(x)} Dg^{-1}(\widehat x) \, \left[
D\varphi_{\widehat H}^1(g(y))\, Dg (y) -
Dg(x) \,R_\alpha
\right].
$$
Moreover, $\|Dg^{-1}(\widehat x)-I\|\ll r$. So,
$$
\|\Upsilon\| \ll r+
\|\Pi_{\varphi_H^1(x)} \left(
R_\alpha\, Dg(y)-Dg(x)\,R_\alpha
\right)\|
$$
where we have also used $\|\Pi_{\varphi_H^1(x)} - \pi_0\|\ll r$ and $\pi_0 D\varphi_{\widehat H}^1(0,y_2,0,y_4)=R_{\alpha}$.
Finally, since
$$
R_\alpha\, Dg(y)-Dg(x)\,R_\alpha=R_\alpha(Dg(y)-I)+(I-Dg(x))R_\alpha,
$$
we obtain the bound
$$
\|\Phi_{\widetilde H}^1(y)-\Phi_H^1(x)\circ R_\alpha\| \ll r <\gamma
$$
for $r\ll\gamma$ small enough.
\end{proof}
\begin{remark}
A similar result holds true also for $R_{\alpha}\circ{\Phi_{H}^{t}(x)}$ using essentially the same proof.
\end{remark}
\end{subsection}
\end{section}
\begin{section}{Proof of Proposition~\ref{main}}\label{end}
We present here a sketch of how to complete the proof of Proposition~\ref{main}; see~\cite{Be} for full details.
We would like to highlight the fact that our result does not hold for a $C^2$ Hamiltonian $H$, since the perturbed Hamiltonian $\widetilde H$ has one degree of differentiability less.
The differentiability loss comes from the symplectomorphism obtained in Theorem~\ref{robinson} that rectifies the flow.
\subsection{Local}
The lemma below states that the absence of dominated splitting is sufficient to interchange the two directions of non-zero Lyapunov exponents along an orbit segment by means of a realizable flow.
\begin{lemma}\label{exchange}
Let $H\in C^{s+1}(M,{\mathbb{R}})$, $s\geq2$ or $s=\infty$, $\epsilon>0$ and
$0<\kappa<1$. There exists $m\in{\mathbb{N}}$, such that for
every $x\in{\mathcal{R}(H)\cap\mathcal{O}(H)}$ with a positive Lyapunov exponent and satisfying
$$
\frac{\|\Phi_{H}^{m}(x)|\mathcal{N}^{-}_{x}\|}{\|\Phi_{H}^{m}(x)|\mathcal{N}^{+}_{x}\|}\geq{\frac{1}{2}},
$$
there exists a $(\epsilon,\kappa)$-realizable linear flow $L$ of length $m$ at $x$ such that
$$
L^{m}(x)\,\mathcal{N}_{x}^{+}=\mathcal{N}_{\varphi_{H}^{m}(x)}^{-}.
$$
\end{lemma}
\begin{proof}
The proof is the same as for Lemma 3.15 of~\cite{Be} in which the constructions of Lemma~\ref{rlf1} are used, namely the concatenation of rotated Poincar\'e linear maps.
\end{proof}
Now we aim to decrease the upper Lyapunov exponent locally.
\begin{lemma}\label{smallnorm}
Let $H\in C^{s+1}(M,{\mathbb{R}})$, $s\geq2$ or $s=\infty$, and $\epsilon,\delta>0$, $0<\kappa<1$.
There is $T\colon\Gamma_{m}(H)\to{\mathbb{R}}$ measurable, such that for $\mu$-a.e. $x\in{\Gamma_{m}(H)}$
and $t\geq{T(x)}$, we can find a $(\epsilon,\kappa)$-realizable linear flow $L$ at $x$ with length $t$ satisfying
\begin{equation}\label{inequality}
\frac1t\log \|L^{t}(x)\|<\delta.
\end{equation}
\end{lemma}
\begin{proof}
We follow Lemma 3.18 of~\cite{Be}.
Notice that for $\mu$-a.e. $x\in\Gamma_{m}(H)$ with $\lambda=\lambda^{+}(H,x)>0$ and due to the nice recurrence properties of the function $T$ (see Lemma 3.12 of~\cite{B}) we obtain for every (very large) $t\geq T(x)$ that
$$
\frac{\|\Phi^{m}_{H}(y)|\mathcal{N}^{-}_{y}\|}{\|\Phi^{m}_{H}(y)|\mathcal{N}^{+}_{y}\|}\geq{\frac{1}{2}}
$$
for $y=\varphi_{H}^{s}(x)$ with $s\approx t/2$.
Now, by Lemma~\ref{exchange} we obtain a $(\epsilon,\kappa)$-realizable linear flow $L_2^t$ such that $L_{2}^{m}\,\mathcal{N}_{y}^{+}=\mathcal{N}_{\varphi_{H}^{m}(y)}^{-}$.
We consider also the realizable linear flows $L_{1}^t\colon\mathcal{N}_{x}\to\mathcal{N}_{y}$ and $L_{3}^t\colon\mathcal{N}_{\varphi_{H}^{m}(y)}\to\mathcal{N}_{\varphi_{H}^{t}(x)}$ given by $\Phi_H^t$ for $0\leq t\leq s$ and $t\geq m$, respectively.
Then we use Lemma~\ref{trivial} and concatenate $L_1\to L_2\to L_3$ as $L^t$, which is a $(\epsilon,\kappa)$-realizable linear flow at $x$ with length $t$.
The choice of $t\gg m$ and the exchange of the directions will cause a decay on the norm of $L^t$. Roughly that is:
\begin{itemize}
\item
in $\mathcal{N}^{+}_{x}$ the action of $L_{1}$ is approximately $e^{\lambda t/2}$,
\item
in $\mathcal{N}^{-}_{\varphi_{H}^{m}(y)}$ the action of $L_{3}$ is approximately $e^{-\lambda t/2}$ and
\item
$L_{2}$ exchanges these two rates.
\end{itemize}
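Schematically, writing $C>0$ for the constant absorbed in the exchange, along $\mathcal{N}^{+}_{x}$ the concatenation acts as
$$
\|L^{t}(x)|\mathcal{N}^{+}_{x}\|\approx e^{-\lambda t/2}\cdot C\cdot e^{\lambda t/2}=C,
$$
and symmetrically along $\mathcal{N}^{-}_{x}$, so both expansion rates are neutralized for $t$ large.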
Therefore, $\|L^{t}(x)\|<{e^{t\delta}}$.
\end{proof}
\subsection{Global}
Notice that, in Lemma~\ref{smallnorm}, we obtained $\|L^{t}(x)\|<e^{t\delta}$. However, we still need to get an upper estimate of the upper Lyapunov exponent.
Due to (\ref{infimum}) this can be done without taking limits, i.e., by finite-time computations. In other words, we will use the inequality
\begin{equation}\label{infimum2}
\int_{\Gamma_{m}(H)}\lambda^{+}(\tilde{H},x)d\mu(x)\leq \int_{\Gamma_{m}(H)}\frac{1}{t}\log\|\Phi^{t}_{\tilde{H}}(x)\|d\mu(x),
\end{equation}
which is true for all $t>0$.
Therefore, $\delta$ bounds from above the upper Lyapunov exponent of most of the points near $x$.
To prove Proposition~\ref{main} we make Lemma \ref{smallnorm} global. This is done by a recurrence argument based on the Kakutani tower techniques described in full in~\cite{Be}, Section 3.6. In broad terms the construction goes as follows:
\begin{itemize}
\item
Take a very large $m\in\mathbb{N}$ from Lemma~\ref{exchange}.
Then Lemma~\ref{smallnorm} gives us a measurable function $T\colon\Gamma_{m}(H)\to{\mathbb{R}}$ depending on $\kappa$ and $\delta$. Let $\delta^2=\kappa$.
\item
For $x_{1}\in\Gamma_{m}(H)$, the realizability of the flow $L^t(x_{1})$ guarantees that we have a $t$-length flowbox at $x_{1}$ (a tower $\mathcal{T}_{1}$) associated to the perturbed Hamiltonian $\widetilde{H}_{1}$.
If we take a point in the measurable set $K_{1}$ (cf.~(\ref{rlf 0}) of Definition~\ref{rlf}) contained in the base of the tower, then by~(\ref{rlf 3}) of Definition~\ref{rlf} and Lemma~\ref{smallnorm}, we have $\|\Phi_{\widetilde{H}_{1}}^{t}(y)\|<e^{2\delta t}$ for all $y\in K_{1}$.
\item
Now, for $x_{2},...,x_{j}\in \Gamma_{m}(H)$, where $j\in\mathbb{N}$ is large enough, we define self-disjoint towers $\mathcal{T}_{i}$, $i=1,...,j$, which (almost) cover the set $\Gamma_{m}(H)$ in the measure theoretical sense.
We take these towers such that their heights are approximately the same, say $h$.
\item
The $C^s$ Hamiltonian $\widetilde{H}$ is defined by gluing together all perturbations $\widetilde{H}_{i}$, $i=1,...,j$.
\item
Consider $\mathcal{T}=\cup_i{\mathcal{T}_{i}}$, $U=\cup_{i}U_{i}$ and $K=\cup_{i}K_{i}$. Clearly $K\subset U$.
Note that for points in $U\setminus K$ we may not have $\|\Phi_{\widetilde{H}}^{t}(\cdot)\|<e^{2\delta t}$.
\item
Denote by $\mathcal{T}^{K}$ the subtowers of $\mathcal{T}$ with base $K$ instead of $U$. By~(\ref{rlf 0}) of Definition~\ref{rlf} we obtain that $\overline{\mu}(U\setminus K)\leq\kappa\overline{\mu}(U)$, hence $\mu(\mathcal{T}\setminus\mathcal{T}^{K})<\mu(\mathcal{T})\leq\delta^2$.
\end{itemize}
We claim that it suffices to take $t=h\delta^{-1}$ in \eqref{infimum2}. By~\eqref{inequality} we only control the iterates that enter the base of $\mathcal{T}^{K}$.
Since the height of each tower is approximately $h$, the orbits leave $\mathcal{T}^{K}$ at most $\delta^{-1}$ times.
For each of those times the chance of not re-entering is less than $\delta^{2}$, so the probability of leaving $\mathcal{T}^{K}$ along the $t$ iterates is less than $\delta^{-1}\cdot\delta^{2}=\delta$.
In conclusion, most of the points in $\Gamma_{m}(H)$ satisfy the inequality \eqref{inequality} and Proposition~\ref{main} is proved.
\end{section}
\subsection*{Acknowledgments}
We would like to thank Gonzalo Contreras and the anonymous referee for useful comments. MB was supported by Funda\c c\~ao para a Ci\^encia e a Tecnologia, SFRH/BPD/20890/2004. JLD was partially supported by Funda\c c\~ao para a Ci\^encia e a Tecnologia through the Program~FEDER/POCI~2010.
\section{Operators and Observables}
In this section, we define all the operators that are used in the main text:
\begin{equation}
\begin{split}
n_{i\gamma\sigma} &= c^{\dagger}_{i \gamma \sigma} c_{i \gamma \sigma}, \\
n_{i\gamma} &= n_{i \gamma \uparrow} + n_{i \gamma \downarrow}, \\
S^{\kappa}_{i \gamma} &=\frac{1}{2} \sum_{\sigma,\sigma'} c^{\dagger}_{i \gamma \sigma} \sigma^{\kappa}_{\sigma \sigma'} c_{i \gamma \sigma'}, \\
P_{i \gamma} &= c_{i \gamma \uparrow} c_{i \gamma \downarrow},
\end{split}
\end{equation}
where $\gamma$ is the orbital index, $\sigma$ and $\sigma'$ are spin indices, and
$\sigma^{\kappa}$ are the Pauli matrices, with $\kappa\in\{x,y,z\}$ labeling the Cartesian components.
The average occupation and charge fluctuations (Fig.1 of main text) are
defined as
\begin{equation}
\begin{split}
\< n_{\gamma} \> &= \frac{1}{L} \sum_{i, \sigma} \< n_{i\gamma\sigma} \>, \\
\< \delta N_{\gamma}^2 \> &= \frac{1}{L} \sum_{i} \big( \< n_{i \gamma} \, n_{i \gamma} \> - \< n_{i \gamma} \> \< n_{i \gamma} \> \big),
\end{split}
\end{equation}
where $L$ is the number of sites in the lattice.
The static spin-spin correlation function (inset of Fig.~1) is calculated using
\begin{equation} \label{eq:Sk}
S(k) = \frac{1}{L^2} \sum_{i,j} e^{-i k (i - j)} \langle {{\mathbf{S}_i}\cdot{\mathbf{S}_{j}}} \rangle,
\end{equation}
where ${\mathbf{S}_i} = \sum_{\gamma} \mathbf{S}_{i \gamma}$.
In this Supplemental Material, we also show results for $n^{\gamma}_k$, defined as
\begin{equation} \label{eq:nk}
\begin{split}
c_{j \gamma} &= c_{j \gamma \uparrow} + c_{j \gamma \downarrow}, \\
n^{\gamma}(k) &= n^{\gamma}_k = \frac{1}{L^2} \sum_{i,j} e^{-i k (i - j)} \langle c^{\dagger}_{i \gamma} c_{j \gamma} \rangle.
\end{split}
\end{equation}
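For reference, a Fourier transform of the type of Eq.~(\ref{eq:Sk}) can be sketched numerically from a real-space correlation matrix (an illustrative Python sketch with synthetic staggered correlations; the function names and data are ours, not part of any DMRG code):

```python
import numpy as np

def spin_structure_factor(C, momenta):
    """S(k) = (1/L^2) * sum_{i,j} exp(-i k (i-j)) <S_i . S_j>,
    with C[i, j] = <S_i . S_j> a real-space correlation matrix."""
    L = C.shape[0]
    sites = np.arange(L)
    Sk = []
    for k in momenta:
        phase = np.exp(-1j * k * (sites[:, None] - sites[None, :]))
        Sk.append((phase * C).sum().real / L**2)
    return np.array(Sk)

# Synthetic example: staggered (antiferromagnetic-like) correlations
L = 16
C = np.fromfunction(lambda i, j: (-1.0) ** np.abs(i - j), (L, L))
k = np.pi * np.arange(1, L + 1) / (L + 1)   # open-boundary quasi-momenta
Sk = spin_structure_factor(C, k)
# The peak sits at the k closest to pi, as expected for staggered order
```

For perfectly staggered correlations the structure factor is sharply peaked at the allowed momentum closest to $k=\pi$.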
\subsection{Spectral functions and sum rules}
To characterize the OSMP, the dynamical response functions (shown below) are calculated
using DMRG
\begin{equation} \label{eq:Greenijw}
\begin{split}
O(i,j,\omega) &= \frac{-1}{\pi} \text{Im} \big[ \langle \psi_0 | O^{\dagger}_{i} \frac{1}{\omega - H + E_g + {\mathrm i}\eta} O_{j} | \psi_0 \rangle \big],
\end{split}
\end{equation}
where the local operator $O_i$ can represent any degree of freedom of the model.
In general, these
functions are Fourier transformed into the crystal-momentum domain to obtain the
momentum- and energy-resolved spectra relevant to experiments:
\begin{equation} \label{eq:Greenkw}
\begin{split}
O(k,\omega) &= \frac{1}{L^2} \sum_{i,j} e^{-ik(i-j)} O(i,j,\omega). \\
\end{split}
\end{equation}
Note that within DMRG, the site $j$ is fixed to the center of the
lattice ($d=L/2-1$) to reduce the edge effects and computational cost, and therefore the modified
Fourier transform becomes
\begin{equation}
\begin{split}
O(k,\omega) &= \frac{1}{L} \sum_{i} e^{-ik(i-d)} O(i,d,\omega).
\end{split}
\end{equation}
Additionally, we use open boundary conditions in the DMRG simulation and therefore
the quasi crystal-momenta are defined as
\begin{equation}
\begin{split}
k = \frac{\pi n}{L+1} \ \ \ \text{where} \ \ \ n = 1,2 ... \ L.
\end{split}
\end{equation}
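The center-site transform and the open-boundary quasi-momenta above can be combined into a short sketch (the helper name is ours, and the input array stands in for a generic real-space response $O(i,d,\omega)$):

```python
import numpy as np

def center_site_transform(O_id, L):
    """O(k, w) = (1/L) sum_i exp(-i k (i - d)) O(i, d, w), with the
    source site d fixed at the chain center.
    O_id: array of shape (L, n_w) holding the real-space response O(i, d, w)."""
    d = L // 2 - 1
    i = np.arange(L)
    k = np.pi * np.arange(1, L + 1) / (L + 1)  # open-boundary quasi-momenta
    phase = np.exp(-1j * np.outer(k, i - d))   # shape (L, L)
    return k, phase @ O_id / L

# Toy example: a delta-like real-space response localized at the center site d
L, n_w = 8, 5
O_id = np.zeros((L, n_w))
O_id[L // 2 - 1, :] = 1.0        # only the on-site term contributes
k, O_kw = center_site_transform(O_id, L)
# Then O(k, w) = 1/L for every k and w, i.e., a flat momentum distribution
```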
\section{Orbital Operators}
The dominant orbitals of an iron atom in iron-based superconductors
are the five $3d$ orbitals,
corresponding to the well-known $L=2$ orbital angular momentum.
The corresponding operators $\{L_x,L_y,L_z\}$ are written in the basis of $L_z = \{-2,-1,0,1,2\}$, forming
$5\times5$ matrices:
\[
L_x = \frac{1}{2}
\begin{bmatrix}
0 & 2 & 0 & 0 & 0 \\
2 & 0 & \sqrt{6} & 0 & 0 \\
0 & \sqrt{6} & 0 & \sqrt{6} & 0 \\
0 & 0 & \sqrt{6} & 0 & 2 \\
0 & 0 & 0 & 2 & 0 \\
\end{bmatrix}
\]
\[
L_y = \frac{-{\mathrm i}}{2}
\begin{bmatrix}
0 & 2 & 0 & 0 & 0 \\
-2 & 0 & \sqrt{6} & 0 & 0 \\
0 & -\sqrt{6} & 0 & \sqrt{6} & 0 \\
0 & 0 & -\sqrt{6} & 0 & 2 \\
0 & 0 & 0 & -2 & 0 \\
\end{bmatrix}
\]
\[
L_z =
\begin{bmatrix}
2 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 \\
0 & 0 & 0 & 0 & -2 \\
\end{bmatrix}
\]
DMRG is a real-space algorithm, and therefore we must write
these matrices in the orbital basis using the transformation
\begin{equation}
\begin{split}
|-2\> &= \frac{1}{\sqrt{2}} \Big( |x^2 - y^2\> - {\mathrm i} |xy\> \Big), \\
|-1\> &= \frac{1}{\sqrt{2}} \Big( |xz\> - {\mathrm i} |yz\> \Big), \\
|0\> &= |z^2\>, \\
|1\> &= \frac{-1}{\sqrt{2}} \Big( |xz\> + {\mathrm i} |yz\> \Big), \\
|2\> &= \frac{1}{\sqrt{2}} \Big( |x^2 - y^2\> + {\mathrm i} |xy\> \Big), \\
\end{split}
\end{equation}
where $\{ |x^2 - y^2\>, |z^2\>, |xz\>, |yz\>, |xy\> \}$ are the
five iron $3d$ orbitals. The operators $\{L_x,L_y,L_z\}$ represented in
the orbital basis are:
\[
L_x = \frac{{\mathrm i}}{2}
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & \sqrt{3} & 0 \\
0 & 0 & 0 & 0 & 1 \\
-1 & -\sqrt{3} & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 \\
\end{bmatrix}
\]
\[
L_y = \frac{{\mathrm i}}{2}
\begin{bmatrix}
0 & 0 & -1 & 0 & 0 \\
0 & 0 & \sqrt{3} & 0 & 0 \\
1 & -\sqrt{3} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & -1 & 0 \\
\end{bmatrix}
\]
\[
L_z = {\mathrm i}
\begin{bmatrix}
0 & 0 & 0 & 0 & 2 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & -1 & 0 & 0 \\
-2 & 0 & 0 & 0 & 0 \\
\end{bmatrix}.
\]
In the main text, we employ the iron $t_{2g}$ orbitals,
{\it i.e.,} we drop the contribution from $|x^2 - y^2\>, |z^2\>$
orbitals. After this approximation, we obtain the local orbital angular momentum
operators used in the main text (site index understood)
\begin{equation}
\begin{split}
L^{x} &= {\mathrm i} (\cd{xz} \c{xy}^{\phantom\dagger} - \cd{xy} \c{xz}^{\phantom\dagger}), \\
L^{y} &= {\mathrm i} (\cd{yz} \c{xy}^{\phantom\dagger} - \cd{xy} \c{yz}^{\phantom\dagger}), \\
L^{z} &= {\mathrm i} (\cd{xz} \c{yz}^{\phantom\dagger} - \cd{yz} \c{xz}^{\phantom\dagger}).
\end{split}
\end{equation}
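As a consistency check of our own (not stated in the main text), these three operators, written as $3\times3$ matrices in the basis $(|xz\rangle, |yz\rangle, |xy\rangle)$, are Hermitian and close an angular-momentum algebra with an effective minus sign, $[L^{x}, L^{y}] = -{\mathrm i}\, L^{z}$ (and cyclic permutations), the familiar $l_{\mathrm{eff}}=1$ property of the $t_{2g}$ manifold:

```python
import numpy as np

# Basis ordering: (|xz>, |yz>, |xy>)
Lx = 1j * np.array([[0, 0, 1],
                    [0, 0, 0],
                    [-1, 0, 0]])   # i(|xz><xy| - |xy><xz|)
Ly = 1j * np.array([[0, 0, 0],
                    [0, 0, 1],
                    [0, -1, 0]])   # i(|yz><xy| - |xy><yz|)
Lz = 1j * np.array([[0, 1, 0],
                    [-1, 0, 0],
                    [0, 0, 0]])    # i(|xz><yz| - |yz><xz|)

def comm(A, B):
    return A @ B - B @ A

# Hermiticity and the effective l = 1 algebra (note the minus sign)
assert all(np.allclose(M, M.conj().T) for M in (Lx, Ly, Lz))
assert np.allclose(comm(Lx, Ly), -1j * Lz)
```

One also finds $L^2 = L_x^2 + L_y^2 + L_z^2 = 2\,\mathbb{1} = l(l+1)\,\mathbb{1}$ with $l=1$, consistent with the projected $t_{2g}$ subspace.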
\section{Additional Results}
In this section, we show tests performed to ensure the quality of the
presented results. We first show $n^{\gamma}(k)$, which is in
agreement with previous studies of a similar model~\cite{SLiOSMP}.
A significant change in $n^{a/b}(k)$ indicates metallic behavior
of these orbitals. On the contrary, $n^{c}(k)$ shows little variation in
$k$, consistent with orbital $c$ being gapped.
\begin{figure}[thbp]
\begin{center}
\hspace{-0.0cm}
\begin{overpic}[trim = 0mm 0mm 0mm 0mm,
height=0.28\textwidth,width=0.38\textwidth,angle=0]{nkBlock.pdf}
\end{overpic}
\end{center}
\vspace{-0.5cm}
\caption{(color online)
$n^\gamma(k)$ for the Block OSMP ($U/W=0.8$) using a $32$ sites chain.
The largest increase in $n^\gamma(k)$ of the itinerant orbitals $a$ (black) and $b$ (red)
occurs at $k/\pi \simeq 0.25$, i.e. the Fermi momentum. Note that results for $a$ and $b$ are almost
identical, thus indistinguishable in the figure. $n^c(k)$ of the
localized orbital $c$ has only a slight variation in $k$,
suggesting a gap in the single-particle states of orbital $c$.
These results are in agreement with previous studies
using Quantum Monte Carlo~\cite{SLiOSMP}.
}
\label{fig:S1}
\end{figure}
Furthermore, all spectral quantities defined by
equations~\ref{eq:Greenijw} and \ref{eq:Greenkw} must satisfy a sum rule.
In general, it can be shown that integrating the spectral function over momentum $k$ and energy $\omega$
gives a quantity related to a unique static observable. As an example,
we use the single-particle spectral function $A^c(k,\omega)$ of orbital $c$, whose parts below (above) the
Fermi energy ($E_F$) represent the filled electron (empty hole) states.
The electron and hole components of $A^c(k,\omega)$ are defined as
\begin{equation} \label{eq:Akw}
\begin{split}
A_e^c(k,\omega) &= \frac{-1}{\pi L^2} \times \\ \sum_{i,j} \text{Im} \big[ &\langle \psi_0 | c^{\dagger}_{ic} \frac{1}{\omega - H + E_g + {\mathrm i}\eta} c_{jc} | \psi_0 \rangle \big] e^{-ik(i-j)}, \\
\\
A_h^c(k,\omega) &= \frac{-1}{\pi L^2} \times \\ \sum_{i,j} \text{Im} \big[ &\langle \psi_0 | c_{ic} \frac{1}{\omega + H - E_g + {\mathrm i}\eta} c^{\dagger}_{jc} | \psi_0 \rangle \big] e^{-ik(i-j)},
\end{split}
\end{equation}
where $c$ is the orbital index and $c_{jc} = c_{jc\uparrow} + c_{jc\downarrow}$.
To obtain the sum rule of this quantity, we first sum over the momentum $k$.
This sum simply results in $L \delta_{i,j}$, giving us the local response that is the
single-particle density of states,
\begin{figure}[h]
\begin{center}
\begin{overpic}[trim = 1.2cm 0mm 0mm 0mm,
height=0.28\textwidth,width=0.32\textwidth,angle=0]{SumRule_Check.pdf}
\end{overpic}\hspace{-0.25cm}
\end{center}
\vspace{-0.5cm}
\caption{(color online) Density of states of the Mott orbital $c$, calculated using
the electron ($A^c_e$, blue) and hole ($A^c_h$, red) components, employing $24$ sites,
$\Delta \omega = 0.05,$ and $\eta=0.1$. The vertical grey line represents the
Fermi energy $E_F \simeq 4.3$~eV. The
tails of the density of states
are fitted in order to account for the missing tail weights.
$I \simeq 1$ is the integrated value of the electron and hole parts of
the density of states, which equal the average local electron and hole
occupations, respectively.
}
\label{fig:S2}
\end{figure}
\begin{equation} \label{eq:Aw}
\begin{split}
A_e^c(\omega) &= -\frac{1}{\pi L} \sum_{i} \text{Im} \big[ \langle \psi_0 | c^{\dagger}_{ic} \frac{1}{\omega - H + E_g + {\mathrm i}\eta} c_{ic} | \psi_0 \rangle \big], \\
A_h^c(\omega) &= -\frac{1}{\pi L} \sum_{i} \text{Im} \big[ \langle \psi_0 | c_{ic} \frac{1}{\omega + H - E_g + {\mathrm i}\eta} c^{\dagger}_{ic} | \psi_0 \rangle \big],
\end{split}
\end{equation}
shown in Figure~\ref{fig:S2}. Further integration over $\omega$ of the imaginary part
(Lorentzian poles) of the electron (hole) part simply gives the total electron (hole)
density of orbital $c$:
\begin{equation} \label{eq:ne}
\begin{split}
n_e^c &= \frac{1}{L} \sum_{i} \langle \psi_0 | c^{\dagger}_{ic} c_{ic} | \psi_0 \rangle , \\
n_h^c &= \frac{1}{L} \sum_{i} \langle \psi_0 | c_{ic} c^{\dagger}_{ic} | \psi_0 \rangle .
\end{split}
\end{equation}
This is explicitly done within our calculations by integrating the density of
states (Fig.~\ref{fig:S2}). The integration of the electron (blue curve)
and hole (red curve) portions leads to approximately $1.0$, which is consistent, within the accuracy
of our results, with the local densities calculated
from the ground state. Note that a singly occupied orbital is
one of the characteristics of a Mott insulator.
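The sum-rule bookkeeping can be mimicked in a few lines: broadening a set of poles by Lorentzians of width $\eta$ and integrating over a wide $\omega$ window recovers the total spectral weight, up to the missing tails (a standalone sketch with made-up poles and weights, not the actual DMRG data):

```python
import numpy as np

def lorentzian_dos(omega, poles, weights, eta):
    """A(w) = sum_n w_n * (eta/pi) / ((w - w_n)^2 + eta^2)."""
    return sum(w * (eta / np.pi) / ((omega - p) ** 2 + eta ** 2)
               for p, w in zip(poles, weights))

poles = np.array([3.2, 4.0, 4.6, 5.1])     # made-up excitation energies
weights = np.array([0.3, 0.4, 0.2, 0.1])   # total weight = 1 (one electron)
eta = 0.1
omega = np.linspace(-20.0, 30.0, 20001)    # wide window to capture the tails
A = lorentzian_dos(omega, poles, weights, eta)
I = A.sum() * (omega[1] - omega[0])        # Riemann sum over the window
# I is close to weights.sum() = 1.0, up to the weight left in the tails
```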
We also performed finite-size scaling on the density of states at the Fermi energy $E_F \simeq 4.3$~eV.
Figure \ref{fig:S3} shows that the $L \rightarrow \infty$ extrapolated quasi-particle weight,
$A(\omega=E_F)$, of orbital $c$ is an order of magnitude smaller than the
almost degenerate orbitals $a$ and $b$. The near zero weight of orbital $c$ at the Fermi
energy is consistent with a Mott phase, providing further evidence of the presence of an OSMP
as the ground-state.
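The $L \to \infty$ extrapolation amounts to a linear fit in $1/L$; a generic sketch (with synthetic data, not the values behind Fig.~\ref{fig:S3}) could read:

```python
import numpy as np

def extrapolate(Ls, A_EF):
    """Linear fit A(E_F; L) = A_inf + b / L; return the L -> infinity value."""
    b, A_inf = np.polyfit(1.0 / np.asarray(Ls, float), A_EF, 1)
    return A_inf

Ls = np.array([16, 24, 32, 48])
A_mott = 0.003 + 1.2 / Ls          # synthetic: tends to a tiny residual weight
A_itin = 0.070 + 0.5 / Ls          # synthetic: stays finite
# extrapolate(Ls, A_mott) recovers the tiny intercept, extrapolate(Ls, A_itin)
# the finite one, mirroring the Mott versus itinerant behavior discussed above
```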
\begin{figure}[t]
\vspace{0.1cm}
\begin{center}
\begin{overpic}[trim = -0cm 0.8cm 1.2cm 0.0cm,
height=0.28\textwidth,width=0.32\textwidth,angle=0]{1OLFit.pdf}
\end{overpic}
\end{center}
\vspace{-0.14cm}
\caption{(color online) Finite-size scaling of the orbital-resolved electron part of the
density of states at the Fermi energy ($E_F$). The quasi-particle
weight of the Mott orbital approaches $0$ (more accurately, $0.003$) with increasing
system size while the weight remains finite for the itinerant
orbitals ($0.04$ and $0.07$). Note that $A_e(E_F)$ of the itinerant
orbitals $a$ and $b$ is an order of magnitude larger than that of the Mott orbital $c$, further
emphasizing that orbital $c$ is a Mott insulator.
}
\label{fig:S3}
\end{figure}
\section{Reproducing data using DMRG++}
The full open source code, sample inputs, and corresponding
computational details can be found at
\url{https://g1257.github.io/papers/86/}.
\end{document}
\section{Introduction}
We develop an application to $SAT$, positioned within the current stream of interest in the structure of Boolean satisfiability \cite{kolaitis}. We use $strings$ in a technical sense borrowed from Computability \cite{odifreddi}.
An $encoded$ $decision$ $problem$ over $\Sigma$ is a pair $(E, F)$ where $E$=words that encode instances of the problem, $F$=words to be accepted. On input $x$, a decision program $P$ for $(E, F)$ either $accepts$ $x$ (if $x$ is in $F$) or $rejects$ $x$ (if $x$ is in $E-F$) or else $discards$ $x$ (for $x$ outside $E$).
Our fundamental construct is a set of strings $Log_E (F)$ called $logogram$ of $F$ relative to $E$ that conveys structural information on $E$, $F$, and how $F$ is embedded in $E$. We mostly use the reduced version $|Log_E (F)|$, consisting of those strings in $Log_E (F)$ that do not include other strings in $Log_E (F)$. The $kernel$ $Ker(P)$ of a program $P$ that solves $(E, F)$ is the set of those strings in $|Log_E (F)|$ that are actually used by $P$ in making decisions. There are strict relationships between the string composition of the kernel $Ker(P)$ of a program solving $(E, F)$ and the complexity of $P$.
Our application to $SAT$ uses a property of internal independence of a decision problem that we call ``strong internal independence.'' Think of a computation in which the result of any computation step does not change the results that are possible for subsequent steps. Internal independence is defined in terms of a relation of $entanglement$ $\sqsupseteq^E$ between sets of strings relative to reference set $E$.
Our main results are the following. We show that $(CNF, SAT)$ exhibits the strong internal independence property: Intuitively, no ``entanglement at distance" between strings in $|Log_{CNF} (SAT)|$ is possible. Besides, we show that problem $(CNF, SAT)$ cannot have, in its reduced logogram, certain collective certificates that we call $wizards$. As a consequence, the decision programs $P$ that solve $(CNF, SAT)$ all have the same kernel $Ker(P)=|Log_{CNF} (SAT)|$.
\section{Certificates of Membership as Strings}
We first recall notions regarding the certificates of membership in NP theory. As next step, we illustrate possible use of strings to represent certificates. We conclude the section reviewing basic algebraic properties of strings.
Let $G\subseteq \Sigma^*\times\Sigma^*$ so that $G$ is a relation on words over $\Sigma$. Let $Dom(G)$ and $Cod(G)$ be first and second projection of $G$. A relation $G$ which is both polynomial-time decidable and polynomially balanced is an NP relation. $L$ is in NP if and only if there exists an NP relation $G$ such that $L=Dom(G)$. We interchange problems with languages: $(E, F) \in NP$ and $F \in NP$ amount to the same.
Let $(E, F)$ be an NP problem. Then there exists a sequence $y_1$, $y_2$,.. of words (over some appropriate alphabet) called $solutions$ or else $certificates$ $of$ $membership$ for problem $(E, F)$. Any problem instance $x\in E$ can possibly be satisfied by some of the $y_i$'s. We also have ``unsatisfiable'' instances. What ``satisfaction'' means operationally is specific to problem $(E, F)$.
Cardinality function $\alpha (n)$ of an NP problem: we may arrange notations so that all solutions that can possibly satisfy an $x$ of size $n$ lie between $y_1$ and $y_{\alpha (n)}$.
Associated with solutions $y_1$, $y_2$,.. there is a decomposition of target set $F$ into subsets $F_i$ called $regions$, where $F_i$ is the set of those $x's$ that are satisfied by $y_i$. Regions satisfy the obvious relation $F=\cup_i F_i$.
\subsection{Generalized Certificates}
In this paper we replace certificates with $generalized$ $certificates$. These are represented by $strings$, defined to be functions $N\rightarrow\Sigma$ with finite domain ($N$=positive integers). In loose words, a string $g$ included (or subsumed) in a word $x$ is what remains after canceling zero or more letters of $x$, leaving blanks in place of the canceled letters. Note that words are certain special strings, thus the solutions $y_1$, $y_2$,.. continue to be certificates. This generalization allows us to introduce certain more general certificates that we call $wizards$.
We assume that satisfiability, being a property exhibited by certain words, is accompanied by characteristic $signs$, that we think as distinctive marks, or signatures, being somehow inscribed within the word $x$ under study. A detailed discussion would yield strings as proper formalization of such notions as ``mark'' or ``signature.'' Thus, we assume that signs are strings interspersed in $x$. Since strings represent words in shorthand, we call their set a $logogram$.
\subsection{Strings}
We define $\Sigma_\infty$ to be the set of all strings over $\Sigma$. Look at $g\in \Sigma_\infty$ as a prescription that a word $x$ over $\Sigma$ may or may not satisfy. If $Dom(g)$ is an initial segment of $N$ then $g$ is an ordinary word: Thus, words are certain special strings. The length (or size) $|g|$ is the greatest number in $Dom(g)$.
$\Sigma_\infty$ is partially ordered. Given $f, g \in \Sigma_\infty$, $g$ is an $extension$ of $f$, written $f \le g$, as soon as $Dom(f ) \subseteq Dom(g)$ and $g$ takes same values as $f$ in $Dom(f )$. If $f \le g$ and $g \le f$ then $f = g$. If $f \le g$ but not $g \le f$, write $f < g$ and say $g$ is a proper extension of $f$, or else $f$ is a proper $restriction$ of $g$. The empty partial function $N\rightarrow \Sigma$ , noted $\perp$, is the $void$ string, and $Dom(\perp)=\emptyset$. Any $f$ in $\Sigma_\infty$ is an extension of $\perp$, thus $(\Sigma_\infty, \le)$ has a least element $\perp$.
Two strings $f$ and $g$ are $compatible$ as soon as $f(x)=g(x)$ for any $x$ in $Dom(f)\cap Dom(g)$. If $f, g$ are disjoint, which is to say $Dom(f)\cap Dom(g)=\emptyset$, then $f$ and $g$ are certainly compatible. The $meet$ $f \wedge g$ of any pair $f, g$ is the restriction of $f$ (or $g$) to that portion of the intersection $Dom(f)\cap Dom(g)$ where $f$ and $g$ agree. The $join$ of two compatible strings $f, g$, noted $f +g$, is the least string which is an extension of both $f$ and $g$. Thus $f, g \le f+g$ and $Dom(f +g)=Dom(f ) \cup Dom(g)$. Equipped with meet and join, $\Sigma_\infty$ is an upward directed complete meet-semilattice \cite{gierz}.
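A string in this sense is just a finite partial function $N \to \Sigma$, so a dictionary mapping positions to letters gives a faithful, if naive, model of the semilattice operations (an illustrative sketch; all names are ours):

```python
# A string is a dict {position: letter} with positions in N = {1, 2, ...}
def compatible(f, g):
    return all(g[i] == c for i, c in f.items() if i in g)

def extends(g, f):          # f <= g : g extends f
    return all(g.get(i) == c for i, c in f.items())

def meet(f, g):             # f ^ g : agreement on the common domain
    return {i: c for i, c in f.items() if g.get(i) == c}

def join(f, g):             # f + g : least common extension (f, g compatible)
    assert compatible(f, g)
    return {**f, **g}

f = {1: 'a', 3: 'b'}
g = {3: 'b', 4: 'c'}
assert compatible(f, g)
assert extends(join(f, g), f) and extends(join(f, g), g)
assert meet(f, g) == {3: 'b'}
```

The empty dictionary plays the role of the void string $\perp$, which every string extends.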
\section{Entanglement among Strings}
The cylinders defined below are as in Computability (the formalism is slightly different). The logogram is a newcomer in Computer Science. Entanglement is a key concept to deal with internal structure of computational problems.
\subsection{Cylinders} Given $H \subseteq \Sigma_\infty$ we define
\begin{equation}\label{mauri1}
Exp(H)=\{x\in \Sigma^* : \exists a \in H \hspace{0.5em} (x \ge a) \}
\end{equation}
Thus, $Exp(H)$=set of all words that include strings from $H$. Call $Exp(H)$ $absolute$ $expansion$, equivalently, $absolute$ $cylinder$ associated with $H$. Note that $Exp(H)$ is the union of the elementary cylinders $Exp(g)$ for $g \in H$.
Given any recursive set of words $E$, we write $\Sigma_\infty (E)$ for the set of all strings that happen to be included in words of $E$, thus
\begin{equation}\label{mauri2}
\Sigma_\infty (E)=\{g\in\Sigma_\infty : Exp(g)\cap E \not= \emptyset \}
\end{equation}
$\Sigma_\infty (E)$ is the set of those strings $g$ in $\Sigma_\infty$ whose associated cylinder $Exp(g)$ intersects $E$. We think of $E$ as the set of words over $\Sigma$ that encode instances of some fixed reference computational problem $\Pi$. (Whenever we talk of a reference set $E$ there is implicit reference to some fixed abstract decision problem $\Pi$ as well as to a program $P$ solving $\Pi$.) For $H\subseteq \Sigma_\infty (E)$ we write
\begin{equation}\label{mauri3}
Exp_E (H)=E^H=\{x\in E : \exists a \in H \hspace{0.5em} (x \ge a) \}=E\cap Exp(H)
\end{equation}
Thus, $E^H$ is the set of those words in $E$ that contain strings from $H$. $E^H$ is the $expansion$ of $H$ relative to base $E$. We actually regard $E^H$ as a relativized cylinder, equivalently, as being a cylinder relative to a reference set $E$.
Note that for $E=\Sigma^*$ we regain the absolute expansion of set $H$.
Given $H \subseteq \Sigma_\infty (E)$, correspondence $Exp_E : H \rightarrow E^H$ exhibits properties:
\begin{equation}\label{mauri4}
E^H \cup E^K = E^{H \cup K}, \hspace{1em} E^H \cap E^K = E^{H + K}
\end{equation}
Thus, unions and intersections of sets that are cylinders relative to reference set $E$ are cylinders in $E$. Also note that, for any $H, K \in \Sigma_\infty (E)$,
\begin{equation}\label{mauri5}
H \subseteq K \Rightarrow E^H \subseteq E^K
\end{equation}
\begin{equation}\label{mauri6}
Exp_E (Exp_E(H))=Exp_E (E^H)=E^H
\end{equation}
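On small finite fragments these identities can be checked by brute force. In the sketch below (names ours) a string is encoded as a dictionary $\{$position: letter$\}$, a word $x$ contains a string $g$ when $x \ge g$, and the set $H+K$ in Equation~(\ref{mauri4}) is read as the set of joins $f+g$ of compatible pairs $f \in H$, $g \in K$:

```python
from itertools import product

def includes(x, g):                  # word x >= string g (positions 1-based)
    return all(i <= len(x) and x[i - 1] == c for i, c in g.items())

def exp_E(E, H):                     # E^H = words of E containing a string of H
    return {x for x in E if any(includes(x, g) for g in H)}

E = {''.join(w) for w in product('ab', repeat=4)}   # toy reference set
H = [{1: 'a'}]
K = [{4: 'b'}]
HplusK = [{1: 'a', 4: 'b'}]          # joins of compatible pairs from H and K

assert exp_E(E, H) | exp_E(E, K) == exp_E(E, H + K)   # E^H u E^K = E^{H u K}
assert exp_E(E, H) & exp_E(E, K) == exp_E(E, HplusK)  # E^H n E^K = E^{H + K}
```

The second identity reflects that a word containing strings from both $H$ and $K$ contains the join of a compatible pair, since both strings are restrictions of the same word.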
\subsection{Logograms}
In this section we introduce the $logogram$ of a set of words $F$ relative to a reference set $E$. Given $F \subseteq E$, we define
\begin{equation}\label{mauri7}
Log_E (F)=\{g\in\Sigma_\infty(E) : \forall x \in E (x\ge g \Rightarrow x \in E^F )\}
\end{equation}
Since ordinary words are strings, $F$ can be regarded as a set of strings, hence $E^F$ is defined. $E^F$ is the relative cylindrification of $F$ in $E$, and this in turn is the set of all words in $E$ that are prefixed by words in $F$.
\emph{Remark.} If $F$ is a cylinder in $E$, i.e., $F=E^H$ for some $H\subseteq \Sigma_\infty(E)$, then $E^F=F$ by Equation~\ref{mauri6}.

\emph{Remark.} For $E=\Sigma^*$, Equation~\ref{mauri7} gives the $absolute$ $logogram$.
The main property of correspondence $Log_E : E \rightarrow \Sigma_\infty(E)$ is the following. Given $A, B \subseteq E$,
\begin{equation}\label{mauri8}
Log_E (A \cup B) \supseteq Log_E(A) \cup Log_E(B).
\end{equation}
Let us understand this inclusion. $Log_E (A \cup B)$ is the set of all strings that, for $x$ in $E$, are able to trigger event $x\in E^{A \cup B}=E^A \cup E^B$. A string that triggers $x\in E^A$ certainly belongs to $Log_E (A \cup B)$. Analogously, a string that triggers $x\in E^B$ certainly belongs to $Log_E (A \cup B)$. Thus, $Log_E(A) \cup Log_E(B)$ certainly is a subset of $Log_E (A \cup B)$. However, there can be strings $f$ whose inclusion in a word $x \in E$ is a sufficient condition for event $x\in E^A \cup E^B$ but not for $x\in E^A$ or $x\in E^B$. Thus, in the general case $Log_E (A \cup B)$ is not the same set as $Log_E(A) \cup Log_E(B)$.
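For tiny reference sets the logogram can be enumerated directly from Equation~(\ref{mauri7}), which also makes the strictness of the inclusion visible (a brute-force sketch over words of length $4$ on $\{a,b\}$, with strings encoded as dictionaries $\{$position: letter$\}$; since all words of $E$ have equal length, $E^F=F$ here, and all names are ours):

```python
from itertools import product, combinations

LETTERS, N = 'ab', 4
E = [''.join(w) for w in product(LETTERS, repeat=N)]

def includes(x, g):
    return all(x[i - 1] == c for i, c in g.items())

def all_strings():                    # every string with domain inside {1..N}
    for r in range(N + 1):
        for dom in combinations(range(1, N + 1), r):
            for vals in product(LETTERS, repeat=r):
                yield dict(zip(dom, vals))

def log_E(F):                         # Log_E(F); here E^F = F since all
    Fset = set(F)                     # words of E have the same length
    return [g for g in all_strings()
            if any(includes(x, g) for x in E)             # g in Sigma_inf(E)
            and all(x in Fset for x in E if includes(x, g))]

A = [x for x in E if x[:2] == 'aa']
B = [x for x in E if x[:2] == 'ab']
logA, logB, logAB = log_E(A), log_E(B), log_E(A + B)
# {1:'a'} certifies membership in A u B but in neither A nor B alone:
assert {1: 'a'} in logAB and {1: 'a'} not in logA and {1: 'a'} not in logB
```

Here the string $\{1\colon a\}$ triggers $x \in E^{A \cup B}$ without triggering $x \in E^A$ or $x \in E^B$, exhibiting the strict inclusion.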
\subsection{Entanglement}
The presence of certain strings in a word may entail that of certain others. Given $H, K \subseteq \Sigma_\infty (E)$, we write $K \sqsupseteq^E H$ if the following happens: Every word in $E$ which includes strings from $K$ also includes strings from $H$. (Think of strings in $K$ as spies, or else symptoms, for presence in an input string $x$ of strings from $H$.) If $H \sqsupseteq^E K$ and $K \sqsupseteq^E H$ then we write $H \equiv^E K$ and say that $H, K$ are $isoexpansive$ relative to $E$. Clearly, $\equiv^E$ is an equivalence relation. It is easily seen that $H \equiv^E K$ if and only if $E^H=E^K$.
For $E=\Sigma^*$ we rewrite $\sqsupseteq^E$ as $\sqsupseteq$ and $\equiv^E$ as $\equiv$. Note that $f \sqsupseteq^E g$ if and only if every word $x$ (within $E$) which includes $f$ also includes $g$.
We mention a few easy facts. (i) If $g \le f$ then $f \sqsupseteq^E g$ for any possible $E$. (ii) In the general case $f \sqsupseteq^E g$ does not imply $g \le f$. (It is well possible that this holds for specific sets $E$. For example, if $E=\Sigma^*$ then $f \sqsupseteq^E g$ if and only if $g \le f$.) (iii) If $f, g$ are incompatible, then it cannot be that $f \sqsupseteq^E g$. (iv) Given any $H, K \subseteq \Sigma_\infty (E)$,
\begin{equation}\label{mauri11}
H \subseteq K \Rightarrow H \sqsupseteq K \Rightarrow H \sqsupseteq^E K
\end{equation}
We ask: Is there any easy piece of algebra linking expansion, logogram, entanglement? To get an answer, we define a Galois connection that will provide us with a closure operation in $\Sigma_\infty (E)$, noted $H \rightarrow H^{\alpha\beta}$. We will see that $H$ and $H^{\alpha\beta}$ are isoexpansive relative to $E$. What is more, there can be distinct subsets $K, I$,.. of $H$ being isoexpansive (mod $E$) to $H^{\alpha\beta}$ while possibly exhibiting different computational behaviors.
We define our connection to be a pair $(\alpha, \beta)$ of correspondences between sets of strings and sets of words. The first correspondence $\alpha$ carries a set of strings $H \subseteq \Sigma_\infty (E)$ into a corresponding set of words $H^{\alpha} \subseteq E$. The second carries a set of words $A \subseteq E$ into a set of strings $A^{\beta} \subseteq \Sigma_\infty (E)$ according to
\begin{equation}\label{mauri12}
H \sqsubseteq^E K \Rightarrow H^{\alpha} \supseteq K^{\alpha}
\end{equation}
\begin{equation}\label{mauri13}
A \subseteq B \Rightarrow A^{\beta} \sqsupseteq^E B^{\beta}
\end{equation}
\begin{equation}\label{mauri14}
H \sqsubseteq^E H^{\alpha \beta}, \hspace{1em} A\subseteq A^{\beta \alpha}
\end{equation}
The connection is formally defined through the explicit expressions:
\begin{equation}\label{mauri15}
H^{\alpha} = E^H
\end{equation}
\begin{equation}\label{mauri16}
A^{\beta} = Log_E (A)
\end{equation}
We emphasize that $A$ is any subset of $E$. Thus, given any subset $A$ of the reference set $E$ the function $A^{\beta} =Log_E (A)$ is defined. However, not all subsets $A$ of $E$ happen to be the conjugate set $H^{\alpha}$ of some set $H \subseteq \Sigma_\infty (E)$. If that happens, we say that $A$ is closed. Note that $A$ closed implies $E^A = A$.
\begin{theorem}
$(\alpha, \beta)$ is a Galois connection.
\end{theorem}
\begin{proof}
We must derive Equations \ref{mauri12}-\ref{mauri14} from Equations \ref{mauri15}-\ref{mauri16}.
(I) Let $H, K \subseteq \Sigma_\infty (E)$ be given, and assume $H \sqsubseteq^E K$.
Let $g \in K$, and let $x$ be any word in $E$ such that $x \ge g$. Then $x \in E^K$ hence $x \in K^{\alpha}$. Since $H \sqsubseteq^E K$, there exists $f \in H$ such that $x \ge f$. Then $x$ is in $E^H$ hence $x \in H^{\alpha}$ . Equation \ref{mauri12} is proved.
(II) Next, we prove Equation \ref{mauri13}. Let $A, B \subseteq E$ and assume $A \subseteq B$.
We must prove that if a word $x \in E$ includes a string $g$ from $A^{\beta}$ then $x$ also includes a string $f$ from $B^{\beta}$.
Let $g \in Log_E (A)$ so that $g \in A^{\beta}$ by Equation \ref{mauri16}.
Thus, for all $x \in E$ we have $x \ge g \Rightarrow x \in E^A$. But $A \subseteq B$, hence $x \in E^A \Rightarrow x \in E^B$ by Equation \ref{mauri5}.
Thus, for all $x \in E$ we have $x \ge g \Rightarrow x \in E^B$. By Equation \ref{mauri16} this is to say $g \in Log_E (B) = B^{\beta}$.
We have shown that $A^{\beta} \subseteq B^{\beta}$. Equation \ref{mauri13} follows by virtue of Equation \ref{mauri11}.
(III) Next, we prove the first of Equations \ref{mauri14}.
Let $g \in H^{\alpha\beta}$ and let $x$ be any word in $E$ such that $x \ge g$.
We have $H^{\alpha} = E^H$ hence $H^{\alpha\beta} = Log_E (E^H)$. Thus, $g \in Log_E(E^H)$. By Equation \ref{mauri7} we have $x \in Exp_E(E^H)$, and then, by virtue of Equation \ref{mauri6}, $x \in E^H$.
We conclude that there exists $f \in H$ such that $x \ge f$.
(IV) Next, we prove the second of Equations \ref{mauri14}.
It follows from Equations \ref{mauri15}, \ref{mauri16} that $A^{\beta\alpha} = E^{Log_E (A)}$. On the other hand, one has $A \subseteq E^{Log_E (A)}$ for any $A \subseteq E$. Indeed, $A \subseteq E^A$ and $E^A = E^{Log_E (A)}$ from the definitions, taking into account Equation \ref{mauri6}. This proves the second of Equations \ref{mauri14}.
\end{proof}
\begin{theorem}
The following equations hold:
\begin{equation}\label{mauri17}
H \subseteq H^{\alpha\beta}, \hspace{1em} A \subseteq A^{\beta\alpha}
\end{equation}
\begin{equation}\label{mauri18}
H^{\alpha} = H^{\alpha\beta \alpha}, \hspace{1em} A^{\beta} = A^{\beta \alpha \beta}
\end{equation}
\end{theorem}
\begin{proof}
From theory of Galois connection \cite{birkhoff}.
\end{proof}
\begin{theorem}
The map $H \rightarrow H^{\alpha\beta}$ is a closure operation in $\Sigma_\infty (E)$, and $A \rightarrow A^{\beta \alpha}$ is a closure operation in $E$.
\end{theorem}
Only for a closed $A$ do we have that, for all $x$ in $E$, $x \in A$ if and only if there exists a $g \in Log_E (A)$ such that $x \ge g$. If and only if $A$ is a closed subset of the reference set $E$ do we define the reduced logogram $|Log_E (A)|$. Regarding the reduced logogram $|Log_E (A)|$ of a closed set $A \subseteq E$, we explicitly note that, for any $x$ in $E$, $x \in A$ if and only if there is $g \in |Log_E (A)|$ such that $x \ge g$.
\section{The Kernel of a Decision Program}
We begin with a few remarks on the nature of the strings that happen to occur in the reduced logogram $|Log_E (F)|$.
For any NP decision problem $(E, F)$ we assume $F$ to be a relative cylinder in $E$. (It is known that $SAT$ is a cylinder \cite{balcazar}.) The strings in $|Log_E (F)|$ are certificates of membership for $F$ relative to $E$: For words in $E$, to include one or more strings from $|Log_E (F)|$ is necessary and sufficient for membership in $F$. In principle, we cannot exclude that $|Log_E (F)|$ may contain strings that behave as collective witnesses, also called wizards. (There exist problems, e.g. $PRIMES$, where $|Log_E (F)|$ has wizards.)
In that case a program $P$ solving $(E, F)$ might do calculations that are functionally equivalent to testing input $x$ for wizards.
Let $P$ solve problem $(E, F)$. The computations that $P$ performs are functionally equivalent to sequences of tests done on input $x$. This is part of Scott's view of computations \cite{scott} \cite{larsen}. (The term ``test'' is ours: Dana Scott uses ``token'' or else ``piece of information'' according to context.)
Note that Scott's theory is consistent with our developments as soon as we identify Scott's tokens with strings. In this view what $P$ actually does is searching the input $x$ for strings in $|Log_E (F)|$. That yields a view of computations as sequences of tests $in$ $disguise$.
Let program $P$ solve problem $(E, F)$. The tests in $|Log_E (F)|$ are those that $P$ can use: They are so to speak at disposal for a program $P$. Which of these tests are actually used by $P$ is a different story. We define the $kernel$ of program $P$, noted $Ker(P)$, to be the set of the strings from $|Log_E (F)|$ that $P$ actually uses for making decisions. The strings in $Ker(P)$ are uniquely identified by the algorithm that $P$ implements. The composition of $Ker(P)$ in terms of strings can also be determined through experiments with the executable of $P$.
A concept of great relevance for the sequel is that of a $complete$ subset of the reduced logogram $|Log_E (F)|$ of decision problem $(E, F)$: We define a set $H \subseteq |Log_E (F)|$ to be complete for problem $(E, F)$ as soon as, for any $x \in E$, one has $x \in F \Leftrightarrow \exists f \in H (f\le x)$.
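On finite fragments the completeness condition is directly machine-checkable (a sketch with strings encoded as dictionaries $\{$position: letter$\}$; the toy sets and names are ours):

```python
from itertools import product

def includes(x, g):
    return all(i <= len(x) and x[i - 1] == c for i, c in g.items())

def complete(H, E, F):
    """H is complete for (E, F) iff for every x in E:
       x in F  <=>  some string f in H is included in x."""
    Fset = set(F)
    return all((x in Fset) == any(includes(x, f) for f in H) for x in E)

E = [''.join(w) for w in product('ab', repeat=3)]
F = [x for x in E if 'aa' in x]           # toy target set
H = [{1: 'a', 2: 'a'}, {2: 'a', 3: 'a'}]  # the two placements of 'aa'
assert complete(H, E, F)
assert not complete(H[:1], E, F)          # dropping a string breaks completeness
```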
The proofs of the following two theorems are not difficult and are omitted.
\begin{theorem}
A necessary condition for $P$ to correctly solve $(E, F)$ is that $Ker(P)$ be complete for $(E, F)$.
\end{theorem}
Let $H \subseteq |Log_E (F)|$ be a complete set of strings for $(E, F)$. We define $H$ to be $irreducible$ for $(E, F)$ as soon as no proper subset $K \subset H$ happens to be complete for $(E, F)$.
\begin{theorem}
Let $|Log_E (F)|$ be irreducible and programs $P, Q$ both solve $(E, F)$. Then $Ker(P)=Ker(Q)$.
\end{theorem}
\section{Independence of Decision Problems}
We first introduce a notion of pairwise independence of strings relative to a reference set $E$. As next step, we define a notion of internal independence of set $E$. Next we define notions of internal and strong internal independence of a decision problem.
\paragraph{Mutual Independence of Strings}
Let $f, g$ be any two strings in $\Sigma_\infty(E)$ where $E$ is any infinite recursive set of words over alphabet $\Sigma$. According to definitions, $f$ entangles $g$ relative to $E$ as soon as, for all $x \in E$, $x \ge f \Rightarrow x \ge g$. We agreed that $f \sqsupseteq^E g$ means that $f$ entangles $g$ relative to $E$.
Observe that $f$ fails to entangle $g$ relative to $E$ if and only if there exists $x \in E$ such that $x$ contains $f$ and does not contain $g$. If $f \not\sqsupseteq^E g$ and $g \not\sqsupseteq^E f$ then $f$ and $g$ are said to be $mutually$ $independent$ relative to $E$; $f$ and $g$ are $mutually$ $dependent$ relative to $E$ when they fail to be mutually independent relative to $E$. If $f, g$ are incompatible, then certainly $f, g$ are mutually independent relative to any $E$.
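These definitions can be made concrete on a small finite sample of the reference set. The sketch below is our own illustration: a string is represented as a map from positions to letters, and since the paper's definitions quantify over the whole infinite set $E$, a finite sample can refute entanglement but can only establish it relative to that sample.

```python
# A "string" in the paper's sense is a partial word: a map from positions
# to letters.  f <= x ("x includes f") when x agrees with f on every
# defined position of f.
def includes(x, f):
    return all(p < len(x) and x[p] == c for p, c in f.items())

# f entangles g relative to E: every word of E that includes f also
# includes g.  Here E is a finite sample only.
def entangles(f, g, E):
    return all(includes(x, g) for x in E if includes(x, f))

def mutually_independent(f, g, E):
    return not entangles(f, g, E) and not entangles(g, f, E)

E = ["aba", "abb", "bba", "bbb"]
f = {0: "a"}   # prescribes 'a' at position 0
g = {2: "a"}   # prescribes 'a' at position 2
print(mutually_independent(f, g, E))  # "abb" includes f but not g,
                                      # "bba" includes g but not f
```

The word "abb" separates $f$ from $g$ and "bba" separates $g$ from $f$, so the two strings are mutually independent relative to this sample.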
\paragraph{Independence of a Recursive Set}
Our next step is to define the internal independence of a recursive set $E$. We define $E$ to be $internally$ $independent$ as soon as, given any $f, g \in \Sigma_\infty (E)$ one has $f \sqsupseteq^E g$ if and only if $f$ is part of $g$, that is to say, if and only if $f \le g$.
\paragraph{Independence of a Decision Problem}
Now we are ready to introduce the simple internal independence of a decision problem $(E, F)$.
We call $(E, F)$ $internally$ $independent$ as soon as the strings in $|Log_E (F)|$ are mutually independent taken two by two.
\begin{theorem}
If $E$ is internally independent then any decision problem $(E, F)$ based on $E$ as reference set exhibits the simple internal independence property.
\end{theorem}
\begin{proof}
Let $E$ be any infinite recursive set exhibiting the internal independence property. Let $(E, F)$ be any decision problem based on $E$ as reference set. Let $f, g$ be any two strings in the reduced logogram $|Log_E (F)|$ of the problem.
(I) Assume $f, g$ incompatible. Since $f \in \Sigma_\infty (E)$, we have $E \cap Exp(f) \not= \emptyset$. Let $x \in E \cap Exp(f)$. Then $x$ is in $E$ and includes $f$; since $f$ and $g$ are incompatible, no word including $f$ can include $g$, hence $x$ does not include $g$. Analogously, one can find a $y \in E$ which includes $g$ and does not include $f$. Thus, $f, g$ are mutually independent in $E$.
(II) Assume $f, g$ compatible. By the minimality property of the reduced logogram $|Log_E (F)|$ it cannot be that $f \ge g$. By the internal independence of the reference set $E$ one has $f \sqsupseteq^E g$ if and only if $f \ge g$. Then, it also cannot be the case that $f \sqsupseteq^E g$. As a consequence, there exists $x \in E$ which includes $f$ and does not include $g$. Analogously, there exists $y \in E$ which includes $g$ and does not include $f$. Thus, again we have that $f, g$ are mutually independent.
We conclude that problem $(E, F)$ exhibits the simple internal independence property.
\end{proof}
\paragraph{Strong Independence of a Decision Problem}
Let us now come to the strong internal independence of a decision problem. We know that $(E, F)$ is internally independent as soon as the strings in its reduced logogram $|Log_E (F)|$ are mutually independent taken two by two.
The simple internal independence of a decision problem $(E, F)$ certainly is a form of internal independence of a decision problem, but we may indeed ask for more independence: We may ask for independence of the elements of the reduced logogram $|Log_E (F)|$ taken $m$ by $m$ for all $m$. The following notion of internal independence of a decision problem captures this extreme form of internal independence of a problem.
We shall say that the decision problem $(E, F)$ exhibits the property of $strong$ $internal$ $independence$ if, for any choice of $s$ distinct strings $f_1,.., f_s$ in $|Log_E (F)|$, the following is true: For every $i$ between $1$ and $s$ there exists a word $x_i \in E$ such that $x_i$ contains $f_i$ and fails to contain any of the remaining strings in $\{f_1,.., f_s\}$.
It is left for the reader to show that strong internal independence of a decision problem implies simple internal independence.
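On a finite fragment the defining condition can be tested by brute force. The following sketch uses hypothetical toy data and our own helper names; the actual definition ranges over the full reduced logogram and every $s$, so a finite check is illustrative only.

```python
from itertools import combinations

# x includes the partial word f when x agrees with f on f's defined positions.
def includes(x, f):
    return all(p < len(x) and x[p] == c for p, c in f.items())

# Strong internal independence, restricted to a finite fragment: for every
# choice of s distinct strings and every member f of the choice, some word
# of E must include f and none of the other chosen strings.
def strongly_independent(strings, E, s):
    for chosen in combinations(strings, s):
        for i, f in enumerate(chosen):
            rest = [g for j, g in enumerate(chosen) if j != i]
            if not any(includes(x, f) and
                       not any(includes(x, g) for g in rest)
                       for x in E):
                return False
    return True

strings = [{0: "1"}, {1: "1"}, {2: "1"}]
E = ["000", "001", "010", "011", "100", "101", "110", "111"]
print(strongly_independent(strings, E, 2))   # True
print(strongly_independent(strings, E, 3))   # True
```

Note that a pair such as $\{0\mapsto 1\}$ and $\{0\mapsto 1, 1\mapsto 1\}$ fails the test, since no word can include the larger string without including the smaller one; this mirrors why the condition is checked on antichains such as reduced logograms.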
\section{Witnesses and Wizards}
From Equation~\ref{mauri8} we have
\begin{equation}\label{mauri22}
Log_E A_1 \cup .. \cup Log_E A_m \subseteq Log_E (A_1 \cup .. \cup A_m)
\end{equation}
for closed $A_1,.., A_m \subseteq E$. Now replace $m$ with $\alpha(n)$ and $A_i$ with $F_i$ :
\begin{equation}\label{mauri23}
Log_E F_1 \cup .. \cup Log_E F_{\alpha(n)} \subseteq Log_E (F_1 \cup .. \cup F_{\alpha(n)})
\end{equation}
The strings in $Log_E F_1,.., Log_E F_{\alpha(n)}$ are witnesses. The possible strings in
\begin{equation}\label{mauri24}
Log_E (F_1 \cup .. \cup F_{\alpha(n)}) - Log_E F_1 \cup .. \cup Log_E F_{\alpha(n)}
\end{equation}
we call ``wizards'' since they are, so to speak, able to perceive that an input $x$ lies in some of the $F_i$s but cannot say which. The possible existence of this type of strings in the reduced logogram $|Log_E (F)|$ of a decision problem $(E, F)$ can be demonstrated by examples. Wizards have been found to exist in the reduced logograms of the following problems: (i) to decide whether a symmetric loop-free graph is connected; (ii) to decide whether a given positive integer is composite (note, incidentally, that $PRIMES$ is in P \cite{agrawal}).
In a situation in which the target set $F$ is decomposed according to $\cup_n F_n = F$, the witnesses are always present in the reduced logogram of set $F$ relative to $E$. On the contrary, the wizards may be missing. Whether the target set $F$ has wizards pertains to the structure of the computational problem at hand. We conclude this section by proving a theorem:
\begin{theorem}\label{completeSubset}
If $F=\cup_n F_n$ where the $F_i$s are cylinders in $E$, then
$\cup_{i=1}^{\alpha(n)} |Log_E F_i|$ is complete for $(E, F)$.
\end{theorem}
\begin{proof}
Being a union of cylinders in $E$, $F$ is a cylinder in $E$.
Being cylinders in $E$, the $F_is$ are endowed with reduced logograms. This is to say that, for $i=1,..,\alpha(n)$ and any $x \in E$, one has $x \in F_i$ if and only if there is $g \in |Log_E (F_i)|$ such that $x \ge g$. Since the target set $\cup_n F_n = F$ is itself a cylinder in $E$, Equation~\ref{mauri23} holds.
(I) Let $f \in |Log_E F_1| \cup .. \cup |Log_E F_{\alpha(n)}|$ and let $x$ be an input word of length $n$ such that $x \in E$ and $x \ge f$. We must prove $x \in F$.
Very obviously we have $f \in Log_E F_1 \cup .. \cup Log_E F_{\alpha(n)} $.
Since the sequence $F_1, F_2, ..$ has cardinality function $\alpha(n)$, for input words $x \in E$ of length $n$ the equation $\cup_n F_n = F$ can be rewritten as $F = F_1 \cup .. \cup F_{\alpha(n)}$.
Given $F_1 \cup .. \cup F_{\alpha(n)} = F$, $f \in Log_E (F)$ follows from Equation~\ref{mauri23}. Then $x \in F$ follows from $x \ge f$ (taking into account that $F$ is a cylinder in $E$).
(II) Let $x \in E$ be any input word of length $|x|=n$. Assume $x \in F$. We must prove that there exists $f \in |Log_E F_1| \cup .. \cup |Log_E F_{\alpha(n)}|$ such that $x \ge f$.
Since $x \in F$ and $F = F_1 \cup .. \cup F_{\alpha(n)}$, there exists $i$, $1 \le i \le \alpha(n)$, such that $x \in F_i$.
Since $F_i$ is a cylinder in $E$, the reduced logogram $|Log_E (F_i)|$ exists. This implies that, if $y \in E$ includes a string $g \in |Log_E (F_i)|$ then certainly $y \in F_i$. Conversely, if $y \in F_i$ then $y$ includes at least one string $g \in |Log_E (F_i)|$.
But this is just to say that $|Log_E (F_i)|$ is a complete subset of $|Log_E (F)|$ for $F_i$ relative to $E$, which is to say, for problem $(E, F_i)$.
Since $|Log_E (F_i)|$ is complete for $F_i$ relative to $E$, it follows from $x \in F_i$ that there exists a string $f \in |Log_E (F_i)|$ such that $x \ge f$.
Then we also have $f \in |Log_E F_1| \cup .. \cup |Log_E F_{\alpha(n)}|$.
We have shown that, given any input word $x \in E$ such that $|x|=n$, one has $x \in F$ if and only if there exists a string $f$ in $|Log_E F_1| \cup .. \cup |Log_E F_{\alpha(n)}|$ such that $f \le x$. Thus, $|Log_E F_1| \cup .. \cup |Log_E F_{\alpha(n)}|$ is a complete subset of $|Log_E F|$ for $F$ relative to $E$.
\end{proof}
\section{Application to Boolean Formulas}
The encoding scheme that we adopt converts $CNF$ formulas into words over $\Sigma=\{0, 1, 2\}$. In what follows $E=CNF$, $F=SAT$.
We represent clauses over $x_1,.., x_n$ by sequences of $n$ codes from $\Sigma$. Code $0$ denotes absence of the variable, code $1$ presence without minus, code $2$ presence with minus.
E.g., clause $x_1 \vee x_3 \vee -x_4$ becomes 1012.
A whole formula is encoded as the concatenation of its clause codes. We define $F^{nm}$ to be the set of satisfiable formulas with $n$ variables and $m$ clauses.
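A minimal sketch of this encoding (the helper names are ours): a clause is represented as a set of signed variable indices, and a formula as a list of clauses whose codes are concatenated.

```python
def encode_clause(clause, n):
    # clause: set of signed variable indices, e.g. {1, 3, -4} for x1 v x3 v -x4
    code = []
    for v in range(1, n + 1):
        if v in clause:
            code.append("1")      # variable present without minus
        elif -v in clause:
            code.append("2")      # variable present with minus
        else:
            code.append("0")      # variable absent
    return "".join(code)

def encode_formula(clauses, n):
    # a formula is the concatenation of its clause codes
    return "".join(encode_clause(c, n) for c in clauses)

print(encode_clause({1, 3, -4}, 4))            # 1012, as in the text
print(encode_formula([{1, -2}, {2, 3}], 3))    # 120011
```

The first call reproduces the example from the text: $x_1 \vee x_3 \vee -x_4$ over four variables becomes 1012.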
We introduce the sequence $y_1, y_2, ..$ of solutions, and the corresponding sequence $F_1, F_2, ..$ of recursive subsets of $F$. Here the solutions $y_i$ are value assignments. The cardinality function is $\alpha(n)=2^n$.
We assume that $F=SAT$ as well as the regions $F_1, F_2, ..$ are closed sets in $E=CNF$. Thus, all these sets are assumed to be relative cylinders in $E$. These assumptions correspond to known properties of $SAT$ \cite{balcazar} \cite{hemaspaandra}.
Essentially, our application consists in investigating whether $|Log_E (F)|$ might possibly contain strings not already in some of the $|Log_E (F_i)|$.
Before we discuss the propositions that we were able to derive, let us spend a few words on the logogram of $SAT$. A string in $|Log_E (F^{nm})|$ is a prescription that a word in $F^{nm}$ may or may not be conformant with. We may represent a string in $|Log_E (F^{nm})|$ as a word of length $nm$ over $\{\flat \}\cup \Sigma$.
Example for $n=m=3$: String $\flat \flat 11 \flat 2 \flat 2 \flat$ prescribes that the first clause shall include $x_3$, the second shall include $x_1$ and $-x_3$, and the third shall include $-x_2$. Note that strings in $|Log_E (F^{nm})|$ only prescribe either $1$ or $2$ as values (by the minimality property of the reduced logogram).
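To make the witness reading concrete, the following sketch (flats written as `.`, helper names ours) decodes a witness string that prescribes one literal per clause into the value assignment it specifies, and verifies by brute force that every codeword of length $nm$ including the string is satisfied by that assignment, hence lies in $F^{nm}$.

```python
from itertools import product

# A witness string g over '.' (flat), '1', '2' of length n*m.  Position p
# (0-based) prescribes a literal of variable (p % n) + 1 inside clause
# p // n: value '1' the positive literal, '2' the negated one.
def includes(word, g):
    return all(c == "." or word[p] == c for p, c in enumerate(g))

def assignment_of(g, n):
    a = {}
    for p, c in enumerate(g):
        if c != ".":
            a[p % n + 1] = (c == "1")   # '1' -> variable True, '2' -> False
    return a

def satisfies(word, a, n, m):
    # each clause must contain a literal made true by assignment a
    for j in range(m):
        clause = word[j * n:(j + 1) * n]
        if not any((c == "1" and a.get(v + 1, False)) or
                   (c == "2" and not a.get(v + 1, True))
                   for v, c in enumerate(clause)):
            return False
    return True

n = m = 3
g = "..11...2."   # clause 1 has x3, clause 2 has x1, clause 3 has -x2
a = assignment_of(g, n)
# every length-nm codeword that includes g is satisfied by a:
ok = all(satisfies(w, a, n, m)
         for w in ("".join(t) for t in product("012", repeat=n * m))
         if includes(w, g))
print(a, ok)
```

The exhaustive check over all $3^{9}$ codewords confirms that including this witness is a sufficient condition for membership in $F^{33}$.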
\begin{theorem}
Problem $(CNF, SAT)$ exhibits the strong internal independence property.
\end{theorem}
\begin{proof}
We consider $s$ distinct strings $f_1,.., f_s$ in $|Log_E (F^{nm})|$. Thus, regarded as a partial function, each $f_i$ will assign only values $1$ or $2$.
We must prove that for each $i=1,.., s$ there exists a string $x_i \in E^{nm} = CNF^{nm}$ such that $x_i$ includes $f_i$ and does not include any of the remaining strings $f_1,.., f_s$.
Let $i$ be any one of the indices $1,.., s$. Then $Dom(f_i) \subseteq \{1,.., nm\}$ and, for all $h \in Dom(f_i)$, we either have $f_i (h)=1$ or $f_i (h)=2$.
Let $x_i$ be that word of length $nm$ over $\Sigma=\{0, 1, 2\}$ such that for all $h \in Dom(f_i)$ it holds that $x_{ih}=f_i(h)$ while for $h$ not in $Dom(f_i)$ one has $x_{ih}=0$. Then certainly $x_i$ includes $f_i$.
Let $f_j$ be any one of the strings $f_1,.., f_s$ being different from $f_i$. Thus, $f_j \not= f_i$. We must prove that $x_i$ does not include $f_j$.
(I) Assume $Dom(f_j)=Dom(f_i)$.
Since $f_i$ and $f_j$ are different, there is $k \in Dom(f_i)$ such that $f_i(k) \not= f_j(k)$.
But $x_{ik} = f_i(k)$, then $x_{ik} \not= f_j(k)$. Then $x_i$ does not include $f_j$.
(II) Assume $Dom(f_j) \not=Dom(f_i)$.
Then either there exists $a \in Dom(f_j)$ such that $a \not\in Dom(f_i)$, or $Dom(f_j) \subset Dom(f_i)$.
In the first case, $x_i$ does not include $f_j$ since $x_{ia}=0$ while $f_j(a) \not= 0$, hence $x_{ia} \not= f_j(a)$. In the second case, $f_j$ cannot agree with $f_i$ on all of $Dom(f_j)$, for otherwise $f_j \le f_i$ would contradict the minimality of the reduced logogram; hence there is $k \in Dom(f_j)$ with $f_j(k) \not= f_i(k) = x_{ik}$, and again $x_i$ does not include $f_j$.
\end{proof}
\begin{theorem}\label{nowizards}
The reduced logogram $|Log_{CNF} (SAT)|$ does not contain wizards.
\end{theorem}
\begin{proof}
We must prove:
\begin{equation}\label{mauri25}
Log_E F_1^{nm} \cup .. \cup Log_E F_{\alpha(n)}^{nm} = Log_E (F^{nm})
\end{equation}
where $\alpha(n)$ is the cardinality function of sequence $F_1, F_2, ..$. Here $F_i$ is the range of the value assignment $y_i$ (set of formulas in $F$ that are satisfied by $y_i$) and is a cylinder in $E$. Since Equation~\ref{mauri23} holds, we just have to prove that the right-hand side of Equation~\ref{mauri25} does not contain wizards. We actually will prove:
\begin{equation}\label{mauri26}
|Log_E F_1^{nm}| \cup .. \cup |Log_E F_{\alpha(n)}^{nm}| = |Log_E (F^{nm})|
\end{equation}
which is evidently equivalent to Equation~\ref{mauri25}.
We write $K^{nm}$ for $|Log_E (F^{nm})|$ and, for every integer $i=1,.., \alpha(n)$, we write $K_i^{nm} = |Log_E (F_i^{nm})|$. We must prove $K^{nm} = K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$.
First of all, note that the set of all witnesses $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$ is complete for the target set $F=SAT$ relative to reference set $E=CNF$ by Theorem~\ref{completeSubset}. This implies that, if $x \in F^{nm}$, then $x$ includes a string $f \in K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$.
Let $h \in K^{nm}$. Since $K^{nm}$ is included in $\Sigma_\infty (E)$, we have $h \in \Sigma_\infty (E)$. Then there is an $x \in E^{nm}$ such that $x \ge h$. On the other hand, if $x$ is in $E^{nm}$ and includes string $h$, then $x \in F$; hence, since $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$ is complete for $F$ relative to $E$, there must exist a string $k \in K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$ such that $x \ge k$ (and $h, k$ must be compatible with one another).
We then set $h \rightarrow k$ to mean that (i) $k$ is a member of $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$, (ii) there exists $x \in E^{nm}$ such that both $h \le x$ and $k \le x$. (Thus, $h \rightarrow k$ implies that $h$ and $k$ are compatible.)
Besides, we introduce the set $U(h)=\{k|h \rightarrow k\}$ of those witnesses (members of set $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$) that are related to $h$.
Now, by way of contradiction, we assume that $h$ does not belong to $U(h)$.
We then have that the elements in the set $\{h\} \cup U(h)$ are all distinct.
By the strong internal independence of $SAT$, in correspondence to each string $f \in \{h\} \cup U(h)$ there exists a word $x \in E^{nm}$ such that $f \le x$ and for no $g\in \{h\} \cup U(h)$ being distinct from $f$ one has $g \le x$.
Let $x \in E^{nm}$ be such that $x \ge h$ and for no $g \in U(h)$ one has the inclusion $g \le x$. Word $x$ is in $F=SAT$ since $x$ includes $h$, which is an element of $|Log_E (F^{nm})|$. Besides, $x$ does not contain any element from $U(h)$. But that in turn means that $x$ does not contain any string from the witness set $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$. (Should $x$ include a string $k$ from $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$, that would mean that both $x \ge h$ and $x \ge k$ hold, hence $k$ would be related with $h$, which would imply $k \in U(h)$.)
This is absurd, since $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$ is complete for $SAT$ relative to $CNF$. We conclude that $h$ is a member of $U(h)$, and hence is in $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$. Since we already know that $K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$ is a subset of $K^{nm}$, we conclude that $K^{nm} = K_1^{nm} \cup .. \cup K_{\alpha(n)}^{nm}$. Thus, $SAT$ has no wizards.
\end{proof}
\begin{theorem}\label{irreducible}
The reduced logogram $|Log_{CNF} (SAT)|$ is irreducible.
\end{theorem}
\begin{proof}
Let $g \in |Log_E (F^{nm})|$. By Theorem~\ref{nowizards} we know that $g$ must be a witness. Thus, $g$ is a string conveying the specification of exactly one value assignment. Besides, $g$ is minimal (no proper restriction of $g$ is a sufficient condition for event $x \in F$). These two facts make it a straightforward task to specify the general shape that string $g$ shall exhibit.
First of all, $Dom(g)$ shall have to be a set of exactly $m$ numbers taken from
$\{1,.., nm\}$. The first of these numbers is to be taken from the first block $\{1,.., n\}$ (where the first clause is allocated), the second from the second block $\{n+1,.., 2n\}$,.., the $m$th from the $m$th block $\{n(m-1)+1,.., nm\}$ (where the last clause is allocated). Thus, there are $n^m$ possible determinations for $Dom(g)$. We know that, regarded as a prescription, $g$ can only prescribe the two values $1$ and $2$. (To help intuition, string $g$ can be thought of as a sequence of flats $\flat\flat..\flat$ of length $nm$ in which some of the flats (as many as $m$) have been replaced with $1$s or $2$s.)
With any $g$ that satisfies the above requirements we associate a formula $\gamma(g)$ as follows. We note that, regarded as a prescription, $g$ prescribes the presence of exactly one literal in each clause of a formula $x$ consisting of $m$ clauses: We then state that the $i$th clause of $\gamma(g)$ shall consist of exactly the single literal that $g$ prescribes to the $i$th clause of $x$.
Evidently, $\gamma(g)$ is satisfiable and $g \le \gamma(g)$. We claim that $\gamma(g)$ does not include members of $|Log_E (F^{nm})|$ other than $g$.
Indeed, the strings in $|Log_E (F^{nm})|$ never prescribe 0 as value, and $g$ is the largest string being included in the codeword of $\gamma(g)$ which does not prescribe 0 as value. Thus, the only strings that do not prescribe 0 as value and happen to be included in the codeword of $\gamma(g)$ are exactly string $g$ itself and the proper restrictions of string $g$. Since $g$ is minimal, all of its proper restrictions are not members of $|Log_E (F^{nm})|$. Thus $g$ is the only string being included in the codeword of $\gamma(g)$ to be found in $|Log_E (F^{nm})|$.
Hence, $|Log_E (F^{nm})|- \{g\}$ is not complete for $F^{nm}$ relative to $E^{nm}$.
\end{proof}
\section{SAT as Search Problem}
The search version of a decision problem consists in obtaining solutions for a given instance $x$. Thus, with any NP problem $(E, F)$ we associate the following search problem: Given $x$ find a solution $y$ for $x$ or state that no such $y$ exists.
It is known that, by self-reducibility of $SAT$, if we had a polynomial algorithm for $SAT$, then we would also have a polynomial algorithm for the search problem associated with $SAT$ \cite{hemaspaandra}. The results of previous sections show that we can say more: It is impossible to solve $SAT$ without at the same time solving the search problem associated with $SAT$.
These remarks suggest that we may wish to focus on the search problem associated with $SAT$. This is what we do in this section.
Given any NP problem $(E, F)$, we introduce the $cover$ of the target set $F$ associated with $|Log_E (F)|$ to be the family of sets
\begin{equation}
\mathcal D_E (F) = \{Exp_E (g) \subseteq F : g \in |Log_E (F)|\}.
\end{equation}
Its members are the $charts$ or else $regions$ of the cover. The cover that is associated with the kernel of a program $P$ solving $(E, F)$ is then
\begin{equation}
\mathcal F_P (E, F) = \{Exp_E (g) \subseteq F : g \in Ker(P)\}.
\end{equation}
Both $\mathcal D_E (F)$ and $\mathcal F_P (E, F)$ are families of subsets of the target set $F$ whose union is $F$, with $\mathcal F_P (E, F)$ being a subfamily of $\mathcal D_E (F)$.
For $SAT$ we have the following situation: $\mathcal F_P (E, F) = \mathcal D_E (F)$ by Theorem~\ref{irreducible} and the strings in $|Log_E (F)|$ are all witnesses by Theorem~\ref{nowizards}. Thus any of these strings, call it $g$, has an associated relativized cylinder $Exp_E (g)$ being fully included in only one of the regions $F_is$.
Since for $E=CNF$, $F=SAT$, $Exp_E (g)$ is actually an intersection of two absolute cylinder sets $Exp(g)$ and $E$, then $Exp_E (g)$ itself is an absolute cylinder. In general, $Exp_E(g)$ will intersect certain other regions $F_h$, $F_k$,.. , but there exists only one region $F_j$ which completely includes $Exp_E(g)$. Besides, every region $F_i$ shall have to include at least one such elementary relativized cylinder $Exp_E(g)$.
As a consequence, the cardinality of the cover $\mathcal D_E (F)$ cannot be smaller than that of the family of sets $\{F_i^n : i=1,.., 2^n\}$, hence it is exponential.
\paragraph{Remarks on the Time Complexity of SAT}
In the rest of this section we make remarks on the time complexity of SAT in the light of Theorems 8, 9, 10. We will be less formal than in previous sections. Our remarks consist of two parts:
\paragraph{Part One}
It follows from Theorem~\ref{irreducible} that there is a unique subfamily $\mathcal F$ of $\mathcal D_E (F)$ such that $F = \bigcup \mathcal F$, namely $\mathcal D_E (F)$ itself. As a consequence, for any proper subset $\mathcal F \subset \mathcal D_E (F)$ one has $F \not= \bigcup\mathcal F$.
We then have that $\mathcal F_P (E, F)$ cannot be a proper subfamily of the full cover $\mathcal D_E (F)$, otherwise we would have $F \not= \bigcup\mathcal F_P (E, F)$, and then $P$ could not be correct as a program. In particular, since $\mathcal D_E (F)$ is exponential, $\mathcal F_P (E, F)$ is not allowed to be a polynomial subfamily of $\mathcal D_E (F)$: No search algorithm for $SAT$ can search only a polynomial family of sets.
\paragraph{Part Two}
It remains for us to discuss the possibility that one single algorithm can solve the full search problem for $x$ by directly searching the full exponential family $\mathcal D_E (F)$ in polynomial time. However, this can scarcely be the case due to the complete absence of any form of dependence among the strings in the reduced logogram $|Log_E (F)|$ for $E=CNF$, $F=SAT$. By this lack of internal dependence, any computation of a program $P$ solving $(CNF, SAT)$ is such that the result of any computation step does not change the results that are left possible for the subsequent steps. In the rest of this part we make a few informal remarks on how this lack of dependence comes into play.
We take a general purpose program machine $M$ as computation model. (That $M$ is a program machine means that the process carried out by $M$ is determined by a running program.) We assume that only one program is running at any moment of time within $M$. We keep machine $M$ fixed while we consider an infinite set of programs solving $SAT$ (actually the set of all programs that run on $M$ and solve $SAT$). We emphasize that the hardware is kept fixed while different programs all running on that hardware are compared.
Let $B(x, m)$ be a program which, for any given input $x$ of size $n$ and every integer $m$ between $1$ and $2^n$, decides whether $x$ has solutions in the range between $y_1$ and $y_m$. Let $Time_B(x, m)$ be the number of time units that $B$ uses on inputs $x, m$.
We will make remarks that convey evidence for the following statement: If for any $x$ and $m<2^n$ we have $Time_B (x, m) = Time_B (x, m+1)$, then we may replace $B$ with a new program $C$ running on $M$ such that $Time_C (x, m) < Time_C (x, m+1) = Time_B (x, m+1)$.
Indeed, under the above hypotheses on $M$, we can speak of the class of all programs $B$, $C$,.. that solve $SAT$ on machine $M$, and we can introduce a most efficient program $A$ in this class. We understand that $A$ is a most efficient program as soon as $Time_A(x, 2^n) \le Time_C(x, 2^n)$ for any other program $C$ on any input word $x$.
It is sufficient for us to give a hint for $Time_A(x, 1) < Time_A(x, 2)$.
Our hint is the following. Since, by Theorem~\ref{nowizards}, we have $|Log_E (F_1 \cup F_2)| = |Log_E (F_1)| \cup |Log_E ( F_2)|$ and $|Log_E (F_1)| \cap |Log_E ( F_2)|=\emptyset$, a computation that implements the collection of tests in $|Log_E (F_1 \cup F_2)|$ consists of two distinct computations, one implementing collection $|Log_E (F_1)|$ and the other implementing collection $|Log_E (F_2)|$. Thus, computation $A(x, 1)$ being a proper prefix of computation $A(x, 2)$ is compatible with assumed optimality of $A$, whence $Time_A(x, 1) < Time_A(x, 2)$.
\section{On Ascribing Knowledge to Programs}
Our theory has roots in the body of formalized concepts referred to as Scott's theory of computation \cite{dizenzo}. Thus, the reduced logogram $|Log_E (F)|$ associated with problem $(E, F)$ is an $information$ $system$ \cite{scott}, \cite{larsen} (however, the very important relation is not entailment but entanglement). Even more relevant are the relationships with the ``dynamical'' part of Scott's theory, the one regarding computations as sequences of steps through which the running program's knowledge increases \cite{gierz}. We also, in this latter respect, used concepts from the model theoretic analysis of program knowledge \cite{fagin}.
In this section we briefly review relationships of the above theory with formalisms that ascribe knowledge to a running program.
In Scott's theory the computations that program $P$ does are functionally equivalent to sequences of tokens (or tests) being consistent with the input string $x$. In our developments, the ``tests'' or ``tokens'' are identified with the strings in $Ker(P)$. In Scott's theory, the state of knowledge of a running program $P$ consists of a pile of $assertions$. These are consistent (indeed, they are propositions that are true of one and the same object $x$). As soon as the pile becomes a decisive one, the program makes its decision and stops. Our addition is: The ``assertions'' are of the form $x \in Exp(g)$ or else $x \not \in Exp(f)$ where $f, g \in Ker(P)$.
Searching $x$ for a string $g$ amounts to the same as asking whether $x$ happens to belong to the absolute elementary cylinder $Exp(g)$ associated with $g$. We thus arrive at the conclusion that all that $P$ can possibly do to make a decision consists in asking questions of this form. Thus, the computations that $P$ performs are just sequences of tests $in$ $disguise$. Note that $P$ need not ask whether $x$ is in $Exp_E(g)$, since $P$ already knows that $x$ is in $E$. (This is an important point, since asking whether $x$ is in $Exp_E(g)$ would be more computationally expensive.)
In this theory, information regarding $x$ is acquired by $P$ in lumps. The acquisition of a piece of information occurs at the moment when the execution of a sequence of tests is completed (i.e., when the computation that implements that sequence of tests is completed). We may well think of a piece of information as being a piece of paper carrying a written note such as ``$x$ is in $Exp(g)$'' or ``$x$ fails to be in $Exp(g)$.'' These notes stack one upon the other until the pile becomes a decisive one: This is the case when the data that was gathered entails one of the events $x \in F$ or else $x \in E-F$.
Note that loading an input $x$ in memory does not imply computations, hence no tests are made on $x$ while loading, hence no knowledge is acquired about $x$. After loading $x$, the pile of assertions that represents program's knowledge is empty.
\section{Conclusions}
We advocated strings (with a special meaning for the term) as a fundamental notion for studies of computation. So to speak, strings are needed to express the notions of internal and strong internal independence of a decision problem that underlie our theory of decision problems. We were led to formulate strings in order to be able to derive the very basic notion of internal independence of a decision problem. Strings seem to be useful since they are absolutely elementary. Note that they are already at work in Computability. The ``restrictions'' that are often used in the study of circuit complexity are finite Boolean versions of the strings \cite{hemaspaandra}.
Strings are not made of consecutive letters. A string can be interspersed in a word: By canceling zero or more letters in a word $x$, and by leaving blanks in the places of the canceled letters, we get a string $f$ which is a substring of the original word $x$. In a string, one has information associated with spaces between letters (and hence with the possible multiple periodicities with which letters may occur). As soon as we have the strings, we are able to define the kernel $Ker(P)$ of a decision program $P$, a set of strings which captures structural features of both program $P$ and the decision problem $(E, F)$ that $P$ solves.
$Ker(P)$ is a subset of the reduced logogram $|Log_E(F)|$ of target set $F$ in base $E$. The reduced logogram consists of substrings of the words in $F$ which exhibit the following property: If a word in $E$ includes one of these substrings then it belongs to $F$. We may think of the strings in $|Log_E(F)|$ as kind of genes of the words in $F$. (In early notes the logogram was the $jinnee$ or $genie$ of problem $(E, F)$.) The idea clearly comes from biology, where it is known that certain occurrences at given intervals of certain letters within DNA sequences convey structural information, and yield observable characters in the macroscopic development of the structures.
Our application to $SAT$ uses a structural property of that problem that seems to have escaped attention so far. We called it ``strong internal independence.'' Theorem 8 shows that $SAT$ exhibits the strong internal independence property. Theorem 9 shows that, by that property, $SAT$ cannot have collective certificates in its reduced logogram. As a consequence, all the programs that solve $SAT$ have the same kernel (Theorem 10).
The remarks in Section 8 suggest how Theorems 8, 9, 10 can possibly be used to put SAT under scrutiny. Our ultimate concern in this paper has been to set forth our developments as a possible new technique to attack decision problems, where ``technique'' is here used in the sense that Hemaspaandra and Ogihara gave to this term in the preface of their ``Companion.''
\section{Acknowledgements}
In the development of this research I received advice from Professors Fabrizio Luccio, Johan Hastad, Giancarlo Mauri, and Claudio Procesi. These results would not have been achieved without that help.
\section{Figures for Section~\ref{sec:expander}}
\section{Proof of Theorem~\ref{thm:weakI}}
\label{sec:proof of weakI}
\begin{algorithm}[bt]
\caption{Weak system.}
\label{algo:weak}
\begin{algorithmic}
\Require{$N$, $s$, $\mtx{\Phi}$ (adjacency matrix of a $d$-left-regular expander $G$),
$\mtx{\Phi}\signal$, and $I$}
\Ensure $\widehat\signal$
\NFor{$j\gets 1$ \textbf{to} $d$}
\NFor{\textbf{each }$i\in I$}
\State $\signal^{(j)}_i \gets \median_{u\in \Gamma(\{i\})} \sum_{(u,v)\in E} \ensuremath\mathbf{x}_u$
\Comment{each sum is an element of input $\mtx{\Phi}\signal$}
\NFor{\textbf{each }$i\in I$}
\State $\signal'_i \gets \median_{1\leq j\leq d} \signal^{(j)}_i$
\State $\widehat\signal \gets $ top $O(s)$ elements of $\signal'$
\State \Return $\widehat\signal$
\end{algorithmic}
\end{algorithm}
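A plain-Python sketch of the estimation step may help fix ideas. The toy graph and all names below are ours, and the sketch shows only the single-level median (the pseudocode above additionally takes a median over the $d$ repetitions indexed by $j$): each coordinate $i$ is estimated as the median of the measurements taken at the right-neighbors of $i$.

```python
from statistics import median

# Gamma[i] lists the right-neighbors (measurement nodes) of left node i.
def weak_estimate(Gamma, y, I):
    # y[u] = (Phi x)_u, the u-th measurement: the sum of x over the left
    # nodes adjacent to u.  Estimate x_i as the median of the measurements
    # at the neighbors of i.
    return {i: median(y[u] for u in Gamma[i]) for i in I}

# Hypothetical toy 2-left-regular bipartite graph on 3 + 3 nodes.
Gamma = {0: [0, 1], 1: [1, 2], 2: [0, 2]}
x = [5.0, 0.0, 0.0]
# measurements: y[u] = sum of x_i over left nodes i with u in Gamma[i]
y = [x[0] + x[2], x[0] + x[1], x[1] + x[2]]
est = weak_estimate(Gamma, y, I=[0, 1, 2])
print(est)   # the large coordinate x_0 = 5.0 is recovered exactly
```

With $d=2$ in the toy graph the median of two values is their average, so the off-support coordinates receive nonzero estimates; the guarantees of the theorem rely on larger $d$ and the expansion property, after which keeping the top $O(s)$ elements discards the small estimates.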
First, we need the following two lemmata.
\begin{lemma}[Noise]\label{lem:noise}
Let $\alpha>1$ and $t>\alpha k$. Let $\Phi$ be the adjacency matrix of an $(n,m,d,2\alpha k,\epsilon)$-expander with $\epsilon < 1/2$. Let $\ensuremath\mathbf{x}\in \mathbb{R}^n$ be such that $|\ensuremath\mathbf{x}_1|\geq |\ensuremath\mathbf{x}_2|\geq \cdots \geq |\ensuremath\mathbf{x}_n|$, and let $I = [\alpha k]$. Then
\[
\|(\Phi(\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}))_{\Gamma(I)}\|_1 \leq 4\epsilon d(\|\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}\|_1 + \alpha k |\ensuremath\mathbf{x}_{t+1}|).
\]
\end{lemma}
\begin{proof}
Partition $\{1,\dots,n\}$ into blocks $I\cup H_1\cup B_1\cup B_2\cup \dots$, where $H_1=\{\alpha k+1,\dots,t\}$ and $B_i=\{t+(i-1)\alpha k+1,\dots,t+i\alpha k\}$ for $i\geq 1$. Consider $\ensuremath\mathbf{x}$ restricted to a block $B_i$.
\textbf{Case 1}. $\ensuremath\mathbf{x}_{B_i}$ is flat, i.e., $|\ensuremath\mathbf{x}_{t+i\alpha k}|\geq |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|/2$. Consider all $d|B_i|$ edges of the expander emanating from $B_i$, and suppose that $Z$ of them are incident to $\Gamma(I)$. Then
\[
|\Gamma(I)\cup \Gamma(B_i)|\leq d(|I|+|B_i|) - Z.
\]
On the other hand, by the expansion property,
\[
|\Gamma(I)\cup \Gamma(B_i)|\geq (1-\epsilon) d(|I|+|B_i|),
\]
which implies that
\[
Z\leq \epsilon d(|I|+|B_i|)\leq 2\epsilon\alpha kd.
\]
Each of the $Z$ edges sends a noise of magnitude at most $\max_{j\in B_i}|\ensuremath\mathbf{x}_j|$ to $\Gamma(I)$, therefore
\[
\|(\Phi \ensuremath\mathbf{x}_{B_i})_{\Gamma(I)}\|_1\leq Z\cdot \max_{j\in B_i}|\ensuremath\mathbf{x}_j|\leq 2\epsilon\alpha kd\cdot |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|\leq 4\epsilon d\|\ensuremath\mathbf{x}_{B_i}\|_1,
\]
where the last inequality follows from the fact that $\ensuremath\mathbf{x}_{B_i}$ is flat so that $\alpha k|\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|\leq 2\|\ensuremath\mathbf{x}_{B_i}\|_1$.
\textbf{Case 2}. $\ensuremath\mathbf{x}_{B_i}$ is not flat, i.e., $|\ensuremath\mathbf{x}_{t+i\alpha k}| < |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|/2$. Let
\[
J = \{j\in B_i: |\ensuremath\mathbf{x}_j| < |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|/2\}.
\]
Increase $|\ensuremath\mathbf{x}_j|$ for all $j\in J$ so that $|\ensuremath\mathbf{x}_j| = |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|/2$ and $\ensuremath\mathbf{x}_{B_i}$ becomes flat; this increases $\|\ensuremath\mathbf{x}_{B_i}\|_1$ by at most $\alpha k |\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}|/2$. Invoking Case 1, we obtain that
\[
\|(\Phi \ensuremath\mathbf{x}_{B_i})_{\Gamma(I)}\|_1 \leq 4\epsilon d\left(\|\ensuremath\mathbf{x}_{B_i}\|_1 + \frac{\alpha k \left|\ensuremath\mathbf{x}_{t+(i-1)\alpha k+1}\right| }{2}\right).
\]
Now we go back to the entire $\ensuremath\mathbf{x}$. Suppose that $B_{i_1},\dots,B_{i_q}$ are the non-flat blocks. Then, by the triangle inequality,
\[
\|(\Phi(\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}))_{\Gamma(I)}\|_1 \leq 4\epsilon d\|\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}\|_1 + 4\epsilon d\cdot \frac{\alpha k}{2}\sum_{p=1}^q \left|\ensuremath\mathbf{x}_{t+(i_p-1)\alpha k+1}\right|.
\]
Observe that $|\ensuremath\mathbf{x}_{t+(i_p-1)\alpha k+1}|\leq |\ensuremath\mathbf{x}_{t+i_{p-1}\alpha k}| < |\ensuremath\mathbf{x}_{t+(i_{p-1}-1)\alpha k+1}|/2$ for $p\geq 2$, since $B_{i_{p-1}}$ is not flat; hence, inductively,
\[
|\ensuremath\mathbf{x}_{t+(i_p-1)\alpha k+1}|\leq \frac{|\ensuremath\mathbf{x}_{t+1}|}{2^{p-1}},\quad p\geq 1,
\]
whence it follows that
\[
\|(\Phi(\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}))_{\Gamma(I)}\|_1 \leq 4\epsilon d(\|\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}\|_1 + \alpha k |\ensuremath\mathbf{x}_{t+1}|).\qedhere
\]
\end{proof}
In the usual decomposition, the head contains the entries with large coordinate values, which will be referred to as \emph{heavy hitters}. If a heavy hitter fails to be recovered, it must have been displaced in the recovered signal by another entry, loosely called a \emph{decoy}. The next lemma bounds the number of decoys.
\begin{lemma}[Decoys]\label{lem:decoy}
Suppose that $G$ is a $(4s,\frac{\epsilon}{512})$-bipartite expander which satisfies the $(\frac{9s}{\epsilon},\beta \epsilon,\zeta)$-isolation property, where $\frac12-\zeta > 576\beta$. Let $\ensuremath\mathbf{x}\in \mathbb{R}^N$ be a signal satisfying the assumption of the Weak system, and let $\ensuremath\mathbf{x}'\in \mathbb{R}^N$ be the estimates defined as
\[
\ensuremath\mathbf{x}'_{i} = \median_{u\in \Gamma(\{i\})} \sum_{(u,v)\in E} \ensuremath\mathbf{x}_u,\quad i\in [N].
\]
Define
\[
D = \{i\in [N]: |\ensuremath\mathbf{x}_i - \ensuremath\mathbf{x}'_i| \geq \epsilon/(4s)
\},
\]
then $|D| < s/8$.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $|D| = s/8$; otherwise replace $D$ with a subset of size exactly $s/8$. Also assume that $|\ensuremath\mathbf{x}_1|\geq |\ensuremath\mathbf{x}_2|\geq \cdots \geq |\ensuremath\mathbf{x}_N|$. We may assume that $|\ensuremath\mathbf{x}_i|\geq \epsilon/(2s)$ for all $i\in H:=\supp{\ensuremath\mathbf{y}}$; otherwise we can move the violating $i$'s into $\ensuremath\mathbf{z}$, causing $\|\ensuremath\mathbf{z}\|_1$ to increase by at most $s\cdot \epsilon/(2s) = \epsilon/2$, so we still have $\|\ensuremath\mathbf{z}\|_1\leq 2$. Let $T = H\cup D\cup \{i: |\ensuremath\mathbf{x}_i|\geq \epsilon/(4s)\}$; then $t := |T|\leq \|\ensuremath\mathbf{z}\|_1/(\epsilon/(4s)) + |D| + |H|\leq 9s/\epsilon$.
Note that $|\ensuremath\mathbf{x}_{t+1}| \leq \epsilon/(4s)$. Taking $\alpha = 2$ in Lemma~\ref{lem:noise}, we know that
\[
\|(\Phi(\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}))_{\Gamma(H\cup D)}\|_1 \leq 4\cdot\beta\epsilon d\left(\frac{3}{2}+ \frac{\epsilon}{2} + 2s\cdot\frac{\epsilon}{4s}\right) \leq 8\beta\epsilon d.
\]
By the isolation property, at most $\frac{9s}{\epsilon}\cdot \frac{\epsilon}{144} = \frac{s}{16}$ elements of $T$ fail to be isolated from the other elements of $T$ in at least $(1-\zeta) d$ of their neighbors. This implies that at least $s/16$ elements of $D$ are isolated from the other elements of $T$ in at least $(1-\zeta)d$ of their neighbors.
For the median estimate at a decoy position $i$ to err by at least $\epsilon/(4s)$, at least $d/2$ of the nodes in $\Gamma(\{i\})$ must receive noise at least $\epsilon/(4s)$, and at least $(1/2-\zeta)d$ of these are isolated nodes; hence each such decoy receives at least $\epsilon(1/2-\zeta)d/(4s)$ noise in total. Therefore the $s/16$ isolated decoys overall receive noise at least
\[
\frac{\epsilon(\frac12-\zeta)d}{4s}\cdot \frac{s}{16} > 8\beta\epsilon d\geq \|(\Phi(\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_{[t]}))_{\Gamma(H\cup D)}\|_1,
\]
which is a contradiction. Therefore $|D| < s/8$.
\end{proof}
\begin{remark}\label{remark:decoy}
Despite the fact that we have specified various constants (such as $4$, $\frac{1}{512}$, $9$, etc.) in the lemma above, these constants can be flexibly adjusted so that the number of decoys is at most $\zeta s$ for any given small $\zeta > 0$, with appropriate choices of the other constants.
\end{remark}
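To illustrate the definition of $D$, the following Python sketch computes the decoy set of a median-of-buckets estimator on a toy instance in which a zero coordinate collides with a heavy hitter in a majority of repetitions (the hashing tables are contrived for illustration and are not the lemma's parameter regime):

```python
from statistics import median

def decoy_set(x, tables, B, thresh):
    """Positions whose median-of-buckets estimate errs by at least thresh
    (the set D of the lemma). tables[j][i] is the bucket of element i in
    repetition j."""
    d, N = len(tables), len(x)
    buckets = [[0.0] * B for _ in range(d)]
    for j in range(d):
        for i in range(N):
            buckets[j][tables[j][i]] += x[i]
    est = [median(buckets[j][tables[j][i]] for j in range(d)) for i in range(N)]
    return {i for i in range(N) if abs(x[i] - est[i]) >= thresh}
```

Here element $0$ shares a bucket with the heavy hitter in two of three repetitions, so its median estimate is displaced and it becomes a decoy.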
Now we are ready to show Theorem~\ref{thm:weakI}.
\begin{proof}[Proof of Theorem~\ref{thm:weakI}]
The proof is essentially the same as that of \cite[Lemma 4]{PS12}. It follows from Lemma~\ref{lem:decoy} that, with appropriate choices of constants, there are at most $\zeta s/4$ decoys and at least $(1-\zeta/4)s$ elements $i$ in $\supp{\ensuremath\mathbf{y}}$ satisfying $|\ensuremath\mathbf{x}_i - \ensuremath\mathbf{x}_i'|\leq \eta/(4s)$. Let $I' = I\cap \supp{\ensuremath\mathbf{y}}$. We describe below the construction of $\widehat{\ensuremath\mathbf{x}}$, $\widehat{\ensuremath\mathbf{y}}$ and $\widehat{\ensuremath\mathbf{z}}$.
\begin{itemize}
\item Elements $i\in\supp{\widehat\signal}$ with a good estimate (to
within $\pm \eta/(4s)$) contribute $\signal_i-\widehat\signal_i$ to
$\widehat{\mathbf{z}}$. There are at most $s$ of these, each
contributing $\eta/(4s)$, for total contribution $\eta/4$ to
$\widehat{\mathbf{z}}$.
\item Elements $i\in\supp{\widehat\signal}$ with a bad estimate (not
to within $\pm \eta/(4s)$) contribute $\signal_i-\widehat\signal_i$ to
$\widehat{\mathbf{y}}$. There are at most $\zeta s/4$ of these.
\item Elements $i\in\supp{\mathbf{z}}\setminus\supp{\widehat\signal}$
contribute $\signal_i$ to $\widehat{\mathbf{z}}$. The $\ell_1$ norm of these is at most $\|\mathbf{z}\|_1$.
\item Elements $i\in I'\setminus\supp{\widehat\signal}$
with a good estimate that are nevertheless displaced by another
element $i'\in\supp{\widehat\signal}\setminus\supp{\mathbf{y}}$ with
a good estimate contribute to $\widehat{\mathbf{z}}$.
There are at most $s$ of these. While the value $\signal_i$ may be
large and make a large contribution to $\widehat{\mathbf{z}}$, this
is offset by $\signal_{i'}$ satisfying
$|\signal_{i'}|
\ge |\widehat{\signal}_{i'}|-\eta/(4s)
\ge |\widehat{\signal}_{i}|-\eta/(4s)
\ge |\signal_{i}|-\eta/(2s)$, which
contributes to $\mathbf{z}$ but not to
$\widehat{\mathbf{z}}$. Thus the net contribution to
$\widehat{\mathbf{z}}$ is at most $\eta/(2s)$ for each of the $s$
of these $i$, for
a total $\eta/2$ contribution to $\widehat{\mathbf{z}}$.
\item Elements $i\in I'\setminus\supp{\widehat\signal}$
that themselves have bad estimates or are displaced by elements with
bad estimates contribute $\signal_i$ to $\widehat{\mathbf{y}}$. There are at
most $\zeta s/4$ bad estimates overall, so there are at most $\zeta s/4$
of these.
\item Elements $i\in \supp{\ensuremath\mathbf{y}}\setminus I$, i.e., heavy hitters missed by the candidate set $I$, contribute $\signal_i$ to $\widehat{\ensuremath\mathbf{y}}$. There are at most $\zeta s/2$ of these by the assumption on $I$.
\end{itemize}
It is clear that $|\supp{\widehat{\ensuremath\mathbf{y}}}|\leq \zeta s$ and $\|\widehat{\ensuremath\mathbf{z}}\|_1\leq \|\ensuremath\mathbf{z}\|_1 + \eta$, as desired. The runtime is easy to verify.
\end{proof}
\section{One-layer Hashing Construction}\label{sec:expander_appendix}
\begin{lemma}[expanding property]\label{lem:one-layer}
For any $\epsilon \in (0, 1/4)$, $k\geq 1$, $\alpha\geq 1$ and $N = \Omega(\alpha k)$, a random one-layer $(B,d)$ hashing scheme gives an $(\alpha k,\epsilon)$-bipartite expander with probability $\geq 1-1/N^c$, where $B=\Omega(\frac{\alpha k}{\epsilon})$ and $d=\Omega(\frac{1}{\epsilon}\log\frac{N}{k})$.
\end{lemma}
\begin{lemma}[isolation property]\label{lem:one-layer-isolation}
For any $\epsilon,\zeta \in (0, 1/4)$, $k\geq 1$, $\alpha\geq 1$ and $N = \Omega( k/\epsilon)$, a random one-layer $(B,d)$ hashing scheme gives a bipartite graph with the $(L, \epsilon, \zeta)$-isolation property with probability $\geq 1-1/N^c$,
where $B=\Omega(\frac{ k}{\zeta\epsilon})$, $d=\Omega(\frac{1}{\zeta\epsilon}\log\frac{N}{k})$, $L=O(k/\epsilon)$.
\end{lemma}
If we combine Lemma~\ref{lem:one-layer}, Lemma~\ref{lem:one-layer-isolation} and Theorem~\ref{thm:weakI}, we obtain a clean formulation, in the language of expanders, of the result on the Weak system in \cite{PS12}.
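To make the one-layer construction concrete, the following Python sketch builds a random one-layer $(B,d)$ hashing scheme and brute-force checks the expansion condition on small instances (an illustrative toy, far below the lemma's parameter regime; `one_layer_hashing` and `is_expander` are hypothetical helper names):

```python
import itertools
import random

def one_layer_hashing(N, B, d, seed=0):
    """Random one-layer (B, d) hashing scheme: repetition j hashes [N] into
    its own block of B buckets; the neighbours of i are the pairs (j, h_j(i))."""
    rng = random.Random(seed)
    tables = [[rng.randrange(B) for _ in range(N)] for _ in range(d)]
    return lambda i: {(j, tables[j][i]) for j in range(d)}

def is_expander(nbr, N, s_max, eps, d):
    """Brute-force check of |Gamma(S)| >= (1 - eps) * d * |S| for all |S| <= s_max."""
    for s in range(2, s_max + 1):
        for S in itertools.combinations(range(N), s):
            if len(set().union(*(nbr(i) for i in S))) < (1 - eps) * d * s:
                return False
    return True
```

The check is exponential in $N$ and is only meant to illustrate the definition; the lemma shows that a random scheme passes it with high probability.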
\subsection{Proof of Lemma~\ref{lem:one-layer}}
\begin{proof}
Let $p_s$ be the probability that a fixed set of $s$ elements is hashed into fewer than $(1-\epsilon)ds$ distinct rows. By symmetry this probability is independent of the choice of the $s$ positions and thus is well-defined. Hence, by a union bound,
\begin{equation}\label{eqn:one-layer-a}
\Pr\{\text{hashing does not give an expander}\} \leq \sum_{s=2}^{\alpha k} \binom{N}{s} p_s.
\end{equation}
Our goal is to show that
\begin{equation}\label{eqn:p_s}
p_s \leq \exp\left(-cs\ln\frac{eN}{s}\right)
\end{equation}
for some absolute constant $c>2$, for which it suffices to show that
\begin{equation}\label{eqn:one-layer-b}
p_s \leq \exp\left(-cs\ln\frac{N}{k}\ln\frac{Ck}{s}\right)
\end{equation}
for some $c, C > 0$. Indeed, it follows from \eqref{eqn:one-layer-b} that
\[
p_s \leq \exp\left(-cs\ln\frac{N}{k}\ln\frac{Ck}{s}\right) \leq \exp\left\{-cs\left(\ln\frac{N}{k}+\ln\frac{Ck}{s}\right)\right\} = \exp\left(-cs\ln\frac{CN}{s}\right)
\]
and \eqref{eqn:p_s} holds. Assume for the moment that \eqref{eqn:p_s} is proved, then we can bound \eqref{eqn:one-layer-a} to be
\begin{align*}
\sum_{s=2}^{\alpha k} \binom{N}{s} p_s
&\leq \sum_{s=2}^{\alpha k} \exp\left\{s\ln\frac{eN}{s}-cs\ln\frac{CN}{s}\right\}\\
&\leq \sum_{s=2}^{\alpha k} \exp\left\{-(c-1)s\ln\frac{C'N}{s}\right\}\\
&\leq \sum_{s=2}^{\alpha k} \exp\left(-(c-1)s\log N\right) < \frac{1}{N^{c'}}
\end{align*}
as desired.
Now we compute $p_s$. Fix a set $S$ of $s$ elements, and suppose that they are hashed into $X_i$ distinct rows in the $i$-th repetition ($i=1,\dots,d$). We have $1\leq X_i\leq s$ and, when the expansion fails, $\sum X_i\leq (1-\epsilon)sd$. Define the event
\[
E_i(X_i) = \{S\text{ is hashed into }X_i\text{ rows in the }i\text{-th repetition}\},
\]
and we shall compute $\Pr\{E_i(X_i)\}$.
When $E_i(X_i)$ happens, there are at least $s-X_i$ collisions. Consider hashing the elements of $S$ one by one, choosing rows $b_1,\dots,b_s\in \{1,\dots,B\}$ sequentially; a collision occurs at step $i$ if $b_i\in \{b_1,\dots,b_{i-1}\}$. The probability of a collision at step $i$, even conditioned on $b_1,\dots,b_{i-1}$, is at most $(i-1)/B\leq s/B$. Therefore,
\[
\Pr\{E_i(X_i)\}\leq \binom{s}{s-X_i}\left(\frac{s}{B}\right)^{s-X_i}.
\]
Hence
\[
p_s = \sum \Pr\{E_1(X_1),\dots,E_d(X_d)\} \leq \sum \prod_{i=1}^d \binom{s}{s-X_i}\left(\frac{s}{B}\right)^{s-X_i} = \sum \left(\frac{s}{B}\right)^{sd-\sum X_i} \prod_{i=1}^d \binom{s}{s-X_i}
\]
where the summation is over all configurations of $\{X_i\}$ with $\sum_i X_i\leq (1-\epsilon)sd$, and the inequality uses the independence of the $d$ repetitions. Invoking the combinatorial identity
\begin{equation}\label{eqn:comb_identity}
\sum_{k_1+k_2+\cdots+k_m=n} \binom{r_1}{k_1}\binom{r_2}{k_2}\cdots\binom{r_m}{k_m} = \binom{r_1+r_2+\cdots+r_m}{n}
\end{equation}
and writing $X=\sum X_i$, we see that
\[
p_s \leq \sum_{X=d}^{(1-\epsilon)sd} \left(\frac{s}{B}\right)^{sd-X}\binom{sd}{sd-X} \leq \sum_{X=\epsilon sd}^{sd} \binom{sd}{X}\left(\frac{s}{B}\right)^{X},
\]
where the second step reindexes by the exponent $sd-X\geq \epsilon sd$.
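The combinatorial identity \eqref{eqn:comb_identity} invoked above is Vandermonde's identity; it can be verified numerically by brute force, as in the following Python sketch (illustrative only):

```python
from math import comb
from itertools import product

def vandermonde_lhs(rs, n):
    """Sum of prod_i C(r_i, k_i) over all tuples with k_1 + ... + k_m = n
    and 0 <= k_i <= r_i (terms with k_i > r_i vanish anyway)."""
    total = 0
    for ks in product(*(range(r + 1) for r in rs)):
        if sum(ks) == n:
            term = 1
            for r, k in zip(rs, ks):
                term *= comb(r, k)
            total += term
    return total
```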
Now we invoke Chernoff bound
\begin{equation}\label{eqn:chernoff_2}
\sum_{k=\epsilon n}^n \binom{n}{k}\lambda^k \leq \left(\frac{e\lambda}{\epsilon}\right)^{\epsilon n},\quad \lambda<\epsilon
\end{equation}
to obtain that
\[
p_s\leq \left(\frac{es}{\epsilon B}\right)^{\epsilon sd}\leq \exp\left(-cs\log\frac{N}{k}\ln\frac{Ck}{s}\right)
\]
as desired, where the constants $c,C > 0$ can be made arbitrarily large.
\end{proof}
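The Chernoff-type tail bound \eqref{eqn:chernoff_2} used in the proof above can be checked numerically; the following Python sketch compares the two sides for small illustrative parameter values (chosen so that $e\lambda/\epsilon<1$):

```python
from math import ceil, comb, e

def tail_sum(n, eps, lam):
    """Left-hand side of the bound: sum over k >= eps*n of C(n, k) * lam^k."""
    return sum(comb(n, k) * lam ** k for k in range(ceil(eps * n), n + 1))

def chernoff_rhs(n, eps, lam):
    """Right-hand side (e * lam / eps)^(eps * n)."""
    return (e * lam / eps) ** (eps * n)
```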
\subsection{Proof of Lemma~\ref{lem:one-layer-isolation}}
\begin{proof}
Let $S$ be a set of size $s\leq L$. We shall bound the probability $p_s$ (well-defined by symmetry) that at least $\epsilon s$ elements of $S$ collide with other elements of $S$ in at least $\zeta d$ repetitions. When this happens, there are at least $\epsilon\zeta ds$ colliding element-repetition pairs. As in Lemma~\ref{lem:one-layer}, it suffices to establish \eqref{eqn:one-layer-b} for constants $c,C > 0$ that can be made arbitrarily large.
In one repetition, an element of $S$ collides with another element of $S$ with probability at most $s/B$. By a coupling argument as in \cite{PS12}, among all $sd$ element-repetition pairs, of which the expected number of failed pairs is $\mu = s^2 d/B$, there are at least $\zeta \epsilon sd$ failed pairs with probability at most
\[
\left(\frac{e\mu}{\zeta\epsilon ds}\right)^{\zeta\epsilon sd} = \left(\frac{es}{\zeta\epsilon B}\right)^{\zeta\epsilon sd}\leq \exp\left(-cs\log\frac{N}{k}\ln\frac{Ck}{s}\right)
\]
as desired, where the absolute constants $C,c>0$ can be made arbitrarily large.
\end{proof}
\section{Proof of Lemma~\ref{lem:two-layer}}
\begin{proof}
Let $p_s$ be the probability that a fixed set of $s$ elements is hashed into fewer than $(1-\epsilon)ds$ distinct rows. By symmetry this probability is independent of the choice of the $s$ positions and thus is well-defined. Hence, by a union bound,
\begin{equation}\label{eqn:7a}
\Pr\{\text{hashing does not give an expander}\} \leq \sum_{s=2}^{4k} \binom{N}{s} p_s.
\end{equation}
Similarly to Lemma~\ref{lem:one-layer}, it suffices to show that
\begin{equation}\label{eqn:7b}
p_s \leq \exp\left(-cs\ln\frac{N}{k}\right)
\end{equation}
for some sufficiently large constant $c$. Assume for the moment that this is proved; then we can bound \eqref{eqn:7a} by
\begin{align*}
\sum_{s=2}^{4k} \binom{N}{s} p_s
&\leq \sum_{s=2}^{4k} \exp\left\{s\ln\frac{eN}{s}-cs\ln\frac{N}{k}\right\}\\
&\leq \sum_{s=2}^{4k} \exp\left\{s\ln(eN)-\frac{c}{2}s\ln(eN)\right\}\quad (k\leq \sqrt{N/e})\\
&\leq \sum_{s=2}^{4k} \exp\left(-\left(\frac{c}{2}-1\right)s\log(eN)\right) < \frac{1}{N^{c'}}
\end{align*}
as desired.
Now we prove \eqref{eqn:7b}. Fix a set $S$ of $s$ elements. The outer layer of hashing has $d_1$ blocks of size $B_1$; let $Y_i$ ($i=1,\dots,d_1$) be the number of distinct rows occupied by the $s$ elements in the $i$-th block. The inner layer has $d_1d_2$ blocks of size $B_2$, indexed by $(i,j)_{1\leq i\leq d_1,1\leq j\leq d_2}$; let $X_{ij}$ be the number of distinct rows occupied by the $s$ elements in the $(i,j)$-th block. Define the events
\begin{gather*}
E_i(Y_i) = \{S\text{ is hashed into }Y_i\text{ rows in }i\text{-th outer block}\}\\
E_{ij}(X_{ij}) = \{S\text{ is hashed into }X_{ij}\text{ rows in }(i,j)\text{-th inner block}\}
\end{gather*}
First we calculate $\Pr\{E_i(Y_i)\}$. Pick a row for each element of $S$, one element at a time. When $E_i(Y_i)$ happens there are at least $s-Y_i$ collisions, hence
\[
\Pr\{E_i(Y_i)\} \leq \binom{s}{s-Y_i} \left(\frac{s}{B_1}\right)^{s-Y_i}
\]
and similarly
\[
\Pr\{E_{ij}(X_{ij})|E_i(Y_i)\} \leq \binom{Y_i}{Y_i-X_{ij}} \left(\frac{Y_i}{B_2}\right)^{Y_i-X_{ij}}
\]
It follows that
\begin{align*}
p_s
&= \sum
\Pr\{E_{11}(X_{11}),\dots,E_{d_1d_2}(X_{d_1d_2})|E_1(Y_1),\dots,E_{d_1}(Y_{d_1})\}\Pr\{E_1(Y_1),\dots,E_{d_1}(Y_{d_1})\}\\
&\leq
\sum \prod_i \Pr\{E_i(Y_i)\} \prod_{i,j} \Pr\{E_{ij}(X_{ij})|E_i(Y_i)\} \\
& \leq \sum \prod_i \binom{s}{Y_i} \left(\frac{s}{B_1}\right)^{s-Y_i} \cdot
\prod_{i,j} \binom{Y_i}{X_{ij}}\left(\frac{Y_i}{B_2}\right)^{Y_i-X_{ij}}\\
& \leq \sum \left(\frac{s}{B_1}\right)^{sd_1-\sum Y_i} \left(\frac{s}{B_2}\right)^{d_2\sum Y_i-\sum X_{ij}} \prod_i \binom{s}{Y_i} \prod_{i,j} \binom{Y_i}{X_{ij}}
\end{align*}
where the summation is taken over all possible configurations of $\{X_{ij}\}$ and $\{Y_i\}$ such that $s\geq Y_i\geq \max_j X_{ij}$ and $\sum X_{ij} \leq (1-\epsilon)sd_1d_2$.
Invoking the combinatorial equality \eqref{eqn:comb_identity}
and letting $X=\sum X_{ij}$ and $Y=\sum Y_i$, we obtain that
\begin{align}
p_s &\leq \sum_{Y=d_1}^{sd_1}\binom{sd_1}{Y} \left(\frac{s}{B_1}\right)^{sd_1-Y} \sum_{X=d_1d_2}^{\min\{d_2Y,(1-\epsilon)sd_1d_2\}} \binom{d_2Y}{X}\left(\frac{s}{B_2}\right)^{d_2Y-X}\notag\\
&\leq \sum_{Y=d_1}^{(1-\epsilon/2)sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{sd_1-Y} \sum_{X=d_1d_2}^{d_2Y} \binom{d_2Y}{X}\left(\frac{s}{B_2}\right)^{d_2Y-X}\notag\\
&\qquad + \sum_{Y=(1-\epsilon/2)sd_1}^{sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{sd_1-Y} \sum_{X=d_1d_2}^{(1-\epsilon)sd_1d_2} \binom{d_2Y}{X}\left(\frac{s}{B_2}\right)^{d_2Y-X}\notag\\
&=: S_1+S_2\label{eqn:7c}
\end{align}
We bound $S_1$ and $S_2$ separately. First,
\[
S_1 \leq \sum_{Y=d_1}^{(1-\epsilon/2)sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{sd_1-Y}\left(1+\frac{s}{B_2}\right)^{d_2Y}\\
\leq \left(1+\frac{s}{B_2}\right)^{sd_1d_2}\sum_{Y=\frac{\epsilon}{2} sd_1}^{sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{Y}
\]
It follows from Chernoff bound \eqref{eqn:chernoff_2} that
\begin{align}
S_1 &\leq \left(1+\frac{s}{B_2}\right)^{sd_1d_2} \left(\frac{es}{\frac{\epsilon}{2} B_1 }\right)^{\epsilon sd_1/2}\notag\\
&\leq \exp\left\{-\frac{1}{2}\epsilon sd_1\left(\ln\frac{\epsilon B_1}{2es}\right) + sd_1d_2\ln\left(1+\frac{s}{B_2}\right)\right\}\notag\\
&\leq \exp\left\{-\frac{1}{4}\epsilon sd_1\ln\frac{B_1}{k}+c_2\epsilon sd_1d_2\right\} \quad (\text{since }B_1 \gtrsim k/\epsilon^2)\notag\\
&\leq \exp\left\{-c_3 s\ln\frac{N}{k}\right\}\label{eqn:7s1}
\end{align}
where the absolute constant $c_2 > 0$ can be made arbitrarily close to $0$ and the absolute constant $c_3$ can be made arbitrarily large.
Now we bound $S_2$. When $Y\geq (1-\epsilon/2)sd_1$ then
\[
\frac{(1-\epsilon)sd_1d_2}{d_2Y}\leq 1-\frac{\epsilon}{2}.
\]
Again invoking Chernoff bound,
\[
\sum_{X=d_1d_2}^{(1-\epsilon)sd_1d_2} \binom{d_2Y}{X}\left(\frac{s}{B_2}\right)^{d_2Y-X} \leq \left(\frac{es}{\frac{\epsilon}{2} B_2}\right)^{d_2Y - (1-\epsilon) sd_1d_2} \leq \left(\frac{s}{C'k}\right)^{d_2Y - (1-\epsilon)sd_1d_2}
\]
where $C'>0$ is an absolute constant which can be made arbitrarily large. So
\begin{align*}
S_2 &\leq \sum_{Y=(1-\epsilon/2)sd_1}^{sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{sd_1-Y} \left(\frac{s}{C'k}\right)^{\epsilon sd_1d_2/2}\\
&\leq \sum_{Y=0}^{(\epsilon/2)sd_1}\binom{sd_1}{Y}\left(\frac{s}{B_1}\right)^{Y} \left(\frac{s}{C'k}\right)^{\epsilon sd_1d_2/2}\\
&\leq 2 \left(\frac{s}{C'k}\right)^{\epsilon sd_1d_2/2}
\end{align*}
It immediately follows, similarly to upper-bounding $S_1$, that
\begin{equation}\label{eqn:7s2}
S_2 \leq \exp\left\{-c_4s\ln\frac{N}{k}\ln\frac{C'k}{s}\right\},
\end{equation}
where $c_4 > 0$ can be made arbitrarily large. Plugging \eqref{eqn:7s1} and \eqref{eqn:7s2} into \eqref{eqn:7c} we see that \eqref{eqn:7b} holds. This completes the proof.
\end{proof}
\section{Proof of Lemma~\ref{lem:two-layer-isolation}}
\begin{proof}
Fix a set $S$ of size $s$. Let $\mathcal{E}$ be the event that at least $(1-\epsilon/2)s$ elements of $S$ are isolated in at least $(1-\zeta/2)d_1$ first-layer buckets. Similarly to Lemma~\ref{lem:one-layer-isolation}, we know that
\[
\Pr\{\mathcal{E}^c\} \leq
\left(\frac{c's}{\zeta\epsilon B_1}\right)^{\zeta\epsilon sd_1}\leq e^{-cs\log\frac{N}{k}}
\]
where $c'$ is an absolute constant and $c>0$ can be made arbitrarily large. In the above we used the fact that, since $B_1=\Omega(k/(\zeta^{\alpha}\epsilon^{2\alpha}))$, it holds that
\[
\ln\frac{\zeta\epsilon^2 B_1}{c_1 k} \geq \left(1-\frac{1}{\alpha}\right)\ln\frac{B_1}{k}.
\]
Now condition on the event $\mathcal{E}$. Among these $(1-\epsilon/2)s$ elements, we shall show that at least $(1-\epsilon)s$ of them are isolated in at least $(1-\zeta)d_1d_2$ second-layer buckets. If this fails, there are in total at least $\frac{\epsilon}{2}\cdot\frac{\zeta}{2} sd_1d_2$ failed element-repetition pairs. But the probability of each collision is bounded by $s/B_2$ even conditioned on previous outcomes, so we can proceed as in Lemma~\ref{lem:one-layer-isolation} to conclude that there are at least $\theta\zeta\epsilon sd_1d_2$ failed pairs (for some absolute constant $\theta$) with probability at most
\[
\left(\frac{es}{\theta\zeta\epsilon B_2}\right)^{\theta \zeta\epsilon sd_1d_2} \leq e^{-c''s\log\frac{N}{k}},
\]
as desired, where the constant $c'' > 0$ can be made arbitrarily large.
\end{proof}
\section{Proof of Lemma~\ref{lem:R-S_coding}}
\label{sec:R-S_coding}
\begin{proof}
As an outer code, use Reed-Solomon over an alphabet of size
$\beta/\log \beta$. This is concatenated with a random code of length
$\log\beta$ as an inner code. The inner code can be decoded in
constant time from a lookup table of size $\beta$ and the
outer code can be decoded by solving a linear system of size
approximately $\beta$ in time $O(\beta^2)$.
To encode the $\beta$ bits of the inner code, proceed as follows.
To encode a single bit $b\in\{0,1\}$, replace each row $\rho$ of $\mtx{\Phi}$ with a $2$-by-$N$ submatrix.
In column $i$ of $\rho$, replace each 1 with a height-two column
$\bigl(\begin{smallmatrix}\rho_i\\0\end{smallmatrix}\bigr)$
or
$\bigl(\begin{smallmatrix}0\\\rho_i\end{smallmatrix}\bigr)$
depending on $b$. For decoding in the
presence of noise, consider any
$\bigl(\begin{smallmatrix}a\\b\end{smallmatrix}\bigr)$
to be a {\em relaxed} encoding equivalent to
$\bigl(\begin{smallmatrix}\rho_i\\0\end{smallmatrix}\bigr)$
if $|a|>|b|$ and
$\bigl(\begin{smallmatrix}0\\\rho_i\end{smallmatrix}\bigr)$ otherwise.
Replace each 0 with a height-2 column of zeros.
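The single-bit encoding and its relaxed decoding can be sketched as follows (a Python toy; `encode_bit` and `decode_bit` are hypothetical helper names, and real measurements would carry noise):

```python
def encode_bit(rho, b):
    """Sketch of the single-bit encoding: each entry rho_i of a measurement
    row becomes a height-2 column, (rho_i, 0) for b = 0 and (0, rho_i) for
    b = 1; zero entries stay (0, 0)."""
    top = [v if b == 0 else 0 for v in rho]
    bot = [0 if b == 0 else v for v in rho]
    return top, bot

def decode_bit(a, b):
    """Relaxed decoding of one height-2 measurement pair: read the bit as 0
    if the top entry dominates in magnitude, and as 1 otherwise."""
    return 0 if abs(a) > abs(b) else 1
```

When a heavy hitter dominates its bucket, the larger of the two stacked measurements reveals the encoded bit even in the presence of smaller noise.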
Overall we use a Weak system (Theorem~\ref{thm:weakI}) with a $(\Theta(k),O(1))$ bipartite expander that exhibits a $(\Theta(k),d)$ hashing scheme, where $d=\Theta(\log(B/k))$. We know that there exist $\Omega(k)$ heavy hitters, each of which dominates the buckets where it lands in $\Omega(d)$ repetitions. In each such repetition, our bit-encoding scheme ensures that the associated bit can be recovered successfully; hence for each such heavy hitter we collect $\Omega(d)$ bits, enough to recover the message of $\beta$ bits.
The runtime is $O(B\beta^2\log(B/k))$ for exhaustive recovery in the Weak system.
\end{proof}
\section{Proof of Lemma~\ref{lem:expander_recovery}}
\label{sec:expander_recovery}
\begin{proof}
Combining Lemma~\ref{lem:two-layer}
and Lemma~\ref{lem:two-layer-isolation}, one can show that there exists a $(4s,\epsilon)$-expander such that
\begin{enumerate}\addtolength{\itemsep}{-0.25\baselineskip}
\renewcommand{\labelenumi}{(\alph{enumi})}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt} \item the expander exhibits a $(B_1,d_1,B_2,d_2)$ hashing structure, where the parameters are as in Lemma~\ref{lem:two-layer-isolation};
\item the expander satisfies the $(O(s/\epsilon), O(\epsilon), O(1))$-isolation property.
\end{enumerate}
As in the proof of Lemma~\ref{lem:decoy}, we may assume that $|\ensuremath\mathbf{x}_i|\geq \epsilon/s$ for all $i\in\supp{\ensuremath\mathbf{y}}$; otherwise we can move the violating $i$'s into $\ensuremath\mathbf{z}$, causing $\|\ensuremath\mathbf{z}\|_1$ to increase by at most $s\cdot \epsilon/s = \epsilon$, so we still have $\|\ensuremath\mathbf{z}\|_1\leq 2$. Call the elements of $\supp{\ensuremath\mathbf{y}}$ heavy hitters. If $|\supp{\ensuremath\mathbf{y}}|\leq s/8$ our goal is automatically achieved, so we assume that $|\supp{\ensuremath\mathbf{y}}| > s/8$.
\textbf{Step 1.} Overall we know from Remark~\ref{remark:decoy} that there are at most $s/8$ decoys; that is, we can recover $|\supp{\ensuremath\mathbf{y}}|-s/8$ heavy hitters from the second-layer bucket values, where successful recovery means that each of them dominates in at least $\alpha_2 d_1 d_2$ second-layer buckets, i.e., the bucket noise is at most $\nu=\epsilon/(2s)$.
For each of them, in at least $\beta_1 d_1$ of the $d_1$ outer repetitions, it dominates in at least $\beta_2 d_2$ inner repetitions, where $(1-\beta_1)(1-\beta_2)>1-\alpha_2$. Because whenever an element dominates a second-layer bucket it must also dominate the first-layer bucket incident to that second-layer bucket, we conclude that there exists a set $S\subseteq \supp{\ensuremath\mathbf{y}}$, $|S|\geq |\supp{\ensuremath\mathbf{y}}|-s/8$, such that each $i\in S$ dominates at least $\beta_1 d_1$ first-layer buckets among all $d_1$ repetitions, and in each such repetition it dominates at least $\beta_2 d_2$ second-layer buckets.
We can choose the hidden constants in the expander parameters such that
$\beta_1 \geq 1-\zeta$ and $\beta_2$ matches the error tolerance of the coding scheme we described in Lemma~\ref{lem:R-S_coding}, where $\zeta$ is the parameter we set in Section~\ref{sec:expander_description}.
\textbf{Step 2.} It follows from the above that each $i\in S$ will be recovered in at least $\beta_1d_1$ outer repetitions, since its bucket value is $\geq \epsilon/s - \nu \geq \epsilon/(2s)$. Indeed, in every repetition of the outer hashing, we collect the top $O(s/\epsilon)$ (first-layer) buckets, so we include every bucket with value $\geq \epsilon/(2s)$, and thus the heavy hitter $i$. In this case, the message associated with the heavy hitter will be recovered correctly, as the inner encoding can tolerate a $(1-\beta_2)$ fraction of errors. Therefore, for each $i\in S$, the associated message will be correctly recovered in at least $\beta_1 d_1$ outer repetitions.
\textbf{Step 3.} As described in the previous section, we form a graph $\tilde G$. Note that for $i\in S$, at least $\beta_1 d_1$ nodes in the $i$-th column are good nodes (i.e., carry the correct message). For each of them, perform a breadth-first search of $O(\log_\delta d_1)$ steps, collecting at most $d_1^c$ nodes. Since the column contains at most $(1-\beta_1)d_1 \leq \zeta d_1$ bad nodes, by Lemma~\ref{lem:graph_expander} and Property (d) of our choice of parameters, there exists a good node in the $i$-th column such that a breadth-first search of $c\log_\delta d_1$ steps from it collects $\alpha d_1$ good nodes which are all in the $i$-th column. The Parvaresh-Vardy code with our choice of parameters (Properties (b) and (c)) enables us to include $i$ in the list. We briefly describe the decoding below. Having collected at most $d_1^c$ points $(x,r(x))\in \ensuremath\mathbb{F}^{m+1}$, we consider all polynomials $Q(x,y_0,\dots,y_{m-1})$ of degree at most $d_X = \alpha d_1 - (h-1)m\log_{B_1}N$ in the first variable and at most $h-1$ in each of the others, such that $Q(x,r(x))=0$ for all collected points. Our choice of parameters (Property (c), i.e., $d_X h^m > d_1^c$) guarantees that such a $Q$ exists. Then, the existence of $\alpha d_1$ good nodes among the BFS-visited nodes implies that the equation
\[
Q(x, f_i(x), (f_i^h \bmod{E})(x),\dots, (f_i^{h^{m-1}}\bmod{E})(x)) = 0
\]
has at least $\alpha d_1$ roots $x\in\ensuremath\mathbb{F}$, where $f_i$ corresponds to the coordinate $i\in S$. By our choice of parameters (Property (b)), the left-hand side is a univariate polynomial in $x$ of degree less than $\alpha d_1$, and must therefore be identically zero. This means that $f_i(x)$ is a root of $Q^\ast(z) = Q(x,z,z^h,\dots,z^{h^{m-1}}) = 0$ over $\ensuremath\mathbb{F}[x]/E(x)$. We can find $f_i$ by factoring $Q^\ast$ and thus recover the position $i$ of the heavy hitter.
In the end, our candidate list will contain all $i\in S$, that is, we shall have recovered $|\supp{\ensuremath\mathbf{y}}|-s/8$
heavy hitters.
\tightpgh{Number of Measurements.} The number of measurements is $O(B_2d_1d_2)= O(\epsilon^{-2} s\log(N/s))$.
\tightpgh{Size of Look-up Table.} The inner decoding uses a look-up table of size $O(\log B_1) = O(\frac{s}{\epsilon} + \log\log\frac{N}{s})$. The algorithm also stores the expander graph $G$, which takes space $O(d_1)$. Both are smaller than the space cost of the recovered graph $O(s d_1/\epsilon)$, so their contribution to the space complexity can be neglected.
\tightpgh{Runtime.} For each of $d_1$ repetitions, we shall recover every bucket with value $\geq \epsilon/(2s)$ in $O(B_1\log^3(B_1/k)) = O(s^{1+\beta}\poly(\log N,1/\epsilon))$ time. There are $O(s/\epsilon)$ of them in each repetition. Then we form a graph of size $O(sd_1/\epsilon)$; forming this graph takes time $O(s^{1+\beta}\poly(\log N,1/\epsilon))$ by the argument above. Then we do a breadth-first search of $c\log_\delta d_1$ steps from every node. Each BFS takes $O(d_1^c)$ time. Each decoding of the BFS nodes takes $\poly(d_1,\log|B_1|)=\poly(\log N,1/\epsilon)$ time, and can be done deterministically (see, e.g., \cite[Theorem 4.3]{CEPR09}), since $|\ensuremath\mathbb{F}|$ has a small characteristic. Hence extracting the heavy hitters $i$ from the recovered graph $\tilde G$ takes time $O(s\poly(\log N,1/\epsilon))$, and therefore the overall runtime is $O(s^{1+\beta}\poly(\log N,1/\epsilon))$. In the end, we shall obtain a candidate list of size $O(s\poly(\log N,1/\epsilon))$.
\end{proof}
\section{Proof of Theorem~\ref{thm:toplevel}}\label{sec:toplevel_proof}
\begin{algorithm}[tb]
\caption{Toplevel System}
\label{algo:fasttoplevel}
\begin{algorithmic}
\Require{$\mtx{\Phi}$, $\mtx{\Phi}\signal$, $N$, $k$, $\epsilon$}
\Ensure{$\widehat\signal$}
\State{$\widehat\signal\leftarrow0$}
\State{${\mathbf \mu}\leftarrow\mtx{\Phi}\signal$}
\For{ $j\gets 0$ to $\log k$}
\State {Run Algorithm~\ref{algo:encoding_paradigm} on ${\mathbf \mu}$ with length $N$, $s\leftarrow k/2^j$, $\eta\leftarrow \frac{\epsilon}{\gamma^j(1-\gamma)}$ and obtain a candidate list $I$}
\State {Run Algorithm~\ref{algo:weak} on candidate set $I$ with $s\leftarrow k/2^j$ and $\eta \leftarrow \epsilon \gamma^j$}
\State {Let $\signal'$ be the result}
\State {$\widehat\signal\gets \widehat\signal + \signal'$}
\State {${\mathbf \mu}\gets {\mathbf \mu}-\mtx{\Phi}\signal'$}
\EndFor
\State \Return $\widehat\signal$
\end{algorithmic}
\end{algorithm}
\begin{proof}
Suppose that in Lemma~\ref{lem:expander_recovery}, the exponent of $1/\epsilon$ in runtime is $c = c(\beta,\gamma) > 2$. Choose $\alpha < 1$ such that $\alpha^c > 1/2$.
Using Lemma~\ref{lem:expander_recovery} for identification and Theorem~\ref{thm:weakI} for estimation, with appropriate choices of constants, we claim that at the beginning of the $j$-th step the residual signal can be written as $\ensuremath\mathbf{y}+\ensuremath\mathbf{z}$, where $|\supp{\ensuremath\mathbf{y}}|\leq k/2^j$ and
\[
\|\ensuremath\mathbf{z}\|_1\leq 1+\epsilon\left(1+\alpha+\alpha^2+\cdots+\alpha^{j-1}\right).
\]
We prove this claim by induction. Let $s = k/2^j$ and $\eta = \epsilon (1-\alpha)\alpha^j$ for identification. As before, we may assume that all head items, i.e., the non-zero elements of $\ensuremath\mathbf{y}$, are larger than $\eta/s$ in magnitude; this introduces at most $\eta$ into the tail, which therefore remains at most $3/2$.
The identification procedure returns a candidate set $I$ that contains a $3/4$ fraction of $\supp{\ensuremath\mathbf{y}}$ (note that when the head is flat, we can change $\supp{\ensuremath\mathbf{y}}$ to a superset that satisfies this condition without changing the norm of $\ensuremath\mathbf{z}$). Then the estimation procedure, with $s=O(k/2^j)$ and $\eta=\epsilon\alpha^{j+1}$, will give us
\[
\ensuremath\mathbf{x} = \widehat{\ensuremath\mathbf{x}} + \widehat{\ensuremath\mathbf{y}} + \widehat{\ensuremath\mathbf{z}},
\]
where $|\supp{\widehat{\ensuremath\mathbf{x}}}|=O(s)$, $|\supp{\widehat{\ensuremath\mathbf{y}}}|\leq s/2$ and
\[
\|\widehat{\ensuremath\mathbf{z}}\|_1\leq \|\ensuremath\mathbf{z}\|_1 + \epsilon(1-\alpha)\alpha^j + \epsilon\alpha^{j+1} = \|\ensuremath\mathbf{z}\|_1 +\epsilon\alpha^j.
\]
It is easy to verify that $\|\widehat{\ensuremath\mathbf{z}}\|_1\leq 1 + \epsilon/(1-\alpha) = O(1)$, and thus Lemma~\ref{lem:expander_recovery} for identification and Theorem~\ref{thm:weakI} can be applied at the next round and the inductive hypothesis is satisfied. Therefore, in the end we shall obtain that
\[
\|\widehat{\ensuremath\mathbf{x}} - \ensuremath\mathbf{x}\|_1\leq \left(1+\frac{\epsilon}{1-\alpha}\right)\|\ensuremath\mathbf{x}-\ensuremath\mathbf{x}_k\|_1.
\]
The number of measurements used for identification is
\[
O\left(\sum_j \frac{1}{\epsilon^2 \alpha^{2j}}\cdot \frac{k}{2^j}\log\frac{N}{\frac{k}{2^j}}\right) = O\left(\frac{k}{\epsilon^2}\sum_j\left(\frac{1}{2\alpha^2}\right)^j\left(j+\log\frac{N}{k}\right)\right) = O\left(\frac{k}{\epsilon^2}\log\frac{N}{k}\right)
\]
and the number of measurements used for estimation is
\[
O\left(\sum_j \frac{1}{\epsilon^2\alpha^j}\cdot \frac{k}{2^j}\log\frac{N}{\frac{k}{2^j}}\right) = O\left(\frac{k}{\epsilon^2}\sum_j\left(\frac{1}{2\alpha}\right)^j\left(j+\log\frac{N}{k}\right)\right) = O\left(\frac{k}{\epsilon^2}\log\frac{N}{k}\right)
\]
hence the total number of measurements is $O(\epsilon^{-2}k\log(N/k))$ as claimed.
It can be verified in a similar way that total runtime is $O(k^{1+\beta}\poly(\log N,1/\epsilon))$.
Finally, replacing $\epsilon$ with $(1-\alpha)\epsilon$ completes the proof.\end{proof}
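The geometric-series bound on the identification measurements can be sanity-checked numerically. The toy computation below uses illustrative values $k=1024$, $N=2^{20}$, $\alpha=0.8$ (so that $2\alpha^2>1$) and sums the per-round cost $\alpha^{-2j}\cdot\frac{k}{2^j}\log\frac{N}{k/2^j}$, up to constants.

```python
import math

k, N, alpha = 1024, 2 ** 20, 0.8    # illustrative values; need 2*alpha**2 > 1
total, j = 0.0, 0
while k // 2 ** j >= 1:
    s = k // 2 ** j
    # per-round identification cost, up to constant factors
    total += alpha ** (-2 * j) * s * math.log2(N / s)
    j += 1
```

The sum stays within a small constant factor of $k\log_2(N/k)$, matching the claim that the total measurement count is $O(\epsilon^{-2}k\log(N/k))$.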
\section{Identification of Heavy Hitters}\label{sec:backpointers}
In the previous section, we showed how to estimate all candidates in a candidate set $I$ quickly. The main bottleneck in a highly efficient algorithm is finding a non-trivial set $I \subset [N]$ of candidates which we address in this section.
The overall strategy is as follows. Using the two-layer hashing scheme $(B_1,d_1,B_2,d_2)$, we expect that a heavy hitter dominates the first-layer buckets where it lands in $\Omega(d_1)$ repetitions. In each of these repetitions, it is a heavy hitter in a signal of length $B_1$, and we expect to recover it using the Weak algorithm applied to the signal of length $B_1$ with $I = [B_1]$. After finding the heavy buckets in each repetition, the remaining problem is to extract the position of a heavy hitter $i$ from the $\Omega(d_1)$ repetitions that contain $i$. To do this, we shall encode the index $i$
in such a way that if we recover the buckets containing $i$ in enough repetitions we shall be able to reconstruct $i$.
To that end, we introduce the following model of weak list recovery in the \emph{sparse recovery channel}.
\begin{definition}
\label{def:channelI}
The $(m,N,s)$ {\em Sparse Recovery Channel} takes an
$m$-by-$N$ matrix $\mtx\Phi$ as input, chooses a signal $\signal$ with decomposition $\signal={\mathbf y}+{\mathbf z}$ with $|\supp{{\mathbf y}}| \leq s$ and
$\nerr{{\mathbf z}}_1 \le O(1)$, and outputs $\mtx{\Phi}\signal$.
\end{definition}
Note that $\signal$ may depend on $\mtx\Phi$. Also
note that {\em any} signal may be chosen by the channel and normalized
so that $\nerr{\ensuremath\mathbf{z}}_1\le 3/2$. It will be convenient to assign the
normalization at this point to match the Weak system (Definition~\ref{def:weakI}).
Next, we define the {\em Weak Recovery Criterion} appropriate for this
channel. See Figure~\ref{fig:channel}.
\begin{figure}
\begin{center}
\scalebox{.8}{
\begin{picture}(330,85)(0,20)
\put( 50, 40){\framebox(50,20)[c]{Encoder}}
\put(165,15){\makebox(0,0)[c]{${\Phi}$}}
\put(160,15){\vector(-3,1){60}}
\put(170,15){\vector( 3,1){60}}
\put(230, 40){\framebox(50,20)[c]{Decoder}}
\put( 0, 50){\vector(1,0){50}}
\put( 25, 60){\makebox(0,0)[c]{$\ensuremath\mathbf{m}$}}
\put(100, 50){\vector(1,0){50}}
\put(125, 60){\makebox(0,0)[c]{$\Phi'$}}
\put(165, 50){\circle{30}}
\put(165, 50){\line(+1,+1){10}}
\put(165, 50){\line(+1,-1){10}}
\put(165, 50){\line(-1,+1){10}}
\put(165, 50){\line(-1,-1){10}}
\put(165,100){\vector(0,-1){35}}
\put(162,85){\makebox(0,0)[r]{$\ensuremath\mathbf{x}$}}
\put(180, 50){\vector(1,0){50}}
\put(205, 60){\makebox(0,0)[c]{$\Phi' \ensuremath\mathbf{x}$}}
\put(280, 50){\vector(1,0){50}}
\put(315, 60){\makebox(0,0)[c]{$\widehat{\ensuremath\mathbf{m}}:\widehat{\ensuremath\mathbf{m}}\approx_x \ensuremath\mathbf{m}$}}
\end{picture}
}
\end{center}
\captionsetup{font=footnotesize,labelfont=bf}
\caption{Sparse recovery channel. The encoder and decoder agree on some matrix $\mtx\Phi$. The encoder takes messages $\ensuremath\mathbf{m}$ and produces a measurement matrix $\mtx\Phi'$ based on $\ensuremath\mathbf{m}$ and $\mtx\Phi$. The channel is fed with $\mtx\Phi'$ and $\ensuremath\mathbf{x}$ and produces $\mtx\Phi'\ensuremath\mathbf{x}$, from which the decoder tries to recover $\widehat{\ensuremath\mathbf{m}}$ in the sense of weak list recovery.}\label{fig:channel}
\end{figure}
\begin{definition}[Weak list Recovery Criterion]
\label{def:criterionI}
Fix parameters $m,N,s,\epsilon$. Let $\ensuremath\mathbf{m}$ be a vector of $\beta$-bit
messages $\ensuremath\mathbf{m}_i$, for $i\in[N]$, and suppose $\widehat{\ensuremath\mathbf{m}}$ is a
list of possible index-message pairs. We say that
$\widehat{\ensuremath\mathbf{m}}$ is {\em correct in the List Weak sense} if, for at least $|\supp{\ensuremath\mathbf{y}}|-s/8$ indices $i$ in $\supp{\ensuremath\mathbf{y}}$,
we have $(i,\ensuremath\mathbf{m}_i)\in\widehat{\ensuremath\mathbf{m}}$.\end{definition}
The encoding/decoding scheme is given in Algorithm~\ref{algo:encoding_paradigm}. We break each message $\ensuremath\mathbf{m}_i$ associated with position $i$ into $d_1$ chunks, $\ensuremath\mathbf{m}_{i,1},\dots,\ensuremath\mathbf{m}_{i,d_1}$. Note that $\ensuremath\mathbf{m}_i$ could be much longer than $\log N$ bits in order to guarantee a successful list recovery. Now in the $j$-th of the $d_1$ repetitions, we obtain a signal $\widetilde{\ensuremath\mathbf{x}}$ of length $B$. Each $\widetilde{\ensuremath\mathbf{x}}_\ell$ is associated with a message that can be viewed as a weighted sum of $\ensuremath\mathbf{m}_{i,j}$ over the positions $i$ hashed into bucket $\ell$. If a heavy hitter $i$ is isolated in bucket $\ell$ and the noise in this bucket is mild, this weighted sum will be approximately $\ensuremath\mathbf{m}_{i,j}$, and we expect to recover $\ensuremath\mathbf{m}_{i,j}$ from the second-layer hashing, with inner encoding and decoding. Now assume that we have recovered $\ensuremath\mathbf{m}_{i,j}$ for heavy hitter $i$ in sufficiently many repetitions $j$. The central difficulty is to match $\ensuremath\mathbf{m}_{i,j}$ with $\ensuremath\mathbf{m}_{i,j'}$ for $j\neq j'$ in order to recover a large enough fraction of $\ensuremath\mathbf{m}_i$ in the end. To solve this, we shall encode linking information in each node that enables us to match $\ensuremath\mathbf{m}_{i,j}$ with $\ensuremath\mathbf{m}_{i,j'}$. This is the topic of the next subsection, in which we use the Parvaresh-Vardy code to overcome this difficulty.
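The break/tag/reassemble skeleton of this scheme can be sketched as follows. In the real scheme the chunks are additionally protected by inner and outer codes; this sketch (with an illustrative chunk size) only shows how the repetition tag $j$ lets out-of-order chunks be reassembled into the original index.

```python
CHUNK_BITS = 4                       # illustrative chunk size

def break_message(i, d1):
    # m_{i,j}: the j-th chunk of i's bits, tagged with its repetition j
    mask = (1 << CHUNK_BITS) - 1
    return [((i >> (j * CHUNK_BITS)) & mask, j) for j in range(d1)]

def reassemble(chunks):
    # chunks may arrive in any order; the tag j says where each one goes
    i = 0
    for val, j in chunks:
        i |= val << (j * CHUNK_BITS)
    return i
```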
\begin{algorithm}[bt]
{\small
\caption{Encoding/Decoding paradigm.}
\label{algo:encoding_paradigm}
\begin{algorithmic}
\State{// Encoding with $(B_1,d_1,B_2,d_2)$ hashing scheme}
\For{ $i=1$ to $N$}
\State{{\bf Break}: Break the information of $i$ into
$d_1$ chunks}
\State{{\bf Outer encoding}: Encode the chunks with cluster info (from a regular expander graph) and against errors, getting $\{\ensuremath\mathbf{m}_{i,j}\}_{j=1}^{d_1}$}
\EndFor
\For{ $j=1$ to $d_1$}
\State{{\bf Inner encoding}: Encode $\ensuremath\mathbf{m}_{i,j}$, for $i\in[N]$}
\EndFor
\State{// Decoding with $(B_1,d_1,B_2,d_2)$ hashing scheme}
\For{ $j=1$ to $d_1$}
\State{// ... length $B_1$, $(B_2d_2)$-measurement
Sparse Recovery Channel ...}
\State{{\bf Inner decoding}: Recover $\widehat{\ensuremath\mathbf{m}}_j$ in the Weak List sense}
\State{{\bf Record Side Info}: Tag each element of $\widehat{\ensuremath\mathbf{m}}_j$ with $j$}
\EndFor
\State{{\bf Outer decoding}: From $\widehat{\ensuremath\mathbf{m}}=\bigcup_j
\widehat{\ensuremath\mathbf{m}}_j$'s, find
chunk clusters and correct errors; produce $I$}
\end{algorithmic}}
\end{algorithm}
Lemma~\ref{lem:R-S_coding} illustrates our encoding idea in a simple case: it shows how to encode $\beta = \log(B/k)$ bits in the length-$B$ Sparse Recovery Channel and how to recover the messages associated with $\Omega(k)$ heavy hitters in the length-$B$ signal in time approximately $B$. The proof is postponed to Appendix~\ref{sec:R-S_coding}.
\begin{lemma}
\label{lem:R-S_coding}
Fix $k,B,\beta$, with $B=\Omega(k)$ and $\beta = O(\log(B/k))$. There is a coding scheme for the
length-$B$ $m$-measurement Sparse Recovery Channel for
$m=O(\frac{k}{\epsilon}\log\frac{B}{k})$ in the weak list recovery sense in which decoding runs in
time $O(B\log^3\!\frac{B}{k})$. This scheme also uses a lookup table of size $\beta$.
\end{lemma}
\section{Fast Candidate Set Generation}
\label{sec:fast}
In this section, we present a structure that provides a list of
$\poly(k\log(N))$ candidate heavy hitters that includes $\Omega(k)$
true heavy hitters, in time $\poly(k\log N)$.
The general setup is as follows. Let $B\approx (k\log N)^3$.
We will hash
$h:[N]\to[B]$ at random and repeat $L=O(\log(N)/\log(B))$ times. In
each repetition, we will recursively recover the $k$ heavy hitters in
the aggregated signal. We will show how to recover, along with each
recovered heavy hitter, $O(\log B)$ bits of auxiliary data, defined
below,
so that there are $\Omega(L\log B)=\Omega(\log N)$ bits of data
associated with each recovered heavy hitter. Our main goal here is to
piece together the $L$ chunks of $\log B$ bits for position $i$ to get
$i$, which is returned as a candidate.
To store auxiliary data, one can proceed as follows. First, to store
one additional bit $m_i$ associated with $i$, map $[B]\to[B]\times[2]$
following $h:[N]\to[B]$, deterministically, sending $i$ to
$(h(i),m_i)$. When we recover a heavy hitter recursively from a
signal of ``length'' $[B]\times[2]$, the second component of the
position gives us $m_i$.
To store $\log(B)$ bits, we have two choices. One is to generalize the
above to map $[N]\to[B]\to[B^{O(1)}]$. The other is to solve a
noisy set of equations modulo 2: each repetition gives the right-hand
side of some fixed random equation in the $\log B$ bits; if enough
repetitions succeed, we will have enough equations to recover all the
bits.
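The second alternative can be sketched concretely: collect random parity equations from the successful repetitions and solve them by Gaussian elimination over GF(2). The sizes below are illustrative, and this sketch only shows the linear-algebra step; the real scheme must also cope with repetitions that fail.

```python
import random

random.seed(1)
nbits = 10                      # the log(B) bits to recover
secret = [random.randrange(2) for _ in range(nbits)]

# Each successful repetition yields one random parity of the bits.
eqs = []
for _ in range(3 * nbits):
    a = [random.randrange(2) for _ in range(nbits)]
    b = sum(ai * si for ai, si in zip(a, secret)) % 2
    eqs.append((a, b))

# Gaussian elimination over GF(2), rows kept as bitmasks.
pivots = {}                     # pivot column -> (mask, rhs bit)
for a, b in eqs:
    mask = sum(bit << i for i, bit in enumerate(a))
    for c in sorted(pivots, reverse=True):
        if (mask >> c) & 1:
            pmask, pb = pivots[c]
            mask ^= pmask
            b ^= pb
    if mask:
        pivots[mask.bit_length() - 1] = (mask, b)

# Back-substitution in ascending pivot order.
x = [0] * nbits
for c in sorted(pivots):
    mask, b = pivots[c]
    s = b
    for j in range(c):
        if (mask >> j) & 1:
            s ^= x[j]
    x[c] = s
```

With $3\log B$ random equations the system is full-rank with overwhelming probability, so all bits are recovered.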
\subsection{Graph Structure: Ideal, Contracted, and Implemented}
\paragraph{Expander.}
For constant $d$, pick a directed graph $G_L$ of degree $d$,
by giving each node $d$ out-arcs at random.
We now give an important property of this graph. Note that the
following property says something about $L$ starting nodes $u$ and
$\log_d L$ possible radii from $u$, but
does {\em not} make a claim about expansion from all
exponentially-many small sets.
\begin{lemma}
\label{lem:expander}
Let $\delta=\log_d(3L)$. Except with probability $1/\poly(L)$, for
each node $u\in G_L$, there are $.8L$ unique nodes at distance
$\delta$ from $u$ among $3L$ total endpoints of length-$\delta$
paths.
\end{lemma}
\begin{proof}
Intuitively, we proceed as follows. Fix a starting node, $u$. Assume
inductively that the neighborhood of $u$ at radius $j$ contains $f(j)$
unique nodes, which is about the maximum number $d^j$ of unique
nodes---in fact, assume that the neighborhood contains $f(j)\approx
d^j$ nodes that have not yet been seen at any
radius less than $j$, either. Then the immediate neighbors of these
are chosen independently at random from all $L$ nodes, and we can
proceed.
Our analysis treats separately the case of $f(j)<\log L$, $\log L\le
f(j)<\sqrt{L\log L}$, $\sqrt{L\log L}\le f(j)< L/d$, and $f(j)\approx
L/d$. The reasons are indicated below by boxed expressions.
Fix distance $j$. Let $X_r$ be the indicator variable for the $r$'th
arc ending on a node already chosen at this or a previous distance $j$.
Then $E[X_r]\le (r+d^j)/L$. Let $X=\sum_{r\le df(j)} X_r$, so
$E[X]\approx(d^2f^2(j)/2+f(j)d^{j+1})/L\approx d^2f^2(j)/(2L)$. As long
as $f(j)< L/d$, we
will aim for failure probability $1/L\log L$, enough to take a union
bound over all starting nodes and all distances $j$.
\begin{itemize}
\item
First, for $f(j)<0.6\log L$, consider the probability that
at least two arcs
are bad. That means that at least two arcs collide with previous
arcs. The probability is at most
\begin{eqnarray*}
\binom{df(j)}{2}\left(\frac{df(j)}{L}\right)^2
& \le & \frac{(df(j))^4}{L^2}\\
& \le & \frac{(0.6 d)^4 \log^{4}L}{L^2}\\
& \le & \frac1{L\log L},
\end{eqnarray*}
which is acceptably small. The aggregate expansion over the $J$
possible $j$'s in the range $d\le f(j)<0.6\log L$ is at least
$d^J\prod_{1\le j<J}\left(1-2/d^j\right)=d^J\left(1-\frac{O(1)}d\right)\geq d^J\left(1-\frac{2.37}{d}\right)$ for $3\le d\le 10^8$.
This analysis extends for $f(j)>0.6\log L$, but only up to
\framebox{$f(j)<o\left(\frac{\sqrt{L}}d\right)$}.
\item
Now suppose $0.6\log L\le f(j)\le\frac{\sqrt{7L\log L}}d$.
By Chernoff, we have
\[\Pr(X-E[X] > t)\le\left(\frac{e E[X]}{t}\right)^t.\]
We claim that we shall always have $f(j)\geq d^j/3$ in this case (for $d\geq 5$), and thus $f(j)\geq d^{j-1}$ and $f(j)d^{j+1}\leq d^2f^2(j)/2$, hence $E[X] \leq \frac{3}{2} d^2 f(j)^2/L$.
Since \framebox{$f(j)\le\frac{\sqrt{7L\log L}}d$}
,
$eE[X]/t<0.02$ if
$t\ge 0.6\log L(\ge(21e/2)\cdot 0.02\cdot \log L)$. In that case (taking $L\geq 4$ to guarantee that $0.6\log L\geq 1$),
\[\left(\frac{e E[X]}{t}\right)^t\le (0.02)^{0.6\log L}\le 2^{-3.38\log L}<1/L\log L.\]
Here the
expansion is $f(j)$ to $df(j)-t$ to $d(df(j)-t)-t$, etc. Rewriting,
\[df(j)-t=df(j)\left(1-\frac t{df(j)}\right)
\ge df(j)\left(1-\frac 1d\right),\]
since $t\le 0.6\log L$ and \framebox{$f(j)\ge 0.6\log L$}. The aggregate expansion for $k$ consecutive steps of this case is
\[
d^k f(j) - (d^{k-1}+\cdots +1)t = d^k f(j) \left(1-\frac{1}{d}\cdot\frac{1-d^{-J}}{1-d^{-1}}\cdot\frac{t}{f(j)}\right) \geq d^k f(j)\left(1-\frac{1}{d-1}\right)
\]
for all $1\leq k\leq J$. Now it is easy to see that the claimed lower bound on $f(j)$ holds, as the shrinking factor
\[
\left(1-\frac{1}{d-1}\right)\left(1-\frac{2.37}{d}\right)\geq \frac{1}{3}
\]
for $d\geq 5$.
\item
Next, suppose $\frac{\sqrt{7L\log L}}d\le f(j)\le 3L/d^2$.
Using an
alternative form of
Chernoff, we get
\[\Pr(X > (1+\epsilon)E[X])\le
e^{-\frac{\epsilon^2 E[X]}{3}}.\]
Here $E[X]\approx d^2f^2(j)/(2L)$. Put $\epsilon=1$, so that
$1+\epsilon=2$. Then the failure probability is
\[e^{-\frac{E[X]}{3}}\approx e^{-d^2f^2(j)/(6L)}
\le e^{-(7/6)\log L}\le 1/L\log L,\]
using the fact that \framebox{$\frac{\sqrt{7L\log L}}d\le f(j)$}.
The expansion is from $f(j)$ to
$df(j)-(1+\epsilon)E[X]=df(j)-2E[X]$, {\em i.e.}, to
\begin{eqnarray*}
df(j)- 2E[X]
& \approx & df(j) - \frac{d^2f^2(j)}{L}\\
& = & df(j)\left(1 - \frac{df(j)}{L}\right).
\end{eqnarray*}
Here we may assume that $f(j)=3L/d^i$ for some $i$ and, because
\framebox{$f(j)\le 3L/d^2$}, we may assume $i\ge 2$. Thus the expansion
is from $f(j)$ to $df(j)\left(1 - \frac{3}{d^i}\right)$
for some $i\ge 1$ and $i(j+1)=i(j) + 1$. Thus the aggregate
expansion over all
$J\approx
\log_d(3L/d^2) - \log_d\left(\frac{\sqrt{7L\log L}}d\right) + 1$ of the
$j$'s in the range $\frac{\sqrt{7L\log L}}d\le f(j)\le 3L/d^2$ is at
least
\[d^J\prod_{i\ge 1}\left(1 - \frac{3}{d^i}\right)\ge
d^J\left(1 - \frac{O(1)}d\right).\]
\item
Finally, suppose $f(j)\approx 3L/d$, by making $L$ approximately $1/3$
times a power of $d$. (Note that $f(j-1)\approx 3L/d^2$ is
handled by the previous case.) The idea is that even if a path collides with
another path from this or a previous distance, we will count
the affected node once, giving up on the independence of extensions
that we needed earlier but do not need for the last step. We will
tolerate many more collisions---most of the $3L$ paths will collide,
since there are only a total of $L$ possible unique endpoints---in
order to get a large fraction of nodes hit. This means that expansion
will be $.8L / (3L/d) \approx d/4$, which is much worse than the $d(1-1/d)$
that we achieved at previous steps, so we can only afford this
analysis for \framebox{at most $O(1)$ possible $j$'s}.
Proceeding, each of the $L$ nodes is uncovered with
probability $(1-1/L)^{3L}\approx e^{-3}\approx 1/20$.
Nodes are left uncovered with negative correlation, so we can use the
arithmetic and Chernoff bound as for independent events. We expect
$\mu\le L/20$ uncovered nodes and get $L/5=L-.8L$ uncovered nodes,
{\em i.e.} we get $t=(3/20)L$ more than the expected number, with
probability at most
\[\left(\frac{e\mu}{t}\right)^{t}\approx .9^t\approx .985^L<1/L\]
and $1/L$ is small
enough to take a union bound over starting nodes. (No need for
$1/L\log L$ here.) The aggregate expansion factor over the
previous stages is $(1-O(1)/d)$ times the design maximum, and we can
ignore that factor, getting $.8L$ unique nodes out of a total of
around $3L$ paths.
\end{itemize}
(Note: $.985^L<1/L$ if $\log(N)/\log B = L\ge 400$, {\em i.e.}, if
$N\ge B^{400}=(k\log N)^{1200}$, which, for $k=2$, occurs for $N\ge
2^{20,000}\approx10^{6000}$ or so. No attempt has been made here to
optimize these constants.)
\end{proof}
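Lemma~\ref{lem:expander} can be checked empirically on a small instance. The sketch below builds a random out-degree-$d$ graph with $d=3$ and $L=81$, so $\delta=\log_3(3\cdot 81)=5$ and there are exactly $3L=243$ length-$\delta$ paths. With these toy sizes the unique-endpoint count typically comes close to $.8L$, but since the lemma's constants are asymptotic we assert only a weaker bound.

```python
import random

random.seed(7)
d, L = 3, 81
delta = 5                           # log_d(3L) = log_3(243) = 5
# each node gets d random out-arcs
out = [[random.randrange(L) for _ in range(d)] for _ in range(L)]

endpoints = [0]                     # endpoints of all length-delta paths from node 0
for _ in range(delta):
    endpoints = [v for u in endpoints for v in out[u]]
unique = len(set(endpoints))
```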
\paragraph{Definitions of Our Graphs.}
Let $G_{LN}$ be a directed graph on $LN$ nodes, regarded as $L$ rows and
$N$ columns. Each column of $G_{LN}$ is a copy of $G_L$. Thus all
arcs of $G_{LN}$ are within the same column.
Now we define a contracted version of the node names. For $B'=\poly(B)$,
partition the $r$'th row of $G_{LN}$ randomly into $B'$ pieces, by
giving each node $(r,i)$ a $\log(B')$-bit random string $p(r,i)$.
The {\em contracted name} of node $(r,i)\in
G_{LN}$ is $(r,p(r,i))$.
Now define a graph, $G_{LB'}$. It has $L$ rows and $B'$ nodes in each
row, defined as the partition pieces. Formally, the nodes are of the
form $(r,p)$, where $p$ is some value taken by the function $p(r,i)$
for some $i$. For each arc $(r,i)\to(r',i)\in G_{LN}$, put an arc
$(r,p(r,i))\to(r',p(r',i))\in G_{LB'}$.
Note that, in $G_{LB'}$, we can no longer regard the set of nodes as
an $L$-by-$B'$ rectangle, only as a collection of $L$ rows, each with
$B'$ nodes. There is no notion of column in $G_{LB'}$, and so its
arcs are not confined to columns. There may be parallel arcs. The
number of nodes is $LB'$, a small number, but the
total number of arcs (counting parallel arcs multiply) is $dLN$, a
large number, the
same in $G_{LB'}$ as in $G_{LN}$.
\paragraph{Implementation as Measurement Matrix.}
We now show how to use the $O(\log B)$ auxiliary bits to encode
$G_{LB'}$. Specifically, for a node $(r,i)$ pointing to $(r',i)$,
where $r$ and $r'$ are rows and $i$ is a column, we store the
contracted names $p(r,i)$ and $p(r',i)$ among the $O(d\log B)$
auxiliary bits of $i$ in repetition $r$. It will turn out that the
rows $r$ and $r'$
can be recovered at runtime, and do not need to be stored. Thus,
intuitively, $r$ is known, then $p(r,i)$ gives partial information
about $i$, so that $(r,p(r,i))$ is partial information about $(r,i)$.
This is a lossless encoding of $G_{LB'}$, with $LB'$ nodes and $dLN$
arcs.
\paragraph{Runtime Graph.}
Finally, we define a noisy runtime version $\widetilde G_{LB'}$ of
$G_{LB'}$. The nodes of $\widetilde G_{LB'}$ have names of the form $(r,p)$,
just like $G_{LB'}$. When we do recursive recovery, in repetition
$r$, we recover some node by its contracted name, $p$, encoded in
the auxiliary bits. Since we know what repetition $r$ the algorithm
is in, we have learned $(r,p)$. We
also recover arcs out of that node, in contracted form. That is,
using $r$ and $G_L$ to learn some $r'$, we get $(r',p')$, where
$p'=p(r',i')$ for some $i'$, and $p'$ is also stored in the auxiliary
bits. Put
the arc $(r,p)\to(r',p')$ into
$\widetilde G_{LB'}$. Thus, if recursive recovery returns $k'$
putative heavy hitters in each of $L$ repetitions, there are $dLk'$
arcs in $\widetilde G_{LB'}$, and the entire graph $\widetilde
G_{LB'}$ is small enough that we can consume time polynomial in its
size.
Note that if recursive recovery produces $i\ne i'$ with
$p(r,i)=p(r,i')$, then one of these recoveries is incorrect. Discard
$(r,p(r,i))=(r,p(r,i'))$ as a node detected to have failed.
\subsection{Recovery Algorithm}
\paragraph{Intuition.}
If all goes well, a node $(r,p)\in\widetilde G_{LB'}$ is dominated by
a single heavy hitter $i$, such that $(r,p)=(r,p(r,i))$. We then use
arcs out of $(r,i)\in G_{LN}$ as the basis for arcs out of
$(r,p(r,i))\in \widetilde G_{LB'}$. Our goal here is to show that,
for enough heavy hitters $i$, there are $\Omega(L)$ good nodes
$(r,p(r,i))\in \widetilde G_{LB'}$, {\em i.e.}, dominated by $i$,
that we can link together with each other and with at most a small
fraction of $L$ bad nodes. Then we can use error-correcting codes to
get $i$ from the collection of $\log N$ bits with a small constant
fraction of errors.
\paragraph{Challenge.}
We need failure probability around $1/N^k$ overall, or about
$1/N\approx 1/B^L$ for each $i$. Each node fails with probability
around $1/\poly(B)$, so we need to consider $\Omega(L)$ nodes at a
time to form an event with small enough probability. We cannot
directly link nodes in $\widetilde G_{LB'}$ by walking from one node
to an immediate neighbor. Also, it is likely enough that $\widetilde
G_{LB'}$ contains a large component of $\Omega(kL)$ nodes and some
isolated nodes, so we cannot link nodes in $\widetilde G_{LB'}$ by
taking directed components---this would be not much better than taking
all nodes in $\widetilde G_{LB'}$. We need an appropriate notion of
strong connectivity, which we define next.
\paragraph{Algorithm, formally.}
For each node $(r,p)\in\widetilde G_{LB'}$, do a breadth first search
of depth $\delta=\log_d (3L)$, getting exactly $3L$ paths of length
$\delta$. Call the multiset of nodes at path endpoints the
{\em ($\delta$-out-) neighborhood} of $(r,p)$. If the neighborhood has
more than $1.01L$ unique nodes, discard $(r,p)$. Otherwise, put a
node $v$ into the cluster $C_u$ of a starting node $u$ if the
neighborhoods of $u$ and $v$ intersect in at least $.58L$ nodes.
Below, we will show that clusters for many different $i$ have at least
$cL$ nodes
dominated by the same $i$ and at most $c'L$ other nodes, where $c'<c$
are constants so that error-correcting codes can correct the
$c'L\log(B)$ bits in error and produce the $O(L\log B)=\log N$ bits in
the name of $i$.
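The clustering rule can be stated compactly. In this sketch, \texttt{neigh} maps each surviving node of $\widetilde G_{LB'}$ to its set of unique $\delta$-out-neighborhood endpoints, and two nodes are clustered together exactly when those sets intersect in at least $.58L$ elements; the concrete sets in the usage below are invented for illustration.

```python
def cluster(neigh, L):
    # neigh: node -> set of unique delta-step endpoints (each of size <= 1.01 L)
    return {u: {v for v, Nv in neigh.items() if len(Nu & Nv) >= 0.58 * L}
            for u, Nu in neigh.items()}
```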
\paragraph{Algorithm, formally---Error-Correcting Code.}
We will use error-correcting codes that
tolerate many erasures (much more than half), since we are recovering
$cL$ nodes,
which is a small fraction of all $L$ nodes for that heavy hitter.
Reed-Solomon may work here: We recover $cL$ points on a low-degree
polynomial, of which $c'L$ are in error, for constant $c'$ much
smaller than $c$. We do not know in advance which points on the
polynomial will be recovered, but Berlekamp-Welch Reed-Solomon
decoding works in this model.
(See http://people.csail.mit.edu/madhu/FT02/slides/lect11slx4.pdf.)
That is, we want to encode $\log(N)$ bits, as $m=\log(N)/\log(B)$
chunks of $O(\log B)$ bits each, regarded as an element of a field of
size $q=\poly(B)$. Use a polynomial $f$ of degree
$m-1$ over the field of size $q$ to encode the $m$
chunks. Let $t=c'L$ be a bound on the number of errors. Evaluate $f$ at
$L\ge m+2t$ points; note that $q>L$. If we recover $m+2t=cL$ points of
which at most $t$ are wrong, then the recovered points $y$ satisfy
$g=y\psi$, where $g$ is a polynomial of degree $m+t-1$ and $\psi$
is the error locator polynomial, non-zero of degree at most $t$,
that is zero at the errors. The $m+2t$ evaluations are enough to find
the $m+2t$ total coefficients in $g$ and $\psi$. This is a linear system, that
has a solution (as we just argued). The solution $(g,\psi)$ is not
necessarily
unique---for example, if the actual number $e$ of errors is much less
than $t$ (including even $e=0$) then $\psi$ is a degree-$t$
polynomial constrained only to
be zero on some set of $e<t$ points. Starting with a solution in
which the degree of $\psi$ is only $e$ and the degree of $g$ is at most
$m+e-1$, we can multiply $g$ and $\psi$ by the same polynomial of degree
$t-e$ and get a new solution. Even if $e=t$, we can multiply $g$ and
$\psi$ by the same constant to get a new solution, and $(g,\psi)=(0,0)$ is a
solution. But we insist on a non-zero solution from the
$(t-e+1)$-dimensional space of solutions and any non-zero solution works.
Since $\psi\not\equiv 0$ and $\psi$ has degree $t$, there
remain $m+t$ points where $\psi$ is not zero, which are non-errors, so
we can interpolate $f$ from $y$ at these points.
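The Berlekamp-Welch procedure described above can be sketched over a small prime field. The text works with a field of size $q=\poly(B)$; here $p=101$, $m=3$ message chunks and $t=2$ errors are purely illustrative. The decoder sets up the linear system $Q(x_i)=y_i\psi(x_i)$ with $\psi$ monic of degree $t$, solves it, and divides $Q$ by $\psi$.

```python
p = 101                                  # illustrative prime field

def inv(a):
    return pow(a, p - 2, p)

def solve(A, b):
    # Gauss-Jordan elimination over GF(p); returns one solution
    # (free variables set to 0) of a consistent system.
    n, cols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    r, piv_cols = 0, []
    for c in range(cols):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        iv = inv(M[r][c])
        M[r] = [v * iv % p for v in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(vi - f * vr) % p for vi, vr in zip(M[i], M[r])]
        piv_cols.append(c)
        r += 1
        if r == n:
            break
    x = [0] * cols
    for row, c in enumerate(piv_cols):
        x[c] = M[row][cols]
    return x

def bw_decode(xs, ys, m, t):
    # Unknowns: coefficients q_0..q_{m+t-1} of Q, then e_0..e_{t-1} of the
    # monic error locator psi = x^t + e_{t-1} x^{t-1} + ... + e_0.
    rows, rhs = [], []
    for x, y in zip(xs, ys):
        row = [pow(x, j, p) for j in range(m + t)]
        row += [-y * pow(x, j, p) % p for j in range(t)]
        rows.append(row)
        rhs.append(y * pow(x, t, p) % p)
    sol = solve(rows, rhs)
    Q, E = sol[:m + t], sol[m + t:] + [1]
    # exact polynomial division f = Q / psi (psi is monic)
    f = [0] * m
    for i in range(m - 1, -1, -1):
        f[i] = Q[i + t]
        for j in range(t + 1):
            Q[i + j] = (Q[i + j] - f[i] * E[j]) % p
    assert all(v == 0 for v in Q), "division not exact"
    return f
```

Any non-zero solution of the linear system satisfies $Q=f\psi$ identically, which is why the division is exact regardless of which solution the elimination picks.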
\subsection{Correctness Proof}
\paragraph{Basic Arithmetic and Intuition.}
Above we showed tight control over the out-degree of the expander
$G_L$. Suppose for now that the expander also has very close to $d^j$
nodes {\em preceeding} each given node by $j$ steps. Then the total
number of paths is $3L^2$, since each of $L$ starting nodes has
$3L$ paths. For each $j\le\delta=\log_d N$, a bad node $\nu$ has at
most $d^{\delta-j}$ path extensions of length $\delta-j$ and (by our
heroic assumption) $\nu$ is the $j$'th node of exactly $d^j$ starting
nodes. Taking a union over all $j$, each bad node can then ruin
$d^{\delta-j}\cdot d^j\cdot\delta= O(L\log_d L)$ paths. Each of $L$
nodes fails more or less independently with probability $1/B\le
c/\log_d L$, so we can arrange that there are at most $L/\log B$ bad
nodes except with acceptable probability. So the number of ruined
paths is $L\log_d L\cdot L/\log B$, which we can arrange to be a small
constant fraction of the total number $3L^2$ of paths.
\paragraph{Bounding in-degrees.}
Unfortunately, the in-degrees are not so tightly bounded. Let $X_u$
be the in-degree of $u$ if $u$ fails and $X_u=0$, otherwise, and let
$X=\sum_u X_u$. Then,
since nodes fail effectively independently and the average
distance-$j$ in-degree equals the average (and concentrated)
distance-$j$ out-degree, which is at most $d^j$ even in the recovered
graph $\widetilde G_{LB'}$, we conclude that $E[X]$ is the same in
the constant-in-degree graph as in the real $\widetilde G_{LB'}$.
Furthermore, each in-degree (at any distance
$j$) is at most $L$ in $\widetilde G_{LB'}$ and at least $1$ in the
constant-in-degree graph, so the variance of $X$ in $\widetilde
G_{LB'}$ is at most $L^2$ times the variance of $X$ in the ideal
graph. We can accommodate the increased
variance by increasing $B$ by the factor $L^2<\log^2 N$, which leaves
$B$ of the form $(k\log N)^{O(1)}$.
Thus, fixing $j$ and letting $X$ be the sum over all $O(kL)$ recovered
nodes of $X_u$'s as above, we have $E[X]=d^jkL/B$ and
\[\Pr(X - E[X] > t)\le \left(\frac{eE[X]}{t}\right)^t,\]
which is at most $1/N$ if $t\approx d^jkL/\log B$. Thus we may assume
that, for {\em each} $j$, there are at most $d^jkL/\log B$ total
paths of length $j$ {\em to} bad nodes. Each bad node also has
distance-$(\delta-j)$ out-degree close to $d^{\delta-j}$, so, for each
$j$, the total number of paths incident on bad nodes at position $j$
may be assumed to be at most $kL^2/\log(B)$. Taking a union over all
$\log L$ values of $j$, the total number of paths incident on bad
nodes is at most some small constant fraction of the total number of
paths, $kL^2$.
\paragraph{Final Arithmetic.}
In the good case, a node has at least $.8L$ paths to unique nodes
with the same $i$, at most $0.01L$ paths to nodes with different $i$, and
the remainder of the paths are to duplicates of nodes with the same $i$.
Call such nodes {\em $i$-well-connected}. Note that a node can only be
$i$-well-connected with a single $i$. An $i$-well-connected and an
$i'$-well-connected node have
intersection size at least $(2\cdot(.8-.01)-1)L=.58L$ or at most
$0.01L$, depending on whether or not $i=i'$.
Thus a cluster $C_u$ consists of {\em good} nodes that are dominated
by some $(r,p(r,i))$ and also $i$-well-connected, and an arbitrary
number of {\em bad} nodes $\nu$ that are {\em not} of that form but
nevertheless $\nu$ has neighborhood intersection with $u$ of size at
least $.58L$ because at least $.58L$ of $\nu$'s paths are bad. But, for
each such $\nu$, there can be at most one such $i$, because at least
half of $\nu$'s neighborhood is pledged to that $i$. A cluster for
some $i$ is recoverable even if
it has as many as, say, $L/10$ bad nodes in it. Each bad node in that
cluster accounts for at least $.58L>L/2$ paths, for a total of at least
$L^2/20$ paths associated with an unrecoverable $i$. Since the total
number of paths is at most, say, $kL^2/100$, at most $k/5$ of the
$i$'s can be unrecoverable, leaving $\Omega(k)$ recoverable.
\section{Discussions and Open Problems}
\label{sec:closing}
At the core of this paper lies the following list recovery problem: there are $d_1 = \frac{1}{\epsilon}\cdot\frac{\log(N/k)}{\log(B/k)}$ lists $L_1,\dots, L_{d_1}$ with $|L_i| = O(k/\epsilon)$ for all $i=1,\dots,d_1$, and we want to recover all possible codewords $c=(c_1,\dots,c_{d_1})$ such that $c_i\in L_i$ for at least $\Omega(d_1)$ values of $i$. We used an expander structure to reduce the problem to $kd_1/\epsilon$ subproblems, each with a smaller number of nodes. It is tempting to apply the Parvaresh-Vardy code directly, without the expander structure. Indeed, this works for some configurations of $k$ and $\epsilon$ with a runtime of $O(k\poly(\log N,1/\epsilon))$, but only for small $k$ and $\epsilon$; a direct application already fails for $k=\exp(\sqrt{\log N})$. The runtime resulting from a direct application is also better for very small $k$; however, obtaining the precise range is difficult and beyond the scope of our work, as it relies on the precise complexity of factoring a polynomial, which is not explicit in the literature.
Next we list a few open problems.
\tightpgh{Restriction on $\epsilon$.} The algorithm in this paper restricts $\epsilon$ to $(\frac{\log k}{\log N})^\gamma$ for any $\gamma > 0$ because of its way of applying the Parvaresh-Vardy code. In a sense our construction reduces the problem to a list recovery problem. We ask if it is possible to find an improvement by applying a better list recoverable code.
The ultimate goal is to relax the restriction of $\epsilon$ to $\epsilon\leq \epsilon_0$ for some constant $\epsilon_0 > 0$.
\tightpgh{Sparse Recovery in $\ell_2/\ell_1$ norm.} The ultimate problem is the $\ell_2/\ell_1$ problem with error guarantee as in \eqref{eqn:mixed-norm}. We hope that the algorithm in this paper offers new ideas for the mixed-norm problem. Again the difficulty is in identification, as an RIP$_2$ matrix would be sufficient for estimation.
\tightpgh{Post-measurement Noise.} In many algorithms on the sparse recovery problem, the input to the decoding algorithm is $\mtx{\Phi}\ensuremath\mathbf{x} + \nu$ instead of $\mtx{\Phi}\ensuremath\mathbf{x}$, where $\nu$ is an arbitrary noise vector. It can be seen that our algorithm tolerates substantial noise in the $\ell_1$ norm. We leave the full analysis and possible improved algorithms to future work.
\subsection{Expander Encoding}
\label{sec:expander}
\label{sec:expander_description}
\noindent\textbf{Parameters.} We assume that the constants $\beta,\gamma > 0$ are fixed; the parameters $B_1$, $d_1$, $B_2$, $d_2$ are as in Lemma~\ref{lem:two-layer-isolation} such that
$B_1 = \Omega\bigl((\frac{k}{\epsilon^2})^{1+\beta}\log\frac{N}{k}\bigr)$; $c \leq m$ are constant integers; $h$ is an integer; and $\epsilon = O\big( \big(\frac{\alpha}{m}\big)^\frac{m}{m-c}\big(\frac{\log(B_1/k)}{\log(N/k)}\big)^{\gamma}\big)$.
Let $G$ be a graph of $d_1$ nodes with constant degree $\delta$ that satisfies Theorem~\ref{fact:graph_expander}, and $\alpha,\zeta,\kappa$ be constants provided by Theorem~\ref{lem:graph_expander} when applied to $G$. Without loss of generality we may assume that $\alpha\leq 1/2$. Adjust the hidden constants together with $c$, $m$ and $h$ appropriately (depending on $\beta$ and $\gamma$) such that
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi}) }
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item $B_1 > d_1$;
\item $(h-1) m\log_{B_1} N < \alpha d_1$;
\item $(\alpha d_1 - (h-1)m\log_{B_1} N)\cdot h^m > d_1^c$;
\item $c \geq \log\delta/\log\kappa$.
\end{enumerate}
We note that one valid choice of $m$ and $h$ is $m\geq c(1+1/\gamma)$ and $h = \Theta(d_1^{c/m})$.
\tightpgh{Encoding.} We shall use the Reed-Solomon code for inner encoding. Next, we define our outer code, which uses the Parvaresh-Vardy code \cite{PV05}. Take $N$ disconnected copies of $G$ and call the union $G_N$, where each node is indexed by a pair $(i,r)\in [N]\times [d_1]$. See Figure~\ref{fig:underlying_graph}. Also, let $\ensuremath\mathbb{F}$ be a field such that $|\ensuremath\mathbb{F}| = \Theta(B_1)$ is a power of $2$ and $E(x)$ be an irreducible monic polynomial over $\ensuremath\mathbb{F}$ such that $\deg E(x) = \log_{B_1}N$. View each $i\in [N]$ as a polynomial $f$ over $\ensuremath\mathbb{F}$ of degree at most $\log_{B_1} N - 1$. For each $(i,r)\in G_N$, associate with it an element $p(i,r) \in \ensuremath\mathbb{F}^{m+1}$ as
\[
p(i,r) = (x_{i,r}, f(x_{i,r}), (f^h \bmod{E})(x_{i,r}),\dots, (f^{h^{m-1}}\bmod{E})(x_{i,r})),
\]
where $f$ is a polynomial associated with $i\in [N]$ and $x_{i,r}\in \ensuremath\mathbb{F}$ so that $x_{i,r}$ are distinct for different $r$. This is possible because of Property (a).
Attach to a node $(i,r)$ a message $\ensuremath\mathbf{m}_{i,r}$ containing the information of $p(i,r)$ as well as $H(i, v_1(r))$,$\dots$, $H(i,v_\delta(r))$, where $v_1(r),\dots,v_\delta(r)$ are the neighbours of $r$ in $G$ and $H(i,j)\in [B_1]$ gives the bucket index where $i$ lands in the $j$-th outer hashing repetition. It is clear that $\ensuremath\mathbf{m}_{i,r}$ has $\Theta(\log B_1) = O(d_2)$ bits and therefore we can encode it in $d_2$ hash repetitions, see Lemma~\ref{lem:R-S_coding}.
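As a concrete illustration of the outer encoding, the sketch below computes a toy $p(i,r)$. It is a simplified model only: it uses a small prime field in place of the characteristic-2 field $\ensuremath\mathbb{F}$ above, and all parameter values are illustrative.

```python
# Toy model of the Parvaresh-Vardy-style symbol p(i, r): evaluations of
# f, f^h mod E, ..., f^(h^(m-1)) mod E at a point x. Polynomials are
# coefficient lists (low degree first) over a small prime field.
P = 101  # toy field size, standing in for |F| = Theta(B_1)

def poly_mod(a, e):
    """Reduce a modulo the monic polynomial e."""
    a = a[:]
    while len(a) >= len(e):
        c, shift = a[-1], len(a) - len(e)
        for i, ei in enumerate(e):
            a[shift + i] = (a[shift + i] - c * ei) % P
        while a and a[-1] == 0:
            a.pop()
    return a or [0]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_pow_mod(f, n, e):
    """Compute f^n mod e by square-and-multiply."""
    result, base = [1], poly_mod(f, e)
    while n:
        if n & 1:
            result = poly_mod(poly_mul(result, base), e)
        base = poly_mod(poly_mul(base, base), e)
        n >>= 1
    return result

def poly_eval(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % P
    return acc

def pv_symbol(f, x, h, m, E):
    """p(i, r) = (x, f(x), (f^h mod E)(x), ..., (f^(h^(m-1)) mod E)(x))."""
    return tuple([x] + [poly_eval(poly_pow_mod(f, h ** j, E), x)
                        for j in range(m)])

f = [3, 7]        # index i viewed as the polynomial f(t) = 3 + 7t
E = [2, 0, 1]     # t^2 + 2, irreducible over GF(101)
print(pv_symbol(f, x=5, h=4, m=3, E=E))
```

Different repetitions $r$ evaluate the same powers of $f$ at distinct points $x_{i,r}$, which is what allows enough symbols sharing the same $i$ to determine $f$.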
\tightpgh{Decoding.} In each of the $d_1$ repetitions, we shall recover $O(k/\epsilon)$ heavy buckets and thus obtain $O(k/\epsilon)$ nodes with their messages. Even when the messages are recovered correctly, we only know that a message corresponds to $\ensuremath\mathbf{m}_{i,r}$ for some $i\in [N]$ and we do not know which $i$ it is. However, if we can determine that enough messages are associated with the same $i$, we would have obtained enough $p(i,r)$ for different values of $r$, and then we should be able to find $f$ and thus recover the position $i$.
To determine enough $p(i,r)$ for the same $i$, we do clustering as follows. Suppose that there are $k$ heavy hitters at positions $i_1,\dots,i_k$. Let $\widetilde G$ be a graph of $d_1\times O(k/\epsilon)$ nodes, arranged in a $d_1\times O(k/\epsilon)$ grid. For now we assume that the messages are recovered correctly for each heavy hitter $i$ in all $d_1$ repetitions. (This means that there are no collisions and the noise in the buckets is small.)
Each message has the form $p(i,r),h_1,\dots,h_\delta$, where $h_j = H(i,v_j(r))$ for $1\leq j\leq \delta$. Add an arc $(i,r)\to (h_j, v_j(r))$ for each $1\leq j\leq \delta$.
Since the messages are recovered correctly, the graph $\widetilde G$ will contain several disjoint copies of the expander graph $G$, say $G_{i_1},\dots,G_{i_k}$, though each $G_{i_j}$ is not necessarily aligned within the same column in $\widetilde G$. There will be arcs incoming to $G_{i_j}$ from nodes not in any $G_{i_j}$, but there are no outgoing arcs from $G_{i_j}$. In this case, we can recover each $G_{i_j}$ perfectly, and collect the full set $\{\ensuremath\mathbf{m}_{i_j,r}\}_{r=1}^{d_1}$ and thus recover $i_j$.
Let us rearrange the nodes within each row and align each copy of $G$ in the same column for clarity. In this case, the columns $i_1,\dots,i_k$ are exact copies of the expander graph $G$. See Figure~\ref{fig:ideal_recovery} for an illustration.
The heavy hitters may not, however, be recovered in some repetitions and the messages could be seriously corrupted. When we are adding the arcs, we introduce two kinds of errors:
\begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi}) }
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item We lose a node in $G_{i_j}$, i.e., the node is not present in $\widetilde G$ because the heavy hitter $i_j$ is not recovered in that repetition;
\item We connect a node in $G_{i_j}$ to a node in some other $G_{i_{j'}}$ ($j\neq j'$), owing to an erroneous message.
\end{enumerate}
As before, we align each ``ideal copy'' of $G$ in the same column. See Figure~\ref{fig:real_recovery} for an example. We know that for a heavy hitter $i$, only a few messages $\{\ensuremath\mathbf{m}_{i,r}\}_r$ are ruined, so by Theorem~\ref{lem:graph_expander} the $i$-th column of $\widetilde G$ will contain a large connected subgraph $G'$ of $G$. Hence, if we start a breadth-first search from an appropriate node with depth $c\log_\delta d_1$, the whole of $G'$ will be visited. In other words, we shall obtain a large set of values $\{p(i,r)\}$, only some of which are associated with the target $i$; still, we expect enough of them to share the same $i$, which turns out to be sufficient to extract the $f$ associated with $i$, because a good error-correcting code such as the Parvaresh-Vardy code allows us to recover the codeword even when a large fraction of the symbols are in error. Without attempting to identify the `appropriate node' described above, we shall perform this breadth-first search on every node in $\widetilde{G}$.
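The clustering search can be sketched as follows. This is a schematic with a hypothetical data layout: each recovered message is keyed by its grid node (repetition, bucket) and stores an owner tag standing in for $p(i,r)$, together with its linking arcs.

```python
from collections import deque

def bfs_cluster(messages, start, depth):
    """Collect the grid nodes reachable from `start` within `depth` arc hops."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth or node not in messages:   # lost nodes are not expanded
            continue
        for nxt in messages[node][1]:            # follow the linking arcs
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

# Toy instance: heavy hitter 'i' occupies a 3-cycle of repetitions;
# one arc is corrupted and points into another hitter's copy.
messages = {
    (0, 4): ('i', [(1, 2), (2, 7)]),
    (1, 2): ('i', [(2, 7), (0, 4)]),
    (2, 7): ('i', [(0, 4), (1, 9)]),   # second arc corrupted
    (1, 9): ('j', []),
}
cluster = bfs_cluster(messages, (0, 4), depth=3)
print(cluster)
```

In the full algorithm, a stray node such as `(1, 9)` is harmless: the Parvaresh-Vardy decoder tolerates a large fraction of wrong symbols among the collected $p(i,r)$.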
\tightpgh{Guarantee.} The system described above meets the aforementioned guarantee; the proof is postponed to Appendix~\ref{sec:expander_recovery}.
\begin{lemma}\label{lem:expander_recovery}
Let $\beta,\gamma>0$. The encoding and decoding strategy of
Section~\ref{sec:expander_description} are correct in the sense of the
guarantee of that section, against the channel described in that
section. It uses $O(\epsilon^{-2} s\log(N/s))$ measurements and runs in time $O(s^{1+\beta}\poly(\log N,1/\epsilon))$, provided that $N=\Omega(\max\{s^2, s/\epsilon^2\})$ and $\epsilon = O\bigl((\frac{\log s}{\log N})^\gamma\bigr)$.
\end{lemma}
\section{Introduction}
Sparse signal recovery is a critical data-acquisition and processing problem that arises in many modern scientific and computational applications, including signal and image processing, machine learning, data networking, and medicine~\cite{DDT+08,LDP07}. It consists of a method for acquiring linear measurements or observations of a signal with a measurement matrix $\Phi$, together with an algorithm $\mathcal{D}$ for recovering the significant components of the original signal. We model this problem mathematically by assuming that we \emph{measure} a vector $\ensuremath\mathbf{x}$ and collect the observation $\ensuremath\mathbf{y}=\Phi \ensuremath\mathbf{x}$; we then run a \emph{recovery algorithm} to produce an approximation $\widehat{\ensuremath\mathbf{x}}=\mathcal{D}(\Phi,\ensuremath\mathbf{y})$ to $\ensuremath\mathbf{x}$ with the guarantee that the approximation error $\|\widehat{\ensuremath\mathbf{x}}-\ensuremath\mathbf{x}\|$ is bounded above.
More quantitatively, let us denote the length of the vector $\ensuremath\mathbf{x}$ by $N$, the sparsity (or compression) parameter by $k$, and the distortion parameter by
$\epsilon$. Let $\ensuremath\mathbf{x}_{[k]}$ denote the best $k$-term approximation to $\ensuremath\mathbf{x}$, the ``heavy hitters'' of $\ensuremath\mathbf{x}$, {\em i.e.}, $\ensuremath\mathbf{x}$ with all but the $k$ largest-magnitude terms zeroed out. There are many different ways to assess the error of the recovery algorithm and the quality of the measurement matrix, depending on the particular application. (See Table~\ref{table:previous} for an overview of all of the problem variations.) In this paper, we address the $\ell_1/\ell_1$ for-all problem\footnote{More generally, the expression $\ell_p/\ell_q$ means that we measure the approximation error $\|\widehat{\ensuremath\mathbf{x}} - \ensuremath\mathbf{x}\|_p$ with the $\ell_p$ norm and we compare it to the $\ell_q$ error of the best $k$-term approximation, $\|\ensuremath\mathbf{x}_{[k]}-\ensuremath\mathbf{x}\|_q$.} which is to
give a measurement matrix $\Phi$ and a recovery algorithm $\mathcal{D}$, such that, for any input vector $\ensuremath\mathbf{x}$, we have
\[
\|\widehat{\ensuremath\mathbf{x}}-\ensuremath\mathbf{x}\|_1\le(1+\epsilon)\|\ensuremath\mathbf{x}_{[k]}-\ensuremath\mathbf{x}\|_1.
\]
The goal is to use the minimum number of measurements (rows of
$\Phi$), namely, $O(k\log(N/k)/\epsilon^2)$ and to keep the runtime of
$\mathcal{D}$ to polynomial in $k\log(N)/\epsilon$.
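The $\ell_1/\ell_1$ guarantee itself is easy to check numerically; the sketch below (toy vectors, not the recovery algorithm itself) computes the best $k$-term approximation error and tests the bound.

```python
def best_k_error(x, k):
    """l1 mass of everything outside the k largest-magnitude terms: ||x_[k] - x||_1."""
    tail = sorted(abs(v) for v in x)[:-k] if k else [abs(v) for v in x]
    return sum(tail)

def meets_l1_guarantee(x, x_hat, k, eps):
    """Check ||x_hat - x||_1 <= (1 + eps) * ||x_[k] - x||_1."""
    err = sum(abs(a - b) for a, b in zip(x_hat, x))
    return err <= (1 + eps) * best_k_error(x, k)

x     = [10.0, 0.2, -9.0, 0.1, 0.3, 0.0]
x_hat = [10.0, 0.0, -9.0, 0.0, 0.0, 0.0]   # recovered the 2 heavy hitters exactly
print(meets_l1_guarantee(x, x_hat, k=2, eps=0.1))
```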
What makes this problem challenging is that we must simultaneously keep the number of measurements small, ensure the recovery algorithm is highly efficient, and achieve a good approximation for all input vectors. If we increase the number of measurements by factors of $\log N$, it is easy to optimize the run-time. Similarly, if we allow extra factors of $1/\epsilon$ in the number of measurements, it is easy to handle a severely restricted distortion parameter $\epsilon$. In many applications, all three quantities are important; e.g., in medical imaging applications, the measurements reflect the time a patient is observed, the recovery time drives the effectiveness of real-time imaging systems, and the recovery accuracy determines the diagnostic effectiveness of the imaging system.
\tightpgh{Related work.} There has been considerable work on this problem in a variety of parameter settings and we summarize the results in Table~\ref{table:previous}. A number of parameter values are incommensurate: we can achieve better approximation guarantees (using the $\ell_2/\ell_2$ norm) but only in the for-each model; in the for-all signal model, we can achieve $\ell_2/\ell_1$ error guarantees. A somewhat harder problem than the one we address in this paper is the
mixed-norm (or $\ell_2/\ell_1$) for-all result. In this setting, the goal is to give $\Phi$ and
$\mathcal{D}$, such that, for any $\ensuremath\mathbf{x}$, we have
\begin{equation}\label{eqn:mixed-norm}
\|\widehat{\ensuremath\mathbf{x}}-\ensuremath\mathbf{x}\|_2\le\frac{\epsilon}{\sqrt{k}}\|\ensuremath\mathbf{x}_{[k]}-\ensuremath\mathbf{x}\|_1.
\end{equation}
It is known that if $(\Phi,\mathcal{D})$ solves the $\ell_2/\ell_1$ problem it also solves the $\ell_1/\ell_1$ problem \cite{CDD09}.
In another direction, the $\ell_2/\ell_2$ for-each problem is to give
{\em distribution} $\mathcal{F}$ on $\Phi$
and $\mathcal{D}$, such that, for any $\ensuremath\mathbf{x}$, if $\Phi\sim\mathcal{F}$,
we have
\[
\Pr_{\Phi\sim\mathcal{F}}\left\{\|\widehat{\ensuremath\mathbf{x}}-\ensuremath\mathbf{x}\|_2 \le(1+\epsilon)\|\ensuremath\mathbf{x}_{[k]}-\ensuremath\mathbf{x}\|_2\right\} \geq \frac{3}{4}.
\]
The $\ell_2/\ell_2$ for-each problem with constant failure probability was solved in~\cite{GLPS}, where the authors gave an algorithm with
constant-factor-optimal runtime and number of measurements. The failure probability was recently improved to exponentially small in \cite{ICALP}, but the technique is not likely to give an $\ell_1/\ell_1$ for-all result without additional logarithmic factors in the number of measurements.
The first sublinear-time algorithm in the for-all setting (for the $\ell_1/\ell_1$ norm) was given in~\cite{PS12}, though that algorithm had a number of limitations.
\begin{itemize}
\item The runtime, while sublinear, was $\sqrt{kN}$ or, more
generally, of the form $k^{1-\alpha}N^{\alpha}$ for any constant
$\alpha>0$. That algorithm did not achieve runtime polynomial in
$k\log(N)/\epsilon$.
\item The algorithm required a precomputed table of size $Nk^{0.2}$.
\item The result was far from optimal in its dependence of the
number of measurements on $\epsilon$.
\end{itemize}
\tightpgh{Our results.} In this work, we rectify the above limitations, assuming the (modest) restriction that $\epsilon = O\bigl((\log k/\log N)^{\gamma}\bigr)$. We also make the measurement dependence on $\epsilon$ optimal. The best lower bound for the $\ell_1/\ell_1$ for-all problem is $\Omega(k/\epsilon^2 + (k/\epsilon)\log(\epsilon N/k))$ \cite{NNW12}, which is also the best lower bound for the $\ell_2/\ell_1$ for-all problem. Our algorithm uses $O(k/\epsilon^2\log(N/k))$ measurements when $\epsilon < (\log k/\log N)^{\gamma}$, which is suboptimal only by a logarithmic factor. When $k\leq \log^c N$ for some $c>0$, the runtime is reduced to $O(k\poly(\log N,1/\epsilon))$.
\begin{theorem}[Main Theorem]
\label{thm:mainresult}
Let $\beta,\gamma > 0$. There is an \emph{approximate sparse recovery system} consisting of an $m \times N$ measurement matrix $\mtx{\Phi}$ and a decoding algorithm $\mathcal{D}$ that satisfy the following property: for any vector $\signal\in \mathbb{R}^N$, given
$\mtx{\Phi}\signal$, the system approximates $\signal$ by $\widehat \signal=\mathcal{D}(\mtx{\Phi}
\signal)$, which satisfies
\[
\|\widehat \signal - \signal\|_1
\le (1+\epsilon)\|\signal_{[k]} - \signal\|_1.
\]
Provided that $N=\Omega(\max\{k^2, k/\epsilon^2\})$, the matrix $\mtx{\Phi}$ has $m = O(k/\epsilon \log(N/k)((\log N/\log k)^\gamma + 1/\epsilon))$ rows and the decoding algorithm $\mathcal{D}$ runs in time $O(k^{1+\beta}\poly(\log N,1/\epsilon))$. When $\epsilon = O\bigl((\frac{\log k}{\log N})^\gamma\bigr)$, the number of rows is $m = O(k/\epsilon^2\log(N/k))$. If, in addition, $k\leq \log^{O(1)} N$, the runtime can be reduced to $O(k\poly(\log N,1/\epsilon))$.
\end{theorem}
\input{intro_figures}
\tightpgh{Overview of Techniques.} Our overall approach builds on~\cite{PS12} and~\cite{ICALP} with several critical innovations. Figure~\ref{fig:measure} depicts a framework that captures both the algorithm in \cite{PS12} and the algorithm in this paper.
First, we describe the encoding procedure at a high level. Initially each $i\in [N]$ is associated with a unique message $\ensuremath\mathbf{m}_i$, which is encoded to a longer message $\ensuremath\mathbf{m}_i'$. In \cite{PS12} this encoding is trivial, namely, $\ensuremath\mathbf{m}_i' = \ensuremath\mathbf{m}_i$; while in our work it is a more complicated procedure (see Figure~\ref{fig:encoding}). The first hash assigns one of $B$ buckets to each $i\in[N]$, while maintaining the original index $i$; the {\em aggregation} step sums each bucket. There are
$\frac{\log(N/k)}{\epsilon\log(B/k)}$ repetitions. The index $i$ in each repetition is now associated with a chunk of $\ensuremath\mathbf{m}_i'$. In \cite{PS12}, the aggregated buckets are hashed into $(k/\epsilon)$ buckets and there
are $\log(B/k)/\epsilon$ repetitions. Thus, altogether, there are
$O(\epsilon^{-3}k\log(N/k))$ measurements. In our work, there are only $\log(B/k)$ repetitions, saving a factor of $1/\epsilon$, so the total number of measurements is $O(\epsilon^{-2}k\log(N/k))$.
The \emph{identification} portion of the recovery algorithm is shown in Figure~\ref{fig:recover}. To recover the identity of heavy hitters, the algorithm reads off the measurements and recovers the message chunk associated with each bucket. This message chunk is supposed to be associated with the heavy hitter in the bucket. Then, all $B$ buckets are examined exhaustively. The pre-image of each heavy bucket under the first hash is determined, in \cite{PS12}, from a look-up table and searched exhaustively. In our work, this is done by the decoding procedure illustrated in Figure~\ref{fig:decoding}. We encode the ``linking information'' into the message chunks so that we can collect across the repetitions enough heavy buckets which contain the same heavy hitter $i$ (whose actual value is unknown at this stage of the algorithm). Thus, we obtain a (small) fraction of $\ensuremath\mathbf{m}_i'$, which is sufficient for the Parvaresh-Vardy decoding algorithm to produce the exact $\ensuremath\mathbf{m}_i$, from which we recover the value of $i$ immediately.
The \emph{estimation} portion of the recovery algorithm estimates the coefficient at each of those candidate positions by reading the aggregated bucket value of the corresponding heavy buckets at the first hash level.
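A standard way to realize such an estimation step is a median-of-buckets estimator over the repetitions, sketched below. This is a simplified stand-in (plain bucket sums with fixed linear hash functions), not the paper's actual hashing scheme.

```python
import statistics

# Estimate x_i by the median, over d repetitions, of the aggregated value
# of the bucket that i lands in (Count-Min/median-style sketch).
def build_sketch(x, hashes, B):
    sketch = [[0.0] * B for _ in hashes]
    for i, v in enumerate(x):
        for row, h in zip(sketch, hashes):
            row[h(i)] += v
    return sketch

def estimate(sketch, hashes, i):
    return statistics.median(row[h(i)] for row, h in zip(sketch, hashes))

N, B, d = 1000, 50, 9
# fixed (hypothetical) linear hash functions i -> bucket
hashes = [(lambda r: (lambda i: (i * (2 * r + 3) + r) % B))(r) for r in range(d)]
x = [0.0] * N
x[42] = 5.0                      # a lone heavy hitter, no tail noise
sketch = build_sketch(x, hashes, B)
print(estimate(sketch, hashes, 42))
```

With tail noise present, the median over repetitions keeps the estimation error bounded by the noise mass in a typical bucket, which is the role played by the first-layer buckets above.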
Putting these pieces together, we have a {\em weak recovery system}, which identifies all but $k/2$ of the heavy hitters. We then repeat with smaller (easier) sparsity parameter $k/2<k$ and smaller (harder) distortion parameter
$(3/4)\epsilon<\epsilon$, resulting in a number of measurements whose leading
term is $(k/2)\bigl(\frac{4}{3\epsilon}\bigr)^2=(8/9)k/\epsilon^2<k/\epsilon^2$.
Summing the geometric progression gives the result we need. Finally, we note that our algorithm works (deterministically) with any unbalanced expander having the appropriate properties.
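The geometric sum can be verified numerically; the following is a standalone sanity check of the measurement bound with illustrative values of $k$ and $\epsilon$.

```python
# Stage j uses sparsity k/2^j and distortion (3/4)^j * eps, so the leading
# measurement term is (k/2^j) / ((3/4)^j * eps)^2 = (8/9)^j * k / eps^2.
def total_measurements(k, eps, stages=60):
    return sum((k / 2 ** j) / ((0.75 ** j * eps) ** 2) for j in range(stages))

k, eps = 1024, 0.1
ratio = total_measurements(k, eps) / (k / eps ** 2)
print(ratio)   # approaches the geometric-series limit 1 / (1 - 8/9) = 9
```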
\tightpgh{Encoding and Decoding details.} See Figure~\ref{fig:encoding} and Figure~\ref{fig:decoding} for a detailed illustration of these steps. For each message $\ensuremath\mathbf{m}$, the Parvaresh-Vardy code encodes it into a longer message $\ensuremath\mathbf{m}'$, which automatically exhibits a chunk structure, so that if even a small fraction of the chunks is correct, the original $\ensuremath\mathbf{m}$ can be recovered. Suppose there are $D$ chunks. Now, choose a $d$-regular expander graph $G$ ($d$ is a constant) on $D$ nodes such that after removing $O(D)$ nodes from $G$, the remaining graph still contains an expander of size $\Omega(D)$. For the $i$-th chunk of $\ensuremath\mathbf{m}'$, append to it the information of the neighbours of the $i$-th vertex in $G$. Then we apply Reed-Solomon to protect the appended chunks.
To decode, we first recover the appended message chunks. The two-layer hash guarantees that for the same heavy hitter, at most $O(D)$ of them will be wrong and the remaining ones are all correct. Now, consider a breadth-first search from a correct message chunk (whose ``linking information'' is therefore correct). By the special property of the expander graph $G$, we shall be able to visit all nodes (i.e., all corresponding message chunks) of a smaller expander graph of size $\Omega(D)$ in $\log D$ steps. This small fraction of good message chunks of $\ensuremath\mathbf{m}'$ will enable the P-V code to recover the original message $\ensuremath\mathbf{m}$ successfully. Since $d$ is a constant, the total number of vertices visited is $O(d^{\log D}) = O(\poly(D)) = O(\poly(\log N))$ for appropriate $D$. This enables a sublinear recovery time.
\tightpgh{Our contributions.}
\begin{itemize}
\item We give an algorithm for sparse recovery in the for-all setting,
  under a modest restriction on the distortion factor $\epsilon$, with a
  number of measurements that matches the best known upper bound, attained
  by super-linear algorithms (e.g., \cite{IR08}), and with runtime optimal
  up to a power.
\item We conjecture that our algorithm can be extended from the 1-norm to the
mixed norm guarantee and that the restriction on $\epsilon$ can be
weakened or eliminated. Thus our algorithm may be a stepping stone to the
final algorithm.
\item Our work is not the first to consider list recovery. Indyk et al.\ introduced the idea in the context of combinatorial group testing \cite{INR10}.
The idea of list recovery is also used in \cite{ICALP}, where the list decoding, however, would affect the hashing and the hashing was thus required to be sufficiently random. In our algorithm, the messages $\{\ensuremath\mathbf{m}_i\}$ are independent of the hashing, which enables us to obtain a better result.
\item Finally, our encoding/decoding techniques are reminiscent of network coding and may find applications in other contexts, such as soft-decoding.
\end{itemize}
\tightpgh{Paper Organization.} In Section~\ref{sec:prelim} we review some properties of expanders.
In Section~\ref{sec:weak}, we show that provided with good identification results, unbalanced expanders with appropriate properties will give a weak system.
Our construction of the weak system culminates in Section~\ref{sec:backpointers}, where we show how to achieve good identification via message encoding and decoding. We then build the overall algorithm on the weak system in Section~\ref{sec:toplevel}. Finally, we close with a short discussion and open problems in Section~\ref{sec:closing}.
\begin{table*}
{\centering\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Paper & A/E & Number of & Column sparsity/ & Decode time & Approx. error & Noise\\
& & Measurements & Update time & & & \\
\hline
\cite{CCFC02} & E & $k \log^c N$ & $\log^c N$ & $N \log^c N$ & $\ell_2 \le C \ell_2$ & \\
\hline
\cite{CM06}
& E & $k \log^c N$ & $\log^c N$ & $k \log^c N$ & $\ell_2 \le C \ell_2$ & \\
\hline
\cite{GLPS} & E & $\epsilon^{-1}k \log(N/k)$ & $\log^c N$ & $\epsilon^{-1}k\log^c N$ & $\ell_2 \le (1+\epsilon) \ell_2$ & Y \\
\hline
\cite{Donoho06,CRT06}
& A & $k\log( N/k)$ & $k\log(N/k)$ & LP & $\ell_2 \le (C/\sqrt{k}) \ell_1$ & Y \\
\hline
\cite{GSTV07:HHS}
& A & $\epsilon^{-2}k\log^c N$ & $\epsilon^{-2}k\log^c N$ & $\epsilon^{-4}k^2\log^c N$&
$\ell_2\le (\epsilon/\sqrt{k})\ell_1$ & Y \\
\hline
\cite{GSTV06} & A & $k\log^c N$ & $\log^c N$ & $k\log^c N$ & $\ell_1\le (C\log N)\ell_1$ & Y \\
\hline
\cite{IR08} & A & $\epsilon^{-2}k\log(N/k)$ & $\epsilon^{-1}\log(N/k)$ & $N\log(N/k)$ & $\ell_1\le(1+\epsilon)\ell_1$ & Y\\
\hline
\cite{PS12} & \multirow{2}{*}{A} & \multirow{2}{*}{$\ell^c\epsilon^{-3}k\log(N/k)$} & \multirow{2}{*}{$\ell^c\epsilon^{-3}\log(N/k)\log k$} & \multirow{2}{*}{$\ell^c \epsilon^{-3} k(N/k)^{1/\ell}$} & \multirow{2}{*}{$\ell_1\le(1+\epsilon)\ell_1$} & \multirow{2}{*}{Y}\\
(any integer $\ell$)
& & & & & & \\
\hline
\hline
This paper & & & & & &\\
(any $\beta>0$)
& A & $\epsilon^{-2}k\log(N/k)$
& $\epsilon^{-1}\log(N/k)$
& $k^{1+\beta}(\epsilon^{-1}\log N)^c$
& $\ell_1\le(1+\epsilon)\ell_1$ & Y\\
(restrictions on $\epsilon$ apply) & & & & & &\\
\hline
\hline
Lower bound `A' & A & $\epsilon^{-2}k\log(N/k)$ & $\epsilon^{-1}\log(N/k)$ & $\epsilon^{-2}k\log(N/k)$ & $\ell_2\le(\epsilon/\sqrt{k})\ell_1$ & Y\\
\hline
\end{tabular}
\captionsetup{font=footnotesize,labelfont=bf}
\caption{Summary of the best previous results and the
result obtained in this paper. Some constant factors are omitted
for clarity. ``LP'' denotes (at least) the time to do a linear
program of size at least $N$. The column ``A/E'' indicates whether
the algorithm works in the for-all (A) model or the for-each (E)
model. The column ``noise'' indicates whether the algorithm
tolerates noisy measurements. Measurement and decode time dependence on
$\epsilon$, where applicable, is polynomial. The constant $c$ may differ between occurrences. The lower bound on the number of measurements in the table above is, in fact, the best upper bound attained by super-linear algorithms.}
\label{table:previous}
}
\end{table*}
\section{Weak System}\label{sec:weak}
To simplify our analysis, we decompose a signal $\ensuremath\mathbf{x}$ into two parts of disjoint support, $\ensuremath\mathbf{x} = \ensuremath\mathbf{y} + \ensuremath\mathbf{z}$, where $\ensuremath\mathbf{y}$ has small support and $\ensuremath\mathbf{z}$ has small norm. We call $\ensuremath\mathbf{y}$ the \emph{head} and $\ensuremath\mathbf{z}$ the \emph{tail}. To simplify the language we may also use head to refer to $\supp{\ensuremath\mathbf{y}}$. We aim to recover the elements in $\ensuremath\mathbf{y}$. Introduced in \cite{PS12}, a \emph{weak system} takes an additional input, some set $I$ of indices (called the candidate set), and tries to estimate $\ensuremath\mathbf{x}_i$ for $i\in I$, hoping to recover some head items with estimation error dependent on $\|\ensuremath\mathbf{z}\|_1$. It is shown in \cite{PS12} that when $I$ contains the entire head, we can always recover a good fraction of the head. In this paper we make a slight modification to the definition of a weak system, as below: we only need $I$ to contain a good fraction of the head instead of the entire head.
\begin{definition}[Weak system]
\label{def:weakI}
A \emph{Weak system}
consists of parameters $N,s,\eta,\zeta$, an $m$-by-$N$
{\em measurement matrix} $\mtx{\Phi}$, and a {\em decoding algorithm}
$\mathcal{D}$, that satisfy the following property:
For any $\signal\in \mathbb{R}^N$ that can be written as
$\signal=\mathbf{y}+\mathbf{z}$, where $|\supp{\mathbf{y}}|\le s$ and
$\nerr{\mathbf{z}}_1\le 3/2$, given the
measurements $\mtx{\Phi}\signal$
and
a subset $I\subseteq[N]$ such that $|I\cap\supp{\ensuremath\mathbf{y}}|\geq (1-\zeta/2)|\supp{\ensuremath\mathbf{y}}|$, the
decoding algorithm $\mathcal{D}$ returns $\widehat\signal$, such that $\signal$ admits the following decomposition:
\[\signal=\widehat\signal+\widehat{\mathbf{y}}+\widehat{\mathbf{z}},\]
where
$|\supp{\widehat{\signal}}| = O(s)$,
$|\supp{\widehat{\mathbf{y}}}|\le \zeta s$, and
$\nerr{\widehat{\mathbf{z}}}_1 \le \nerr{\mathbf{z}}_1+\eta$.
Intuitively, $\ensuremath\mathbf{\widehat y}$ and $\ensuremath\mathbf{\widehat z}$ will be the head and the tail of the residual $\signal-\widehat{\signal}$, respectively.
\end{definition}
\begin{theorem}[Weak]
\label{thm:weakI}
Suppose that $\Phi$ is the adjacency matrix of an $(N,Bd,d,4s,\eta)$-bipartite expander such that (a) $d=O(\frac{1}{\eta\zeta^2}\log\frac{N}{s})$ and $B = O(\frac{d}{\zeta\eta})$ and (b) it is an instance of a $(B,d)$-hashing scheme. With appropriate instantiations of constants, Algorithm~\ref{algo:weak} (see Appendix) yields a correct Weak system that runs in time $O(|I|\eta^{-1}\zeta^{-2}\log(N/s))$.
\end{theorem}
The proof is essentially the same as \cite[Lemma 4]{PS12} and is therefore postponed to Appendix~\ref{sec:proof of weakI}.
To complete the construction of a Weak system, it remains to show that a bipartite expander as required by Theorem~\ref{thm:weakI} exists. By the probabilistic method, we show that it can be attained by both one-layer and two-layer hashing schemes with appropriate parameters. We state the results for two-layer hashing schemes only, because our identification procedure uses them.
All proofs use standard techniques and are postponed to the Appendix.
\begin{lemma}[expanding property]\label{lem:two-layer}
Let $\epsilon \in (0, 1/4)$, $k\geq 1$ and $N = \Omega(\max\{k/\epsilon^2, k^2\})$. A random two-layer $(B_1,d_1,B_2,d_2)$ hashing scheme gives an $(N,B_2d_1d_2,d_1d_2,4k,\epsilon)$ bipartite expander with probability $\geq 1-1/N^c$, where $B_1=\Omega(\frac{k}{\epsilon^2})$, $d_1=\Omega(\frac{1}{\epsilon}\frac{\log(N/k)}{\log(B_1/k)})$, $B_2 = \Omega(\frac{k}{\epsilon})$ and $d_2 = \Omega(\log\frac{B_1}{k})$ with appropriate choices of constants.
\end{lemma}
\begin{remark}\label{rem:constraint_k}
The constraint that $k = O(\sqrt{N})$ could be weakened to $k = O(N^{1-\xi})$ for any $\xi > 0$. The constants hidden in various $\Omega(\cdot)$ notations above will depend on $\xi$.
\end{remark}
\begin{lemma}[isolation property]\label{lem:two-layer-isolation}
Let $\epsilon > 0$, $\alpha>1$ be arbitrary constants and $(B_1,d_1,B_2,d_2)$ be a two-layer hashing scheme with $B_1=\Omega(\frac{k}{\zeta^\alpha\epsilon^{2\alpha}})$, $d_1=\Omega(\frac{\alpha}{\alpha-1}\cdot \frac{1}{\zeta\epsilon}\frac{\log(N/k)}{\log(B_1/k)})$, $B_2 = \Omega(\frac{k}{\zeta\epsilon})$ and $d_2 = \Omega(\frac{1}{\zeta}\log\frac{B_1}{k})$. Then with probability $\geq 1-1/N^c$, the two-layer hashing scheme with parameters prescribed above gives a bipartite graph with the $(L, \epsilon, \zeta)$-isolation property, where $L=O(k/\epsilon)$.
\end{lemma}
\section{Preliminaries}\label{sec:prelim}
Our main algorithm will be built on regular graph expanders and unbalanced bipartite expanders. In this section we review some properties of expanders. Let $n,m,d,\ell$ be positive integers and $\epsilon,\kappa$ be positive reals. The following two definitions are adapted from \cite{GUV09}.
\begin{definition}[expander]
An $(n,\ell,\kappa)$-expander is a graph $G(V,E)$, where $|V|=n$, such that for any set $S\subseteq V$ with $|S|\leq \ell$ it holds that $|\Gamma(S)|\geq \kappa|S|$.
\end{definition}
When $n$ is clear from the context, we abbreviate the expander as $(\ell,\kappa)$-expander.
\begin{definition}[bipartite expander]
An $(n,m,d,\ell,\epsilon)$-bipartite expander is a $d$-left-regular bipartite graph $G(L\cup R, E)$ where $|L| = n$ and $|R| = m$, such that for any $S\subseteq L$ with $|S|\leq \ell$ it holds that $|\Gamma(S)|\geq (1-\epsilon)d|S|$, where $\Gamma(S)$ is the neighbourhood of $S$ (in $R$).
\end{definition}
When $n$ and $m$ are clear from the context, we abbreviate the expander as $(\ell,d,\epsilon)$-bipartite expander. When $d$ is also clear from the context, we simply write $(\ell,\epsilon)$-bipartite expander.
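For intuition, the bipartite expansion property can be verified by brute force on toy graphs (the check is exponential in $\ell$, so it is feasible only for tiny examples):

```python
from itertools import combinations

def is_bipartite_expander(neighbours, ell, eps):
    """Check the (ell, d, eps)-expansion property; neighbours[v] is the
    right-neighbour set of left node v, all of the same size d."""
    d = len(next(iter(neighbours.values())))
    for size in range(1, ell + 1):
        for S in combinations(neighbours, size):
            gamma = set().union(*(neighbours[v] for v in S))
            if len(gamma) < (1 - eps) * d * size:
                return False
    return True

# 4 left nodes of degree 2, right part {0,...,5}
G = {0: {0, 1}, 1: {2, 3}, 2: {4, 5}, 3: {0, 3}}
print(is_bipartite_expander(G, ell=2, eps=0.25))
```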
Consider the adjacency matrix $A_G$ of a $d$-regular expander $G$. The largest eigenvalue of $A_G$ is always $d$. Let $\lambda(G)$ denote the largest absolute value of any other eigenvalue. The following theorem is now well-known.
\begin{theorem}[\cite{FKS89}]\label{fact:graph_expander}
For all sufficiently large $n$ and even $d$, there exists a $d$-regular expander $G$ such that $|V(G)| = n$ and $\lambda(G) \leq C\sqrt{d}$ for some absolute constant $C>0$.
\end{theorem}
Next we present a result due to Upfal \cite{Upfal92}, implicitly used in the proofs of Lemmas~1 and~2 therein. It states that there exists an expander graph of $n$ nodes and constant degree such that, after removing a constant fraction of the nodes, the remaining subgraph contains an expander of size $\Omega(n)$.
\begin{theorem}[\cite{Upfal92}]
\label{lem:graph_expander}
Let $G$ be a $\delta$-regular expander of $n$ nodes such that $\lambda(G) \leq C\sqrt{\delta}$, where $\delta$ is a (sufficiently large) constant. There exist absolute constants $\alpha,\zeta > 0$ and $\kappa > 1$ such that after removing an arbitrary set $T$ of nodes with $|T|\leq\zeta n$ from $G$, the remaining graph contains a subgraph $G'$ such that $|V(G')| \geq \alpha n$ and $G'$ is a $(|V(G')|, n/2,\kappa)$-expander.
\end{theorem}
The following definitions concern hashing, in which the parameters $N,
B_1, B_2, d_1, d_2$ are positive integers. We adopt the conventional notation that $[m] = \{1,2,\dots,m\}$.
\begin{definition}[one-layer hashing scheme]
The $(N,B,d)$ (one-layer) hashing scheme is the uniform distribution on the set of all functions $f:[N]\to [B]^d$.
\end{definition}
Each instance of such a hashing scheme induces a $d$-left-regular bipartite graph with $Bd$ right nodes. When $N$ is clear from the context, we simply write $(B,d)$ hashing scheme.
\begin{definition}[two-layer hashing scheme]
An $(N,B_1,d_1,B_2,d_2)$ (two-layer) hashing scheme is a distribution $\mu$ on the set of all functions $f:[N]\to [B_2]^{d_1d_2}$ defined as follows. Let $g$ be a random function subject to the $(N,B_1,d_1)$ hashing scheme and $\{h_{i,j}\}_{i\in[d_1],j\in[d_2]}$ be a family of independent functions subject to the $(B_1,B_2,d_2)$ hashing scheme which are also independent of $g$. Then $\mu$ is defined to be the distribution induced by the mapping
$
x\mapsto \left(h_{1,1}(g_1(x)),\dots,h_{1,d_2}(g_1(x)),h_{2,1}(g_2(x)),\dots,h_{2,d_2}(g_2(x)),\dots,
h_{d_1,1}(g_{d_1}(x)),\dots,h_{d_1,d_2}(g_{d_1}(x))\right)$.
\end{definition}
Each instance of such a hashing scheme gives a $d_1d_2$-left-regular bipartite graph with $B_2 d_1 d_2$ right nodes. When $N$ is clear from the context, we simply write $(B_1,d_1,B_2,d_2)$ hashing scheme. Conceptually, we hash $N$ elements into $B_1$ buckets and repeat $d_1$ times; these buckets are referred to as first-layer buckets. In each of the $d_1$ repetitions, we hash the $B_1$ first-layer buckets into $B_2$ buckets and repeat $d_2$ times; these buckets are referred to as second-layer buckets.
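A draw from the two-layer scheme can be sketched directly from the definition (fully random hash functions stored as tables; the parameter values are illustrative):

```python
import random

def draw_two_layer(N, B1, d1, B2, d2, rng):
    """One draw from the (N, B1, d1, B2, d2) two-layer hashing scheme:
    g hashes [N] into B1 first-layer buckets (d1 repetitions), and each
    repetition independently rehashes the B1 buckets into B2 buckets
    (d2 repetitions each), giving a d1*d2-left-regular bipartite graph."""
    g = [[rng.randrange(B1) for _ in range(N)] for _ in range(d1)]
    h = [[[rng.randrange(B2) for _ in range(B1)] for _ in range(d2)]
         for _ in range(d1)]
    def f(x):
        return [h[i][j][g[i][x]] for i in range(d1) for j in range(d2)]
    return f

rng = random.Random(0)
f = draw_two_layer(N=100, B1=20, d1=3, B2=10, d2=2, rng=rng)
print(f(7))          # the d1*d2 = 6 second-layer bucket indices for element 7
```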
We note that bipartite expander graphs can be used as hashing schemes because of their unique neighbours property (and hence isolation property).
\begin{definition}[unique neighbours]
Let $G=(L\cup R,E)$ be a bipartite graph and $S,T\subseteq L$. Define
\[
U_S(T) = \{y\in R: (x,y)\in E\text{ for some }x\in T\text{ while }(z,y)\not\in E\text{ for all }z\in S\setminus\{x\}\}.
\]
\end{definition}
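Computed directly from the definition, $U_S(T)$ on a toy bipartite graph looks as follows (the graph is stored as a dictionary from left nodes to neighbour sets):

```python
# U_S(T): right nodes y adjacent to some x in T such that no z in S \ {x}
# is also adjacent to y.
def unique_neighbours(E, S, T):
    rights = set().union(*(E[x] for x in set(S) | set(T)))
    return {y for y in rights
            if any(y in E[x] and all(y not in E[z] for z in S if z != x)
                   for x in T)}

E = {0: {0, 1}, 1: {1, 2}, 2: {3, 4}}
print(unique_neighbours(E, S={0, 1, 2}, T={0}))   # right node 1 is shared with left node 1
```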
\begin{definition}[isolation property]
An $(n,m,d,\ell,\epsilon)$-bipartite expander $G$ is said to satisfy the $(L, \eta, \zeta)$-isolation property if for any set $S\subset L(G)$ with $|S|\leq L$, there exists $S'\subset S$ with $|S'|\geq (1-\eta)|S|$ such that for all $x\in S'$ it holds that $|U_S(\{x\})|\geq (1-\zeta) d$.
\end{definition}
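For intuition, $U_S(T)$ can be computed directly on a small bipartite graph. The following snippet (our own illustration on an arbitrary toy graph) lists the right nodes seen by exactly one element of $S$; the left node whose neighbourhood is disjoint from the others keeps all $d$ of its neighbours unique, as in the isolation property with $\zeta=0$.

```python
def unique_neighbours(edges, S, T):
    """U_S(T): right nodes adjacent to some x in T and to no other node of S.
    `edges` maps each left node to its set of right neighbours."""
    U = set()
    for x in T:
        for y in edges[x]:
            if all(y not in edges[z] for z in S if z != x):
                U.add(y)
    return U

# toy 2-left-regular graph on left nodes {0, 1, 2}
edges = {0: {0, 1}, 1: {1, 2}, 2: {3, 4}}
S = {0, 1, 2}
U0 = unique_neighbours(edges, S, {0})   # right node 1 is shared with left node 1
U2 = unique_neighbours(edges, S, {2})   # both neighbours of 2 are unique
```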
\section{Toplevel System}\label{sec:toplevel}
Now, similarly to~\cite{GLPS,PS12}, we define a Toplevel system, which is an algorithm that solves our overall problem.
\begin{definition}
\label{def:toplevel}
An
{\em approximate sparse recovery system}
(briefly, a \emph{Toplevel} system),
consists of parameters $N$, $k$, $\epsilon$, an $m$-by-$N$
{\em measurement matrix} $\mtx{\Phi}$, and a
{\em decoding algorithm} $\mathcal{D}$ that satisfy the following property: for any vector $\signal\in \mathbb{R}^N$, given
$\mtx{\Phi}\signal$, the system
approximates $\signal$ by $\widehat \signal=\mathcal{D}(\mtx{\Phi}
\signal)$, which satisfies
\[
\|\widehat{\signal} - \signal\|_1
\le (1+\epsilon)\|\signal_{[k]} - \signal\|_1.
\]
\end{definition}
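As a sanity check of the guarantee only (not of the construction behind our results), note that the trivial system with $\mtx{\Phi}$ the identity and $\mathcal{D}$ keeping the $k$ largest-magnitude entries meets the $\ell_1/\ell_1$ bound with equality at $\epsilon=0$. The hypothetical snippet below verifies this on a random vector.

```python
import random

def topk(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    idx = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    out = [0.0] * len(x)
    for i in idx:
        out[i] = x[i]
    return out

def l1(x):
    return sum(abs(v) for v in x)

rng = random.Random(1)
N, k, eps = 50, 5, 0.1
x = [rng.gauss(0, 1) for _ in range(N)]
xhat = topk(x, k)                              # decoder for Phi = identity
err = l1([a - b for a, b in zip(xhat, x)])     # ||xhat - x||_1
tail = sum(sorted(abs(v) for v in x)[:N - k])  # ||x_[k] - x||_1
```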
Using this definition, we restate our main result from Theorem~\ref{thm:mainresult} in a slightly different form.
\begin{theorem}
\label{thm:toplevel}
Let $\beta,\gamma > 0$. There is a Toplevel system that
uses $O(\epsilon^{-2}k\log(N/k))$ measurements and runtime
$O(k^{1+\beta}\poly(\log N,1/\epsilon))$, provided that $N=\Omega(\max\{k^2, k/\epsilon^2\})$ and $\epsilon = O\bigl((\frac{\log k}{\log N})^\gamma\bigr)$.
\end{theorem}
The proof follows easily from the results on the weak system: we need Lemma~\ref{lem:expander_recovery} for identification and Theorem~\ref{thm:weakI} for estimation. The proof of this theorem is postponed to Appendix~\ref{sec:toplevel_proof}.
\begin{remark} We note that
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi}) }
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item the constants in big $O$-notations and the power in $\poly(\log N,1/\epsilon)$ depend on $\beta$ and $\gamma$; and,
\item as in Remark~\ref{rem:constraint_k}, the constraint that $k = O(\sqrt{N})$ could be weakened to $k = O(N^{1-\xi})$ for any $\xi > 0$;
\item the factor $k^{1+\beta}$ in the runtime is due to our choice of $B_1 = \Omega((k/\epsilon^2)^{1+\beta}\log(N/k))$ such that $\log B_1 = O(\log(B_1/k)) = O(d_2)$. When $k \leq (\log N)^c$ for some $c > 0$, since $B_1 = \Omega(k/\epsilon^{2(1+\beta)})$, choosing $B_1 = \Theta(k\log(N/k)/\epsilon^{2(1+\beta)})$ would suffice. This leads to a runtime of $O(k\poly(\log N,1/\epsilon))$.
\item for large $\epsilon$ we can take $d_1 = (\log(N/k)/\log(B_1/k))^{1+\alpha}$ for $\alpha > 0$, which yields an algorithm that uses more measurements, namely $O(k\log^{1+\alpha}(N/k)/\epsilon^2)$, but is suboptimal by only a logarithmic factor with respect to the best known lower bound.
\end{enumerate}
\end{remark}
\section{Introduction}
This paper concerns the following question.
\begin{problem}\label{main problem}
{\em Let $X$ denote the space $\mathbb{R}^n$ or, more generally, a Banach space, and let $\mathcal{C}$ be a differentiability class. Given an arbitrary subset $C$ of $X$ and a collection $\mathcal{H}$ of affine hyperplanes of $X$ such that every $H\in\mathcal{H}$ passes through some point $x_{H}\in C$, and $C=\{x_H : H\in\mathcal{H}\}$,
what conditions on $\mathcal{H}$ are necessary and sufficient for the existence of a {\em convex} hypersurface $S$ of class $\mathcal{C}$ in $X$ such that $H$ is tangent to $S$ at $x_H$ for every $H\in\mathcal{H}$? }
\end{problem}
In \cite{GhomiJDG2001} M. Ghomi considered a version of this problem and solved it in the particular case that $S$ is an ovaloid (that is to say, a closed hypersurface of strictly positive curvature), $\mathcal{C}=C^{m}$, $m\geq 2$, and $C$ is a $C^m$ smooth submanifold of $\mathbb{R}^n$.
In \cite{AzagraMudarra2017PLMS}, as a consequence of a Whitney-type extension theorem for convex functions of the class $C^{1,1}$, we solved this problem in the case that $C$ is a compact subset of $\mathbb{R}^n$ and $\mathcal{C}=C^{1,1}$. A similar result was given in \cite[Corollary 1.5]{AMHilbert} for $X$ a Hilbert space, $C$ arbitrary, and $\mathcal{C}=C^{1,1}$; however, the proof of this corollary was incomplete in the case that $C$ is unbounded (we only sketched the proof of the {\em easy} implication, and overlooked one important difference between bounded and unbounded convex bodies).
More recently, in \cite[Corollary 1.15]{AzagraMudarraGlobalGeometry}, we provided a solution to Problem \ref{main problem}, for arbitrary $C$, in the case that $X=\mathbb{R}^n$ and $\mathcal{C}=C^1$.
As far as we know, nothing is known about the case that $C$ is arbitrary and $\mathcal{C}=C^{m}$, $m\geq 2$, and in fact Problem \ref{main problem} looks extremely hard to solve in this generality. The main reasons why we consider this problem very difficult are the facts that: 1) partitions of unity cannot be used to patch local convex extensions, as they destroy convexity; and 2) convex envelopes do not preserve smoothness of orders higher than $C^{1,1}$.
The aim of this paper is twofold. On the one hand, we wish to clarify what can be understood (and also what we think should be understood, if we want to be practical) by a convex hypersurface of class $C^{1,1}$ or $C^{1, \omega}$ (where $\omega$ is a modulus of continuity) in a Hilbert space, or more generally, in a Banach space. This question will keep us busy in Section \ref{sectionwhatisconvexhypersurfacec11}. On the other hand, we want to give a complete proof of \cite[Corollary 1.5]{AMHilbert}, and furthermore to extend this result to the class $C^{1, \omega}$ and to other Banach spaces. That is, we mean to provide a complete solution to Problem \ref{main problem} for $\mathcal{C}=C^{1, \omega}$.
Of course, the solution to Problem \ref{main problem} will depend on the notion of convex hypersurface of class $\mathcal{C}$ with which we choose to work. However, as we will see in Section \ref{sectionwhatisconvexhypersurfacec11}, all reasonable notions of $C^{1,1}$ smoothness for a convex hypersurface in a Hilbert space are equivalent, and therefore we can give a precise statement of our main result in this particular case right now. Let us first notice that an equivalent reformulation of Problem \ref{main problem} is the following.
\begin{problem}\label{main problem 2}
Let $X$ be a Banach space, and let $\mathcal{C}$ be a differentiability class. Denote by $S_X$ the unit sphere of $X$.
Given a subset $C$ of $X$ and a mapping $N: C \to S_X$, what conditions are necessary and sufficient to ensure the existence of a (not necessarily bounded) convex body $V$ of class $\mathcal{C}$ such that $C \subseteq \partial V$ and the outer unit normal to $\partial V$ coincides on $C$ with the given mapping $N$?
\end{problem}
One of the main results of this paper is the following.
\begin{theorem}\label{main theorem for C11 Hilbert}
Let $C$ be a subset of a Hilbert space $X$, and let $N:C\to S_X$ be a mapping. Then the following statements are equivalent.
\begin{enumerate}
\item There exists a $C^{1,1}$ convex body $V$ such that $C\subseteq \partial V$ and $N(x)$ is outwardly normal to $\partial V$ at $x$ for every $x\in C$.
\item There exists some $r>0$ such that
$$
\langle N(y), y-x\rangle \geq \tfrac{r}{2} \|N(y)-N(x)\|^2 \quad \textrm{for all} \quad x, y\in C.
$$
\end{enumerate}
Moreover, if $(2)$ is satisfied with a constant $r>0$, then the body $V$ in $(1)$ can be taken so that the outward unit normal $N_{\partial V}:\partial V\to S_X$ is $r^{-1}$-Lipschitz.
In addition, if we further assume that $C$ is bounded then $V$ can be taken to be bounded as well.
\end{theorem}
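Condition $(2)$ is sharp for spheres: if $C$ is the sphere of radius $r$ and $N(x)=x/r$, then $\langle N(y), y-x\rangle = (r^2-\langle x,y\rangle)/r = \tfrac{r}{2}\|N(y)-N(x)\|^2$, so $(2)$ holds with equality. The snippet below is our own numerical check of this identity for random points on a sphere in $\mathbb{R}^3$.

```python
import math, random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def random_sphere_point(r, rng, dim=3):
    """Uniform-direction point on the sphere of radius r."""
    v = [rng.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(dot(v, v))
    return [r * a / n for a in v]

rng = random.Random(2)
r = 2.0
ok = True
for _ in range(100):
    x, y = random_sphere_point(r, rng), random_sphere_point(r, rng)
    Nx = [a / r for a in x]
    Ny = [a / r for a in y]
    lhs = dot(Ny, [b - a for a, b in zip(x, y)])        # <N(y), y - x>
    d = [a - b for a, b in zip(Nx, Ny)]
    rhs = (r / 2) * dot(d, d)                           # (r/2) ||N(y)-N(x)||^2
    ok = ok and abs(lhs - rhs) < 1e-9
```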
An equivalent reformulation of this result which was suggested to us by Arie Israel is the following {\em finiteness principle} for Problem \ref{main problem}.
\begin{theorem}\label{main theorem for C11 Hilbert, finiteness principle version}
Let $C$ be a subset of a Hilbert space $X$, and let $\mathcal{H}$ be a collection of affine hyperplanes of $X$ such that every $H\in\mathcal{H}$ passes through some point $x_H\in C$, and $C=\{x_H : H\in\mathcal{H}\}$. The following statements are equivalent:
\begin{enumerate}
\item There exists a convex hypersurface $S$ of class $C^{1,1}$ in $X$ such that $S$ has bounded principal curvatures and $H$ is tangent to $S$ at $x_H$ for every $H\in\mathcal{H}$.
\item There exists some $M>0$ such that, for every couple $H_1, H_2$ of hyperplanes in $\mathcal{H}$, there exists a convex hypersurface $S(H_1, H_2)$ of class $C^{1,1}$ such that the principal curvatures of $S(H_1, H_2)$ are bounded by $M$ and $S(H_1, H_2)$ is tangent to $H_1$ and $H_2$ at $x_{H_1}$ and $x_{H_2}$, respectively.
\end{enumerate}
\end{theorem}
By saying that the principal curvatures of a $C^{1,1}$ convex body $V$ are bounded by some constant $M\geq 0$ we mean that the Lipschitz constant of the Gauss map $N_{\partial V}:\partial V\to S_X$ is bounded by $M$. This terminology is natural enough, since for a $C^2$ convex body $W$ the principal curvatures of $\partial W$ at a point $x\in\partial W$ are the eigenvalues of the differential of $N_{\partial W}$ at $x$, and these eigenvalues are bounded by $\textrm{Lip}(N_{\partial W})$. In our setting $N_{\partial V}$ may not be differentiable at some points (but it will be almost everywhere differentiable if $X=\mathbb{R}^n$, thanks to Rademacher's theorem).
For the precise statements of our main results in the cases that $\mathcal{C}=C^{1, \omega}$, where $\omega$ is a modulus of continuity, or that $X$ is a superreflexive Banach space, see Section \ref{sectionmainresults}.
\section{What is a convex hypersurface of class $C^{1,1}$?}\label{sectionwhatisconvexhypersurfacec11}
There is no controversy about what a convex hypersurface is. At least in the case $X=\mathbb{R}^n$, the following definition is accepted (explicitly or implicitly) everywhere in the literature.
\begin{definition}
{\em Let $X$ be a Banach space, and $S$ be a subset of $X$. We will say that $S$ is
a convex hypersurface in $X$ provided that there exists a closed convex set $V$ with nonempty interior such that $S=\partial V$.
}
\end{definition}
Such a set $V$ is sometimes called a {\em convex body}. However, some authors prefer to call such a set $V$ a convex body only if $V$ is bounded too. That is, for some authors a convex body is always bounded, while others indulge in dealing with {\em unbounded} convex bodies as well. {\bf In this paper, convex bodies are allowed to be unbounded.}
As for the $C^m$ regularity ($m\in\mathbb{N}$) of a convex hypersurface $S=\partial V$, there can be no dispute either. The most natural definition from a geometrical point of view is that $S$ be a submanifold of $\mathbb{R}^n$ of class $C^m$, and this happens to be equivalent to one of the most practical analytical definitions, namely, that the Minkowski functional (or gauge) of $V$, denoted by $\mu_V$, be of class $C^m$ on $X\setminus\mu_{V}^{-1}(0)$. Recall that, given a convex body $V$ in $X$, up to a translation we may always assume that $0\in\textrm{int}(V)$, and then define the Minkowski functional of $V$ by
$$
\mu_V(x)=\inf \left\lbrace t>0 : \frac{1}{t}x\in V \right\rbrace.
$$
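For a concrete instance, the gauge of the ellipsoid $V=\{x : \sum_i x_i^2/a_i^2\leq 1\}$ has the closed form $\mu_V(x)=(\sum_i x_i^2/a_i^2)^{1/2}$, while for a general convex body it can be approximated by bisection in $t$. The snippet below (our own illustration, not part of the theory developed here) compares the two and checks that $x/\mu_V(x)$ lies on $\partial V$.

```python
import math

def mu_ellipsoid(x, a):
    """Closed-form Minkowski functional of {x : sum x_i^2/a_i^2 <= 1}."""
    return math.sqrt(sum((xi / ai) ** 2 for xi, ai in zip(x, a)))

def mu_bisection(x, member, lo=1e-12, hi=1e6, iters=80):
    """Generic gauge: smallest t with x/t in V, found by bisection.
    `member(p)` decides whether p lies in the convex body V."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if member([xi / mid for xi in x]):
            hi = mid
        else:
            lo = mid
    return hi

a = [1.0, 2.0, 3.0]
x = [0.5, 1.0, -2.0]
m1 = mu_ellipsoid(x, a)
m2 = mu_bisection(x, lambda p: sum((pi / ai) ** 2 for pi, ai in zip(p, a)) <= 1)
```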
However, for the regularity class $C^{1,1}$, these two definitions are no longer equivalent and, what is worse, in the literature there seems to be no universal agreement about what a hypersurface of class $C^{1,1}$ is. Some authors say that a submanifold $M$ of $\mathbb{R}^n$ is of class $C^{1,1}$ provided that $M$ is {\em locally} of class $C^{1,1}$ (meaning, for instance, that a normal to $M$ is locally Lipschitz), while other authors demand that a uniform Lipschitz constant should exist. These definitions are equivalent in the case that $V$ is bounded. Therefore, there cannot be any disagreement, either, about what a {\em bounded} convex body of class $C^{1,1}$ is.
We are thus left with the following questions: what is an {\em unbounded} convex body of class $C^{1,1}$? And, more generally, what is a (not necessarily convex) hypersurface of class $C^{1,1}$?
In this paper we will make a distinction between hypersurfaces of class $C^{1,1}$ and hypersurfaces of class $C^{1,1}_{\textrm{loc}}$.
\begin{definition}\label{definition of C11 hypersurface}
{\em Let $M$ be a $C^1$ hypersurface of $\mathbb{R}^n$ or, more generally, of a Hilbert space $(X, \|\cdot\|)$, and let $N:M\to S_X$ be a continuous unit normal. We will say that $M$ is of class $C^{1,1}$ provided that $N$ is Lipschitz continuous (with respect to the ambient distance), that is to say, there exists some constant $L>0$ such that
$$
\|N(x)-N(y)\|\leq L\|x-y\|
$$
for all $x, y\in M$.
We will say that $M$ is of class $C^{1,1}_{\textrm{loc}}$ whenever this condition holds locally, that is, for every $z\in M$ there exist positive numbers $r, L$, depending on $z$, such that
$$
\|N(x)-N(y)\|\leq L\|x-y\|
$$
for all $x, y\in B(z, r)$.}
\end{definition}
We will use a similar terminology for functions. A function $F:X\to\mathbb{R}$ will be said to be of class $C^{1,1}$ provided $F\in C^{1}(X)$ and the gradient $\nabla F$ is (globally) Lipschitz. If $\nabla F$ is locally Lipschitz then we will say that $F$ is of class $C^{1,1}_{\textrm{loc}}$.
\medskip
In $\mathbb{R}^n$ it is well known that a convex hypersurface $M=\partial V$ is of class $C^{1,1}$ if and only if there exists an $r>0$ such that the balls of radii $r$ inside $V$ roll freely on $M$. This means that for every $x\in M$ there exists a ball $B(z,r)\subset V$ such that $\partial B(z,r)\cap M=\{x\}$; of course in this case we have $z=x- rN(x)$, where $N:\partial V\to S_X$ is the outer unit normal; see for instance \cite{Lucas, GhomiHoward} and the references therein. It is also known, see \cite{DelfourZolesio, KrantzParks}, that if $M=\partial A$ is the boundary of a proper open subset $A$ of $\mathbb{R}^n$, then $M$ is of class $C^{1,1}$ if and only if there exists some $r>0$ such that the signed distance to $M$, defined by
$$
b_A(x) = \left\lbrace
\begin{array}{ccl}
d(x,A) & \mbox{ if } & x\in X \setminus A\\
0 & \mbox{ if } & x\in \partial A \\
- d(x,\partial A) & \mbox{ if } & x\in \interior(A),
\end{array}
\right.
$$
is of class $C^{1,1}$ on the set $\{x\in\mathbb{R}^n : \textrm{dist}(x, \partial A)<r\}$. In the case that $A$ is convex, this fact allows us to realize $M$ as a level set of a $C^{1,1}$ convex function defined on $\mathbb{R}^n$. This kind of representation becomes very useful when we want to transfer smooth extension results from convex functions to convex bodies, and the other way around.
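For the Euclidean ball $A=B(0,R)$ the signed distance reduces to $b_A(x)=\|x\|-R$, which covers the three cases of the definition at once; the snippet below is our own numerical illustration of this formula.

```python
import math

def signed_distance_ball(x, R):
    """b_A for A = open ball of radius R centred at 0: ||x|| - R."""
    return math.sqrt(sum(xi * xi for xi in x)) - R

R = 1.5
outside = signed_distance_ball([3.0, 0.0], R)   # d(x, A)       =  1.5
boundary = signed_distance_ball([0.0, 1.5], R)  # on bd A       =  0.0
inside = signed_distance_ball([0.3, 0.4], R)    # -d(x, bd A)   = -1.0
```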
Unfortunately, the proofs of these finite-dimensional results do not immediately extend to Hilbert spaces, mainly due to the following fact: if $A$ is an open {\em convex} subset of an infinite-dimensional Hilbert space and $x\in \textrm{int}(A)$, the distance of $x$ to $\partial A$ may not be attained. An instance of this situation is provided by the following.
\begin{ex}
{\em Let $X$ be a separable Hilbert space of infinite dimension, and let us denote an orthonormal basis of $X$ by $\{e_n\}_{n\in\mathbb{N}}$. Define
$$
W=\left\{x\in X \, : \, \sum_{n=1}^{\infty}\frac{\langle e_n, x\rangle ^2}{(1+ 2^{-n})^2}\leq 1\right\}.
$$
Then $W$ is a bounded convex body, and clearly we have that
$$
d(0, \partial W)=1.
$$
However, it is not difficult to see that the closed ball $B(0,1)$ is contained in the interior of $W$, hence this distance is not attained.}
\end{ex}
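The failure of attainment can be seen coordinatewise: along the direction $e_n$, the boundary of $W$ is reached at distance $1+2^{-n}$, so $d(0,\partial W)=\inf_n (1+2^{-n})=1$ while no single direction achieves it. The snippet below (a finite truncation, for illustration only) records these directional distances.

```python
# Along e_n, the boundary point of W in that direction is (1 + 2**-n) e_n,
# at distance 1 + 2**-n from the origin.  (We stop at n = 39 so that the
# quantity 1 + 2**-n stays strictly above 1 in double precision.)
dists = [1 + 2.0 ** (-n) for n in range(1, 40)]
inf_dist = min(dists)   # the sequence is decreasing, so the last term is smallest
# inf over all n is exactly 1, yet 1 + 2**-n > 1 for every n
```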
Our aim in this section is to show that these results remain nonetheless true for convex bodies in Hilbert spaces, thus providing several equivalent definitions of $C^{1,1}$ smoothness for (possibly unbounded) convex bodies. We do not claim that the following two results are completely original. As a matter of fact, some of the properties and implications of these two results are proved, in a more general setting, in the papers \cite{ClarkeSternWolenski, PoliquinRockafellarThibault}, which explore the notion of {\em proximally smooth sets}; for instance see \cite[Theorems 4.1 and 4.8]{ClarkeSternWolenski}, and \cite[Theorem 4.1]{PoliquinRockafellarThibault}. What seems to be new in our results of this section is the equivalence of $(3)$, $(4)$ and $(5)$ of Theorem \ref{characterization of C11 convex bodies} below, and perhaps also $(1)$ of Theorem \ref{regularity of the signed distance} (for which we have been unable to find a reference). We could have limited ourselves to proving what we think is new or could not find in the literature, providing a reference for what is known, but we chose to present a self-contained proof of these results which does not rely on the less elementary notions and tools of the papers \cite{ClarkeSternWolenski, PoliquinRockafellarThibault}.
\begin{theorem}[Regularity of the signed distance to the boundary of a $C^{1,1}$ convex body]\label{regularity of the signed distance}
Let $V$ be a convex body of class $C^{1,1}$ in a Hilbert space $X$. Let us denote the signed distance to $\partial V$ by $b_V$, and the outer unit normal to $S:=\partial V$ by $N_S: S\to S_{X}$. Then the following properties are satisfied:
\begin{enumerate}
\item If $x \in X$ is such that $b_V(x)>- \lip(N_S)^{-1},$ then the distance $\dist(x, S)$ is attained at a unique point, which we will denote by $P_{S}(x),$ and $x-P_S(x) = b_V(x) N_S(P_S(x)).$
\item For every $\varepsilon \in (0,1)$, the mapping $P_S: \lbrace z\in X \: :\: b_V(z) \geq -\lip(N_S)^{-1} \rbrace \to S$ satisfies
$$
\langle P_S(x)-P_S(y), x- y \rangle \geq \varepsilon \| P_S(x)-P_S(y) \|^2 \quad \text{for every} \quad x,y\in U_\varepsilon, \quad \text{where}
$$
$$
U_\varepsilon:=\lbrace z\in X \: :\: b_V(z) \geq - (1-\varepsilon)\lip(N_S)^{-1} \rbrace.
$$
In particular, $P_S$ is Lipschitz on $U_\varepsilon$ with $\lip(P_S, U_\varepsilon) \leq \frac{1}{\varepsilon}.$
\item The function $b_V$ is Fr\'echet differentiable at every point $x\in X$ such that $b_V(x)>- \lip(N_S)^{-1},$ with $\nabla b_V(x)=N_S\left(P_{S}(x) \right).$ In particular, $\nabla b_V= N_S$ on $S=\partial V$. Moreover, $\nabla b_V$ is Lipschitz on each $U_\varepsilon,$ with $\lip(\nabla b_V,U_\varepsilon) \leq \tfrac{1}{\varepsilon} \lip(N_S)$, for every $\varepsilon \in (0,1).$
\item The function $b_V$ is convex on $X.$
\end{enumerate}
\end{theorem}
\begin{theorem}\label{characterization of C11 convex bodies}
Let $S$ be a convex hypersurface of a Hilbert space $X$; say $S=\partial V$, where $V$ is a closed convex body (not necessarily bounded). Assume that $S$ is a $C^1$ submanifold, so that the outer unit normal $N_S:S\to S_X$ is well defined. Then, the following statements are equivalent:
\begin{enumerate}
\item The mapping $N_S :S\to S_{X}$ is $L$-Lipschitz.
\item For every $0<r<1/L$, the balls of radii $r$ inside $V$ {\em roll freely on $S$,} meaning that for every $x\in S$ there exists a ball $B(z,r)\subset V$ such that $\partial B(z,r)\cap S=\{x\}$.
\item The mapping $N_S$ satisfies
$$
\langle N_S(y), y-x \rangle \geq \tfrac{1}{2L} \| N_S(x)-N_S(y)\|^2 \quad \text{for every} \quad x,y\in S.
$$
\item There exists a {\em convex} function $F:X\to\mathbb{R}$ of class $C^{1,1}$ such that $\lip(\nabla F) \leq L,$ $S=F^{-1}(1)$ and $\nabla F(x)=N_S(x)$ for every $x\in S$.
\end{enumerate}
Furthermore, if $V$ is bounded and $0\in \interior(V)$, then the above statements are also equivalent to:
\begin{enumerate}
\item[{(5)}] $\mu_V$ is of class $C^{1,1}$ on the set $\{x\in X: \mu_V(x) \geq \alpha\}$ for every $\alpha >0.$
\end{enumerate}
\end{theorem}
Therefore any of these conditions can be taken as the definition of a convex body of class $C^{1,1}$.
In the remainder of this section we will prove Theorems \ref{regularity of the signed distance} and \ref{characterization of C11 convex bodies}. We will first establish $(1)$ of Theorem \ref{regularity of the signed distance}, then we will turn to the proof of Theorem \ref{characterization of C11 convex bodies}, and finally return to proving $(2), \: (3)$ and $(4)$ of Theorem \ref{regularity of the signed distance}.
\medskip
\subsection*{Proof of Theorem \ref{regularity of the signed distance} $(1)$} If $x \in X \setminus \interior(V)$, because $V$ is closed and convex and the norm $\|\cdot\|$ is Hilbertian, we can find a unique point $P_S(x) \in \partial V= S$ such that $\| x-P_S(x)\| = \dist(x,S)$ and the mapping $ X \setminus \interior(V) \ni x \mapsto P_S(x)$ is $1$-Lipschitz.
Let us now assume that $x_0\in \interior(V)$ with $r:=\dist(x_0,S)< \lip(N_S)^{-1}.$ We can find a sequence $(z_n)_n $ in $S$ such that $\lim_n \| z_n-x_0\| = r.$ If we define $x_n:= z_n- r N_S(z_n),$ we claim that $(x_n)_n$ converges to $x_0.$ Indeed, if $n$ is large enough so that $r> \frac{1}{n},$ the point $x_0+ (r-\frac{1}{n}) N_S(z_n)$ belongs to the interior of the ball $B(x_0,r)$ and then $x_0+ (r-\frac{1}{n}) N_S(z_n) \in V.$ The convexity of $V$ implies that
$$
\big \langle N_S(z_n), x_0 + (r-\tfrac{1}{n})N_S(z_n) -z_n \big \rangle \leq 0.
$$
This allows us to write
\begin{align*}
&\|x_n-x_0\|^2 = \| z_n-x_0-r N_S(z_n) \|^2 = \| z_n-x_0\|^2 + r^2 \| N_S(z_n) \|^2 + 2r \big \langle N_S(z_n), x_0-z_n \big \rangle \\
& = \| z_n-x_0\|^2 + r^2 + 2r \big \langle N_S(z_n), x_0+(r-\tfrac{1}{n})N_S(z_n)-z_n \big \rangle -2r \big \langle N_S(z_n), (r-\tfrac{1}{n})N_S(z_n) \big \rangle \\
& \leq \| z_n-x_0\|^2 + r^2 -2r \big \langle N_S(z_n), (r-\tfrac{1}{n})N_S(z_n) \big \rangle = \| z_n-x_0\|^2 + r^2 -2r(r-\tfrac{1}{n}).
\end{align*}
The last term tends to $r^2+r^2-2r^2=0$ as $n \to \infty.$ This shows that $\lim_n \| x_n-x_0\| =0.$ Now, since $N_S$ is Lipschitz we can write, for every $n,m\in \mathbb{N},$
$$
\|z_n-z_m\| = \| x_n + r N_S(z_n)-x_m + r N_S(z_m) \| \leq \| x_n-x_m\| + r \lip(N_S) \| z_n-z_m\|.
$$
This leads us to
$$
(1- r\lip(N_S)) \| z_n-z_m\| \leq \| x_n-x_m\|, \quad n,m \in \mathbb{N},
$$
which shows that $(z_n)_n$ is a Cauchy sequence, because so is $(x_n)_n$ and $r < \lip(N_S)^{-1}.$ Thus $(z_n)_n$ converges to some $z_0 \in S$ with $\dist(x_0,S)= \lim_n \| z_n-x_0\| = \| z_0-x_0\|.$ This proves that the distance to $S$ is attained on the set $\lbrace z\in X \: :\: \dist(z,S) < \lip(N_S)^{-1} \rbrace.$ In addition, bearing in mind that $S$ is a one-codimensional manifold of class $C^1$ and $N_S$ is the outer unit normal to $S,$ it is straightforward to see that, for every $x\in X$ and $y\in S:$
\begin{equation}\label{characterizationminimizingpoints}
\| x-y\| = \dist(x,S) \quad \text{if and only if} \quad x-y= b_V(x) N_S(y).
\end{equation}
Let $x$ be a point with $\dist(x,S) < \lip(N_S)^{-1},$ or equivalently $|b_V(x)| <\lip(N_S)^{-1}$, and let us see that $\dist(x,S)$ is attained at a unique point. We already know that the distance $\dist(x,S)$ is attained at some $y\in S.$ Assume that there are different points $y_1, y_2 \in S$ such that $\dist(x,S)= \| x-y_1\|= \|x-y_2\|.$ It then follows from \eqref{characterizationminimizingpoints} that
$$
x-y_1 = b_V(x) N_S(y_1), \quad x-y_2= b_V(x)N_S(y_2);
$$
which easily implies that
$$
\| y_1-y_2 \| \leq |b_V(x)| \| N_S(y_2)-N_S(y_1) \| \leq |b_V(x)| \lip(N_S) \| y_1-y_2 \| < \| y_1-y_2\|,
$$
a contradiction. Therefore $y$ is the unique point of $S$ at which the distance $\dist(x,S)$ is attained.
\subsection*{Proof of Theorem \ref{characterization of C11 convex bodies}}
\noindent $(1) \implies (2):$ Let $x\in S$ and $r \in (0, 1/L).$ We first claim that $z:=x-rN_S(x) \in \interior(V).$ Indeed, otherwise we would have $x-tN_S(x) \in S$ for some $t \in (0,r]$ and by convexity of $V$
$$
0 \leq t^{-1}\langle N_S(x-tN_S(x)), x-t N_S(x)-x \rangle = \tfrac{1}{2}\| N_S(x)-N_S(x-tN_S(x))\|^2 -1 \leq \tfrac{L^2}{2}t^2-1,
$$
which is absurd. Thus $z\in \interior(V).$ Now assume for the sake of contradiction that $B(z,r)$ is not contained in $V,$ where $z=x-rN_S(x).$ We have that $t:=\dist(z,S)<r$ and by Theorem \ref{regularity of the signed distance} $(1)$ there exists a unique $y\in S$ such that $\|z-y\| = t.$ Moreover, by the characterization \eqref{characterizationminimizingpoints}, $y$ satisfies $y=z+tN_S(y)$ and we can write
$$
rN_S(x)-rN_S(y) = x-z-tN_S(y) + (t-r)N_S(y) = x-y+(t-r)N_S(y).
$$
By convexity of $V$ we have $\langle N_S(y), y-x \rangle \geq 0$ and then
$$
r^2 \| N_S(x)-N_S(y)\|^2 = \| x-y\|^2 +(r-t)^2 + 2(r-t) \langle N_S(y), y-x \rangle \geq \| x-y\|^2+(r-t)^2>\|x-y\|^2.
$$
This is a contradiction because $N_S$ is $L$-Lipschitz. We have shown that $B(z,r) \subset V$.
Finally, if $y\in B(z,r) \cap S,$ then $\| y-z\| \leq r = \dist(z,S)$ as $B(z,r) \subset V.$ This proves that $y=x$ because the distance $\dist(z,S)$ is attained at a unique point.
\medskip
\noindent $(2) \implies (3):$ Given $0<r<1/L$ and $x\in S,$ there exists a ball $B(z_x,r)$ contained in $V$ and such that $B(z_x,r) \cap S= \lbrace x \rbrace.$ The tangent hyperplane to $S$ at the point $x$ coincides with the tangent hyperplane to $\partial B(z_x,r)$ at $x,$ and this implies that $N_S(x) = (x-z_x) / \| x-z_x\|.$ Hence $z_x= x-r N_S(x).$
Now we consider two points $x,y\in S$ and define $p:= x+r(N_S(y)-N_S(x)).$ It is immediate that $p\in B(x-rN_S(x),r) \subset V$ and then $\langle N_S(y),y-p \rangle \geq 0$ since $V$ is convex. Consequently we have
\begin{align*}
\langle N_S(y), y-x \rangle \geq \langle N_S(y), p-x \rangle = r \langle N_S(y), N_S(y)-N_S(x) \rangle = \tfrac{r}{2}\| N_S(x)-N_S(y) \|^2
\end{align*}
Since $r\in (0, 1/L)$ is arbitrary we obtain the desired inequality.
\medskip
\noindent $(3) \implies (4):$ If $V$ is a half-space (that is, $S$ is a hyperplane) then the result is obvious. Therefore we may assume that $V$ is not a half-space. Let us consider the $1$-jet on $S$ given by $(f,G)=(1, N_S).$ It is immediate from $(3)$ that
$$
f(x)-f(y)-\langle G(y),x-y \rangle \geq \tfrac{1}{2L} \| G(x)-G(y)\|^2 \quad \text{for all} \quad x,y\in S.
$$
Thus we can apply \cite[Theorem 2.4]{AzagraMudarraExplicitFormulas} to obtain a convex function $F \in C^{1,1}(X)$ such that $(F, \nabla F)=(f,G)= (1,N_S)$ on $S$ and $\lip(\nabla F) \leq L.$ Let us see that, in fact, $F^{-1}(1)=S.$ Indeed, there exist points $x$ such that $F(x)<1$, as otherwise every point of $S$ would be a global minimum of $F$ and hence $\nabla F=0$ on $S.$ Thus $W:=F^{-1}(-\infty,1] $ is a closed convex body such that $S\subseteq \partial W=F^{-1}(1)$. Also, given any $x\in X \setminus V,$ the convexity of $F$ together with the fact that $\nabla F=N_S$ give us
$$
F(x) \geq F(P_V(x)) + \langle N_S(P_V(x)), x-P_V(x) \rangle = 1 + \dist(x,V)>1,
$$
where $P_V(x)$ denotes the projection of $x$ onto $V.$ This shows that $W \subseteq V.$
To show that $V\subseteq W$, we need to use the following.
\begin{fact}\label{if V is not a half-space}
{\em
If a convex body $V$ is not a half-space, and if $x_0\in\textrm{int}(V)$, then there exists a direction $v\in X \setminus\{0\}$ such that the line $\mathcal{L}:=\{x_0+tv: t\in\mathbb{R}\}$ intersects $S=\partial V$ at exactly two points $x_1=x_0+t_1v$ and $x_2=x_0+t_2v$, with $t_1<0<t_2$.}
\end{fact}
Assuming this is true for a moment, let us see why $V\subseteq W$.
Assume there exists $x_0\in V$ such that $x_0\notin W$, that is, $F(x_0)>1$. Since $F=1$ on $\partial V$ we necessarily have $x_0\in\textrm{int}(V)$. Let $v, x_1, x_2, t_1, t_2$ be as in Fact \ref{if V is not a half-space}. Then the convex function $\varphi:\mathbb{R}\to\mathbb{R}$ defined by $\varphi(t)=F(x_0+tv)$ takes the value $1$ at the points $t_1$ and $t_2$, while $\varphi(0)>1,$ which is absurd. Therefore we must have $V=W$, and consequently $S=F^{-1}(1)$.
\medskip
Now let us prove Fact \ref{if V is not a half-space}. By assumption $V$ admits at least two different support hyperplanes, say $\mathcal{H}_1$, $\mathcal{H}_2$, the boundaries of two open half-spaces $\mathcal{U}_1, \mathcal{U}_2$, both containing $V$. Then there exists $v\in X \setminus\{0\}$ such that the line $\mathcal{L}:=\{x_0+tv: t\in\mathbb{R}\}$ intersects $\mathcal{H}_1$ and $\mathcal{H}_2$ at two different points $y_1\in\mathcal{H}_1$, $y_2\in\mathcal{H}_2$. We may write $y_1=x_0+s_1v$, $y_2=x_0+s_2v$, and assume (up to replacing $v$ with $-v$ if necessary) that $s_1<0<s_2$. Since $x_0\in\textrm{int}(V)\subseteq\mathcal{U}_1\cap\mathcal{U}_2$, $V$ is a convex body, $\mathcal{H}_1$ and $\mathcal{H}_2$ support $V$, the ray $\{x_0+tv :t<0\}$ intersects $\mathcal{H}_1$, and the ray $\{x_0+tv: t>0\}$ intersects $\mathcal{H}_2$, we may conclude that there exist unique numbers $t_1\in [s_1, 0)$ and $t_2\in (0, s_2]$ such that $x_0+t_1v\in\partial V$ and $x_0+t_2v\in\partial V$.
\medskip
\noindent $(4) \implies (1):$ It is immediate since $\nabla F$ is $L$-Lipschitz.
\medskip
Let us now further assume that $V$ is bounded and $0 \in \interior(V).$ We have that $\mu_V^{-1}(0)=\lbrace 0 \rbrace$ and, since $V$ is a convex body of class $C^1,$ we know that $\mu_V$ is differentiable on $X \setminus \lbrace 0 \rbrace$ and
\begin{equation}\label{comparisongradientminkowskinormal}
\nabla \mu_V(x) = \frac{1}{ \Big \langle N_S \left( \frac{x}{\mu_V(x)} \right), \frac{x}{\mu_V(x)} \Big \rangle } N_S\left( \frac{x}{\mu_V(x)} \right) \quad \text{for all} \quad x\in X \setminus \lbrace 0 \rbrace.
\end{equation}
In particular, we have that
\begin{equation}\label{gradientminkowskifunctional}
\langle \nabla \mu_V(z), z \rangle =1 \quad \text{for all} \quad z\in S.
\end{equation}
Let $0<r \leq R$ be such that
\begin{equation}\label{ballscontaininbody}
B(0,r) \subset V \subset B(0,R).
\end{equation}
Then $\mu_V$ is $r^{-1}$-Lipschitz and $\mu_V \geq R^{-1} \|\cdot \|$ on $X.$ Therefore $\|\nabla \mu_V\| \leq r^{-1}$ and also, because $N_S= \nabla \mu_V/ \| \nabla \mu_V\|$ on $S,$ the identity \eqref{gradientminkowskifunctional} gives
\begin{equation} \label{geometricestimationc11normal}
\langle N_S(z), z \rangle \geq r \quad \text{for all} \quad z\in S.
\end{equation}
Finally, combining \eqref{gradientminkowskifunctional} with \eqref{ballscontaininbody} we obtain
\begin{equation}\label{infimumminkowskigradient}
\| \nabla \mu_V (z)\| \geq R^{-1} \quad \text{for all} \quad z\in S.
\end{equation}
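Identity \eqref{gradientminkowskifunctional} is easy to verify for an ellipsoid, where $\mu_V(x)=(\sum_i x_i^2/a_i^2)^{1/2}$ and $\partial\mu_V/\partial x_i = x_i/(a_i^2\,\mu_V(x))$; the snippet below is our own numerical check of $\langle \nabla\mu_V(z), z\rangle = 1$ on the boundary.

```python
import math

def mu(x, a):
    """Minkowski functional of the ellipsoid {x : sum x_i^2/a_i^2 <= 1}."""
    return math.sqrt(sum((xi / ai) ** 2 for xi, ai in zip(x, a)))

def grad_mu(x, a):
    """Gradient of mu at x != 0: component i is x_i / (a_i^2 * mu(x))."""
    m = mu(x, a)
    return [xi / (ai * ai * m) for xi, ai in zip(x, a)]

a = [1.0, 2.0, 0.5]
x = [0.3, -1.1, 0.2]
z = [xi / mu(x, a) for xi in x]          # radial projection of x onto S = bd V
pairing = sum(gi * zi for gi, zi in zip(grad_mu(z, a), z))   # <grad mu(z), z>
```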
Now let us see why $(1)$ and $(5)$ are equivalent.
\noindent $(1) \implies (5):$ Let us assume that $N_S: S \to S_X$ is Lipschitz. Using that $\mu_V$ is $r^{-1}$-Lipschitz and \eqref{ballscontaininbody} we can write, for every $x,y\in X\setminus \lbrace 0 \rbrace,$
\begin{align}\label{previousestimationinverseminkowski}
\bigg \| \frac{x}{\mu_V(x)} - \frac{y}{\mu_V(y)} \bigg \| & = \frac{\| \left( \mu_V(y)-\mu_V(x) \right) x + \mu_V(x)(x-y)\|}{\mu_V(x) \mu_V(y)} \leq \frac{|\mu_V(x)-\mu_V(y)| \| x \|}{\mu_V(x) \mu_V(y)} +\frac{\| x-y\| }{\mu_V(y)} \\
& \leq \frac{r^{-1}\| x-y\| \|x\|}{\mu_V(x)\mu_V(y)} + \frac{\| x-y\| }{\mu_V(y)} \leq \frac{1}{\mu_V(y)}\left( 1 +R r^{-1} \right) \| x-y\|. \nonumber
\end{align}
Given $x,y\in X \setminus \lbrace 0 \rbrace,$ let us denote $\overline{x}= \frac{x}{\mu_V(x)}$ and $\overline{y}= \frac{y}{\mu_V(y)}.$ Using first \eqref{comparisongradientminkowskinormal}, then \eqref{geometricestimationc11normal} and finally \eqref{previousestimationinverseminkowski} we get
\begin{align*}
\| \nabla \mu_V(x) & - \nabla \mu_V(y)\| = \bigg \| \frac{N_S(\overline{x})}{\langle N_S(\overline{x}), \overline{x} \rangle} - \frac{N_S(\overline{y})}{\langle N_S(\overline{y}), \overline{y} \rangle} \bigg \| \\
& = \frac{\| \left(\langle N_S(\overline{y}), \overline{y} \rangle-\langle N_S(\overline{x}), \overline{x} \rangle \right)N_S(\overline{x}) + \langle N_S(\overline{x}), \overline{x} \rangle\left( N_S(\overline{x})-N_S(\overline{y}) \right) \| }{\langle N_S(\overline{x}), \overline{x} \rangle \langle N_S(\overline{y}), \overline{y} \rangle} \\
& \leq \frac{| \langle N_S(\overline{y})-N_S(\overline{x}), \overline{x} \rangle| + | \langle N_S(\overline{y}), \overline{y}-\overline{x} \rangle|}{\langle N_S(\overline{x}), \overline{x} \rangle \langle N_S(\overline{y}), \overline{y} \rangle} + \frac{\lip(N_S)}{\langle N_S(\overline{y}), \overline{y} \rangle} \| \overline{x}-\overline{y}\| \\
& \leq \frac{\left( 1 + \| \overline{x}\| \lip(N_S) \right) \| \overline{x}-\overline{y}\|}{\langle N_S(\overline{x}), \overline{x} \rangle \langle N_S(\overline{y}), \overline{y} \rangle} + \frac{\lip(N_S)}{\langle N_S(\overline{y}), \overline{y} \rangle} \| \overline{x}-\overline{y}\| \leq \left( \frac{1 + R \lip(N_S) }{r^2} + \frac{\lip(N_S)}{r} \right)\| \overline{x}-\overline{y}\| \\
& \leq \frac{\left( \frac{1 + R \lip(N_S) }{r^2} + \frac{\lip(N_S)}{r} \right)}{\mu_V(y)}\left( 1 +R r^{-1} \right) \| x-y\|.
\end{align*}
This proves that, for every $\alpha >0,$ there exists a constant $M_\alpha>0$ such that
$$
\lip( \nabla \mu_V, U_\alpha) \leq M_\alpha, \quad \text{where} \quad U_\alpha= \lbrace z\in X \: : \: \mu_V(z) \geq \alpha \rbrace,
$$
which shows $(5).$
\medskip
\noindent $(5) \implies (1):$ By assumption we have that $\nabla \mu_V$ is Lipschitz on $S.$ Since $N_S= \nabla \mu_V / \| \nabla \mu_V\| $ on $S$ we can write, for every $x,y\in S,$
$$
\| N_S(x)-N_S(y)\| \leq \frac{2\| \nabla \mu_V(x)- \nabla \mu_V (y)\|}{\| \nabla \mu_V(y)\|} \leq \frac{2 \lip( \nabla \mu_V, S)}{\|\nabla \mu_V(y)\|} \| x-y\| \leq 2 R \lip( \nabla \mu_V, S)\| x-y\|,
$$
where the last inequality follows from \eqref{infimumminkowskigradient}. We have thus shown that $N_S$ is Lipschitz on $S.$
\subsection{Proof of Theorem \ref{regularity of the signed distance} $(2), \:(3)$ and $(4)$} We start with the proof of $(2).$ Let $\varepsilon \in (0,1)$ and let $x,y \in \interior(V)$ be such that $d_S(x), d_S(y) \leq (1-\varepsilon)\lip(N_S)^{-1}.$ By Theorem \ref{characterization of C11 convex bodies} $(2),$ the point $P_S(y)$ does not belong to the open ball centered at $P_S(x)-\lip(N_S)^{-1}N_S(P_S(x))$ and with radius $\lip(N_S)^{-1}.$ This is equivalent to
$$
\| P_S(x)-P_S(y) \|^2 \geq 2 \lip(N_S)^{-1} \langle P_S(x)-P_S(y), N_S(P_S(x)) \rangle.
$$
We learnt from $(1)$ that $P_S(x)-x = d_S(x) N_S(P_S(x))$. Using that $d_S(x) \leq (1-\varepsilon)\lip(N_S)^{-1},$ the above inequality yields
\begin{equation}\label{inequalityXpsxpsy}
(1-\varepsilon) \| P_S(x)-P_S(y) \|^2 \geq 2\langle P_S(x)-P_S(y), P_S(x)-x \rangle.
\end{equation}
Similarly we deduce
\begin{equation}\label{inequalityYpsxpsy}
(1-\varepsilon) \| P_S(x)-P_S(y) \|^2 \geq 2\langle P_S(y)-P_S(x), P_S(y)-y \rangle.
\end{equation}
After summing \eqref{inequalityXpsxpsy} and \eqref{inequalityYpsxpsy} and making some elementary calculations we get
\begin{equation}\label{Pfirmlynonexpansiveinterior}
\langle P_S(x)-P_S(y), x- y \rangle \geq \varepsilon \| P_S(x)-P_S(y) \|^2.
\end{equation}
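To spell out the elementary calculations: adding \eqref{inequalityXpsxpsy} and \eqref{inequalityYpsxpsy}, and noting that $(P_S(x)-x)-(P_S(y)-y)=(P_S(x)-P_S(y))-(x-y),$ we obtain
\begin{align*}
2(1-\varepsilon) \| P_S(x)-P_S(y) \|^2 & \geq 2\langle P_S(x)-P_S(y), (P_S(x)-P_S(y))-(x-y) \rangle \\
& = 2\| P_S(x)-P_S(y) \|^2 - 2\langle P_S(x)-P_S(y), x-y \rangle,
\end{align*}
and rearranging yields \eqref{Pfirmlynonexpansiveinterior}.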
Now, observe that, if $z\in S$ and $w\in \interior(V)$ with $d_S(w) <\lip(N_S)^{-1}$, then
$$
\|z-P_{S }(w)\|\leq \|z-w\|+\|w-P_S(w)\|\leq \|z-w\| +\| w-z\|=2\|z-w\|.
$$
This fact together with \eqref{Pfirmlynonexpansiveinterior} tells us that $P_S$ is continuous on the set $b_V^{-1} \left( [-(1-\varepsilon)\lip(N_S)^{-1},0] \right)$ and, consequently, \eqref{Pfirmlynonexpansiveinterior} holds for every $x,y$ belonging to this set.
Finally, recall that the metric projection onto a convex set in a Hilbert space is firmly non-expansive (see \cite[Proposition 4.8]{BauschkeCombettesbook} for instance), which implies that
$$
\langle P_S(x)-P_S(y), x- y \rangle \geq \| P_S(x)-P_S(y) \|^2 \quad \text{for all} \quad x,y\in X \setminus \interior(V).
$$
All these observations allow us to conclude $\langle P_S(x)-P_S(y), x- y \rangle \geq \varepsilon \| P_S(x)-P_S(y) \|^2$ for every $x,y \in U_\varepsilon.$
\medskip
The following Claim will be helpful in the proof of $(3)$.
\begin{claim}\label{claimprojectionanddistanceofinteriorpoints}
{\em Let $r< \lip(N_S)^{-1}$ and $z \in S.$ Then for $0 \leq t\leq r,$ we have that $P_S(z-tN_S(z))=z$ and $b_V(z-tN_S(z))=-t.$ }
\end{claim}
\begin{proof}
If $0 \leq t \leq r,$ the distance from $z-tN_S(z)$ to $S$ is attained at a unique point by $(1).$ On the other hand, $B(z-rN_S(z), r) \cap S= \lbrace z \rbrace$ by Theorem \ref{characterization of C11 convex bodies} $(2)$ and, if $y\in S,$ we have
$$
\| y-(z-tN_S(z)) \| \geq \| y-(z-rN_S(z))\| - |r-t| \geq r-(r-t)=t,
$$
with equality if and only if $y=z.$ This shows that $b_V( z-tN_S(z))=-\dist(z-tN_S(z), S)=-t$ and $z=P_S(z-tN_S(z)).$
\end{proof}
Let us now proceed with the proof of $(3).$
\noindent $(3):$ If $x\in X \setminus V,$ the convexity of $V$ implies that $b_V$ is differentiable at $x$ with $\nabla b_V(x)= \tfrac{x-P_S(x)}{b_V(x)},$ and using $(1)$ we obtain the formula $\nabla b_V = N_S \circ P_S$ on $X \setminus V.$ Now assume that $x\in V$ is such that $b_V(x)>-\lip(N_S)^{-1}$ and let us prove the differentiability of $b_V$ at $x$ with $\nabla b_V(x)=N_S(P_S(x)).$ Observe that $b_V$ is $1$-Lipschitz on $X$ and the norm $\| \cdot \|$ on $X$ is (Fr\'{e}chet) differentiable at $N_S(P_S(x))$ with $\| N_S(P_S(x))\|=1$ and $\nabla (\| \cdot \|) ( N_S(P_S(x)))=N_S(P_S(x)).$ We can use a theorem of Fitzpatrick \cite[Theorem 2.4]{Fitzpatrick}, which tells us that the $1$-Lipschitz function $b_V$ will be differentiable at $x$ with $\nabla b_V(x)= N_S(P_S(x))$ as soon as we check that
\begin{equation}\label{directionaldifferentiabilitysigneddistance}
\lim_{t \to 0} \frac{b_V(x+tN_S(P_S(x)))-b_V(x)}{t}=1.
\end{equation}
Assume first that $x\in \partial V= S.$ If $t>0,$ then $x+tN_S(x) \in X \setminus V$ and $P_S( x+ tN_S(x))=x,$ which shows that $b_V(x+tN_S(x))=t.$ Hence $b_V(x+tN_S(x))-b_V(x)=t$ and \eqref{directionaldifferentiabilitysigneddistance} holds when $t\to 0^+.$ On the other hand, if $r>0$ is such that $r<\lip(N_S)^{-1}$ and $t \in [-r,0),$ we can apply the last part of Claim \ref{claimprojectionanddistanceofinteriorpoints} to obtain that $b_V(x+tN_S(x))=t.$ Thus \eqref{directionaldifferentiabilitysigneddistance} trivially holds when $t \to 0^{-}.$
Let us now check \eqref{directionaldifferentiabilitysigneddistance} for points $x\in \interior(V)$ with $\dist(x,S)< \lip(N_S)^{-1}.$ Take $0<\varepsilon < \dist(x,S)$ such that $ \dist(x,S)+\varepsilon < \lip(N_S)^{-1},$ define $r:= \dist(x,S) + \varepsilon$, and let $0<|t| \leq \varepsilon.$ We have $x-P_S(x)=-\dist(x,S) N_S(P_S(x))$ by virtue of $(1)$ and
$$
x +t N_S(P_S(x)) = P_S(x)-( \dist(x,S)-t) N_S(P_S(x)),
$$
where $\dist(x,S)-t \in [0,r]$ thanks to the choice of $r$ and $\varepsilon.$ Applying the last part of Claim \ref{claimprojectionanddistanceofinteriorpoints}, we obtain
$$
b_V \left( x +t N_S(P_S(x)) \right) = b_V \left( P_S(x)-( \dist(x,S)-t) N_S(P_S(x)) \right) = t-\dist(x,S).
$$
This immediately yields \eqref{directionaldifferentiabilitysigneddistance}. In conclusion, we have shown that $b_V$ is Fr\'echet differentiable at every $x\in X$ such that $b_V(x)>- \lip(N_S)^{-1},$ with $\nabla b_V(x) = N_S(P_{S}(x)).$ Moreover, the mapping $U_\varepsilon \ni x \mapsto P_S(x)$ is $ \varepsilon^{-1}$-Lipschitz by $(2)$, and therefore
$$
\|\nabla b_V(x)- \nabla b_V (y) \| \leq \lip(N_S) \lip(P_{S}) \| x-y\| \leq \tfrac{1}{\varepsilon} \lip(N_S)\|x-y\|
$$
for every $x,y\in U_\varepsilon.$
\medskip
\noindent $(4):$ Outside $V$ we have that $b_V=\dist( \cdot, V)$, and $\dist( \cdot, V)$ is convex on $X.$ Hence $b_V$ is convex on any line segment contained in $X\setminus \interior{V}$. Let us now see that $b_V$ is convex on $\interior(V)$. If $[x,y]$ is a line segment contained in $\interior(V)$ and
$$
z_{\lambda}:=(1-\lambda)x+\lambda y, \quad \lambda \in [0,1],
$$
is a point of $[x,y],$ for every $\varepsilon>0$ we can find a point $p_\lambda \in S$ such that
$$
\|z_\lambda-p_\lambda\| \leq \dist(z_\lambda, S) + \varepsilon = -b_V(z_\lambda) +\varepsilon .
$$
Let $W_\lambda$ denote the tangent hyperplane to $S$ at $p_\lambda$; since $V$ is convex we have that $W_\lambda \cap\interior{V}=\emptyset$. Then, if $p_x$ and $p_y$ denote the orthogonal projections of $x$ and $y$ onto $W_\lambda$, we have $p_x, p_y \in X\setminus V$, and therefore
$$
\dist(x, S)\leq \|x-p_x\| \quad \textrm{and} \quad \dist(y, S)\leq \|y-p_y\|.
$$
On the other hand, the function
$$
[0,1]\ni t\mapsto \dist \left( (1-t)x+ty, W_\lambda \right)
$$
is obviously affine, so we have
\begin{align*}
-& b_V(z_\lambda) + \varepsilon \geq \|z_\lambda-p_\lambda\| \geq \dist(z_\lambda, W_\lambda) =(1-\lambda)\dist(x, W_\lambda)+\lambda \dist(y, W_\lambda) \\
& = (1-\lambda)\|x-p_x\|+\lambda \|y-p_y\| \geq (1-\lambda) \dist(x, S)+\lambda \dist(y, S) = -(1-\lambda)b_{V}(x)-\lambda b_{V}(y),
\end{align*}
that is to say,
$$
b_{V}\left( (1-\lambda)x+\lambda y\right) = b_V(z_\lambda) \leq (1-\lambda)b_{V}(x)+\lambda b_{V}(y) + \varepsilon.
$$
Letting $\varepsilon \to 0^+,$ the above argument shows that $b_{V}$ is convex on $\interior{V}$, and by continuity it follows that $b_{V}$ is convex on $V$. Finally, if $x\in X\setminus V$ and $y\in \interior{V}$, then the line segment $[x,y]$ is transversal to $S$ and we may write $[x,y]=[x,z]\cup [z,y]$, where $z\in S,$ $[x,z]\subset X\setminus\interior{V}$ and $[z, y]\subset V$. Consider the function $\varphi:[0,1]\to\mathbb{R}$ defined by
$\varphi(t)=b_{V}\left( (1-t)x + ty\right)$, and let $t_0\in (0,1)$ be the number such that $z=(1-t_0)x+t_0 y$. We know that $\varphi$ is convex on $[0, t_0]$, and $\varphi$ is convex on $[t_0, 1]$ as well. Besides, $\varphi$ is differentiable at $t_0$ because $b_V$ is differentiable on a neighbourhood of $S$ by $(3)$. Hence $\varphi$ is convex on $[0,1]$, that is, $b_V$ is convex on $[x,y]$. Since $x$ and $y$ were arbitrary, $b_V$ is convex on $X$.
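In the last step we used the following elementary fact: if $\psi:[0,1] \to \mathbb{R}$ is continuous, convex on $[0,t_0]$ and on $[t_0,1],$ and differentiable at $t_0\in (0,1),$ then $\psi$ is convex on $[0,1].$ Indeed, the one-sided derivatives of $\psi$ are nondecreasing on each of the two subintervals, and for $0\leq s < t_0 < t \leq 1$ we have
$$
\psi'(s^+) \leq \psi'(t_0^-)=\psi'(t_0)=\psi'(t_0^+) \leq \psi'(t^-),
$$
so the right derivative of $\psi$ is nondecreasing on $[0,1)$ and $\psi$ is convex.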
\section{Main results}\label{sectionmainresults}
In this section we will establish some generalizations of Theorem \ref{main theorem for C11 Hilbert} which are valid for convex bodies of class $C^{1, \omega}$ in Hilbert spaces or for convex bodies of class $C^{1, \alpha}$ in Banach spaces with equivalent norms of power type $1+\alpha$, with $\alpha\in (0,1]$. Of course, the usual norm of any Hilbert space satisfies this property with $\alpha=1$. In fact, it is well known that superreflexive Banach spaces are characterized as being Banach spaces with equivalent norms of class $C^{1, \alpha}$ for some $\alpha\in (0, 1]$. For references on the renorming properties of superreflexive spaces, see \cite{Pisier, DGZ, FabianEtAl}.
But we must first specify what we mean by a convex body of class $C^{1, \alpha}$, $0<\alpha\leq 1$, in a Banach space, or more generally, by a convex body of class $C^{1, \omega}$, where $\omega$ is a modulus of continuity.
The first difficulty we encounter is that Definition \ref{definition of C11 hypersurface} no longer makes sense in a non-Hilbertian Banach space, as we do not have a notion of orthogonality in this setting. For the same reason, the statement of Theorem \ref{characterization of C11 convex bodies} does not make sense in a Banach space.
On the other hand, even if we wished to restrict our investigation to Hilbert spaces $X$, it is unclear what convex bodies in $X$ should be called of class $C^{1, \omega}$ (where $\omega$ is a modulus of continuity). As a matter of fact, there are no analogues of Theorems \ref{characterization of C11 convex bodies} and \ref{regularity of the signed distance} for the class $C^{1, \alpha}$ when $\alpha<1$. This can be shown by considering a bounded convex body $W$ in $\mathbb{R}^2$ such that $0\in \textrm{int}(W)$ and such that the graph of $y=|x|^{3/2}-1$, $-2\leq x\leq 2$, is contained in $\partial W$, and $\partial W$ is $C^{\infty}$ smooth away from the point $(0,-1)$. The Minkowski functional $\mu_W$ of such a body will be of class $C^{1, 1/2}$ on the set $\{(x,y) : 1/2 <\mu_{W}(x,y)<2\}$ (see the proof of $(1) \implies (5)$ in Theorem \ref{characterization of C11 convex bodies}), and the outer normal $N_{\partial W}$ will be $1/2$-H\"older continuous, so we are tempted to call $W$ a $C^{1, 1/2}$ convex body; however, property $(2)$ of Theorem \ref{characterization of C11 convex bodies}, as well as properties $(1)$ and $(3)$ of Theorem \ref{regularity of the signed distance}, will fail for this body $W$. Since $W$ is bounded and $\mu_W$ is $C^{1, 1/2}$, it is easy to see that $W$ still satisfies $(3)$ of Theorem \ref{characterization of C11 convex bodies} for a $C^{1,1/2}$ convex function $\varphi$.
In view of these facts, at least from an analytical point of view, and with the purpose of solving Problem \ref{main problem} for the classes $\mathcal{C}=C^{1, \omega}$ in a Hilbert space, or $\mathcal{C}=C^{1, \alpha}$ in a superreflexive space, we consider that, among all the available options, the following definition is the most satisfactory.
\begin{definition}\label{definition of C1omega convex body}
{\em Let $S$ be a subset of a Banach space $X$. We will say that $S$ is a convex hypersurface of class $C^{1, \alpha}$, where $\alpha\in (0, 1]$, provided that there exist a number $M>0$ and a convex function $F \in C^{1, \alpha}(X)$ such that $S=F^{-1}(1)$ and
$$
M^{-1} \leq \|D F(x)\|_*\leq M \: \textrm{ whenever } \: x\in S.
$$
More generally, if $\omega$ is a modulus of continuity, we will say that a subset $S$ of $X$ is a convex hypersurface of class $C^{1, \omega}$ if there exist a number $M>0$ and a convex function $F\in C^{1, \omega}(X)$ such that $S=F^{-1}(1)$ and
$$
M^{-1}\leq \|D F(x)\|_* \leq M \: \textrm{ whenever } \: x\in S.
$$
}
\end{definition}
By a modulus of continuity $\omega$ we will understand a concave and strictly increasing function $\omega : [0, + \infty) \to [0, + \infty)$ such that $\omega(0)=0$ and $\lim_{t \to +\infty} \omega(t)=+\infty$. Observe that such a function $\omega$ has a well defined inverse $\omega^{-1}:[0, \infty)\to [0, \infty)$ which is convex, strictly increasing, and satisfies $\omega^{-1}(0)=0$ and $\lim_{s \to +\infty} \omega^{-1}(s)=+\infty$.
\medskip
The following result generalizes Theorem \ref{characterization of C11 convex bodies} to a large extent, but it does not provide sharp constants in $(4)$.
\begin{theorem}\label{characterizationC1omegaconvexbody}
Let $S$ be a convex hypersurface of a Hilbert space $X$, say $S=\partial V$, where $V$ is a closed convex body (not necessarily bounded). Assume that $S$ is a $C^1$ submanifold, so that the outer unit normal $N_S:S\to S_X$ is well defined. Let $\omega$ be a modulus of continuity, and denote $\varphi(t):=\int_0^t \omega (s) ds$. Then, the following statements are equivalent:
\begin{enumerate}
\item There exists $M>0$ such that $\| N_S(x)-N_S(y)\| \leq M \omega(\|x-y\|)$ for every $x,y\in S.$
\item There exists $M>0$ such that $W_x:= \lbrace p\in X \: : \: \langle N_S(x),x-p\rangle \geq M \varphi(2\|x-p\|) \rbrace \subseteq V$ with $S \cap W_x = \lbrace x \rbrace$ for every $x\in S$.
\item There exists $M>0$ such that $\langle N_S(y), y-x \rangle \geq \tfrac{\|N_S(x)-N_S(y)\|}{2} \omega^{-1}\left( \tfrac{\| N_S(x)-N_S(y)\|}{4M} \right)$ for every $x,y\in S.$
\item There exists a {\em convex} function $F:X\to\mathbb{R}$ of class $C^{1,\omega}$ such that $S=F^{-1}(1)$ and $\nabla F(x)=N_S(x)$ for every $x\in S$.
\item $S$ is a convex hypersurface of class $C^{1, \omega}.$
\end{enumerate}
Furthermore, if $V$ is bounded and $0\in \interior(V)$, then the above statements are also equivalent to:
\begin{enumerate}
\item[{(6)}] For every $\alpha >0$, $\mu_V$ is of class $C^{1,\omega}$ on the set $\{x\in X: \mu_V(x) \geq \alpha\}.$
\end{enumerate}
\end{theorem}
\begin{proof}
\noindent $(1) \implies (2):$ Let $x,y$ be two different points in $S$, and assume that $y\in W_x.$ Then we have
$$
0 \leq \langle N_S(y), y-x \rangle = \langle N_S(y)-N_S(x),y-x \rangle + \langle N_S(x), y-x \rangle \leq M \omega(\|x-y\|)\|x-y\| -M\varphi(2\|x-y\|),
$$
where the right-hand side is negative since $\varphi(2t) > t \omega(t)$ for every $t>0,$ a contradiction. This proves that $W_x \cap S= \lbrace x \rbrace.$ Now, observe that $x-\varepsilon N_S(x)$ belongs to $\interior(V) \cap \interior(W_x)$ for $\varepsilon>0$ small enough. Thus $W_x$ and $V$ are two convex bodies such that $W_x \cap \partial V$ is a single point and $\interior(V) \cap \interior(W_x) \neq \emptyset.$ Therefore $W_x \subset V.$
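Let us justify that $x-\varepsilon N_S(x) \in \interior(W_x)$ for $\varepsilon>0$ small enough. Since $\|N_S(x)\|=1$ and $\varphi(2t) \leq 2t\, \omega(2t)$ for $t\geq 0,$ we have
$$
\langle N_S(x), x-(x-\varepsilon N_S(x)) \rangle = \varepsilon \quad \text{and} \quad M \varphi(2\varepsilon) \leq 2M\varepsilon \, \omega(2\varepsilon),
$$
and $2M\omega(2\varepsilon)<1$ for $\varepsilon$ small enough because $\omega$ is continuous with $\omega(0)=0$; hence the inequality defining $W_x$ is strict at the point $x-\varepsilon N_S(x).$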
\medskip
\noindent $(2) \implies (3):$ Let $x,y\in S$ and define $r:=\| N_S(y)-N_S(x)\|$. We may assume that $r>0$, as $(3)$ trivially holds when $N_S(x)=N_S(y)$. Also set $p:=x+ \tfrac{1}{r} \omega^{-1}\left( \tfrac{r}{4M} \right)(N_S(y)-N_S(x)).$ Bearing in mind that $2t \omega(t) \geq \varphi(2t)$ for every $t\geq 0$ (which follows from the concavity of $\omega$), we can write
\begin{align*}
\langle N_S(x), x-p \rangle & = \tfrac{1}{r} \omega^{-1}\left( \tfrac{r}{4M} \right) \langle N_S(x), N_S(x)-N_S(y) \rangle = \tfrac{r}{2} \omega^{-1}\left( \tfrac{r}{4M} \right) \\
& =2M \tfrac{r}{4M} \omega^{-1}\left( \tfrac{r}{4M} \right) \geq M \varphi \left( 2 \omega^{-1}\left( \tfrac{r}{4M} \right) \right)=M \varphi ( 2\|x-p\| ).
\end{align*}
This shows that $p\in W_x,$ which implies that $p\in V$ by virtue of $(2).$ We thus have $\langle N_S(y), y-p \rangle \geq 0$ by convexity of $V.$ Finally, we can write
\begin{align*}
\langle N_S(y) &, y-x \rangle = \langle N_S(y),y-p \rangle + \langle N_S(y), p-x \rangle \geq \langle N_S(y), p-x \rangle \\
& = \tfrac{1}{r} \omega^{-1}\left( \tfrac{r}{4M} \right) \langle N_S(y), N_S(y)-N_S(x) \rangle = \tfrac{r}{2} \omega^{-1}\left( \tfrac{r}{4M} \right)=\tfrac{\|N_S(x)-N_S(y)\|}{2} \omega^{-1}\left( \tfrac{\| N_S(x)-N_S(y)\|}{4M} \right).
\end{align*}
\medskip
\noindent $(3) \implies (4):$ We define $(f,G):= (1,N_S)$ on $S.$ By $(3)$ the jet $(f,G)$ satisfies the inequality
$$
f(x) \geq f(y) + \langle G(y), x-y \rangle + \tfrac{\|G(x)-G(y)\|}{2} \omega^{-1}\left( \tfrac{\| G(x)-G(y)\|}{4M} \right), \quad x,y \in S,
$$
and then \cite[Theorem 4.11]{AzagraMudarraExplicitFormulas} provides us with a convex function $F:X \to \mathbb{R}$ of class $C^{1,\omega}$ with $F=1$ and $\nabla F= N_S$ on $S.$ The same argument as in the proof of Theorem \ref{characterization of C11 convex bodies} gives that, in fact, $F^{-1}(1)=S.$
\medskip
\noindent $(4) \implies (5):$ This is obvious from Definition \ref{definition of C1omega convex body}.
\medskip
\noindent $(5) \implies (1).$ Let $F$ be a function as in Definition \ref{definition of C1omega convex body}, and let $C_F>0$ be a constant such that $\| \nabla F(x)-\nabla F(y)\| \leq C_F \, \omega( \| x-y\|)$ for every $x,y\in X.$ Of course we have $N_S= \nabla F / \| \nabla F \|$ on $S$, and then
$$
\| N_S(x)- N_S(y) \| \leq 2 \frac{\| \nabla F(x)-\nabla F(y)\|}{\|\nabla F(y)\|} \leq 2 M C_F \, \omega \left( \| x-y\| \right)
$$
for every $x,y\in S,$ where we have used that $\| \nabla F \| \geq M^{-1}$ on $S.$ This shows $(1).$
\medskip
The proofs of $(1) \implies (6)$ and $(6) \implies (1)$ in the case that $V$ is bounded are similar to those of Theorem \ref{characterization of C11 convex bodies}.
\end{proof}
\medskip
The next two results generalize Theorem \ref{main theorem for C11 Hilbert}.
\begin{theorem}\label{main theorem for C1omega Hilbert}
Let $C$ be a subset of a Hilbert space $X$, and let $N:C\to S_X$ be a mapping. Then the following statements are equivalent.
\begin{enumerate}
\item There exists a $C^{1, \omega}$ convex body $V$ such that $C\subseteq \partial V$ and $N(x)$ is outwardly normal to $\partial V$ at $x$ for every $x\in C$.
\item There exists some $\delta>0$ such that
$$
\langle N(y), y-x\rangle \geq \|N(x)-N(y)\| \omega^{-1}\left( \delta \|N(x)-N(y)\| \right) \quad \textrm{for all} \quad x, y\in C.
$$
\end{enumerate}
Moreover, if we further assume that $C$ is bounded, then $V$ can be taken to be bounded as well.
\end{theorem}
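For instance, if $\omega(t)=t^{\alpha}$ with $\alpha \in (0,1],$ so that $C^{1,\omega}=C^{1,\alpha}$ and $\omega^{-1}(s)=s^{1/\alpha},$ condition $(2)$ of Theorem \ref{main theorem for C1omega Hilbert} reads
$$
\langle N(y), y-x\rangle \geq \delta^{1/\alpha} \|N(x)-N(y)\|^{1+\frac{1}{\alpha}} \quad \textrm{for all} \quad x, y\in C,
$$
which is the Hilbert-space analogue of condition $(2)$ of the following theorem.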
\begin{theorem}\label{main theorem for C1alpha superreflexive}
Let $C$ be a subset of a superreflexive Banach space $X$ such that $X$ has an equivalent differentiable norm with modulus of smoothness of power type $1+\alpha$, where $\alpha\in (0,1]$. Let us denote by $X^{*}$ (resp. by $S^{*}$) the dual space of $X$, endowed with the dual norm $\|\cdot\|_{*}$ of $\|\cdot\|$ (resp. the dual sphere of $(X, \|\cdot\|)$). Let $D:C\to S^{*}$ be a mapping. Then the following statements are equivalent.
\begin{enumerate}
\item There exists a $C^{1, \alpha}$ convex body $V$ such that $C\subseteq \partial V$, the hyperplane $H_x:=\{y\in X : D(x)(y)=D(x)(x)\}$ is tangent to $\partial V$ at $x$, and $V \subseteq H_x^{-}:=\{y\in X : D(x)(y)\leq D(x)(x)\}$ for every $x\in C$.
\item There exists some $\delta>0$ such that
$$
D(y)(y-x) \geq \delta \|D(x)-D(y)\|_{*}^{1+\frac{1}{\alpha} } \quad \textrm{for all} \quad x, y\in C.
$$
\end{enumerate}
Moreover, if we further assume that $C$ is bounded, then $V$ can be taken to be bounded as well.
\end{theorem}
Finally, let us observe that the above theorems cannot be extended to Banach spaces which are not superreflexive.
\begin{remark}
{\em Assume that Theorem \ref{main theorem for C1omega Hilbert} is true for a Banach space $(X, \|\cdot\|)$. Pick a point $x_{0}\in X\setminus\{0\}$ and a linear form $\xi_0\in S^*$. Then condition $(2)$ of Theorem \ref{main theorem for C1omega Hilbert} (in its natural Banach-space formulation) is trivially satisfied for $C:=\{x_0\}$ and $D(x_0) :=\xi_0$. Therefore there exists a {\em bounded} convex body $W$ of class $C^{1, \omega}$ with $x_0\in \partial W$. Up to a translation we may assume that $0\in\interior{W}$. Hence the Minkowski functional of $W$, denoted by $\mu_W$, is subadditive, positively homogeneous, and satisfies $\mu_W(x)=0 \iff x=0$. Moreover, with the same proof as in $(1) \implies (5)$ of Theorem \ref{characterization of C11 convex bodies} we obtain that $\mu_W$ is of class $C^{1,\omega}$ on the superlevel sets $\lbrace x\in X \: : \: \mu_W(x) \geq \alpha \rbrace$ for every $\alpha>0.$ If $0<r\leq R$ are such that $B(0,r) \subset W \subset B(0,R),$ then $R^{-1} \|\cdot \| \leq \mu_W \leq r^{-1} \| \cdot \|$ on $X$ and hence the function
$$
\rho(x) :=\mu_W (x)+ \mu_W (-x)
$$
defines an equivalent norm in $X$ which is uniformly differentiable on its unit sphere, and this implies that $X$ is superreflexive; see \cite{DGZ} for instance.
}
\end{remark}
\section{Proofs of the main results}
In this section we will prove Theorems \ref{main theorem for C11 Hilbert}, \ref{main theorem for C1omega Hilbert} and \ref{main theorem for C1alpha superreflexive}.
\subsection{Proof of Theorem \ref{main theorem for C11 Hilbert}}
If $V$ is a $C^{1,1}$ convex body whose outer unit normal is $L$-Lipschitz, we know from Theorem \ref{characterization of C11 convex bodies} $(3)$ that the inequality of $(2)$ in Theorem \ref{main theorem for C11 Hilbert} is satisfied with $r=L^{-1},$ for every $x,y\in \partial V.$
\medskip
Conversely, let us assume that $(2)$ is satisfied for $C \subset X, \: N:C \to S_X$ and $r>0.$ For every $y\in C,$ we define $B_y:= B(y-rN(y),r).$ Let us define
$$
V:= \overline{\co}\left( \bigcup_{y\in C} B_y \right),
$$
that is, the closed convex hull of the union of the balls $B_y$. Obviously, we have $C\subset V$. Let us first see that in fact $C\subset\partial V$. Suppose that $y\in C \cap \interior(V).$ Then $y$ can be written as $y= \sum_{i=1}^n\lambda_i w_i,$ where $w_i \in \interior(B_{y_i}), \: y_i\in C, \: \lambda_i \geq 0$ for every $i=1,\ldots,n,$ with $\sum_{i=1}^n \lambda_i=1$ and $n\in \mathbb{N}.$ By the assumption we have
$$
\langle N(y), y-y_i \rangle \geq \tfrac{r}{2} \| N(y)-N(y_i)\|^2 , \quad i=1,\ldots,n.
$$
This is equivalent to
$$
\langle N(y), y-z_i \rangle \geq r, \quad \text{where} \quad z_i:=y_i-rN(y_i), \quad i=1,\ldots,n.
$$
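Indeed, since $N(y)$ and $N(y_i)$ are unit vectors we have $\langle N(y), N(y_i) \rangle = 1-\tfrac{1}{2}\| N(y)-N(y_i)\|^2,$ and hence
$$
\langle N(y), y-z_i \rangle = \langle N(y), y-y_i \rangle + r \langle N(y), N(y_i) \rangle = \langle N(y), y-y_i \rangle + r - \tfrac{r}{2} \| N(y)-N(y_i)\|^2.
$$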
We obtain that $\langle N(y), y-\sum_{i=1}^n \lambda_i z_i \rangle \geq r,$ where $\| y - \sum_{i=1}^n \lambda_i z_i \| \leq \sum_{i=1}^n \lambda_i \| w_i-z_i\|<r,$ a contradiction. Hence $C \subseteq \partial V.$ Now we claim the following.
\begin{claim}\label{claimballrollsfreelysuficiency}
{\em For every $x\in \partial V$ there exists $z_x\in V$ such that $B(z_x,r) \subset V$ and $x\in \partial B(z_x,r).$ }
\end{claim}
\begin{proof}
If $y\in \co \left( \bigcup_{x\in C} B_x \right)$, then $y$ can be written as $y= \sum_{i=1}^n \lambda_i w_i,$ where $\lambda_i \geq 0$ and $w_i \in B_{y_i}$ for every $i=1, \ldots,n,$ $\sum_{i=1}^n \lambda_i=1$ and $n\in \mathbb{N}.$ Set $z_i:=y_i-rN(y_i)$ (the center of $B_{y_i}$), for $i=1, \ldots,n$, and $z:= \sum_{i=1}^n \lambda_i z_i$. Given any point $p \in B(z,r),$ it is clear that $p= \sum_{i=1}^n \lambda_i p_i,$ where $p_i:= p-z+z_i$ and $\|p_i-z_i\| = \| p-z\|\leq r$, hence $p_i\in B_{y_i}$ for every $i=1, \ldots, n$. This shows that $p\in \co \left( \bigcup_{x\in C} B_x \right) \subset V$ and therefore $B(z,r) \subset V$ with $\| z-y\| \leq \sum_{i=1}^n \lambda_i \| z_i-w_i\| \leq r.$
Now, let $x\in \partial V$ and consider a sequence $(y_k)_k \subset \co \left( \bigcup_{y\in C} B_y \right)$ converging to $x.$ By the above argument we can find a sequence $(z_k)_k$ in $V$ such that $y_k \in B(z_k,r) \subset V$ for every $k.$ Up to passing to a subsequence, we may assume that $(z_k)_k$ weakly converges to some $z_x \in V.$ Let us see that $B(z_x,r) \subset V.$ Indeed, otherwise there exist $u \in X \setminus \lbrace 0 \rbrace, \: \alpha \in \mathbb{R}$ and $w \in B(z_x,r)$ such that $V \subset \lbrace \langle u, \cdot \rangle \leq \alpha \rbrace$ and $\langle u, w \rangle > \alpha.$ The point $w_k:=w+z_k-z_x$ belongs to $B(z_k,r) \subset V$ for every $k$ and $(w_k)_k$ weakly converges to $w.$ Thus we have $\alpha \geq \lim_k \langle u, w_k \rangle = \langle u, w \rangle > \alpha,$ a contradiction. Therefore $B(z_x,r) \subset V.$ Also, observe that $(z_k-y_k)_k$ weakly converges to $z_x-x,$ where $\| z_k-y_k\| \leq r$ for every $k$, and because $B(0,r)$ is weakly closed, we have $\|z_x-x\|\leq r$, that is, $x\in B(z_x,r).$ In fact, $x\in \partial B(z_x,r)$ because $x\in \partial V$ and $B(z_x,r) \subset V.$
\end{proof}
Now, let us see that $\partial V$ is a $C^1$ manifold. We can assume without loss of generality that $0 \in \interior(V).$ For every $x\in \partial V,$ take $z_x$ as in Claim \ref{claimballrollsfreelysuficiency} and define $h(y):= \mu_{V-z_x}(y-z_x)$ and $g(y):=r^{-1}\|y-z_x\|$ for every $y\in X.$ Observe that $g(y)=\mu_{B(z_x,r)-z_x}(y-z_x)$ and, since $B(z_x,r)-z_x \subseteq V-z_x,$ we have $h \leq g$ on $X.$ Because $x\in \partial V \cap \partial B(z_x,r),$ $h$ and $g$ are two continuous convex functions such that $h \leq g$ on $X$ and $h(x)=g(x)=1.$ Since $g$ is differentiable at $x$, we conclude that $h$ is differentiable at $x$ too, with $\nabla h(x)=\nabla g(x)= r^{-1}(x-z_x)/\|x-z_x\|.$ In particular $V=h^{-1}(-\infty,1]$ admits, up to positive multiples, a unique supporting functional at $x,$ namely $x-z_x$ (indeed, every supporting functional $w$ of $V$ at $x$ satisfies $\langle w, x-z_x \rangle \geq r\|w\|>0$ and $w/\langle w, x-z_x \rangle \in \partial h(x)= \lbrace \nabla h(x) \rbrace$). Since every element of the subdifferential $\partial \mu_V(x)$ is a supporting functional of $V$ at $x$ normalized so that $\langle \cdot \, , x \rangle =1,$ it follows that $\partial \mu_V(x)$ is a singleton, and therefore $\mu_V$ is differentiable at $x$ with $\nabla \mu_V(x)$ a positive multiple of $(x-z_x)/\|x-z_x\|.$ We have shown that $\mu_V$ is differentiable on $\partial V,$ and by homogeneity, $\mu_V$ is differentiable on an open neighbourhood of $\partial V.$ In conclusion $\partial V$ is a $C^1$ manifold.
To see that $\partial V$ is of class $C^{1,1}$ with $\lip(N_{\partial V}) \leq r^{-1},$ it is now enough to apply Theorem \ref{characterization of C11 convex bodies} $(2)$ in combination with Claim \ref{claimballrollsfreelysuficiency}.
Finally, if $y\in C,$ observe that, by definition of $V,$ the point $z_y:=y-rN(y)$ is such that Claim \ref{claimballrollsfreelysuficiency} is true for the ball $B(z_y,r).$ Using the above argument we obtain (assuming that $0\in \interior(V)$) that $\nabla \mu_V(y)$ is a positive multiple of $(y-z_y)/\|y-z_y\| = N(y).$ In consequence $N$ coincides with the outer unit normal $N_{\partial V}$ to $\partial V$ at the points of $C.$ This completes the proof of Theorem \ref{main theorem for C11 Hilbert}.
\subsection{Proof of Theorem \ref{main theorem for C1omega Hilbert}}
It is clear that $(1)$ implies $(2)$ from the characterizations provided in Theorem \ref{characterizationC1omegaconvexbody}.
\medskip
Conversely, let us assume that $(2)$ is satisfied. Let us define $\varphi(t) := \int_0^t \omega(s) ds$ for every $t\geq 0.$ The Fenchel conjugate of $\varphi$ is defined by $$\varphi^*(t)=\int_0^t \omega^{-1}(s)ds $$ for every $t\geq 0,$ and it is clear that $\varphi^*(t)\leq t \omega^{-1}(t).$ By assumption we have
$$
\langle N(y),y-x \rangle \geq \| N(x)-N(y)\| \omega^{-1}\left( \delta \| N(x)-N(y)\| \right) \geq \delta^{-1} \varphi^*\left( \delta \| N(x)-N(y)\| \right), \quad x,y\in C.
$$
Therefore, the jet $(f,G):=(1,N)$ satisfies the inequality
$$
f (x) \geq f(y) + \langle G (y), x-y \rangle + \delta^{-1} \varphi^*\left( \delta \| G(x)-G(y)\| \right) \quad \text{for every} \quad x,y\in C.
$$
According to \cite[Theorem 4.11]{AzagraMudarraExplicitFormulas}, the function
$$
H:=\textrm{conv}(g), \quad \text{where} \quad g(x)=\inf_{y\in C} \lbrace 1+ \langle N(y),x-y \rangle + \delta^{-1}\varphi(\|x-y\|) \rbrace, \quad x\in X,
$$
is convex and of class $C^{1,\omega}(X)$ with $H=1$ and $\nabla H=N$ on $C.$ Bearing in mind the identities $\varphi(\omega^{-1}(\delta)) + \varphi^*(\delta) = \delta \omega^{-1}(\delta)$ and $\varphi'=\omega,$ it is easy to see that, for every $y\in C,$ the function $z \mapsto 1+ \langle N(y),z-y \rangle + \delta^{-1}\varphi(\|z-y\|)$ attains its global minimum at $z_y= y-\omega^{-1}(\delta)N(y)$ and this minimum value is $1-\delta^{-1} \varphi^*(\delta).$ This easily implies
\begin{equation}\label{estimationminimumfunction}
H\left( y- \omega^{-1}(\delta)N(y) \right) = \inf_X H = 1-\delta^{-1} \varphi^*(\delta) \quad \text{for every} \quad y \in C.
\end{equation}
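Indeed, for each fixed $y\in C$ the function $z \mapsto 1+ \langle N(y),z-y \rangle + \delta^{-1}\varphi(\|z-y\|)$ is convex and its gradient $N(y)+ \delta^{-1}\omega(\|z-y\|) \tfrac{z-y}{\|z-y\|}$ vanishes at $z_y= y-\omega^{-1}(\delta)N(y),$ because $\delta^{-1}\omega( \omega^{-1}(\delta))=1.$ The corresponding minimum value is
$$
1-\omega^{-1}(\delta)+ \delta^{-1} \varphi( \omega^{-1}(\delta)) = 1-\omega^{-1}(\delta)+ \delta^{-1} \left( \delta \omega^{-1}(\delta)-\varphi^*(\delta) \right) = 1-\delta^{-1} \varphi^*(\delta).
$$
Since each of these functions has the same minimum value, $g \geq 1-\delta^{-1}\varphi^*(\delta)$ on $X,$ hence also $H \geq 1-\delta^{-1}\varphi^*(\delta),$ while $H(z_y) \leq g(z_y) \leq 1-\delta^{-1}\varphi^*(\delta)$ for every $y\in C.$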
We now define
\begin{equation}\label{definitionsetsmodulifunctions}
A:= \overline{\co}(C \cup \lbrace y-\omega^{-1}(\delta)N(y) \: : \: y\in C \rbrace), \quad F(x):= H(x)+ \delta^{-1} \varphi^*(\delta) \varphi \left( d_A(x) \right), \quad x\in X,
\end{equation}
where $d_A$ stands for the distance function to $A.$ The function $F$ is convex because so are $H,$ $\varphi$ and $d_A,$ and $\varphi$ is increasing. In addition, we have that
\begin{equation}\label{inequalityHilbertnorm}
\varphi( \| x+h\|) + \varphi( \| x-h\|) - 2 \varphi( \| x\|) \leq \varphi( 2 \| h \| ) , \quad x,h \in X;
\end{equation}
see \cite[Lemma 4.6]{AzagraMudarraExplicitFormulas}. Thus if $x,h\in X$ and $y \in A$ is such that $d_A(x)=\| x-y\|,$ the inequality \eqref{inequalityHilbertnorm} for $x-y$ and $h$ gives
$$
\varphi( d_A(x+h)) + \varphi( d_A(x-h))-2\varphi( d_A(x)) \leq \varphi( \| x+h-y\|) + \varphi(\| x-h-y\|) - 2 \varphi( \| x-y\|) \leq \varphi( 2 \|h\|).
$$
Since $\varphi \circ d_A$ is continuous and convex, the above inequality shows that $\varphi \circ d_A$ is of class $C^{1,\omega}(X);$ see \cite[Proposition 4.5]{AzagraMudarraExplicitFormulas} for a proof of this fact. This shows that $F$ is a $C^{1,\omega}(X)$ convex function. Finally, let us check that $V= F^{-1} (-\infty,1]$ is the desired convex body. By \eqref{estimationminimumfunction}, $V$ is a non-degenerate sublevel set of a differentiable convex function, that is, $V$ is a convex body of class $C^1.$ It is obvious that $C \subseteq F^{-1}(1) = \partial V$ and the outer unit normal $N_{\partial V}$ to $\partial V$ coincides with $\nabla F / \| \nabla F \| = N$ on $C.$ According to Definition \ref{definition of C1omega convex body} $V$ will be of class $C^{1,\omega}$ as soon as we find $M>0$ such that $M^{-1} \leq \| \nabla F(x) \| \leq M$ whenever $ F(x)=1.$ Given $x\in X$ with $F(x)=1$ and $\varepsilon>0,$ it is easy to see from \eqref{definitionsetsmodulifunctions} that we can find
\begin{equation}\label{approximationinequality}
z \in \co \lbrace y-\omega^{-1}(\delta)N(y) \: : \: y\in C \rbrace \quad \text{such that} \quad \| x-z \| \leq d_A(x) + \omega^{-1}(\delta) + \varepsilon.
\end{equation}
Since $z\in A,$ we have that $\varphi(d_A(z))=0$ and $\nabla (\varphi \circ d_A) (z)=0.$ Then \eqref{estimationminimumfunction} and the convexity of $H$ give $F(z)=1-\delta^{-1} \varphi^*(\delta)$ and $\nabla F(z)=0.$ Because $H$ is bounded below by $1-\delta^{-1} \varphi^*(\delta),$ it follows from \eqref{definitionsetsmodulifunctions} that $d_A(x) \leq \varphi^{-1}(1).$ Since $\nabla F$ is $\omega$-continuous on $X$, there exists some $L>0$ such that
$$
\| \nabla F(x) \| \leq \| \nabla F(z) \| + L \omega( \| x-z\| ) = L \omega( \| x-z\| ),
$$
and \eqref{approximationinequality} together with the preceding remarks yield
$$ \| \nabla F(x) \| \leq L \omega \left( d_A(x) + \omega^{-1}(\delta) +\varepsilon \right) \leq L \omega \left( \varphi^{-1}(1) + \omega^{-1}(\delta) +\varepsilon \right).
$$
By letting $\varepsilon \to 0^+$ we obtain $\| \nabla F(x) \| \leq L \omega \left( \varphi^{-1}(1) + \omega^{-1}(\delta) \right).$ On the other hand, the convexity of $F$ gives
$$
\| \nabla F(x) \| \geq \frac{F(x)-F(z)}{\|x-z\|} = \frac{\delta^{-1} \varphi^*(\delta)}{\|x-z\|} \geq \frac{\delta^{-1} \varphi^*(\delta)}{ \varphi^{-1}(1)+\omega^{-1}(\delta)+\varepsilon}.
$$
Letting $\varepsilon \to 0^+,$ we conclude $\| \nabla F(x)\| \geq \delta^{-1} \varphi^*(\delta)\left( \varphi^{-1}(1)+\omega^{-1}(\delta) \right)^{-1}. $
In addition, let us see that if $C$ is bounded, the convex body $V$ is also bounded. Indeed, the set $A$ in \eqref{definitionsetsmodulifunctions} is bounded because so is $C$ and because $\|N\|=1$. Thus the function $\varphi \circ d_A$ is coercive, that is, $\lim_{\|x\| \to \infty} \varphi(d_A(x))= +\infty$ (observe that $\varphi$ is coercive because so is $\omega$ and we have the inequality $\varphi(t) \geq \tfrac{t}{2} \omega( \tfrac{t}{2} )$). Since $H$ is bounded below on $X,$ the function $F$ of \eqref{definitionsetsmodulifunctions} is coercive too and therefore $V=F^{-1}(-\infty,1]$ is a bounded subset.
\subsection{Proof of Theorem \ref{main theorem for C1alpha superreflexive}}
Let us first assume that $V$ is the $C^{1,\alpha}$ convex body of $(1)$ and consider a convex function $F \in C^{1,\alpha}(X)$ as in Definition \ref{definition of C1omega convex body}. We know from \cite[Proposition 5.3]{AzagraMudarraExplicitFormulas} that there exists $\delta>0$ such that
\begin{equation}\label{inequalitycw1alphanecessity}
F(x)-F(y)+ DF(y)(y-x) \geq \delta \| DF(x)-DF(y)\|_*^{1+\frac{1}{\alpha}} \quad \text{for every} \quad x,y\in X.
\end{equation}
Also, because $V=F^{-1}(-\infty,1],$ $\partial V =F^{-1}(1)$ and $F$ is $C^1,$ the hyperplane $\lbrace y\in X \: : \: DF(x)(y-x)=0 \rbrace$ is tangent to $\partial V$ at $x$ for every $x\in \partial V.$ By assumption, $D$ must be a positive multiple of $D F$ and therefore $D(x) = \frac{DF(x)}{\| DF(x)\|_*}$ for every $x\in C.$ We can easily deduce that
\begin{equation}\label{estimationnormaldifferentialvarphialpha}
\| D(x)-D(y)\|_* \leq \frac{2 \| D F(x)-D F(y)\|_*}{\| D F(y)\|_*} \quad \text{for every} \quad x,y\in C.
\end{equation}
By plugging \eqref{estimationnormaldifferentialvarphialpha} in \eqref{inequalitycw1alphanecessity} and bearing in mind that $\inf_{\partial V} \| DF\|_*$ is positive we conclude
$$
D(y)(y-x) \geq \tfrac{\delta}{2^{1+\frac{1}{\alpha}}} \Big ( \inf_{\partial V} \| D F\|_* \Big )^{\frac{1}{\alpha}}\| D(x)-D(y)\|_* ^{1+\frac{1}{\alpha}} \quad \text{for every} \quad x,y\in C.
$$
\medskip
Conversely, assume that $(2)$ is satisfied. Let $L \geq 2$ be a constant such that
\begin{equation}\label{inequalityrenorming}
\| x+h\|^{1+\alpha} + \| x-h\|^{1+\alpha} - 2\| x\|^{1+\alpha} \leq L \| h\|^{1+\alpha}, \quad x,h \in X.
\end{equation}
Since $X$ is reflexive and the norm $\| \cdot\|$ is strictly convex, for every $y\in C$ we can find a unique $N(y) \in S_X$ such that $D(y)(N(y))=1.$ By assumption, the jet $(f,G):=(1,D)$ defined on $C$ satisfies the inequality
$$
f (x) \geq f(y) + G (y)( x-y ) + \delta\| G (y)-G (x)\|_*^{1+\frac{1}{\alpha}}, \quad x,y\in C .
$$
Let $M>0$ be a constant such that $\delta = \tfrac{\alpha}{(1+\alpha) M^{1/\alpha}}.$ According to \cite[Theorem 5.5]{AzagraMudarraExplicitFormulas}, the function
$$
H:=\textrm{conv}(g), \quad \text{where} \quad g(x)=\inf_{y\in C} \lbrace 1+ D(y)(x-y ) + \tfrac{M}{1+\alpha} \|x-y\|^{1+\alpha} \rbrace, \quad x\in X,
$$
is convex and of class $C^{1,\alpha}(X)$ with $H=1$ and $D H=D$ on $C.$ It is easy to see that each function $z \mapsto 1+ D(y)(z-y ) + \tfrac{M}{1+\alpha} \|z-y\|^{1+\alpha}$ attains its global minimum at the point $z_y= y-M^{-1/\alpha} N(y)$ and this minimum value is $1-\tfrac{\alpha}{1+\alpha}M^{-1/\alpha} = 1-\delta.$ This shows that
$$
H\left( y-M^{-1/\alpha} N(y) \right) = \inf_X H = 1-\delta \quad \text{for every} \quad y \in C.
$$
Let us define
$$
A:= \overline{\co}(C \cup \lbrace y-M^{-1/\alpha} N(y) \: : \: y\in C \rbrace), \qquad F := H + d_A^{1+\alpha} \quad \text{on} \quad X,
$$
where $d_A$ stands for the distance function to $A.$ The function $d_A^{1+\alpha}$ is convex because $A$ is a convex subset. Given $x,h\in X$, we can find $y \in A$ such that $d_A(x)=\| x-y\|$ because $X$ is reflexive. Then the inequality \eqref{inequalityrenorming} applied to $x-y$ and $h$ gives
$$
d_A(x+h)^{1+\alpha} + d_A(x-h)^{1+\alpha}-2d_A(x)^{1+\alpha} \leq \| x+h-y\|^{1+\alpha} + \| x-h-y\|^{1+\alpha} - 2 \| x-y\|^{1+\alpha} \leq L \|h\|^{1+\alpha}.
$$
Therefore, since $d_A^{1+\alpha}$ is continuous and convex, $d_A^{1+\alpha}$ is of class $C^{1,\alpha}(X);$ see \cite[Proposition 5.4]{AzagraMudarraExplicitFormulas}. Now it is enough to define $V=F^{-1} (-\infty,1] $ and imitate the proof of Theorem \ref{main theorem for C1omega Hilbert}.
\section*{Acknowledgements}
D. Azagra and C. Mudarra were partially supported by Grant MTM2015-65825-P and by the Severo Ochoa Program for Centres of Excellence in R\&D (Grant SEV-2015-0554).
\section{Introduction} \label{mvb:introduction}
Undirected graphical models have been proved to be useful in a variety
of applications in statistical machine learning. Statisticians and
computer scientists devoted resources to studies in graphs with nodes
representing both continuous and discrete variables. Such models
consider a graph $G = (V, E)$, whose nodes set $V$ represents $K$
random variables $Y_1, Y_2, \ldots, Y_K$ connected or disconnected
defined by the undirected edges set $E$. This formulation allows
pairwise relationships among the nodes to be described in terms of
edges, which in statistics are defined as correlations. The graph
structure can thus be determined under the independence assumptions on
the random variables. Specifically, variables $Y_i$ and $Y_j$ are
conditionally independent given all other variables if the associated
nodes are not linked by an edge. Two important types of graphical
models are the Gaussian model, where the $K$ variables are assumed to
follow a joint multivariate Gaussian distribution, and the Markov model,
which captures the relationships between categorical variables.
However, considering only the pairwise correlations among the
variables may not be sufficient for real applications.
When the joint distribution of the nodes is multivariate Gaussian, the
graph structure can be directly inferred from the inverse of the
covariance matrix of the random variables and in recent years a large
body of literature has emerged in this area for high-dimensional data.
Researchers mainly focus on different sparse structure of the graphs
or, in other words, the covariance matrix for high-dimensional
observations. For example, \cite{Meinshausen:2006} proposes a
consistent approach based on LASSO from \cite{Tibshirani:1996} to
model the sparsity of the graph. Because the Gaussian
distribution is determined by its mean and covariance matrix, it
is valid to consider only the pairwise correlations, but this may not
be true for some other distributions. The multivariate Bernoulli
distribution discussed in \cite{Whittaker:1990}, which will be studied in
Section~\ref{mvb:formulation}, has a probability density function
involving terms representing third and higher order moments of the
random variables, which is also referred to as clique effects. To
alleviate the complexity of the graph, the so-called Ising model
borrowed from physics gained popularity in the machine learning
literature. \cite{Wainwright:2008} introduces several important
discrete graphical models including the Ising model and \cite
{Banerjee:2008} discusses a framework to infer sparse graph structure
with both Gaussian and binary variables. In this paper, higher than
second order interactions among a group of binary random variables are
studied in detail. The multivariate Bernoulli model is equivalent to
the Ising model and to other undirected graphical models with binary nodes,
which have been used in the machine learning community for various
applications. It can be extended to include $k$-node cliques by adding
monomials of up to $k$ orders \cite{Wainwright:2008}. The Ising model
assumes the nodes
taking values in $\{
-1, 1\}$,
which makes the interpretation of the interactions different from the
multivariate Bernoulli model. The literature related to structure
selection of Ising models and the applications include but are not
limited to \cite{Ravikumar:2010} and \cite{Xue:2012}.
What's more, in some real applications, people are not only interested
in the graph structure but also want to include predictor variables
that potentially influence it. \cite
{Gao:2001} considers a multivariate Bernoulli model which uses a
smoothing spline ANOVA model to replace the linear predictor \cite
{McCullagh:1989} for main effects on the nodes, but sets the second and
higher order interactions between the nodes as constants. Higher order
outcomes with hierarchical structure assumptions on the graph involving
predictor variables are studied in \cite{Ding:2011}.
This paper aims at building a unified framework of a generalized linear
model for the multivariate Bernoulli distribution which includes both
higher order interactions among the nodes and covariate information.
The remainder is organized as follows. Section~\ref{mvb:bibernoulli}
starts from the simplest multivariate Bernoulli distribution, the
so-called bivariate Bernoulli distribution, where there are only two
nodes in the graph. The mathematical formulation and statistical
properties of the multivariate Bernoulli distribution are addressed in
Section~\ref{mvb:formulation}. Section~\ref{mvb:comparison} serves to
get a better understanding of the differences and similarities of the
multivariate Bernoulli distribution with the Ising and multivariate
Gaussian models. Section~\ref{mvb:glm} extends the model to include
covariate information on the nodes, edges and cliques, and discusses
parameter estimation, optimization and associated problems in the
resulting multivariate Bernoulli logistic model. Finally,
Section~\ref{mvb:conclusion}
concludes the paper; some proofs are deferred to the \hyperref[app]{Appendix}.
\section{Bivariate Bernoulli distribution}\label{mvb:bibernoulli}
To start from the simplest case, we extend the widely used univariate
Bernoulli distribution to two dimensions in this section and the more
complicated multivariate Bernoulli distribution is explored in Section
\ref{mvb:formulation}. The Bernoulli random variable $Y$ is one with
binary outcomes chosen from $\{0, 1\}$ and its probability density
function is
\[
f_Y(y) = p^y(1-p)^{1-y}.
\]
Next, consider bivariate Bernoulli random vector $(Y_1, Y_2)$, which
takes values from $(0, 0)$, $(0, 1)$, $(1, 0)$ and $(1, 1)$ in the
Cartesian product space $\{0, 1\}^2 = \{0, 1\} \times\{0, 1\}$. Denote
$p_{ij} = P(Y_1 = i, Y_2 = j)$, $i,j = 0,1$, then its probability
density function can be written as
\begin{eqnarray}\label{mvb:BBpdf}
P(Y = y) &=& p(y_1, y_2)
\nonumber
\\
&=& p_{11}^{y_1y_2}p_{10}^{y_1(1-y_2)}p_{01}^{(1-y_1)y_2}p_{00}^{(1-y_1)(1-y_2)}
\\
&=& \exp \biggl\{\log(p_{00}) + y_1\log \biggl(
\frac{p_{10}}{p_{00}} \biggr) + y_2\log \biggl(\frac{p_{01}}{p_{00}} \biggr)
+y_1y_2\log \biggl(\frac
{p_{11}p_{00}}{p_{10}p_{01}} \biggr) \biggr\},
\nonumber
\end{eqnarray}
where the side condition $p_{00} + p_{10} + p_{01} + p_{11} = 1$ holds
to ensure it is a valid probability density function.
To simplify the notation, define the natural parameters $f$'s from
general parameters as follows:
\begin{eqnarray}
\label{mvb:f1} f^1 &=& \log \biggl(\frac{p_{10}}{p_{00}} \biggr),
\\[5pt]
\label{mvb:f2} f^2 &=& \log \biggl(\frac{p_{01}}{p_{00}} \biggr),
\\[5pt]
\label{mvb:f12}f^{12} &=& \log \biggl(\frac
{p_{11}p_{00}}{p_{10}p_{01}} \biggr),
\end{eqnarray}
and it is not hard to verify the inverse of the above formula
\begin{eqnarray}
p_{00} = \frac{1}{1 + \exp(f^1) + \exp(f^2) + \exp(f^1 + f^2 +
f^{12})},
\\
p_{10} = \frac{\exp(f^1)}{1 + \exp(f^1) + \exp(f^2) + \exp(f^1 + f^2 +
f^{12})},
\\
p_{01} = \frac{\exp(f^2)}{1 + \exp(f^1) + \exp(f^2) + \exp(f^1 + f^2 +
f^{12})},
\\
p_{11} = \frac{\exp(f^1 + f^2 + f^{12})}{1 + \exp(f^1) + \exp(f^2) +
\exp(f^1 + f^2 + f^{12})}.
\end{eqnarray}
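These transformations are easy to check numerically. The following Python sketch (using an arbitrary, purely illustrative pmf $p_{00}=0.4$, $p_{10}=0.3$, $p_{01}=0.2$, $p_{11}=0.1$, not a worked example from the text) computes the natural parameters \eqref{mvb:f1}--\eqref{mvb:f12} and then recovers the original probabilities through the inverse formulas above:

```python
import math

# Illustrative example probabilities (any valid pmf with positive entries works).
p00, p10, p01, p11 = 0.4, 0.3, 0.2, 0.1

# Natural parameters, eqs. (f1), (f2), (f12).
f1 = math.log(p10 / p00)
f2 = math.log(p01 / p00)
f12 = math.log(p11 * p00 / (p10 * p01))

# Inverse transformation: recover the pmf from (f1, f2, f12).
Z = 1 + math.exp(f1) + math.exp(f2) + math.exp(f1 + f2 + f12)
q00 = 1 / Z
q10 = math.exp(f1) / Z
q01 = math.exp(f2) / Z
q11 = math.exp(f1 + f2 + f12) / Z
```

The round trip recovers the $p_{ij}$ exactly (up to floating point), confirming that the two parametrizations are equivalent.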
Here the original density function \eqref{mvb:BBpdf} can be viewed as a
member of the exponential family, and represented in a log-linear
formulation as:
\begin{equation}
\label{mvb:BBloglinear}P(Y = y) = \exp \bigl\{\log(p_{00}) +
y_1f^1 + y_2f^2
+y_1y_2 f^{12} \bigr\}.
\end{equation}
Consider the marginal and conditional distribution of $Y_1$ in the
random vector $(Y_1, Y_2)$, we have
\begin{proposition}\label{mvb:BBmarginal}
The marginal distribution of $Y_1$ in a bivariate Bernoulli vector
$(Y_1, Y_2)$ following density function \eqref{mvb:BBpdf} is univariate
Bernoulli with density
\begin{equation}
\label{mvb:BBmarginalpdf} P(Y_1 = y_1) =
(p_{10} + p_{11})^{y_1}(p_{00} +
p_{01})^{(1-y_1)}.
\end{equation}
What's more, the conditional distribution of $Y_1$ given $Y_2$ is also
univariate Bernoulli with density
\begin{eqnarray}
\label{mvb:BBconditionalpdf} P(Y_1 = y_1 |
Y_2 = y_2) = \biggl(\frac{p(1, y_2)}{p(1, y_2) + p(0,
y_2)}
\biggr)^{y_1} \biggl(\frac{p(0, y_2)}{p(1, y_2) + p(0, y_2)} \biggr)^{1-y_1}.
\end{eqnarray}
\end{proposition}
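As a quick numerical sanity check of Proposition~\ref{mvb:BBmarginal}, the marginal and conditional probabilities can be computed by direct enumeration; the pmf below is an arbitrary illustrative choice:

```python
# Arbitrary bivariate Bernoulli pmf, keyed by the outcome (y1, y2).
p = {(0, 0): 0.4, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.1}

# Marginal of Y1: Bernoulli with success probability p10 + p11.
marg1 = {y1: sum(p[(y1, y2)] for y2 in (0, 1)) for y1 in (0, 1)}

# Conditional of Y1 given Y2 = 1, as in eq. (BBconditionalpdf).
cond = {y1: p[(y1, 1)] / (p[(0, 1)] + p[(1, 1)]) for y1 in (0, 1)}
```

Both are valid univariate Bernoulli pmfs, as the proposition asserts.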
The proposition implies that the bivariate Bernoulli distribution is
similar to the bivariate Gaussian distribution, in that both the
marginal and conditional distributions are still Bernoulli distributed.
On the other hand, it is also important to know under what conditions
the two random variables $Y_1$ and $Y_2$ are independent.
\begin{lemma}\label{mvb:biind}
The components of the bivariate Bernoulli random vector $(Y_1, Y_2)$
are independent if and only if $f^{12}$ in \eqref{mvb:BBloglinear} and
defined in \eqref{mvb:f12} is zero.
\end{lemma}
Lemma~\ref{mvb:biind} is a special case of Theorem \ref
{mvb:independence} in Section~\ref{mvb:formulation}, and the proof is
given in the \hyperref[app]{Appendix}. It is not hard to see from the
log-linear formulation \eqref{mvb:BBloglinear} that when $f^{12} = 0$,
the probability density function of the bivariate Bernoulli is
separable in $y_1$ and $y_2$ so the lemma holds. In addition, a simple
calculation of covariance between $Y_1$ and $Y_2$ gives
\begin{eqnarray}
\label{mvb:covariance} \operatorname{cov}(Y_1, Y_2) &=& E
\bigl[Y_1 - (p_{11}+p_{10})\bigr]
\bigl[Y_2 - (p_{11} + p_{01})\bigr]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& p_{11}p_{00} - p_{01}p_{10},
\end{eqnarray}
and by \eqref{mvb:f12}, the vanishing of $f^{12}$ implies that
$Y_1$ and $Y_2$ are uncorrelated. For the
multivariate Gaussian distribution, uncorrelated random variables
are independent as well, and Section~\ref{mvb:formulation} below shows that
uncorrelatedness and independence are also equivalent for the
multivariate Bernoulli distribution.
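The equivalence between $f^{12}=0$ and zero covariance in \eqref{mvb:covariance} can be verified by enumeration. In the sketch below, \texttt{p\_ind} is an illustrative product pmf (Bernoulli$(0.6)\times$Bernoulli$(0.3)$, so $f^{12}=0$) and \texttt{p\_dep} an arbitrary dependent one:

```python
from itertools import product

def cov_y1y2(p):
    # Covariance of (Y1, Y2) by direct enumeration over the four states.
    e1 = sum(y1 * p[(y1, y2)] for y1, y2 in product((0, 1), repeat=2))
    e2 = sum(y2 * p[(y1, y2)] for y1, y2 in product((0, 1), repeat=2))
    return p[(1, 1)] - e1 * e2

def cov_formula(p):
    # Closed form p11*p00 - p01*p10 from eq. (covariance).
    return p[(1, 1)] * p[(0, 0)] - p[(0, 1)] * p[(1, 0)]

p_dep = {(0, 0): 0.4, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.1}
p_ind = {(0, 0): 0.28, (1, 0): 0.42, (0, 1): 0.12, (1, 1): 0.18}  # f12 = 0
```

For \texttt{p\_ind} both computations return zero, matching the lemma; for \texttt{p\_dep} they agree on a nonzero value.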
The importance of Lemma~\ref{mvb:biind} was explored in \cite
{Whittaker:1990} where it was referred to as Proposition~2.4.1. The
importance of $f^{12}$ (denoted as \textit{u-terms}) is discussed and
called \textit{cross-product ratio} between $Y_1$ and $Y_2$. The same
quantity is actually \textit{log odds} described for the univariate
case in \cite{McCullagh:1989} and for the multivariate case in \cite{Ma:2010}.
\section{Formulation and statistical properties} \label{mvb:formulation}
\subsection{Probability density function}
As discussed in Section~\ref{mvb:bibernoulli}, the two dimensional
Bernoulli distribution possesses good properties analogous to the
Gaussian distribution. This section is to extend it to high-dimensions
and construct the so-called multivariate Bernoulli distribution.
Let $ Y= (Y_1, Y_2, \ldots, Y_K)$ be a $K$-dimensional random vector of
possibly correlated Bernoulli random variables (binary outcomes) and
let $y = (y_1,\ldots
,y_K)$ be a realization of $Y$. The most general form $p(y_1, \ldots,
y_K)$ of the joint probability density is
\begin{eqnarray*}
P(Y_1 = y_1, Y_2 = y_2, \ldots,
Y_K = y_K) &=& p(y_1, y_2,
\ldots, y_K)
\\
&=& p(0, 0, \ldots,0)^{[\prod_{j=1}^K (1-y_j)]}
\\
& &{}\times p(1, 0, \ldots, 0)^{[y_1 \prod_{j=2}^K(1-y_j)]}
\\
& &{}\times p(0, 1, \ldots, 0)^{[(1-y_1)y_2 \prod_{j=3}^K(1-y_j)]}\cdots
\\
& & {}\times p(1, 1, \ldots, 1) ^{[\prod_{j=1}^K y_j]},
\end{eqnarray*}
or in short
\begin{eqnarray}
\label{mvb:pdf}p(y) = p_{0, 0, \ldots,0}^{[\prod_{j=1}^K (1-y_j)]} p_{1, 0, \ldots, 0}^{[y_1 \prod_{j=2}^K(1-y_j)]}
p_{0, 1, \ldots, 0}^{[(1-y_1)y_2 \prod_{j=3}^K(1-y_j)]} \cdots p_{1, 1, \ldots, 1} ^{[\prod_{j=1}^K
y_j]}.
\end{eqnarray}
To simplify the notation, denote the quantity $S$ to be
\begin{equation}
\label{mvb:bigS} S^{j_1j_2\cdots j_r} = \sum_{1\le s\le r}
f^{j_s} + \sum_{1\le
s<t\le r} f^{j_sj_t} +
\cdots+ f^{j_1j_2\cdots j_r},
\end{equation}
and in the bivariate Bernoulli case $S^{12} = f^1 + f^2 + f^{12}$. To
eliminate the product in the tedious exponent of \eqref{mvb:pdf},
define the interaction function $B$
\begin{equation}
\label{mvb:bigB} B^{j_1j_2\cdots j_r}(y) = y_{j_1}y_{j_2}\cdots
y_{j_r},
\end{equation}
so correspondingly in the bivariate Bernoulli distribution for the
realization $(y_1, y_2)$ of random vector $(Y_1, Y_2)$, the interaction
function of order 2 is $B^{12}(y) = y_1y_2$.\vspace*{1pt} This is the only available
order two interaction for the bivariate case. In general, there are
$\bigl({K\atop 2}\bigr) = \frac{K(K-1)}{2}$ different second order interactions among
the binary components of the multivariate Bernoulli random vector.
The log-linear formulation of the multivariate Bernoulli distribution
induced from \eqref{mvb:pdf} is
\begin{eqnarray}
\label{mvb:MBloglinear} l(y,\mathbf{f}) &=& -\log\bigl[p(y)\bigr]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
\label{eq2} &=& - \Biggl[\sum_{r=1}^K
\biggl(\sum_{1\leq j_1<j_2<\cdots
<j_r\leq K}f^{j_1j_2\cdots j_r}B^{j_1j_2\cdots
j_r}(y)
\biggr)- b({\mathbf{f}}) \Biggr],
\end{eqnarray}
where ${\mathbf{f}} = (f^1,f^2,\ldots, f^{12\cdots K})^T$ is the vector
of the natural parameters for multivariate Bernoulli, and the
normalizing factor $b({\mathbf{f}})$ is defined as
\begin{equation}
\label{mvb:b}b({\mathbf{f}}) = \log \biggl[1+ \sum_{r=1}^K
\sum_{1\leq j_1<j_2<\cdots<j_r\leq K}\exp\bigl[S^{j_1j_2\cdots j_r}
\bigr] \biggr].
\end{equation}
As a member of the exponential distribution family, the multivariate
Bernoulli distribution has the fundamental `link' between the natural
and general parameters.
\begin{lemma}[(Parameter transformation)] \label{mvb:transform}
For the multivariate Bernoulli model, the
general parameters and natural parameters have the following relationship.
\begin{eqnarray*}\label{mvb:f}
&&\hspace*{-4pt}\exp\bigl( f^{j_1j_2\cdots j_r}\bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\hspace*{-4pt}\quad =\frac{\prod p(\mbox{even \# zeros among~}j_1,j_2,\ldots,j_r
\mbox{ components and other components are all zero})}{\prod p(\mbox{odd \# zeros among~}j_1,j_2,\ldots,j_r \mbox{~components and
other components are all zero})},
\end{eqnarray*}
where \# refers to the number of zeros among the superscript
$y_{j_1}\cdots y_{j_r}$ of $f$. In addition,
\begin{eqnarray}
\label{mvb:S} &&\exp\bigl(S^{j_1j_2\cdots j_r}\bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad = \frac{ p(j_1,j_2,\ldots,j_r \mbox{ positions are
one, others are zero})}{p(0,0,\ldots,0)}
\end{eqnarray}
and conversely the general parameters can be represented by the natural
parameters
\begin{eqnarray}
\label{mvb:p}&& p(j_1,j_2,\ldots,j_r
\mbox{ positions are one, others are zero})
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&&\qquad= \frac{\exp(S^{j_1j_2\cdots j_r})}{\exp (b({\mathbf{f}}) )}.
\end{eqnarray}
\end{lemma}
Based on the log-linear formulation \eqref{mvb:MBloglinear} and the
fact that the multivariate Bernoulli distribution is a member of the
exponential family, the interactions functions $B^{j_1j_2\cdots
j_r}(y)$ for all combinations $j_1j_2\cdots j_r$ define the sufficient
statistics. In addition, the log-partition function $b({\mathbf{f}})$
as in~\eqref{mvb:b} is useful to determine the expectation and variance
of the sufficient statistics to be addressed in later sections.
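To make \eqref{mvb:bigS}, \eqref{mvb:b} and \eqref{mvb:p} concrete, the sketch below builds a three-node example with hypothetical natural parameters (the numerical values are arbitrary) and checks that the resulting probabilities form a valid pmf:

```python
import math
from itertools import chain, combinations

K = 3
# Hypothetical natural parameters f^tau for every nonempty subset of {1,2,3}.
f = {(1,): 0.2, (2,): -0.5, (3,): 0.1,
     (1, 2): 0.3, (1, 3): 0.0, (2, 3): -0.2,
     (1, 2, 3): 0.4}

def nonempty_subsets(tau):
    return chain.from_iterable(combinations(tau, r) for r in range(1, len(tau) + 1))

def S(tau):
    # S^tau = sum of f^{tau0} over nonempty tau0 contained in tau, eq. (simpleS).
    return sum(f[t0] for t0 in nonempty_subsets(tau))

taus = list(nonempty_subsets(tuple(range(1, K + 1))))
b = math.log(1 + sum(math.exp(S(t)) for t in taus))  # log-partition function

# Probability of each outcome via the S-functions; exp(-b) is the all-zero state.
probs = [math.exp(-b)] + [math.exp(S(t) - b) for t in taus]
```

There are $2^3-1=7$ nonempty index sets and $2^3=8$ outcomes, and the probabilities sum to one, as \eqref{mvb:p} requires.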
\subsection{Independence, marginal and conditional distributions}
One of the most important statistical properties for the multivariate
Gaussian distribution is the equivalence of independence and
uncorrelatedness. As a natural multivariate extension of the univariate
Bernoulli distribution, it is of great interest to explore independence
among components of the multivariate Bernoulli distribution and it is
the topic for this section.
The independence of components of a random vector is determined by
separability of coordinates in its probability density function and it
is hard to get directly from \eqref{mvb:pdf}. However, based on the
relationship between the natural parameters and the outcome in the
log-linear formulation~\eqref{mvb:MBloglinear}, the independence
theorem of the distribution can be derived as follows with proof
deferred to \hyperref[app]{Appendix}.
\begin{theorem}[(Independence of Bernoulli outcomes)]\label{mvb:independence}
For the multivariate Bernoulli
distribution, the random vector $ Y= (Y_1, \ldots, Y_K)$ is independent
element-wise if and only if
\begin{equation}
\label{mvb:independence1} f^{j_1j_2\cdots j_r} = 0\qquad \forall 1\leq
j_1<j_2<\cdots<j_r\leq K, r \geq2.
\end{equation}
In addition, the condition in equation \eqref{mvb:independence1} can be
equivalently written as
\begin{equation}
\label{mvb:independence2} S^{j_1j_2\cdots j_r} = \sum_{k=1}^r
f^{j_k} \qquad\forall r \geq2.
\end{equation}
\end{theorem}
The importance of the theorem is to link the independence of components
of a random vector following the multivariate Bernoulli distribution to
the natural parameters. Notice that requiring every single random
variable to be independent of all the others is a strong assertion and,
in graphical models, researchers are more interested in the
independence of two groups of nodes, so we have the following theorem:
\begin{theorem}[(Independence of groups)]\label{mvb:independence_group}
For random vector $ Y= (Y_1, \ldots, Y_K)$
following the multivariate Bernoulli distribution, without loss of
generality, suppose two blocks of nodes $Y' = (Y_1, Y_2, \ldots, Y_r)$,
$Y'' = (Y_{r+1}, Y_{r+2}, \ldots, Y_s)$ with $1\leq r < s \leq K$, and
denote index set $\tau_1 = \{1, 2, \ldots, r\}$ and $\tau_2 = \{r+1,
r+2, \ldots, s\}$. Then $Y'$ and $Y''$ are independent if and only if
\begin{equation}
\label{mvb:ind_group} f^\tau= 0\qquad \forall \tau\cap
\tau_1\neq\emptyset \mbox{ and } \tau\cap\tau_2\neq\emptyset.
\end{equation}
\end{theorem}
The proof of Theorem~\ref{mvb:independence_group} is also deferred to
\hyperref[app]{Appendix}. The theorem delivers the message that the two groups
of binary nodes in a graph are independent if all the natural
parameters $f$'s corresponding to the index sets that include indices
from both groups disappear.
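Theorem~\ref{mvb:independence_group} can be illustrated numerically: with $K=3$, groups $\{1\}$ and $\{2,3\}$, and every mixed $f^\tau$ set to zero (the remaining values below are hypothetical), the joint distribution factorizes into the two block marginals:

```python
import math
from itertools import product

# K = 3 with groups {1} and {2,3}: every f^tau mixing the groups is zero.
f = {(1,): 0.4, (2,): -0.3, (3,): 0.2, (2, 3): 0.5,
     (1, 2): 0.0, (1, 3): 0.0, (1, 2, 3): 0.0}

def exponent(y):
    # Exponent of the log-linear form: sum_tau f^tau * prod_{j in tau} y_j.
    return sum(v * math.prod(y[j - 1] for j in t) for t, v in f.items())

states = list(product((0, 1), repeat=3))
Z = sum(math.exp(exponent(y)) for y in states)
joint = {y: math.exp(exponent(y)) / Z for y in states}

# Marginals of Y1 and of the block (Y2, Y3).
marg1 = {a: sum(v for y, v in joint.items() if y[0] == a) for a in (0, 1)}
marg23 = {bc: sum(v for y, v in joint.items() if y[1:] == bc)
          for bc in product((0, 1), repeat=2)}
```

Every joint probability equals the product of the corresponding block marginals, so $Y_1$ is independent of $(Y_2, Y_3)$.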
Furthermore, analogous to the multivariate Gaussian distribution,
researchers are interested in statistical distributions of marginal and
conditional distributions for the multivariate Bernoulli distribution.
Likewise, the multivariate Bernoulli distribution maintains the good
property that both the marginal and conditional distributions are still
multivariate Bernoulli as stated in the following proposition.
\begin{proposition}\label{mvb:marginal}
The marginal distribution of any subvector $(Y_1, \ldots, Y_r)$ of the
random vector $(Y_1, \ldots, Y_K)$, which follows the multivariate
Bernoulli distribution with density function
\eqref{mvb:pdf}, is still \textup{multivariate Bernoulli}
with density
\begin{equation}
P(Y_1 = y_1, Y_2 = y_2, \ldots,
Y_r = y_r) = \sum_{y_{r+1}}\cdots
\sum_{y_K} p(y_1, \ldots, y_K)
\end{equation}
for some $r < K$.
What's more, the conditional distribution of $(Y_1, Y_2, \ldots, Y_r)$
given the rest is also \textup{multivariate Bernoulli} with density
\begin{equation}
P(Y_1 = y_1 ,\ldots, Y_r = y_r |
Y_{r+1} = y_{r+1}, \ldots, Y_K = y_K) =
\frac{p(y_1, \ldots, y_K)}{p(y_{r+1}, \ldots, y_K)}.
\end{equation}
\end{proposition}
\subsection{Moment generating functions}
The moment generating function of the multivariate Bernoulli
distribution is useful when dealing with moments and with the proof of Theorem
\ref{mvb:independence}:
\begin{eqnarray}\label{mvb:mgf}
\psi(\mu_1, \mu_2, \ldots, \mu_K) &=& E
\bigl[\exp (\mu_1Y_1 + \mu _2Y_2 +
\cdots+ \mu_KY_K ) \bigr]
\nonumber
\\
&=& p_{00\cdots0}e^0 + p_{10\cdots0}e^{\mu_1} + \cdots+
p_{11\cdots
1}e^{\mu_1+ \mu_2+\cdots+\mu_K}
\\
&=& \sum_{r=0}^K\sum
_{1\le j_1<j_2<\cdots<j_r\le K} \frac
{\exp[S^{j_1j_2\cdots j_r}]}{\exp[b(\mathbf{f})]}\exp \Biggl[\sum
_{k=1}^r\mu_{j_k} \Biggr],\nonumber
\end{eqnarray}
where the $r=0$ term corresponds to the empty index set with $S^\emptyset= 0$.
Hence, from this formula, the moment generating function is solely
determined by the $S$ functions, which are transformations of the
natural parameters $f$'s.
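For the bivariate case, the $S$-function form of \eqref{mvb:mgf} can be compared against direct enumeration of $E[\exp(\mu_1 Y_1 + \mu_2 Y_2)]$; the pmf and the evaluation point $(\mu_1,\mu_2)=(0.3,-0.7)$ below are illustrative:

```python
import math

# Illustrative bivariate pmf and its natural parameters.
p = {(0, 0): 0.4, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.1}
f1 = math.log(p[(1, 0)] / p[(0, 0)])
f2 = math.log(p[(0, 1)] / p[(0, 0)])
f12 = math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))

# S-functions (S^empty = 0) and the log-partition b(f).
S = {(): 0.0, (1,): f1, (2,): f2, (1, 2): f1 + f2 + f12}
b = math.log(sum(math.exp(s) for s in S.values()))

def mgf(m1, m2):
    # psi(mu1, mu2) in the S-function form of eq. (mgf).
    mu = {(): 0.0, (1,): m1, (2,): m2, (1, 2): m1 + m2}
    return sum(math.exp(S[t] - b + mu[t]) for t in S)

# Direct enumeration E[exp(mu1*Y1 + mu2*Y2)] for comparison.
direct = sum(p[(y1, y2)] * math.exp(0.3 * y1 - 0.7 * y2)
             for y1 in (0, 1) for y2 in (0, 1))
```

The two evaluations agree, and $\psi(0,0)=1$ as any moment generating function must satisfy.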
\subsection{Gradient and Hessian}
As a member of the exponential family, the gradient and Hessian (Fisher
information) of the log-partition function are the mean and covariance of
the sufficient statistics $B^\tau(Y)$. Therefore, they are important in statistics but
also crucial for model inference when the proper optimization problem
is established. To examine the formulation of gradient and Hessian for
the logarithm of the multivariate Bernoulli distribution \eqref
{mvb:pdf}, let us define some notations.
Denote $\mathcal{T}$ to be the set of all possible superscripts of the
$f$'s including the null superscript with $f^\emptyset= 0$, so it has
$2^K$ elements. In other words, $\mathcal{T}$ is the power set of
indices $\{1, 2, \ldots, K\}$. Let $|\cdot|$ be the cardinality of a
set then $|\mathcal{T}| = 2^K$. We can define the relation subset
$\subseteq$ for $\tau_1,\tau_2\in\mathcal T$ as follows.
\begin{definition}\label{mvb:subset}
For any two superscripts $\tau_1 = \{j_1,j_2,\ldots, j_r\}$ such that
$\tau_1\in\mathcal T$ and $\tau_2 = \{k_1,k_2,\ldots, k_s\}$ with $\tau
_2\in\mathcal T$ and $r \leq s$, we say that $\tau_1\subseteq\tau_2$ if for
any $j\in\tau_1$, there is a $k\in\tau_2$ such that $j=k$.
\end{definition}
Based on the definition, the $S$'s in \eqref{mvb:bigS} can be
reformulated as
\begin{equation}
\label{mvb:simpleS} S^\tau= \sum_{\tau_0\subseteq\tau}f^{\tau_0},
\end{equation}
specifically, $S^\emptyset= 0$. Consider the gradient of the
log-linear form \eqref{mvb:MBloglinear} with respect to the $f$'s, for
any $\tau\in\mathcal T$,
\begin{eqnarray}\label{mvb:gradient}
\frac{\partial l(y,\mathbf{f})}{\partial f^\tau} &=& -B^\tau(y) + \frac
{\partial b(\mathbf{f})}{\partial f^\tau}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& -B^\tau(y) + \frac{\sum_{\tau_0\supseteq\tau
}{\exp[S^{\tau_0}]}}{\exp[b({\mathbf{f}})]}.
\end{eqnarray}
The derivation of the partial derivative of $b$ with respect to $f^\tau$ in
\eqref{mvb:gradient} is
\begin{eqnarray}
\frac{\partial b(\mathbf{f})}{\partial f^\tau} &=& \frac{1}{\exp
[b(\mathbf{f})]} \cdot\frac{\partial\exp[b(\mathbf{f})]}{\partial
f^\tau}
\nonumber
\\
&=& \frac{1}{\exp[b(\mathbf{f})]} \cdot\frac{\partial \sum_{\tau_0\in
\mathcal{T}}\exp[S^{\tau_0}]}{\partial f^\tau}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \frac{\sum_{\tau_0\supseteq\tau}{\exp[S^{\tau_0}]}}{\exp[b(\mathbf
{f})]}
\\
&=& E\bigl[B^\tau(y)\bigr],
\nonumber
\end{eqnarray}
and the result can also be derived from the moment generating function
\eqref{mvb:mgf} by taking derivatives with respect to the $\mu$'s.
A simple example of \eqref{mvb:gradient} in the bivariate Bernoulli
distribution \eqref{mvb:BBloglinear} is
\[
\frac{\partial l(y,\mathbf{f})}{\partial f^1} = -y_1 + \frac{\exp(f^1)
+ \exp(S^{12})}{\exp[b({\mathbf{f}})]}.
\]
Further, the general formula for the second order derivative of \eqref
{mvb:MBloglinear} with respect to any two natural parameters $f^{\tau
_1}$ and $f^{\tau_2}$ is
\begin{eqnarray}\label{mvn:hessian}
\frac{\partial^2 l(y, f)}{\partial f^{\tau_1}\partial f^{\tau_2}} &=& \frac{\partial^2 b(\mathbf{f})}{\partial f^{\tau_1}\partial f^{\tau_2}}
\nonumber
\\
&=& \frac{\partial}{\partial f^{\tau_1}} \biggl(\frac{\sum_{\tau
_0\supseteq\tau_2}{\exp[S^{\tau_0}]}}{\exp[b(\mathbf{f})]} \biggr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \frac{\sum_{\tau_0\supseteq\tau_1, \tau_0\supseteq\tau_2}\exp
[S^{\tau_0}]\exp[b(\mathbf{f})] - \sum_{\tau_0\supseteq\tau_1}{\exp
[S^{\tau_0}]}\sum_{\tau_0\supseteq\tau_2}{\exp[S^{\tau_0}]}}{\exp
[2b(\mathbf{f})]}
\\
&=& \operatorname{cov} \bigl(B^{\tau_1}(y),
B^{\tau
_2}(y) \bigr).\nonumber
\end{eqnarray}
In the bivariate Bernoulli distribution,
\[
\frac{\partial^2 l(y, f)}{\partial f^1\partial f^2} = \frac{\exp
[S^{12}]\exp[b(\mathbf{f})] -
(\exp[f^1] + \exp[S^{12}])(\exp[f^2] +
\exp[S^{12}])}{\exp[2b(\mathbf{f})]}.
\]
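The identities $\partial b/\partial f^\tau = E[B^\tau(y)]$ and $\partial^2 b/\partial f^{\tau_1}\partial f^{\tau_2} = \operatorname{cov}(B^{\tau_1}(y), B^{\tau_2}(y))$ can be checked by finite differences in the bivariate case; the parameter values below are arbitrary:

```python
import math

def b_of(f1, f2, f12):
    # Log-partition of the bivariate Bernoulli model.
    return math.log(1 + math.exp(f1) + math.exp(f2) + math.exp(f1 + f2 + f12))

def pmf(f1, f2, f12):
    Z = math.exp(b_of(f1, f2, f12))
    w = {(0, 0): 1.0, (1, 0): math.exp(f1),
         (0, 1): math.exp(f2), (1, 1): math.exp(f1 + f2 + f12)}
    return {y: v / Z for y, v in w.items()}

f1, f2, f12, h = 0.3, -0.2, 0.5, 1e-5  # arbitrary parameters, small step
p = pmf(f1, f2, f12)
e_y1 = p[(1, 0)] + p[(1, 1)]                        # E[Y1]
cov12 = p[(1, 1)] - e_y1 * (p[(0, 1)] + p[(1, 1)])  # cov(Y1, Y2)

# Central finite differences for db/df1 and d2b/(df1 df2).
g1 = (b_of(f1 + h, f2, f12) - b_of(f1 - h, f2, f12)) / (2 * h)
h12 = (b_of(f1 + h, f2 + h, f12) - b_of(f1 + h, f2 - h, f12)
       - b_of(f1 - h, f2 + h, f12) + b_of(f1 - h, f2 - h, f12)) / (4 * h * h)
```

The numerical derivatives match $E[Y_1]$ and $\operatorname{cov}(Y_1,Y_2)$ up to discretization error.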
\section{The Ising and the multivariate Gaussian models} \label{mvb:comparison}
As mentioned in Section~\ref{mvb:introduction}, the Ising and the
multivariate Gaussian distributions are two main tools to study
undirected graphical models, and this section is to compare the
multivariate Bernoulli model introduced in Section \ref
{mvb:formulation} with these two popular models.
\subsection{The Ising model}
The Ising model, which originated from \cite{Ising:1925}, becomes
popular when the graph structure is of interest with nodes taking
binary values. The log-linear density of the random vector $(Y_1, \ldots
, Y_K)$ is
\begin{equation}
\label{mvb:ising}\log\bigl[f(Y_1, \ldots, Y_K)\bigr] =
\sum_{j=1}^K\theta _{j,j}Y_j
+ \sum_{1\leq j < j' \leq K} \theta_{j, j'}Y_jY_{j'}
-\log \bigl[Z(\Theta)\bigr],
\end{equation}
where $\Theta= (\theta_{j,j'})_{K\times K}$ is a symmetric matrix
specifying the network structure, but it is not necessarily positive
semi-definite. The partition function $Z(\Theta)$ is defined as
\begin{equation}
\label{mvb:bigZ} Z(\Theta) = \sum_{Y_j\in\{0,1\}, 1\leq j \leq K}\exp \Biggl(
\sum_{j=1}^K\theta_{j,j}Y_j
+ \sum_{1\leq j<j'\leq K}\theta _{j,j'}Y_jY_{j'}
\Biggr),
\end{equation}
and notice that it does not depend on the $Y_j$'s due to the summation
over all possible values of $Y_j$ for $j = 1, 2, \ldots, K$.
It is not hard to see that the multivariate Bernoulli is an extension
of the Ising model, which assumes $f^\tau= 0$ for any $\tau$ such
that $|\tau| > 2$, with $\theta_{j,j} = f^j$ and $\theta_{j,j'} = f^{jj'}$. In other words, in the
Ising model, only pairwise interactions are considered. \cite
{Ravikumar:2010} pointed out that the higher order interactions, which
are referred to as clique effects in this paper, can be converted to
pairwise ones through the introduction of additional variables and thus
retain the Markovian structure of the network defined in \cite
{Wainwright:2008}.
\subsection{Multivariate Gaussian model}
When continuous nodes are considered in a graphical model, the
multivariate Gaussian distribution is important since, similar to the
Ising model, it only considers interactions up to order two. The
log-linear formulation is
\begin{equation}
\label{mvb:gaussian} \log\bigl[f(Y_1, \ldots, Y_K)\bigr]
= \bigl(-\tfrac
{1}{2}(Y-\mu)^T\Sigma^{-1}(Y-\mu) \bigr) - \log
\bigl[Z(\Sigma)\bigr],
\end{equation}
where $Z(\Sigma)$ is the normalizing factor which only depends on the
covariance matrix $\Sigma$.
\subsection{Comparison of different graphical models}
The multivariate Bernoulli \eqref{mvb:MBloglinear}, Ising \eqref
{mvb:ising} and multivariate Gaussian \eqref{mvb:gaussian} are three
different kinds of graphical models and they share many similarities
\begin{enumerate}
\item All of them are members of the exponential family.
\item Uncorrelatedness and independence are equivalent.
\item Conditional and marginal distributions maintain the same structure.
\end{enumerate}
However, some differences do exist. The multivariate Bernoulli and the
Ising models both serve as tools to model graph with binary nodes, and
are certainly different from the multivariate Gaussian model which
formulates continuous variables. In addition, the multivariate
Bernoulli specifies clique effects among nodes whereas the Ising model
simplifies to deal with only pairwise interactions and the multivariate
Gaussian essentially is uniquely determined by its mean and covariance
structure, which is also based on first and second order moments. Table
\ref{mvb:compare} illustrates the number of parameters needed to
uniquely determine the distribution for these models as the number of
nodes $K$ in the graph increases.
\begin{table}[b]
\caption{The number of parameters in the multivariate Bernoulli, the
Ising and the multivariate Gaussian models}
\label{mvb:compare}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccc@{}}
\hline
Graph dimension & Multivariate Bernoulli & Ising & Multivariate
Gaussian\\
\hline
$1$ & $1$ & $1$ & $2$ \\
$2$ & $3$ & $3$ & $5$ \\
$3$ & $7$ & $6$ & $9$ \\
$\cdots$ & $\cdots$ & $\cdots$ & $\cdots$\\
$K$ & $2^K - 1$ & $\frac{K(K+1)}{2}$ & $K + \frac{K(K+1)}{2}$\\
\hline
\end{tabular*}
\end{table}
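The counts in Table~\ref{mvb:compare} can be reproduced with a few lines (illustrative only):

```python
# Closed-form parameter counts from the table (illustrative check only).
def n_params(K):
    return {"bernoulli": 2 ** K - 1,           # multivariate Bernoulli
            "ising": K * (K + 1) // 2,         # Ising
            "gaussian": K + K * (K + 1) // 2}  # multivariate Gaussian

for K, expected in [(1, (1, 1, 2)), (2, (3, 3, 5)), (3, (7, 6, 9))]:
    p = n_params(K)
    assert (p["bernoulli"], p["ising"], p["gaussian"]) == expected

# The Bernoulli count grows exponentially while the other two stay quadratic.
print(n_params(10))
```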
\section{Multivariate Bernoulli logistic models} \label{mvb:glm}
\subsection{Generalized linear model}
As discussed in Section~\ref{mvb:formulation}, the multivariate
Bernoulli distribution is a member of the exponential family and as a
result, the generalized linear model theory in \cite{McCullagh:1989}
applies. The natural parameters ($f$'s) in Lemma~\ref{mvb:transform}
can be formulated as a linear predictor in~\cite{McCullagh:1989} such
that for any $\tau\in\mathcal{T}$ with $\mathcal{T}=\{1, 2, \ldots, K\}$
\begin{equation}
\label{mvb:linear} f^{\tau}(x) = c_0^\tau+
c_1^\tau x_1 + \cdots+ c_p^\tau
x_p,
\end{equation}
where the vector $c^\tau= (c_0^\tau, \ldots, c_p^\tau)$ for $\tau\in
\mathcal{T}$ is the coefficient vector to be estimated and $x = (x_1,
x_2, \ldots, x_p)$ is the observed covariate vector. Here $p$ is the number of
covariates and there are $2^K - 1$ coefficient vectors of length $p + 1$ to be
estimated, so in total $(p+1) \times(2^K - 1)$ unknown parameters. Equation~\eqref{mvb:linear}
is built on the canonical link where natural parameters are directly
modeled as linear predictors, but other links are possible and valid as well.
When there are $n$ samples observed from a real data set with outcomes
denoted as $y(i) = (y_1(i), \ldots, y_K(i))$ and predictor variables
$x(i) = (x_1(i), \ldots, x_p(i))$, the negative log likelihood for the
generalized linear model of the multivariate Bernoulli distribution is
\begin{equation}
\label{mvb:linearlike} l\bigl(y,\mathbf{f}(x)\bigr) = \sum
_{i=1}^n \biggl[-\sum_{\tau\in\mathcal{T}}f^\tau
\bigl(x(i)\bigr)B^\tau\bigl(y(i)\bigr) + b\bigl(\mathbf{f}\bigl(x(i)\bigr)\bigr)
\biggr],
\end{equation}
where, similar to \eqref{mvb:b}, the log partition function $b$ is
\[
b\bigl(\mathbf{f}(x)\bigr) = \log \biggl[1 + \sum_{\tau\in\mathcal{T}}
\exp\bigl[S^\tau(x)\bigr] \biggr].
\]
When dealing with the univariate Bernoulli distribution, formula
\eqref{mvb:linearlike} reduces to the negative log likelihood of
logistic regression. Thus the model is referred to as the multivariate
Bernoulli logistic model in this paper.
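To make the likelihood concrete, here is a minimal sketch of the per-sample negative log likelihood for $K = 2$ (our notation; intercept-only, so the $f$'s are plain numbers), together with a check that $f^{12} = 0$ splits the loss into two logistic-regression losses:

```python
from math import exp, log

# Per-sample negative log likelihood of the K = 2 multivariate Bernoulli
# logistic model (intercept-only sketch, so f1, f2, f12 are plain numbers):
# S^{1} = f1, S^{2} = f2, S^{12} = f1 + f2 + f12, and
# b(f) = log(1 + exp(S^1) + exp(S^2) + exp(S^12)).
def nll(y1, y2, f1, f2, f12):
    b = log(1 + exp(f1) + exp(f2) + exp(f1 + f2 + f12))
    return -(f1 * y1 + f2 * y2 + f12 * y1 * y2) + b

# With f12 = 0 the distribution factorizes, so the loss equals the sum of
# two univariate Bernoulli (logistic regression) losses.
def logistic_loss(y, f):
    return -y * f + log(1 + exp(f))

f1, f2 = 0.7, -1.2
for y1 in (0, 1):
    for y2 in (0, 1):
        assert abs(nll(y1, y2, f1, f2, 0.0)
                   - logistic_loss(y1, f1) - logistic_loss(y2, f2)) < 1e-12
print("f^12 = 0 reduces the loss to two logistic regressions")
```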
\subsection{Gradient and Hessian}
To optimize the negative log likelihood function \eqref{mvb:linearlike}
with respect to the coefficient vector~$c^\tau$, the efficient and
popular iterative re-weighted least squares algorithm mentioned
in~\cite{McCullagh:1989} can be implemented. Nevertheless, the gradient
vector and Hessian matrix (Fisher Information) with respect to the
coefficients $c^\tau$ are still required.
For any\vspace*{1pt} $\tau\in\mathcal{T}$, the first derivative with respect to
$c_j^\tau$ in the negative log likelihood \eqref{mvb:linearlike} of the
multivariate Bernoulli logistic model, according to \eqref
{mvb:gradient} and ignoring index~$i$, is
\begin{eqnarray}\label{mvb:linear_gradient}
\frac{\partial l(y,f)}{\partial c^\tau_j} = \frac{\partial
l(y,f)}{\partial f^\tau} \frac{\partial f^\tau}{\partial
c^\tau_j}
= \sum_{i = 1}^n
\biggl[-B^\tau(y) + \frac
{\sum_{\tau_0\supseteq\tau}{\exp[S^{\tau_0}(x)]}}{\exp[b(\mathbf
{f}(x))]} \biggr] x_j.
\end{eqnarray}
Further, the second derivative for any two coefficients $c_j^{\tau_1}$
and $c_k^{\tau_2}$ is
\begin{eqnarray}\label{mvb:linear_hessian}
\frac{\partial^2l(y, f)}{\partial c^{\tau_1}_j\partial c^{\tau_2}_k} &=& \frac{\partial}{\partial c^{\tau_1}_j} \biggl(\frac{\partial l(y,
f)}{\partial f^{\tau_2}}
\frac{\partial f^{\tau_2}}{\partial c^{\tau
_2}_k} \biggr)
\nonumber
\\
&=& \frac{\partial f^{\tau_1}}{\partial c^{\tau_1}_j}\frac{\partial^2
l(y, f)}{\partial f^{\tau_1}\partial f^{\tau_2}}\frac{\partial
f^{\tau_2}}{\partial c^{\tau_2}_k}
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \sum_{i=1}^n \frac{\partial^2 l(y, f)}{\partial f^{\tau_1}\partial
f^{\tau_2}}
x_j x_k
\\
&=& \frac{\sum_{\tau_0\supseteq\tau_1,
\tau_0\supseteq\tau_2}\exp[S^{\tau_0}(x)]}{\exp[b(f(x))]} x_j x_k -
\frac{\sum_{\tau_0\supseteq\tau_1}{\exp
[S^{\tau_0}(x)]}\sum_{\tau_0\supseteq\tau_2}{\exp[S^{\tau_0}(x)]}}{\exp
[2b(f(x))]} x_j
x_k.\nonumber
\end{eqnarray}
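The gradient formula above can be verified numerically (a finite-difference sketch for a single sample with $K = 2$ and an intercept-only design, $x_j = 1$; illustrative only):

```python
from math import exp, log

# Finite-difference check of the gradient formula (single sample, K = 2,
# intercept-only design so x_j = 1; tau ranges over {1}, {2}, {1,2}).
TAUS = [(1,), (2,), (1, 2)]

def S_values(f):
    f1, f2, f12 = f
    return {(1,): f1, (2,): f2, (1, 2): f1 + f2 + f12}

def nll(y, f):
    y1, y2 = y
    f1, f2, f12 = f
    b = log(1 + sum(exp(s) for s in S_values(f).values()))
    return -(f1 * y1 + f2 * y2 + f12 * y1 * y2) + b

def grad(y, f):
    y1, y2 = y
    S = S_values(f)
    expb = 1 + sum(exp(s) for s in S.values())          # exp[b(f)]
    B = {(1,): y1, (2,): y2, (1, 2): y1 * y2}
    return [-B[tau] + sum(exp(S[t]) for t in TAUS if set(t) >= set(tau)) / expb
            for tau in TAUS]

y, f, h = (1, 0), [0.3, -0.5, 0.8], 1e-6
for j in range(3):
    fp, fm = list(f), list(f)
    fp[j] += h
    fm[j] -= h
    numeric = (nll(y, fp) - nll(y, fm)) / (2 * h)
    assert abs(numeric - grad(y, f)[j]) < 1e-6
print("analytic gradient matches finite differences")
```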
\subsection{Parameters estimation and optimization} \label{mvb:estimation}
With gradient \eqref{mvb:linear_gradient} and Hessian \eqref
{mvb:linear_hessian} at hand, the minimization of the negative log
likelihood~\eqref{mvb:linearlike} with respect to the coefficients
$c^\tau$ can be solved with Newton--Raphson or Fisher's scoring
(iterative re-weighted least squares), in which the Hessian is
replaced by the Fisher information matrix. In every
iteration, the update for the current estimate $\hat c^{(s)}$ is
computed as
\begin{equation}
\label{mvb:step} \triangle c = - \biggl(\frac{\partial^2l(y,
f)}{\partial c^{\tau_1}_j\partial c^{\tau_2}_k} \bigg|_{c=\hat
c^{(s)}}
\biggr)^{-1} \cdot \biggl(\frac{\partial l(y,f)}{\partial c^\tau
_j} \bigg|_{c=\hat c^{(s)}} \biggr).
\end{equation}
The process continues until the convergence criterion is met.
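A minimal sketch of this iteration for an intercept-only model with $K = 2$ (the empirical frequencies below are hypothetical; for an exponential family the Hessian of the average negative log likelihood is the covariance of the sufficient statistics, so the update \eqref{mvb:step} takes the form $f \leftarrow f - C^{-1}g$):

```python
from math import exp

# Newton iteration for an intercept-only K = 2 model.  Sufficient statistics
# T(y) = (y1, y2, y1*y2); the gradient of the average negative log likelihood
# is E_f[T] - T_bar and the Hessian is Cov_f[T] (an exponential-family fact).
# The empirical frequencies below are hypothetical illustrative numbers.
configs = [(0, 0), (0, 1), (1, 0), (1, 1)]
stats = lambda y: (y[0], y[1], y[0] * y[1])

def model_moments(f):
    w = [exp(sum(fi * ti for fi, ti in zip(f, stats(y)))) for y in configs]
    Z = sum(w)
    p = [wi / Z for wi in w]
    m = [sum(p[i] * stats(configs[i])[j] for i in range(4)) for j in range(3)]
    C = [[sum(p[i] * stats(configs[i])[j] * stats(configs[i])[k]
              for i in range(4)) - m[j] * m[k] for k in range(3)]
         for j in range(3)]
    return p, m, C

def solve3(A, rhs):  # Gauss-Jordan elimination with partial pivoting (3x3)
    M = [row[:] + [b] for row, b in zip(A, rhs)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * b for a, b in zip(M[r], M[c])]
    return [M[r][3] / M[r][r] for r in range(3)]

freq = [0.4, 0.2, 0.3, 0.1]          # hypothetical outcome frequencies
T_bar = [sum(freq[i] * stats(configs[i])[j] for i in range(4)) for j in range(3)]

f = [0.0, 0.0, 0.0]
for _ in range(50):
    _, m, C = model_moments(f)
    g = [mj - tj for mj, tj in zip(m, T_bar)]
    f = [fj - sj for fj, sj in zip(f, solve3(C, g))]

p, _, _ = model_moments(f)
assert all(abs(pi - qi) < 1e-6 for pi, qi in zip(p, freq))
print("fitted probabilities reproduce the empirical frequencies")
```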
\subsection{Variable selection}
Variable selection is important in modern statistical inference. It is
also crucial to select only the significant variables to determine the
structure of the graph for better model identification and prediction
accuracy. The pioneering paper \cite{Tibshirani:1996} introduced the
LASSO approach for linear models. Various properties of the method have been
established, for example in \cite{Zhao:2006}, and extensions of the model
to other frameworks are discussed in \cite{Meinshausen:2006,Zhao:2007,Park:2008}, among others.
The approach extends to the multivariate Bernoulli distribution
since it is a member of the exponential family: we apply the $l_1$
penalty to the coefficients in \eqref{mvb:linear},
and the target function is
\begin{equation}
\label{mvb:lasso} L_\lambda(x, y) = \frac{1}{n}\sum
_{i=1}^nl\bigl(y(i),\mathbf{f}\bigl(x(i)\bigr)\bigr)
+ \sum_{\tau\in\mathcal{T}}\lambda_\tau\sum
_{j=1}^p\bigl|c_j^\tau\bigr|,
\end{equation}
where the $\lambda_\tau$ are tuning parameters that need to be chosen
adaptively. The superscript $\tau$ allows the flexibility to have
natural\vadjust{\goodbreak}
parameters with different levels of complexity. For tuning in penalized
regression problems, the randomized generalized approximate
cross-validation (GACV), designed for smoothing spline models in
\cite{Xiang:1994}, can be derived for the LASSO problem, as in
\cite{Shi:2008}. The widely used information criteria AIC and BIC can
also be implemented, although the degrees of freedom cannot be calculated
exactly; \cite{Ma:2010} demonstrates that the number of nonzero
estimates serves as a good approximation in the multivariate
Bernoulli logistic model. Several efficient algorithms have been
proposed to optimize problem \eqref{mvb:lasso}; for example, the
LASSO-pattern search of \cite{Shi:2008} can handle a large number of
unknowns provided that at most a modest number are nonzero. Recently,
\cite{Shi:2012} has extended the algorithm of \cite{Shi:2008} to the
scale of multi-millions of unknowns, and coordinate descent
\cite{Friedman:2010} has also proven fast for large-$p$, small-$n$ problems.
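As a small illustration of how the $l_1$ penalty in \eqref{mvb:lasso} produces exact zeros, the following proximal-gradient (ISTA-style) sketch applies soft-thresholding to the intercept-only toy model (step size and $\lambda$ are arbitrary illustrative choices):

```python
from math import exp

# ISTA-style proximal gradient on the l1-penalized intercept-only K = 2
# model: gradient step on the smooth likelihood part, then soft-thresholding
# (the proximal operator of the l1 penalty).  Step size and lambda are
# illustrative choices, not tuned values.
configs = [(0, 0), (0, 1), (1, 0), (1, 1)]
stats = lambda y: (y[0], y[1], y[0] * y[1])

def smooth_grad(f, T_bar):
    w = [exp(sum(fi * ti for fi, ti in zip(f, stats(y)))) for y in configs]
    Z = sum(w)
    m = [sum(w[i] / Z * stats(configs[i])[j] for i in range(4)) for j in range(3)]
    return [mj - tj for mj, tj in zip(m, T_bar)]

def soft(x, t):  # proximal operator of t * |x|
    return max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0)

freq = [0.4, 0.2, 0.3, 0.1]          # hypothetical outcome frequencies
T_bar = [sum(freq[i] * stats(configs[i])[j] for i in range(4)) for j in range(3)]

lam, step = 5.0, 0.5
f = [0.0, 0.0, 0.0]
for _ in range(100):
    g = smooth_grad(f, T_bar)
    f = [soft(fj - step * gj, step * lam) for fj, gj in zip(f, g)]

# A large enough penalty drives every coefficient exactly to zero.
assert all(fj == 0.0 for fj in f)
print("lambda = 5 thresholds all natural parameters to exactly zero")
```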
\subsection{Smoothing spline ANOVA model}
The smoothing spline model has gained popularity in non-linear statistical
inference since it was proposed in \cite{Craven:1978} for univariate
predictor variables. More importantly, multiple smoothing spline models
for generalized linear models enable researchers to study complex real
world data sets with increasingly powerful computers as described in
\cite{Wahba:1995}.
As a member of the exponential family, the multivariate Bernoulli
distribution can be formulated under smoothing spline ANOVA framework.
\cite{Gao:2001} considers the smoothing spline ANOVA multivariate
Bernoulli model but the interactions are restricted to be constant.
However, in general the natural parameters or linear predictors $f$'s
can be relaxed to reside in a reproducing kernel Hilbert space. That is
to say, for the observed predictor vector $x$, we have
\begin{equation}
\label{mvb:splineANOVA} f^\tau(x) = \eta^\tau(x)\qquad \mbox{with }
\eta^\tau\in\mathcal{H}^\tau , \tau\in\mathcal{T},
\end{equation}
where $\mathcal{H}^\tau$ is a reproducing kernel Hilbert space and the
superscript $\tau$ allows a more flexible model such that the natural
parameters can come from different reproducing kernel Hilbert spaces.
Further, $\mathcal{H}^\tau$ can be formulated to have several
components to handle multivariate predictor variables, that is $\mathcal
{H}^\tau= \oplus_{\beta=0}^p\mathcal{H}^\tau_\beta$ and details can be
found in \cite{Gu:2002}.
As a result, $\eta^\tau$ is estimated from the variational problem
\begin{equation}
\label{mvb:splineTarget} \mathcal{I_\lambda}(x, y) = \frac{1}{n}
\sum_{i=1}^nl\bigl(y(i),\bolds{\eta }
\bigl(x(i)\bigr)\bigr) + \lambda J(\bolds{\eta}),
\end{equation}
where $\bolds{\eta}$ is the vector form of $\eta^\tau$'s. The penalty
is seen to be
\begin{eqnarray}
\label{mvb:penalty} \lambda J(\bolds{\eta}) = \lambda\sum
_{\tau\in\mathcal{T}}\theta_\tau ^{-1}\bigl\Vert P_1^\tau
\eta^\tau\bigr\Vert^2
\end{eqnarray}
with $\lambda$ and $\theta_\tau$ being the smoothing parameters. This
is an over-parameterization adopted in \cite{Gu:2002}, as what really
matters are the ratios $\lambda/\theta_\tau$. The functional $P_1^\tau$
projects function $\eta^\tau$ in $\mathcal{H}^\tau$ onto the smoothing
subspace $\mathcal{H}_1^\tau$.
By the smoothing spline ANOVA arguments in \cite{Gu:2002}, the
minimizer $\eta^\tau$ has the representation given in \cite{Wahba:1990},
\begin{equation}
\label{mvb:lineareta} \eta^\tau(x) = \sum_{\nu= 1}^md_\nu^\tau
\phi_\nu^\tau(x) + \sum_{i=1}^nc_i^\tau
R^\tau(x_i, x),
\end{equation}
where $\{\phi^\tau_\nu\}_{\nu=1}^m$ is a basis of $\mathcal{H}_0^\tau=
\mathcal{H}^\tau\ominus\mathcal{H}^\tau_1$, the null space
corresponding to the projection functional $P_1^\tau$. $R^\tau(\cdot,
\cdot)$ is the reproducing kernel for $\mathcal{H}^\tau_1$.
The variational problem \eqref{mvb:splineTarget} utilizing the
smoothing spline ANOVA framework can be solved by iterative re-weighted
least squares \eqref{mvb:step} due to the linear formulation \eqref
{mvb:lineareta}. More on tuning and computations including software
will appear in \cite{Dai:2012}.
\section{Conclusion} \label{mvb:conclusion}
We have shown that the multivariate Bernoulli distribution, as a member
of the exponential family, provides a way to formulate the graph structure of
binary variables. It not only models the main effects and pairwise
interactions, as the Ising model does, but is also capable of estimating
higher-order interactions. Importantly, the independence structure of
the graph can be modeled via the significance of the natural parameters.
The most interesting observation of the multivariate Bernoulli
distribution is its similarity to the multivariate Gaussian
distribution. Both of them have the property that independence and
uncorrelatedness of the random variables are equivalent, which is
generally not true for other distributions. In addition, the marginal
and conditional distributions of a subset of variables still follow the
multivariate Bernoulli distribution.
Furthermore, the multivariate Bernoulli logistic model extends the
distribution to a generalized linear model framework to include effects
of predictor variables. Under this model, traditional statistical
inference such as point estimation, hypothesis testing and confidence
intervals can be implemented as discussed in \cite{McCullagh:1989}.
Finally, we consider two extensions to the multivariate Bernoulli
logistic model. First, the variable selection technique using LASSO can
be incorporated to enable finding important patterns from a large
number of candidate covariates. Secondly, the smoothing spline ANOVA
model is introduced to capture non-linear effects of the predictor
variables at the node, edge and clique levels.
\begin{appendix}\label{app}
\section*{Appendix: Proofs}\label{appendix:proof}
\begin{pf*}{Proof of Proposition~\ref{mvb:BBmarginal}}
With the joint density function of the random vector $(Y_1, Y_2)$, the
marginal distribution of $Y_1$ can be derived
\begin{eqnarray*}
P(Y_1 = 1) &=& P(Y_1 = 1, Y_2 = 0) +
P(Y_1 = 1, Y_2 = 1)
\\
&=& p_{10} + p_{11}.
\end{eqnarray*}
Similarly,
\[
P(Y_1 = 0) = p_{00} + p_{01}.
\]
Combining the side condition of the parameters $p$'s,
\[
P(Y_1 = 1) + P(Y_1 = 0) = p_{00} +
p_{01} + p_{10} + p_{11} = 1.
\]
This demonstrates that $Y_1$ follows the univariate Bernoulli
distribution and its density function is~\eqref{mvb:BBmarginal}.
Regarding the conditional distribution, notice that
\begin{eqnarray*}
P(Y_1 = 0 | Y_2 = 0) &=& \frac{P(Y_1 = 0, Y_2 = 0)}{P(Y_2 = 0)}
\\
&=& \frac{p_{00}}{p_{00} + p_{10}},
\end{eqnarray*}
and the same process can be repeated to get
\[
P(Y_1 = 1 | Y_2 = 0) = \frac{p_{10}}{p_{00} + p_{10}}.
\]
Hence, it is clear that with condition $Y_2 = 0$, $Y_1$ follows a
univariate Bernoulli distribution as well. The same scenario can be
examined for the condition $Y_2 = 1$. Thus, the conditional
distribution of $Y_1$ given $Y_2$ is given as \eqref{mvb:BBconditionalpdf}.
\end{pf*}
\begin{pf*}{Proof of Lemma~\ref{mvb:biind}}
Expand the log-linear formulation of the bivariate Bernoulli
distribution~\eqref{mvb:BBloglinear} into factors
\begin{equation}
\label{proof:biind}P(Y_1 = y_1, Y_2 =
y_2) = p_{00} \exp\bigl(y_1f^1
\bigr) \exp \bigl(y_2f^2\bigr) \exp\bigl(y_1y_2f^{12}
\bigr).
\end{equation}
It is not hard to see that when $f^{12} = 0$, the density function
\eqref{proof:biind} is separable into two factors, one involving only $y_1$
and the other only $y_2$. Therefore, the two random variables corresponding
to the formula are independent. Conversely, when $Y_1$ and $Y_2$ are
independent, their density function must be separable in terms of
$y_1$ and $y_2$, which implies $y_1y_2f^{12} = 0$ for all possible
values of $y_1$ and $y_2$. This forces $f^{12}$ to be zero.
\end{pf*}
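The lemma can also be illustrated numerically (an enumeration sketch; parameter values are arbitrary):

```python
from math import exp

# Numerical illustration (arbitrary parameter values): the joint pmf
# p(y1, y2) proportional to exp(y1*f1 + y2*f2 + y1*y2*f12) factorizes into
# the product of its marginals exactly when f12 = 0.
def pmf(f1, f2, f12):
    w = {(y1, y2): exp(y1 * f1 + y2 * f2 + y1 * y2 * f12)
         for y1 in (0, 1) for y2 in (0, 1)}
    Z = sum(w.values())
    return {y: wy / Z for y, wy in w.items()}

def is_independent(p, tol=1e-12):
    m1 = {a: p[(a, 0)] + p[(a, 1)] for a in (0, 1)}   # marginal of Y1
    m2 = {b: p[(0, b)] + p[(1, b)] for b in (0, 1)}   # marginal of Y2
    return all(abs(p[(a, b)] - m1[a] * m2[b]) < tol
               for a in (0, 1) for b in (0, 1))

assert is_independent(pmf(0.4, -1.1, 0.0))        # f12 = 0: independent
assert not is_independent(pmf(0.4, -1.1, 0.7))    # f12 != 0: dependent
print("independence holds exactly when f^12 = 0")
```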
\begin{pf*}{Proof of Lemma~\ref{mvb:transform}}
In the log-linear formulation \eqref{mvb:MBloglinear}, the
natural parameters $f$'s multiply products of components
of $y$. Let us match the term $f^{j_1\cdots j_r}B^{j_1\cdots
j_r}(y)$ from the log-linear formulation \eqref{mvb:MBloglinear} with the
coefficient of the corresponding product $y_{j_1}\cdots y_{j_r}$
in \eqref{mvb:pdf}. The exponents of the $p$'s in \eqref{mvb:pdf} can be
expanded into sums of the products $B^\tau(y)$ with $\tau\in
\mathcal{T}$; all the $p$'s with $y_{j_1},\ldots, y_{j_r}$ in the
exponent contribute to $f^{j_1\cdots j_r}$, so all the positions other
than $j_1,\ldots, j_r$ must be zero. Furthermore, the $p$'s whose exponents
contain $y_{j_1}\cdots y_{j_r}$ with a positive sign appear in the
numerator of $\exp[f^{j_1\cdots j_r}]$, and the product is positive only
if there is an even number of $0$'s in the positions $j_1, \ldots, j_r$.
The same reasoning applies to the $p$'s with negative products in their exponents.
Moreover, notice that $p_{00\cdots0} = \exp[-b(\mathbf{f})]$ and
\begin{eqnarray}
\exp\bigl[S^{j_1\cdots j_r}\bigr] &=& \exp\biggl[\sum
_{1\le s\le r} f^{j_s} + \sum_{1\le s<t\le r}
f^{j_sj_t} + \cdots+ f^{j_1j_2\cdots j_r}\biggr]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=& \prod_{1\le s\le r}\exp\bigl[f^{j_s}\bigr]
\prod_{1\le s<t\le r} \exp \bigl[f^{j_sj_t}\bigr]\cdots
\exp\bigl[f^{j_1j_2\cdots j_r}\bigr]
\end{eqnarray}
Applying the formula for $\exp[f^{j_1\cdots j_r}]$, with cancellation
of terms in the numerators and denominators, \eqref{mvb:S} can
then be verified.
Finally, \eqref{mvb:p} is a trivial extension of \eqref{mvb:S} by
exchanging the numerator and the denominator.
\end{pf*}
\begin{pf*}{Proof of Theorem~\ref{mvb:independence}}
Here we make use of the moment generating function \eqref{mvb:mgf}, but
it is also possible to work directly with the probability density
function \eqref{mvb:pdf}. The mgf can be rewritten as
\begin{equation}
\label{mvb:mgf2} \psi(\mu_1,\ldots, \mu_K) =
\frac{1}{\exp[b(\mathbf{f})]}\sum_{r=1}^K\sum
_{j_1\le j_2\le\cdots\le j_r} \exp\bigl[S^{j_1j_2\cdots
j_r}\bigr]\prod
_{k=1}^r\exp [\mu_{j_k} ].
\end{equation}
It is not hard to see that this is a polynomial in the unknowns
$\exp(\mu_k)$ for $k=1,\ldots, K$. Independence of the
random variables $Y_1, Y_2, \ldots, Y_K$ is equivalent to the statement that~\eqref
{mvb:mgf2} separates into factors, one for each $\mu_k$ or, equivalently,
each $\exp(\mu_k)$.
$(\Rightarrow)$ If the components of the random vector $Y$ are independent,
the moment generating function must be separable; assume it has the form
\begin{equation}
\label{mvb:mgf3} \psi(\mu_1, \ldots, \mu_K) = C\prod
_{k = 1}^K \bigl(\alpha_k +
\beta_k \exp [\mu_k]\bigr),
\end{equation}
where $\alpha_k$ and $\beta_k$ are functions of the parameters $S$ and
$C$ is a constant. Expanding \eqref{mvb:mgf3} into a polynomial in
$\exp[\mu_k]$ and matching the corresponding coefficients yields \eqref
{mvb:independence1} and \eqref{mvb:independence2}.
$(\Leftarrow)$ Suppose \eqref{mvb:independence2} holds, then we have
\begin{eqnarray*}
\exp\bigl[S^{j_1j_2\cdots j_r}\bigr] = \prod_{k=1}^r
\exp\bigl[f^{j_k}\bigr],
\end{eqnarray*}
and as a result, the moment generating function can be decomposed into a
product of factors in $\exp[\mu_k]$ as in \eqref{mvb:mgf3}, with the
following relations
\begin{eqnarray*}
C &=& \frac{1}{\exp[b(\mathbf{f})]},
\\
\alpha_k &=& 1,
\\
\beta_k &=& \exp\bigl[f^k\bigr].
\end{eqnarray*}
\upqed\end{pf*}
\begin{pf*}{Proof of Theorem~\ref{mvb:independence_group}}
The idea behind the proof of group independence for multivariate Bernoulli
variables is similar to that of Theorem~\ref{mvb:independence}. Instead of
decomposing the moment generating function into a product over the
individual $\mu_k$, we only have to separate it into groups, each involving
only the mutually dependent random variables. That is to say, when the nodes
split into two mutually independent groups, the moment generating function
of the multivariate Bernoulli should have the form
\begin{eqnarray*}
&&\psi(\mu_1, \ldots, \mu_K)\\
&&\qquad = \bigl(\alpha_0
+ \alpha_1\exp[\mu_1] + \cdots + \alpha_r
\exp[\mu_r]\bigr)\cdot\bigl(\beta_0 + \beta_1
\exp[\mu_{r+1}] + \cdots+ \beta_s\exp[\mu_{K}]
\bigr).
\end{eqnarray*}
Matching the corresponding coefficients of this separable moment
generating function and the natural parameters leads to the conclusion
\eqref{mvb:ind_group}.
\end{pf*}
\end{appendix}
\section*{Acknowledgements}
Research of all three authors was supported in part by NIH Grant EY09946
and NSF Grant DMS-09-06818.
\section{Introduction}
The free energy of mono-domain ferromagnetic particles depends on the
relative orientation of the magnetization with respect to the crystal
lattice. This magnetic anisotropy results from the combination of
Coulomb repulsion favoring spin polarization, spin-orbit coupling
(SOC), and the crystal field breaking the orbital rotation invariance.
As a result, the orbital moment of magnetic atoms and their magnetic
anisotropy energy (MAE) depend strongly on their atomic coordination
\cite{Gambardella03,Canali07}.
The transport counterpart of MAE is
anisotropic magneto-resistance (AMR), i.e., the dependence of the resistance
on the angle $\theta$ between the magnetization and the current flow.
Whereas bulk AMR has been known since the nineteenth century and is a rather
small effect, the recent observation of AMR in a variety of low dimensional systems
\cite{Gould04,Ruster05,Giddings05,Saito07,Moser07,Grigorenko06,Natelson-BAMR,Ralph-BAMR,Viret-BAMR,Nature07}, largely exceeding
bulk values, has opened a new research venue in the field of spin-polarized
quantum transport. Very large AMR has been reported in planar tunnel
junctions (TAMR) with a variety of electrode and barrier materials
\cite{Gould04,Ruster05,Giddings05,Saito07,Moser07,Grigorenko06}. Enhanced AMR has also been observed in atomic sized contacts,
both in the tunnel regime (TAMR) and in the contact (or ballistic\cite{ballistic})
regime (BAMR)\cite{Velev05}, for Py \cite{Ralph-BAMR}, Fe \cite{Viret-BAMR}, Ni
\cite{Natelson-BAMR}, and Co \cite{Nature07}. Additionally, GaMnAs islands in
the Coulomb Blockade regime show electrically tunable AMR (CB-AMR)
\cite{Wunderlich}. Thus, a wide range of nanostructures made from different
materials display enhanced AMR.
Here we focus on AMR in atomic-size conductors for several reasons. On the one hand,
conductance of atomic-sized contacts probes the electronic structure of the apex atoms.
These have coordination different from bulk and thus present different orbital and spin
magnetic moments \cite{Pt:2005}, and enhanced magnetic anisotropy
\cite{Gambardella03,Canali07,Bruegel06}, which might be probed by BAMR. On the other hand,
nanocontacts allow the study of AMR from the contact (BAMR) to the tunnel (TAMR) regime
in the same system, as shown for both Ni and Py \cite{Ralph-BMR,Ralph-BAMR}. Ni
nanocontacts have also been used as electrodes to explore the Coulomb Blockade and the
Kondo regimes \cite{Ralph-Science}.
The crux of the matter is to identify the necessary and sufficient conditions
to expect large values of AMR in quantum transport. Here we consider two
different transport regimes, coherent and sequential.
In the coherent regime we use the Landauer formalism that, at zero temperature,
relates the zero-bias conductance $G$ to the quantum mechanical transmission of the electrons at the
Fermi energy, $G=\frac{e^2}{h}T(\epsilon_F)$. This approach accounts for AMR
both in the tunneling regime (TAMR)\cite{TAMR-theory} and in the contact or
ballistic regime (BAMR)\cite{Velev05} in the absence of sharp resonances near
the Fermi energy. In the scattering-free case of perfect 1D chains,
$T(\epsilon_F)$ is simply given by the number of bands at the Fermi energy
${\cal N}(\epsilon_F)$. Because of the SOC, ${\cal N}(\epsilon_F)$ for
ferromagnetic 1D transition metal chains
\cite{Velev05,Viret-BAMR,Bruegel06,Nature07} depends on the angle $\theta$
between the chain axis and the magnetization, and this leads naturally to
stepwise $G(\theta)$ curves.
However, the idealized scattering-free picture fails to account for the
experimental results of conductance in metallic nanocontacts, for which
scattering channels are not perfectly conducting \cite{NMNC}. According
to the scattering-free theory, the conductance of atomic contacts of Ni,
in units of $e^2/h$, would be 6 or 7 \cite{Velev05}, in quantitative
{\em disagreement} with the measured \cite{Untiedt:prb:04} conductance
of Ni nanocontacts of around $3 e^2/h$. The same applies to Fe, Co and Pt. Scattering
definitely affects $d$-bands, which suffer the so called {\it orbital blocking}
\cite{Jacob05}.
Here we present calculations of BAMR {\em plus} scattering. This
approach also permits calculation of the crossover from BAMR to TAMR. We find that
in the coherent regime large AMR is related to the orbital polarization of the
current. TAMR has been linked to the anisotropy of the density of states at
$\epsilon_F$, which turns out to be large in Ni chains. Unexpectedly, this does
not lead to a large value of TAMR, the reason being that the current is not
orbitally polarized in this limit.
In the sequential regime, valid to describe systems that feature transport
through resonant levels of width $\Gamma$ smaller than the temperature
$kT$\cite{Been}, we find enhanced AMR, regardless of the orbital polarization
of the current, if the chemical potential $\epsilon_F$ of the ferromagnetic
electrode crosses a resonance as it varies due to change of the magnetization
angle. This situation occurs in a single electron transistor with ferromagnetic
electrodes \cite{Wunderlich,CBAMR-theory}. Resonances might also occur in the
tip atoms of Ni nanocontacts in the tunneling regime\cite{Burton07}.
This paper is organized as follows: In Sec. \ref{sec:model} we introduce the
model system and the theoretical method for the calculation of AMR in magnetic
nanocontacts. In Sec. \ref{sec:contact} we calculate the AMR in the contact or
ballistic regime (BAMR). In Sec. \ref{sec:tunnel}
we treat AMR in the {\it coherent} tunneling regime (TAMR), while in Sec.
\ref{sec:sequential} we treat AMR in the {\it sequential} tunneling regime.
Finally, in Sec. \ref{sec:summary} we conclude the paper summarizing the main
results.
\section{Model and Methodology}
\label{sec:model}
As a model system for ferromagnetic nanocontacts we consider two semi-infinite
Ni 1D chains with lattice parameter $a$, separated by a gap $d>a$, as shown in
Fig. \ref{fig:model}. This model shares most of the relevant features with
realistic nanocontact models, like e.g. the low coordination of the tip atoms
of the two electrodes and elastic electron scattering due to the gap. On the
other hand, the one-dimensionality and the resulting rotational invariance of our
model considerably simplify the calculation of the SOC and the interpretation
of the results. Such one-dimensional models have been employed before to study
fundamental properties of atomic-size nanocontacts
\cite{Delin,Smogunov,DalCorso}.
\begin{figure}[t]
\includegraphics[width=\linewidth]{model.eps}
\caption{
1D Model of Ni nanocontact: Elastic electron scattering in the contact
region of a real nanocontact is mimicked by a gap $d>a$ between two
semi-infinite 1D Ni chains with lattice spacing $a$.
}
\label{fig:model}
\end{figure}
We calculate the electronic structure of the system using a combination of density functional
theory (DFT) in the local spin density approximation (LSDA) and a Green's function technique
to account for the fact that, when $d\neq a$, the system is not translationally invariant. We
split the system into 3 regions, left (L) and right (R) electrodes, described as semi-infinite
Ni chains, and the central region (C) containing the 3 innermost atoms of each electrode, as
shown in Fig. \ref{fig:model}.
The electronic structure of both the electrodes and the central region is
described using effective one-body Hamiltonians obtained from {\it ab initio}
calculations, performed with CRYSTAL03\cite{CRYSTAL03} at the LSDA level, and
using a localized atomic orbital minimal basis set. CRYSTAL03, which does not
include SOC, yields spin-polarized solutions along an arbitrary axis with
majority and minority electrons. The SOC term $\hat{H}_{\rm
SO}=\lambda\vec{L}\cdot\vec{S}$ is added to the converged self-consistent LSDA
Hamiltonian $\hat{H}_{\rm LSDA}$:
\begin{equation}
\label{eq:hamiltonian}
\hat{H} = \hat{H}_{\rm LSDA} + \hat{H}_{\rm SO}.
\end{equation}
This post self-consistent approach \cite{Jaime,Velev05} is justified in the case
of Ni, for which the SOC is much smaller than the exchange splittings and the
bandwidths. We take $\lambda = 70$~meV for the Ni $3d$-orbitals.
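To make the structure of $\hat{H}_{\rm SO}$ concrete, the following pure-Python sketch builds $\lambda\vec{L}\cdot\vec{S}$ for $l=2$, $s=1/2$ in the $|m_l, m_s\rangle$ product basis and checks it against the exact atomic $j=5/2$/$j=3/2$ splitting (an illustration only; the actual calculation adds this term in the self-consistent LSDA orbital basis):

```python
from math import sqrt

# Toy construction of H_SO = lambda * L.S for l = 2 (d shell), s = 1/2 in the
# |m_l, m_s> product basis (10 states); pure-Python illustration only -- the
# real calculation adds this term in the self-consistent LSDA orbital basis.
lam = 0.070                      # eV, the value quoted for the Ni 3d shell
l = 2
basis = [(ml, ms) for ml in (2, 1, 0, -1, -2) for ms in (0.5, -0.5)]
n = len(basis)
H = [[0.0] * n for _ in range(n)]
for i, (ml, ms) in enumerate(basis):
    H[i][i] = lam * ml * ms                       # Lz*Sz part
    if ms == 0.5 and ml < l:                      # (L+S- + L-S+)/2 part
        j = basis.index((ml + 1, -0.5))
        H[i][j] = H[j][i] = 0.5 * lam * sqrt((l - ml) * (l + ml + 1))

# The "stretched" state |m_l = 2, up> is an eigenstate with eigenvalue
# lambda * l * s = lambda.
column = [H[i][0] for i in range(n)]
assert abs(column[0] - lam) < 1e-12
assert all(abs(c) < 1e-12 for c in column[1:])

# Trace identities fixed by the multiplets j = 5/2 (6 states, +lambda) and
# j = 3/2 (4 states, -1.5*lambda): Tr H = 0 and Tr H^2 = 15*lambda^2.
trace = sum(H[i][i] for i in range(n))
trace2 = sum(H[i][j] * H[j][i] for i in range(n) for j in range(n))
assert abs(trace) < 1e-12
assert abs(trace2 - 15 * lam ** 2) < 1e-12
print("lambda L.S reproduces the atomic j = 5/2 / j = 3/2 splitting")
```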
The Green's function of the central region is obtained by means of the
so-called {\it partitioning} technique:
\begin{equation}
\label{eq:GC}
\hat{G}_C(E) = (E-\hat{H}_C-\hat\Sigma_L(E)-\hat\Sigma_R(E))^{-1},
\end{equation}
where $\hat{H}_C$ is the total Hamiltonian (including SOC) of the C region, and
$\hat\Sigma_L$ and $\hat\Sigma_R$ are self-energies that take into account the
coupling of the central region to the two electrodes
\cite{Datta,Jacob:thesis}.
In the coherent regime, i.e. at low temperatures and small bias
voltages where inelastic scattering events can be neglected, we use
the Landauer formalism, which relates the conductance of the system
to the transmission function.
The transmission function in turn can be calculated by means of the
Caroli expression from the Green's function of the central
region, eq. (\ref{eq:GC}), and the so-called coupling matrices,
$\hat\Gamma_L(E):=i(\hat\Sigma_L-\hat\Sigma_L^\dagger)$ and
$\hat\Gamma_R(E):=i(\hat\Sigma_R-\hat\Sigma_R^\dagger)$, of
the electrodes \cite{Caroli}:
\begin{equation}
\label{eq:transm}
T(E) = {\rm Tr\,}[ \hat{G}_C(E) \, \hat\Gamma_L(E) \, \hat{G}_C^\dagger(E)
\, \hat\Gamma_R(E) ].
\end{equation}
The zero-bias conductance is then given by $G=\frac{e^2}{h}T(\epsilon_F)$. The
orbital projected density of states $\rho_\alpha$ and the density of states of
the central region $\rho$ can be calculated from the Green's function:
$\rho_\alpha(E)=-\frac{1}{\pi}{\rm Im}[G_{\alpha\alpha}(E)]$
and $\rho(E)=-\frac{1}{\pi}{\rm Im \, Tr}[\hat{G}_C(E)]$.
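As a toy illustration of this machinery (a scalar, single-orbital model rather than the Ni chain), the sketch below evaluates the Caroli formula for a single site coupled to two semi-infinite 1D tight-binding leads, whose retarded surface self-energy is $\Sigma(E) = \bigl(E - i\sqrt{4t^2 - E^2}\bigr)/2$ inside the band:

```python
from cmath import sqrt as csqrt

# Scalar Caroli formula for one site (on-site energy eps0) coupled to two
# semi-infinite 1D tight-binding leads with hopping t; a toy illustration,
# not the full orbital calculation of the text.
def transmission(E, eps0=0.0, t=1.0):
    sigma = 0.5 * (E - 1j * csqrt(4 * t * t - E * E))  # retarded lead self-energy
    gamma = 1j * (sigma - sigma.conjugate())           # Gamma = i(Sigma - Sigma+)
    G = 1.0 / (E - eps0 - 2.0 * sigma)                 # Green's function of the site
    return abs(gamma) ** 2 * abs(G) ** 2               # T = Gamma_L |G|^2 Gamma_R

# A perfect chain (eps0 = 0) transmits one full channel inside the band,
assert abs(transmission(0.5) - 1.0) < 1e-12
# while an on-site scatterer suppresses the transmission below unity:
# T = (4t^2 - E^2) / (eps0^2 + 4t^2 - E^2).
assert abs(transmission(0.5, eps0=0.5) - 0.9375) < 1e-12
print("toy Caroli transmission check passed")
```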
\section{Contact regime}
\label{sec:contact}
\begin{figure}
[hbt]
\includegraphics[width=\linewidth]{fig-ballistic.eps}
\caption{
(a) Transmission for an ideal infinite Ni chain ($d=a$, upper curves)
and for two semi-infinite Ni chains separated by $d=1.3a$ (lower curves) for
magnetization parallel (black) and perpendicular (grey) to the chain axis.
(b) Zero-bias conductance $G$ as a function of $d$ for magnetization parallel
(black) and perpendicular (grey) to the chain axis. (c) BAMR (grey boxes) and OPC
(black circles) as a function of $d$. (d) $G$ as a function of
$\theta$ for different values of $d/a$: 1.0 (full black boxes), 1.1 (grey
circles), 1.2 (black full circles), 1.3 (grey triangles) and 1.4 (black
full triangles).
}
\end{figure}
\subsection{The ideal chain}
Magnetic anisotropy arises from the combined action of the crystal field, which
breaks orbital rotational invariance, and the atomic SOC term, which couples the
spin polarization to the orbital degrees of freedom.
The electronic structure of the ideal one-dimensional Ni chain ($d=a$) presents
a number of features common to 3d and 4d transition
metals\cite{Bruegel06,Wierzbowska:prb:05} and helps in understanding the transport
results for $d\neq a$. The bands close to the Fermi energy are formed by $s$
and $d$ orbitals. In the absence of SOC, rotational invariance around the chain
axis permits classification of the $d$ orbitals according to the projection of their
angular momentum along the chain direction, $m_z$. On top of that, a weak
crystal field splits the otherwise degenerate $d$ levels into two doublets
$E_1$ (linear combination of states with $m_z=\pm 1$) and $E_2$ ($m_z=\pm 2$)
and a singlet $A_1$ ($m_z=0$), which is hybridized with the $s$ orbital. The
orbital degeneracy of the doublets is preserved in the bands of the chain as long
as SOC is absent. The bandwidth of the $E_2$ doublet is significantly smaller than
that of $E_1$ due to the smaller overlap of the $E_2$ orbitals. As a result,
the $E_2$ bands yield a higher density of states.
The combined action of SOC and magnetism alters this situation
\cite{Velev05,Bruegel06}. When the magnetization is pointing perpendicular to
the chain axis $(\theta=90^\circ)$, SOC acts as an effective magnetic field
coupled to $L_x$, which has to compete with the dominant $L_z^2$-like terms
arising from the crystal field. As a result, the bands for
$\theta=90^\circ$ look very similar to those without SOC, except at the points
where bands with quantum numbers $(m_z, \sigma)$ and $(m_z\pm1, \sigma\mp1)$ intersect, which are far
away from $\epsilon_F$ in the case of Ni. Therefore, for $\theta=90^\circ$ the
effect of SOC on transport is negligible. In contrast, when magnetization is
pointing along the chain axis $(\theta=0^\circ)$, SOC shifts the bands by an
amount $\lambda m_z \sigma$, where $m_z$ and $\sigma$ are the projections of
the orbital and spin momentum along the chain. As a result, the $E_2$ and $E_1$
orbital doublets are split so that one of the 2 minority $E_2$ bands is
shifted below the Fermi energy, compared to the $\theta=90^\circ$ case. This
can be seen in the stepwise curves in Fig. 2a, that correspond to
$T_{\|}(E)\equiv T(E,\theta=0^\circ)$ and $T_\bot(E)\equiv
T(E,\theta=90^\circ)$ for the ideal chain. At the Fermi energy, $T_\bot(E)\neq
T_\|(E)$. This change is responsible for BAMR\cite{Velev05}, defined as
BAMR$\equiv \frac{\Delta G}{G_\bot}\times 100$ where $\Delta G\equiv
G_\bot-G_\|$.
The interplay between SOC and magnetization results in a non-zero orbital
moment {\em density} along the magnetization direction.
The largest orbital moment occurs when $\theta=0^\circ$, i.e. when the magnetization is
along the chain \cite{Bruegel06}. The
{\em orbital polarization of the current} (OPC), defined as
\begin{equation}
{\rm OPC} \equiv \frac{\sum_{m>0} \left( T_m-T_{-m} \right)}{T(E)}
\end{equation}
where $T_m$ is the transmission of the $d$-orbitals with $m=\pm 2$ or $m=\pm 1$
along the chain direction, vanishes when $\theta=90^\circ$ but is {\em
non-zero} when $\theta=0^\circ$. Interestingly, there is a perfect one-to-one
correspondence between the OPC and the BAMR in the case of the ideal chain
without scattering. It is also apparent that the existence of an orbital
magnetic moment is not a sufficient condition for having a non-zero OPC, very
much like spin polarization does not necessarily imply a spin-polarized current
\cite{Jacob05}.
\subsection{The effect of weak scattering}
We now examine how elastic scattering, controlled by the chain separation $d$,
affects BAMR. The stretched bond mimics the contact region. This perturbation
preserves the axial symmetry of the ideal chain but introduces scattering. As a
consequence $T(E)$, shown in Fig. 2a, is no longer quantized, as expected
\cite{Jacob05,Untiedt:prb:04}, and yet the BAMR (Fig. 2c) remains close to that of
the ideal case for values of $d/a\le1.4$. Relatedly, the $G(\theta)$ curve is
no longer stepwise (as it is for the ideal chain) when scattering is
included (Fig. 2d). On the other hand, the $G(\theta)$ curve also differs
from the bulk behavior, where $G(\theta)\propto\cos^2\theta$. The quantized step in
the ideal case ($d=a$), which corresponds to the critical angle at which the $E_2$
band is pushed below the Fermi energy \cite{Velev05}, becomes progressively
smoother as the gap between the chains increases. Our $G(\theta)$ curves
including scattering agree with those of the experiments \cite{Viret-BAMR}.
This is one of the important results of the model.
As $d$ increases, the scattering increases and $G$ goes down; interestingly, however, the BAMR
signal first \emph{increases} slightly for $d/a\le1.3$ before finally going down with
increasing scattering. The initial increase in BAMR is related to the initially stronger
decrease of the contribution of the $A_1$ channel to the conductance. The $A_1$ channel is
not affected by the SOC, in contrast to the $E_2$ channels, which are mainly responsible
for the BAMR signal. The decrease of the BAMR signal for larger values of $d$ is expected
within the framework of our model, since the relative contribution to the conductance of
the $d$ channels compared to the $s$ channel decreases as the gap opens. The reason for
this is the shorter spread of the $d$ orbitals compared to the $s$ orbitals \cite{Jacob05}.
Relatedly, the OPC (Fig. 2c) also decreases as $d$ increases. Removing the contribution of
the $s$ channel to the conductance would thus enhance BAMR. This could be accomplished, e.g.,
by oxidation of the contact \cite{Jacob06}.
\section{Coherent tunneling regime}
\label{sec:tunnel}
In this section we study the anisotropic magnetoresistance in the regime of weakly coupled
semi-infinite chains. In Figs. 3a and 3c we plot the Landauer transmission $T(E)$ (calculated
from the Caroli expression, eq. (\ref{eq:transm})) for $d=4a$, well within the tunnel regime,
and the density of states (DOS) projected onto the tip atom of a semi-infinite
Ni chain, both for $\theta=0^\circ$ ($\rho_\|(E)$) and
$\theta=90^\circ$ ($\rho_\bot(E)$). The very small transmission is dominated by
the $s$ channel, and is therefore quite independent of $\theta$. In contrast, the
DOS is very different for $\theta=0^\circ$ and $\theta=90^\circ$. The two-peak
structure around $\epsilon_F$ for $\theta=0^\circ$ is related to the split
$E_2$ bands, which merge when $\theta=90^\circ$. In Figs. 3b and 3d we plot
the zero-bias conductance $G(\theta)$ and the DOS at the Fermi level
$\rho(\epsilon_F,\theta)$ as a function of $\theta$. Whereas the maximal change
in the conductance is smaller than $1\%$, the change in the DOS exceeds 200$\%$.
This challenges the simplistic link between DOS and tunnel conductance.
\begin{figure}
[t]
\includegraphics[width=\linewidth]{fig-tunnel.eps}
\caption{
Tunnel regime ($d=4a$):
(a) Transmission function for magnetization angles $\theta=0^\circ$ (black) and $\theta=90^\circ$ (grey).
(b) Zero-bias conductance as a function of the magnetization angle $\theta$.
(c) DOS projected onto the tip atom as a function of energy for $\theta=0^\circ$ (black) and $\theta=90^\circ$ (grey).
(d) DOS projected onto the tip atom at the Fermi level as a function of $\theta$.
}
\end{figure}
In the tunneling regime the Landauer formula can be rewritten as (see appendix):
\begin{equation}
\label{eq:tunnel}
G_{\rm Tunnel}=\frac{4e^2}{h}\sum_{\alpha,\beta} |V_{\alpha\beta}|^2 \rho^L_\alpha(\epsilon_F) \rho^R_\beta(\epsilon_F)
\end{equation}
where $V_{\alpha\beta}$ is the matrix element of the Hamiltonian connecting the
$\alpha$ and $\beta$ atomic orbitals of the tip atoms of the two Ni chains and
$\rho^{L,R}_\alpha(\epsilon_F)$ is the orbital-resolved DOS at the Fermi
energy, i.e. the DOS projected onto an atomic orbital $\alpha$ of a tip atom.
Using this expression, the conductance calculated in Fig. 3a from the Caroli
expression is indeed nicely reproduced.
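A minimal numerical sketch of eq. (\ref{eq:tunnel}) illustrates the point made below: with a hypothetical hopping matrix dominated by the $s$-$s$ element, a large change of the $d$-resolved DOS barely affects the conductance. All numbers are illustrative assumptions, not computed matrix elements.

```python
# Sketch of eq. (eq:tunnel): G = (4 e^2/h) sum_{a,b} |V_ab|^2 rho^L_a rho^R_b.
# Hoppings and orbital-resolved DOS values are hypothetical, chosen to
# mimic the s-dominated situation discussed in the text.

def g_tunnel(V, rho_L, rho_R):
    """Tunnel conductance in units of 4 e^2/h."""
    return sum(abs(V[a][b]) ** 2 * rho_L[a] * rho_R[b]
               for a in range(len(rho_L)) for b in range(len(rho_R)))

# Orbitals ordered as (s, E2, E2'); the s-s hopping dominates at large d.
V = [[1e-2, 0.0, 0.0],
     [0.0, 1e-4, 0.0],
     [0.0, 0.0, 1e-4]]

rho_par  = [0.3, 0.5, 0.5]   # theta = 0: split E2 peaks, low d-DOS at eps_F
rho_perp = [0.3, 1.5, 1.5]   # theta = 90: merged E2 peak, d-DOS up by 200 %

G_par  = g_tunnel(V, rho_par,  rho_par)
G_perp = g_tunnel(V, rho_perp, rho_perp)
```

Here the $d$-resolved DOS triples while the conductance changes by well under $1\%$, because only the ($\theta$-independent) $s$-$s$ term matters.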
Note that the standard approximation, by which
the conductance is proportional to the product of the DOS
of the tip atoms,
$G\propto \sum_{\alpha}\rho^{L}_\alpha(\epsilon_F)
\sum_{\beta}\rho^{R}_\beta(\epsilon_F)$,
is obtained only if the $V_{\alpha\beta}$ matrix
is assumed to be proportional to the identity, i.e. the
tunneling matrix elements are assumed to conserve the orbital
index and to be equal in size.
However, this is far from being the
case when $d$ and $s$ orbitals are involved.
In fact, in the case considered here
the conductance is completely dominated
by the $V_{s,s}$ term, for which the orbitally-resolved DOS
$\rho^{L,R}_s$ is essentially independent of $\theta$.
As a result, the strong dependence of the global density of states on $\theta$
is {\em not} followed, in this case, by a strong dependence of the conductance
on $\theta$. Notice that since the transmission is dominated by the $s$
channel both the orbital polarization
of the current and the AMR are negligible. In general, an anisotropy in the DOS
is not a sufficient condition to have AMR.
The small variation of $G(\theta)$ in Fig. 3b can be traced back to the
variation of $\epsilon_F$ as a function of $\theta$ and the non-flat
$\rho_s^{L,R}(E)$. In Fig. 4a we plot $\epsilon_F(\theta)$ for a semi-infinite
Ni chain. $\Delta \epsilon_F\equiv \epsilon_F(0^\circ)-\epsilon_F(90^\circ)$
can be as large as 10 meV. This change leads naturally to the second scenario
for enhancement of the AMR, considered in the next section: in a situation of
resonant transport, the change of the chemical potential
$\epsilon_F$ as a function of the magnetization direction
can result in a large variation
of $G$, regardless of the degree of orbital polarization of the current. It has
been recently suggested that such resonances could arise as localized tip
states in Ni wires thicker than those considered here \cite{Burton07}.
\section{Sequential tunneling regime}
\label{sec:sequential}
In this section we consider a different scenario, motivated by recent
experiments\cite{Wunderlich} and by the remarks at the end of the previous
section. We study a single electron transistor (SET) with
Ni electrodes \cite{Ralph-Science,Liu06,Seneor07} and a non-magnetic central
island (CI) with a discrete electronic spectrum. The CI is weakly coupled to
the electrodes, so that the levels acquire a broadening $\Gamma$. The position
of these levels can be electrically tuned with a gate. Whenever a level of the
CI is in resonance with the Fermi energy the zero bias conductance of the system
has a maximum. We assume that both the level spacing of the CI
states, $\Delta E$, and the charging energy $E_Q$ are much larger than the
temperature $k_BT$ which is larger than $\Gamma$. Under these conditions, the
system is in the Coulomb Blockade regime.
In equilibrium the chemical potential of the central island and that of the
electrodes must be the same \cite{Been}: $\epsilon_F(\theta)=E_C+ \epsilon_N +
eV_G$, where $N$ is the number of electrons in the CI that satisfies this
condition and $\epsilon_N$ is the energy level occupied by the last electron.
From this equation we immediately see that the charge state of the central
island can be controlled both with the gate and with the orientation of the
magnetization of the electrodes \cite{Wunderlich,CBAMR-theory}. This effect is
reminiscent of the so-called magneto-Coulomb effect, in which the chemical
potential of the electrode is varied with the intensity of the applied field
\cite{MCB}. Here the chemical potential is changed instead by rotating the applied
field.
\begin{figure}[t]
\includegraphics[width=\linewidth]{fig-sequential.eps}
\caption{
(a) Variation of the Fermi energy of the Ni chain as a function of
$\theta$. (b) $G(V_G,\theta)$ for a SET device coupled to the Ni chains.
The curves are vertically shifted. (c) $G(V_G{=}5~{\rm meV},\theta)$ on a logarithmic
scale.
}
\end{figure}
In the $E_Q>k_BT>\Gamma$ situation, the linear conductance of the SET can be
obtained using either the finite-temperature Landauer approach \cite{Datta}
or the sequential transport theory \cite{Been}:
\begin{equation}
G=\frac{e^2}{h}\frac{\Gamma}{8 k_B T} \cosh^{-2}\left(\Delta/2k_B T\right),
\end{equation}
where $\Delta=E_N(V_G)+\frac{e^2}{2C}-\epsilon_F(\theta)$. In Fig. 4b we plot
$G(V_G,\theta)$ for a SET with Ni electrodes. We take $k_BT=5 \Gamma=0.5$ meV.
The gate is chosen so that, for $\theta=0^\circ$, the conductance is maximal. As
$\theta$ changes, the chemical potential of the electrodes moves away from the
peak. In Fig. 4c we plot $G(\theta)$ for the $V_G$ corresponding to the vertical
line in Fig. 4b. Notice the logarithmic scale and the huge AMR, which might
have practical applications. Notice also that crossing the conductance peak, either
by gate application or by magnetization rotation, implies charging the CI by one
electron \cite{Wunderlich,CBAMR-theory}. The results of Fig. 4b assume that
$\Gamma$ is independent of $\theta$, which is true as long as the resonant
level is not coupled to the $E_2$ and $E_1$ bands. The height of the $G(V_G)$
curves would depend on $\theta$ otherwise. In principle, a complete
characterization of the $G(V_G,\theta)$ curve would yield the
$\epsilon_F(\theta)$ and $\Gamma(\theta)$ functions, which would provide
valuable information about the electronic structure of the electrodes.
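The mechanism can be sketched numerically: combining the sequential-tunneling conductance formula above with an assumed angular dependence of $\epsilon_F(\theta)$ (a $\cos^2\theta$ form with the 10 meV scale quoted in the previous section; both the functional form and all numbers are illustrative assumptions) reproduces a conductance drop of many orders of magnitude as the magnetization is rotated off resonance.

```python
import math

# Sketch: sequential-tunneling conductance of the SET,
#   G = (e^2/h) (Gamma / 8 k_B T) cosh^{-2}(Delta / 2 k_B T),
# with Delta(theta) = eps_level - eps_F(theta).  The cos^2 form of
# eps_F(theta) and all numerical values are illustrative assumptions.

kT, Gamma = 0.5, 0.1      # meV; k_B T = 5 Gamma, as in the text
d_eps_F = 10.0            # meV; maximal shift eps_F(0) - eps_F(90)

def eps_F(theta_deg):
    """Assumed angular dependence of the electrode chemical potential."""
    return d_eps_F * math.cos(math.radians(theta_deg)) ** 2

def G_set(theta_deg, eps_level=eps_F(0.0)):
    """Conductance in units of e^2/h; gate tuned to resonance at theta = 0."""
    delta = eps_level - eps_F(theta_deg)
    return (Gamma / (8.0 * kT)) / math.cosh(delta / (2.0 * kT)) ** 2

amr_ratio = G_set(0.0) / G_set(90.0)   # many orders of magnitude
```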
\section{Summary and conclusions}
\label{sec:summary}
We have presented {\it ab initio} quantum transport calculations
of Ni nanocontacts as a function of the magnetization direction, $\theta$,
going from the ballistic to the tunnel regime. We have shown that
AMR is unrelated to the quantization of conductance, which is an artifact of the
scattering-free calculations and not expected in transition metal
nanocontacts. We also show that a large variation of the density of states at
$\epsilon_F$ as a function of $\theta$ is not a sufficient condition for large AMR.
We identify two sufficient conditions to obtain largely enhanced AMR in quantum transport.
First, in the coherent regime (contact and tunneling), large AMR is related to a large degree
of orbital polarization of the current for a selected direction of the magnetization.
Second, in systems with resonances close to $\epsilon_F$, as happens in single electron
transistors with ferromagnetic electrodes, large AMR is related to a large variation of the chemical
potential $\epsilon_F$ of the electrode as a function of $\theta$.
We report an {\it ab initio} calculation of this quantity.
These findings shed light on the choice of materials and the design of
nanostructures with enhanced anisotropic magnetoresistance.
We acknowledge R. Aguado, L. Brey, E. Tsymbal, E. Tosatti and C. Untiedt for
useful discussions. We acknowledge Spanish MEC and Generalitat Valenciana
for funding grants MAT2007-65487, Ramon y Cajal
Program, GV-ACCOMP07/054 and Consolider CSD2007-0010.
\begin{appendix}
\section{Derivation of tunneling formula}
For completeness, we derive eq. (\ref{eq:tunnel}) from the Landauer formalism in the limit
of weak coupling between the electrodes. Eq. (\ref{eq:tunnel}) can also be obtained from
Kubo formula (see e.g. the book by Mahan\cite{Mahan}, Sec. 9.3). Derivations similar to
ours can be found in the literature \cite{Datta,Maekawa}.
We consider two semi-infinite electrodes L and R with atomically sharp
tips separated by a distance $d$, as shown in Fig. \ref{fig:model}.
We label the tip atoms of the left and right lead $0$ and $1$, respectively.
Now the Green's function projected onto tip atom $0$ is given by:
\begin{equation}
\label{eq:G0}
\hat{G}_0(E) = (E-\hat{H}_0-\hat\Sigma_L(E)-\hat\Sigma_R(E))^{-1},
\end{equation}
where $\hat\Sigma_L$ is the self-energy representing the rest of the left
electrode without tip atom $0$, while $\hat\Sigma_R$ represents the self-energy of
the entire right electrode including the tip atom $1$. Thus the right
self-energy can be expressed in terms of the Green's function of the isolated right electrode,
$\hat{g}^R_1$, and the coupling $\hat{V}$ between the left and the right tip
atoms as:
\begin{equation}
\hat\Sigma_R = \hat{V} \, \hat{g}^R_1 \, \hat{V}^\dagger.
\end{equation}
In the tunneling regime, i.e. for $d\gg a$, when the coupling $\hat{V}$ becomes
very weak, the contribution of the right self-energy to $\hat{G}_0$ can be
neglected, so that $\hat{G}_0$ becomes equal to the Green's function of the {\it
isolated} left lead projected onto the tip atom, $\hat{g}_0^L$:
\begin{equation}
\hat{G}_0(E) \approx (E-\hat{H}_0-\hat\Sigma_L(E))^{-1} \equiv \hat{g}^L_0(E).
\end{equation}
The Caroli expression\cite{Caroli} for the Landauer transmission through the tip atom thus
becomes:
\begin{eqnarray}
T(E) &\approx& {\rm Tr\,}[ \hat{g}^L_0(E) \, \hat\Gamma_L(E)
\, (\hat{g}_0^L)^\dagger(E) \, \hat\Gamma_R(E) ].
\label{T}
\end{eqnarray}
The coupling matrix of the right lead $\hat\Gamma_R$ can be re-written in terms of the
spectral function of the {\it isolated} right lead projected onto the tip atom,
$\hat{a}^R_1:=i(\hat{g}^R_1-(\hat{g}^R_1)^\dagger)$ as:
\begin{equation}
\hat\Gamma_R:=i(\hat\Sigma_R-\hat\Sigma_R^\dagger)=\hat{V}\,\hat{a}^R_1\,\hat{V}^\dagger.
\end{equation}
Computing the first three factors in eq. (\ref{T})
by means of the algebraic identity $\hat{g}^L_0 \, \hat\Gamma_L \,
(\hat{g}^L_0)^\dagger=i(\hat{g}^L_0-(\hat{g}^L_0)^\dagger)=\hat{a}^L_0$,
where $\hat{a}^L_0$ is the spectral function of the {\it isolated} left lead
projected onto the tip atom $0$, we find for the transmission in the tunneling
regime:
\begin{equation}
T(E) \approx {\rm Tr}\,[ \hat{a}^L_0(E) \, \hat\Gamma_R(E) ] =
{\rm Tr}\,[ \hat{a}^L_0(E) \, \hat{V} \, \hat{a}^R_1(E) \, \hat{V}^\dagger ].
\end{equation}
Thus the zero-bias conductance which is given by the transmission function at the Fermi energy, can be
approximated in the tunneling regime by:
\begin{eqnarray}
G &=& \frac{e^2}{h} \times T(\epsilon_F) \approx \frac{e^2}{h} \times {\rm Tr}\,[ \hat{a}^L_0(\epsilon_F) \, \hat{V} \, \hat{a}^R_1(\epsilon_F) \, \hat{V}^\dagger ]
\nonumber\\
&=& \frac{e^2}{h} \sum_{\alpha,\alpha^\prime,\beta,\beta^\prime} a^L_{\alpha\alpha^\prime}(\epsilon_F) \, V_{\alpha^\prime\beta} \,
a^R_{\beta\beta^\prime}(\epsilon_F) \, V^\ast_{\alpha\beta^\prime},
\end{eqnarray}
where in the last step we have labeled states on the left tip by $\alpha$ and $\alpha^\prime$ and states on the
right tip by $\beta$ and $\beta^\prime$. The spectral functions $\hat{a}^L$ and $\hat{a}^R$ are diagonal in the
basis of eigenstates of the isolated left and right lead, and the diagonal elements yield the DOS projected onto
the eigenstates: $a^L_{\alpha\alpha}=2\rho^L_\alpha$ and $a^R_{\beta\beta}=2\rho^R_\beta$ where $\alpha$ and
$\beta$ now label the projections of the eigenstates onto the tip atoms.
Thus we obtain eq. (\ref{eq:tunnel}):
\begin{equation}
G \approx \frac{4e^2}{h} \sum_{\alpha,\beta} |V_{\alpha\beta}|^2 \,
\rho^L_\alpha(\epsilon_F) \, \rho^R_\beta(\epsilon_F).
\label{Gtun}
\end{equation}
Notice that this result relates the tunnel conductance to the product of the
{\it orbital-resolved} DOS of the electrodes, as opposed to the {\it total} DOS.
\end{appendix}
\section{Introduction: Adaptive Behaviour Modeling for Game Theory}
Over the last five decades, game theory \index{game theory}
has become a major aspect of economic
modelling \index{economic modelling} and of a great number of domains where
strategical aspects have to be involved.
Game theory is usually defined as a mathematical tool allowing one to analyse
strategical interactions between individuals. \\
Initially founded by mathematical researchers such as J. von Neumann, E. Borel and E.
Zermelo in the 1920s, game theory increased in importance in the
1940s with a major work by J. von Neumann and O. Morgenstern, and then with the
works of John Nash in the 1950s \cite{Eb}.
John Nash proposed an original equilibrium ruled by an adaptive criterion.
In game theory, the Nash equilibrium is a kind of optimal strategy
\index{strategy} for games
involving two or more players, whereby the players reach an outcome
of mutual advantage.
If there is a set of strategies for a game with the property that no player can
benefit by changing his strategy while the other players keep their
strategies unchanged, then this set of strategies and the corresponding payoffs
constitute a Nash equilibrium. \\
We can easily understand that the modelling of a player's behaviour needs some
adaptive properties \index{adaptive properties}.
The computable models corresponding to genetic automata are in this respect a good
tool to model such adaptive strategies \index{adaptive strategy}.\\
The plan of this paper is the following. In the next section, we present some
efficient algebraic structures, the automata with multiplicities, which allow one to
implement powerful operators. We present in section 3 some topological
considerations about the definition of distances between automata, which induce a
theorem of convergence on the automata behaviours.
Genetic operators are proposed for these automata in section 4. For that
purpose, we show that the relevant ``calculus'' is done by matrix representations,
unravelling the powerful capabilities of such algebraic structures.
In section 5, we focus our attention on the ``iterated prisoner's dilemma'' and we
build an original evolutive probabilistic automaton for strategy modeling,
showing that genetic automata are well adapted to model adaptive strategies.
Section 6 shows how we can use the genetic automata developed previously to
represent agents evolving in complex systems description. An agent behaviour
semi-distance is then defined and allows us to propose an automatic computation of
emergent systems as a kind of self-organization detection.
\section{Automata from boolean to multiplicities theory (Automata with scalars)}
Automata\index{automata} are initially considered as theoretical tools. They were created in the
1950s following the works of A.
Turing, who previously dealt with the definition of an abstract ``machine''.
The aim of Turing machines \index{Turing machines}
is to define the boundaries of what a computing
machine can do and what it cannot do.\\
The first class of automata, called finite state automata,
\index{finite state automata} corresponds to simple
kinds of machines \cite{Sc}.
They have been studied by a great number of researchers as abstract models of
computation.
In this respect, we can recall the works of some linguistic researchers, for
example N. Chomsky, who initiated the study of formal grammars.\\
In many works, finite automata are associated with a recognizing operator which allows one to
describe a language \cite{BR,Ei}.
In such works, the condition of a transition is simply a symbol taken from an
alphabet.
From a specific state $S$, the reading of a symbol $a$ allows one to make the
transitions which are labeled by $a$ and come from $S$
(in the case of a deterministic automaton -- a DFA -- there is only one such transition --
see below). \index{deterministic Finite Automaton}
A whole automaton is, in this way, associated with a language, the recognized
language, which is a set of words.
These recognized words are composed of the sequences of letters of the alphabet
which allow one to go from a specific state, called the initial state, to another
specific state, called the final state.\\
A first classification is based on the structural aspect: DFA (Deterministic
Finite Automata) and NFA (Nondeterministic Finite Automata).
\index{deterministic finite automaton} \index{nondeterministic finite automaton}
\begin{itemize}
\item In Deterministic Finite Automata, for each state there is at most one
transition for each possible input and only one initial state.
\item In Nondeterministic Finite Automata, there can be none or more than one
transition from a given state for a given possible input.
\end{itemize}
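As a toy illustration of the deterministic case (our own example, not one taken from the text), a DFA can be run by following exactly one transition per (state, letter) pair:

```python
# Toy sketch of a DFA over the alphabet {a, b} recognising the words
# that end with the letter 'a'.  States and transitions are our own
# illustrative example, not taken from the text.

DELTA = {(1, 'a'): 2, (1, 'b'): 1,     # exactly one transition per
         (2, 'a'): 2, (2, 'b'): 1}     # (state, letter) pair
INITIAL, FINAL = 1, {2}

def accepts(word):
    """Run the DFA and test whether it stops in a final state."""
    state = INITIAL
    for letter in word:
        state = DELTA[(state, letter)]
    return state in FINAL
```

In an NFA, `DELTA` would instead map each pair to a (possibly empty) set of successor states, and a word would be accepted if at least one run ends in a final state.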
Besides the classical aspect of automata as machines allowing one to recognize
languages, another approach consists in associating with the automata a functional
goal.
In addition to the accepted letter from an alphabet as the condition of a transition, we
add to each transition an information which can be considered as an output datum
of the transition; the read letter is now called the input datum.
We define in such a way an {\it automaton with outputs} or {\it weighted
automaton}.\\
\index{automaton with outputs} \index{weighted automaton}
Such automata with outputs give a new classification of machines.
{\it Transducers} are one such kind of machine: they generate outputs based on a
given input and/or a state using actions.
\index{transducers}
They are currently used for control applications.
{\it Moore machines} are also such machines, where the output depends only on the
state, i.e. the automaton uses only entry actions.
\index{Moore machines}
The advantage of the Moore model is a simplification of the behaviour.\\
Finally, we focus our attention on a special kind of automata with outputs which
are efficient in an operational way.
These automata with outputs are called {\it automata with multiplicities}.
\index{automaton with multiplicities}
An automaton with multiplicities is based on the fact that the output data of
the automaton belong to a specific algebraic structure, a
semiring \cite{Go,St}.
In that way, we will be able to build effective operations on such automata,
using the power of the algebraic structures of the output data,
and we are also able to describe such an automaton by means of a matrix
representation with all the power of the new (i.e. with semirings) linear algebra.\\
\begin{definition}
{\bf (Automaton with multiplicities)}\\
An automaton with multiplicities over an alphabet $A$ and a semiring $K$ is
the 5-uple $(A,Q,I,T,F)$ where
\begin{itemize}
\item $Q=\{S_1,S_2\cdots S_n\}$ is the finite set of state;
\item $I: Q\mapsto K$ is a function over the set of states, which
associates to each initial state a value of K, called entry cost, and to non-
initial state a zero value ;
\item $F: Q\mapsto K$ is a function over the set states, which
associates to each final state a value of K, called final cost, and to non-final
state a zero value;
\item $T$ is the transition function, that is $T: Q\times A\times Q\mapsto K$
which to a state $S_i$, a letter $a$ and a state $S_j$ associates a value $z$ of
$K$ (the cost of the transition)
if it exist a transition labelled with $a$ from the state $S_i$ to the
state $S_j$ and and zero otherwise.\\
\end{itemize}
\end{definition}
\begin{remark}
Automata with multiplicities are a generalisation of finite automata. In fact,
finite automata can be considered as automata with multiplicities over the
semiring $K=B=\{0,1\}$, the boolean set (endowed with the logical ``or'' and ``and'').
To each transition we assign 1 if it exists and 0 otherwise.\\
\end{remark}
\begin{remark}
We have not yet, on purpose, defined what a semiring is. Roughly, it is the least
structure
which allows the matrix ``calculus'' with unit (one can think of a ring without the
``minus'' operation).
The previous automaton with multiplicities can be, equivalently, expressed by a
matrix representation which is a triplet:
\begin{itemize}
\item $\lambda\in K^{1\times Q}$, a row vector whose coefficients are
$\lambda_i=I(S_i)$;
\item $\gamma\in K^{Q\times 1}$, a column vector whose coefficients are
$\gamma_i=F(S_i)$;
\item $\mu: A^*\mapsto K^{Q\times Q}$, a morphism of monoids (indeed
$K^{Q\times Q}$ is endowed with the product of matrices) such that
the coefficient on the $i$th row and $j$th column of $\mu(a)$ is
$T(S_i,a,S_j)$.
\end{itemize}
\end{remark}
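The matrix representation can be sketched directly: the coefficient (weight) associated with a word $w=a_1\cdots a_n$ is $\lambda\,\mu(a_1)\cdots\mu(a_n)\,\gamma$. The two-state automaton below, over the semiring of real numbers, is a toy example of our own.

```python
# Sketch: matrix representation (lambda, mu, gamma) of an automaton with
# multiplicities over the semiring K = R.  The weight of a word
# w = a1 ... an is lambda . mu(a1) ... mu(an) . gamma.  The two-state
# automaton below is a toy example of our own.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

lam = [[1.0, 0.0]]                       # entry costs (row vector)
gam = [[1.0], [0.0]]                     # final costs (column vector)
mu = {'a': [[0.5, 0.5], [0.0, 1.0]],     # one transition matrix per letter
      'b': [[1.0, 0.0], [0.5, 0.5]]}

def weight(word):
    v = lam
    for letter in word:
        v = mat_mul(v, mu[letter])
    return mat_mul(v, gam)[0][0]
```

Over the boolean semiring the same scheme, with `or`/`and` in place of `+`/`*`, reduces to ordinary word recognition.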
\section{Topological considerations}
If $K$ is a field, one sees that the space ${\mathcal A}_{(n)}$ of automata of dimension
$n$ (with multiplicities in $K$) is a $K$-vector space of dimension
$k\,n^2+2n$ ($k$ is here the number of letters). So, in case the ground field is
the field of real or complex numbers \cite{Bo1}, one
can take any vector norm (usually one of the H\"older norms
$||(x_i)_{i\in I}||_\alpha := \big(\sum_{i\in I} | x_i
|^\alpha\big)^{\frac{1}{\alpha}}$
for $\alpha\geq 1$,
but any norm will do) and the distance is derived, in the classical way, by
\begin{equation}
d({\mathcal A}_1,{\mathcal A}_2)=norm(V({\mathcal A}_1)- V({\mathcal A}_2))
\end{equation}
where $V({\mathcal A})$ stands for the vector of all coefficients of ${\mathcal A}=(\lambda,\mu,\gamma)$
arranged in some order. One then has the result of Theorem \ref{th1}.
Assuming that $K$ is the field of
real or complex numbers, we endow the
space of series/behaviours with the topology of pointwise convergence (Topology
of F. Treves \cite{Tr}).
\begin{theorem}\label{th1}
Let $({\mathcal A}_n)$ be a sequence of automata with limit ${\mathcal L}$ (${\mathcal L}$ is an automaton),
then one has
\begin{equation}
Behaviour({\mathcal L})=\lim_{n\rightarrow \infty} Behaviour({\mathcal A}_n)
\end{equation}
where the limit is computed in the topology of Treves.
\end{theorem}
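The distance just defined can be sketched as follows; the flattening order of the coefficients, the H\"older exponent $\alpha$ and the toy coefficient vectors are illustrative choices of our own.

```python
# Sketch of d(A1, A2) = ||V(A1) - V(A2)||_alpha, where V(A) flattens all
# coefficients of (lambda, mu, gamma) in a fixed order.  The flattening
# order, the exponent alpha and the toy vectors are illustrative choices.

def holder_norm(x, alpha=2.0):
    return sum(abs(xi) ** alpha for xi in x) ** (1.0 / alpha)

def distance(V1, V2, alpha=2.0):
    return holder_norm([a - b for a, b in zip(V1, V2)], alpha)

# Two one-state automata over a one-letter alphabet:
# V = (lambda_1, mu(a)_{11}, gamma_1).
V1 = [1.0, 0.5, 1.0]
V2 = [1.0, 0.8, 1.0]
```

Here `distance(V1, V2)` equals 0.3 for any $\alpha\geq 1$, since the vectors differ in a single coefficient.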
\section{Genetic automata as efficient operators}
We define the chromosome of an automaton with multiplicities as the sequence
of all the matrices
associated to each letter of the (linearly ordered) alphabet.
The chromosomes are composed of alleles,
which are here the rows of the matrices \cite{BFJOP2}.\\
In the following, genetic algorithms are going to generate new automata
containing possibly new transitions
compared to the ones included in the initial automata.\\
The genetic algorithm over the population of automata with multiplicities
follows a reproduction iteration
broken up into three steps \cite{Gol,Mi,Ko}:
\begin{itemize}
\item {\it Duplication}: each automaton generates a clone of itself;
\item {\it Crossing-over}: concerns a couple of automata. Over this couple, we
consider a sequence of rows, the same for all the
matrices. For each of these matrices, a permutation of the rows of the chosen
sequence is made between the analogous matrices of this couple of automata;
\item {\it Mutation}: a row of each matrix is randomly chosen and a sequence
of new values is given for
this row.
\end{itemize}
Finally, the whole genetic algorithm scheduling for a full process of
reproduction over the population of
automata is the following evolutionary algorithm:
\begin{enumerate}
\item For each couple of automata, two children are created by the duplication,
crossing-over and mutation
mechanisms;
\item The fitness of each automaton is computed;
\item In each 4-tuple composed of the parents and their children, the weakest automata,
in terms of the fitness
computed in the previous step, are suppressed. The two surviving automata
result from the
evolution of the two initial parents.
\end{enumerate}
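The three genetic operators can be sketched on chromosomes stored as lists of matrices (one per letter), with rows as alleles. The row choices are random in the algorithm; here they are explicit parameters so that the example stays deterministic.

```python
import copy, random

# Sketch of the three genetic operators on chromosomes of automata with
# multiplicities: a chromosome is the list of transition matrices (one
# per letter) and the alleles are matrix rows.  Row choices are random
# in the algorithm; here they are explicit parameters so that the
# example stays deterministic.

def duplicate(chrom):
    return copy.deepcopy(chrom)

def crossover(c1, c2, rows):
    """Swap the given rows between the analogous matrices of c1 and c2."""
    c1, c2 = duplicate(c1), duplicate(c2)
    for m1, m2 in zip(c1, c2):
        for r in rows:
            m1[r], m2[r] = m2[r], m1[r]
    return c1, c2

def mutate(chrom, row, rng=None):
    """Replace the chosen row of every matrix by fresh values in [0, 1)."""
    rng = rng or random.Random(0)
    chrom = duplicate(chrom)
    for m in chrom:
        m[row] = [rng.random() for _ in m[row]]
    return chrom

A = [[[1.0, 0.0], [0.0, 1.0]]]      # one letter, one 2x2 matrix
B = [[[0.5, 0.5], [0.25, 0.75]]]
child1, child2 = crossover(A, B, rows=[0])
```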
\begin{remark}
The fitness is not defined at this level of abstract formulation, but it is
defined
according to the context for which the automaton is a model, as we will do in
the next section.
\end{remark}
\section{Applications to competition-cooperation modeling using the prisoner's
dilemma}
We develop in this section how we can model competition-cooperation processes
within a single automata-based representation. The genetic computation allows
automatic transitions from competition to cooperation or from cooperation to
competition. The basic problem used for this purpose is the well-known prisoner's
dilemma \cite{Ax}.
\subsection{From adaptive strategies to probabilistic automata}
The prisoner's dilemma is a two-player game where each player has two possible
actions:
cooperate ($C$) with his adversary or betray him ($\overline{C}$). So, four
outcomes are possible for the joint
actions of the two players. A payoff is defined for each of these
possible outcomes, as
described in the following table, where the rows correspond to one player's
behaviour and the
columns to the other player's.\\
\begin{table}[htp]
\begin{center}
\begin{tabular}{|l|c|c|} \hline
& $C$ & $\overline{C}$ \\ \hline
$C$ & (3,3) & (0,5) \\ \hline
$\overline{C}$ & (5,0) & (1,1) \\ \hline
\end{tabular}
\caption{Prisoner's dilemma payoffs}
\label{prisonerDilemmaPayoff}
\end{center}
\end{table}
In the iterated version of the prisoner's dilemma, successive steps can be defined.
Each player does not know the action of his adversary during the current step, but
he knows it for the preceding step.
So, different strategies can be defined for a player's behaviour; the goal of each
one is to
obtain the maximal payoff for himself.\\
In Figures \ref{titfortat} and \ref{vindictive}, we describe two strategies
with transducers.
Each transition is labeled by the input, corresponding to the player's perception
(the previous action of his adversary), and the output, corresponding to the player's
present
action.
The only initial state is the state 1, recognizable by the incoming arrow labeled
only by the output.
The final states are the states 1 and 2, recognizable by the double circles.\\
In the strategy of Figure \ref{titfortat}, the player systematically adopts the
same behaviour as his adversary at the previous step.
In the strategy of Figure \ref{vindictive}, the player chooses to
betray definitively as soon as his adversary does it.
The previous automata represent static strategies and so they are not well
adapted to the
modelling of evolutive strategies.
For this purpose, we propose a model based on the probabilistic automaton
described in Figure \ref{probaDilemma} \cite{BFJOP1}.\\
\begin{figure} [htp]
\begin{center}
\includegraphics[scale=0.7]{titfortat.eps}
\caption{Tit-for-tat strategy automaton}
\label{titfortat}
\end{center}
\end{figure}
\begin{figure} [htp]
\begin{center}
\includegraphics[scale=0.7]{rancunier.eps}
\caption{Vindictive strategy automaton}
\label{vindictive}
\end{center}
\end{figure}
\begin{figure} [htp]
\begin{center}
\includegraphics[scale=0.7]{proba.eps}
\caption{Probabilistic multi-strategies two-states automaton}
\label{probaDilemma}
\end{center}
\end{figure}
This automaton represents all the two-state strategies for the cooperative and
competitive
behaviour of one agent against another in the prisoner's dilemma.\\
The transitions are labeled in output by the probabilities $p_i$ of their
realization. The first state is the state reached after a cooperation action and
the second state is the state reached after a betrayal. \\
For this automaton, the associated matrix representation, as described
previously, is:
\begin{eqnarray}
I &=& \left( \begin{array}{cc} p_1 & 1-p_1 \end{array} \right); \\
F &=& \left( \begin{array}{c} p_6 \\ 1-p_6 \end{array} \right);\\
T(C) &=& \left( \begin{array}{cc} p_2 & 1-p_2\cr p_3 & 1- p_3 \end{array} \right);\\
T(\overline{C}) &=& \left( \begin{array}{cc} 1-p_4 & p_4\cr 1-p_5 & p_5 \end{array} \right)
\end{eqnarray}
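As an illustrative sketch (the values chosen for the $p_i$ are hypothetical), the value that this matrix representation assigns to a perception word $a_1\cdots a_n$ is the product $I\,T(a_1)\cdots T(a_n)\,F$:

```python
# Hypothetical values for the probabilities p1..p6, for illustration only
p1, p2, p3, p4, p5, p6 = 0.5, 0.9, 0.2, 0.3, 0.8, 0.6

I = [p1, 1 - p1]                             # initial row vector
F = [p6, 1 - p6]                             # final column vector
T = {
    "C":  [[p2, 1 - p2], [p3, 1 - p3]],      # transition matrix for C
    "nC": [[1 - p4, p4], [1 - p5, p5]],      # transition matrix for not-C
}

def step(v, m):
    """Multiply a row vector by a 2x2 matrix."""
    return [v[0] * m[0][0] + v[1] * m[1][0],
            v[0] * m[0][1] + v[1] * m[1][1]]

def value(word):
    """Value of a perception word a1..an, i.e. I . T(a1) ... T(an) . F."""
    v = I
    for a in word:
        v = step(v, T[a])
    return v[0] * F[0] + v[1] * F[1]
```

Since every row of each $T(a)$ sums to one, $I\,T(a_1)\cdots T(a_n)$ remains a probability distribution over the two states, so the value always lies in $[0,1]$.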
\subsection{From probabilistic automata to genetic automata}
With the matrix representation of the automata, we can compute genetic automata
as described in the previous sections. Here the chromosomes are the sequences of
all the matrices associated with each letter. We have to define the fitness for
the context in which these automata are used: the fitness here is the value of
the payoff.
\subsection{General Genetic Algorithm Process for Genetic Automata}
A population of automata is initially generated. These automata play against a
predefined strategy, named $S_0$.\\
Each automaton makes a set of plays. At each play, we run the probabilistic
automaton, which gives one of the two outputs: ($C$) or ($\overline{C}$). With
this output and $S_0$'s output, we compute the payoff of the automaton,
according to the payoff table.\\
At the end of the set of plays, the automaton's payoff is the sum of the
payoffs of each play. This sum is the fitness of the automaton. At that point,
each automaton has its own fitness, and so the selection process can select the
best automata. At the end of this selection process, we obtain a new generation
of automata.\\
This new generation of automata is the basis for a new application of the three
genetic operators.\\
This process makes the player's behaviour, which is modeled by the probabilistic
multi-stra\-te\-gies two-states automaton, evolve from cooperation to
competition or from competition to cooperation. The evolution of the strategy is
the expression of an adaptive computation. This leads us to use this formalism
to implement some of the self-organization processes which occur in complex systems.
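A minimal sketch of one generation of this process (with a hypothetical payoff table, a tit-for-tat $S_0$, and the crossover and mutation operators omitted; all names and values here are illustrative):

```python
import random

random.seed(1)

# Hypothetical payoff table for (player action, adversary action);
# C = cooperate, D = betray
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(genome, n_rounds=50):
    """One automaton (genome = probabilities p1..p6) plays n_rounds
    against a tit-for-tat S0; returns the accumulated payoff."""
    p1, p2, p3, p4, p5, _p6 = genome
    state = 1 if random.random() < p1 else 2   # draw the initial state
    s0_action = "C"                            # tit-for-tat opens by cooperating
    total = 0
    for _ in range(n_rounds):
        # probability of cooperating, read off the transition matrices
        if s0_action == "C":
            p_coop = p2 if state == 1 else p3
        else:
            p_coop = (1 - p4) if state == 1 else (1 - p5)
        action = "C" if random.random() < p_coop else "D"
        total += PAYOFF[(action, s0_action)]
        state = 1 if action == "C" else 2      # state 1 follows cooperation
        s0_action = action                     # tit-for-tat copies our move
    return total

# One generation: evaluate the population, keep the fittest half
population = [[random.random() for _ in range(6)] for _ in range(20)]
scored = sorted(((play(g), g) for g in population), reverse=True)
survivors = [g for _, g in scored[:10]]
```

The three genetic operators would then recombine and mutate `survivors` to produce the next generation.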
\section{Extension to Emergent Systems Modeling}
In this section, we study how evolutive automata-based modeling can be used to
compute automatic emergent systems. Emergent systems have to be understood
in the sense of the complex-system paradigm, which we recall in the next section. We
have previously defined a way to compute the distance between automata, and we
use these principles to define a distance between agent behaviours that are modeled
with automata. Finally, we define a specific fitness that allows genetic
algorithms to be used as a kind of reinforcement method, leading to emergent system
computation \cite{Ho}.
\subsection{Complex System Description Using Automata-Ba\-sed Agent Model}
\begin{figure*} [ht]
\begin{center}
\includegraphics[scale=0.85]{sys2beh.eps}
\caption{Multi-scale complex system description: from global to individual
models}
\label{sys2beh}
\end{center}
\end{figure*}
According to General System Theory \cite{gst, Mo}, a complex system is composed
of entities in mutual interaction, interacting with the outside environment. A
system has some characteristic properties which confer its structural aspects,
as schematically described in part (a) of Figure \ref{sys2beh}:
\begin{itemize}
\item The constituent elements or entities are in interactive dependence. The
alteration of a single entity or interaction reverberates through the whole system.
\item A global organization emerges from the interacting constitutive elements. This
organization can be identified and carries its own autonomous behaviour while
remaining in relation with, and dependent on, its environment. The emergent
organization possesses new properties that its constitutive entities do not
have. ``The whole is more than the sum of its parts.''
\item The global organization retro-acts over its constitutive components. ``The
whole is less than the sum of its parts,'' after E. Morin.\\
\end{itemize}
The network of interacting entities, as described in part (b) of Figure
\ref{sys2beh}, leads each entity to perceive information or actions from other
entities or from the whole system, and to act in turn.\\
A well-adapted model uses an agent-based representation, in which each entity is
an agent that perceives and acts on an environment, following an autonomous
behaviour, as described in part (c) of Figure \ref{sys2beh}.\\
To compute a simulation composed of such entities, we need to describe the
behaviour of each agent. This behaviour can be schematically described using
internal states and transition processes between these states, as described in
part (d) of Figure \ref{sys2beh}.\\
There are several definitions of ``agents'' or ``intelligent agents'' according
to their behavioural specificities~\cite{Fe, We}.
Their autonomy means that the agents try to satisfy a goal
and execute actions, optimizing a satisfaction function to reach it.\\
For agents with a high level of autonomy, specific actions are realized even
when no perception is detected from the environment.
To represent this deliberation process, different formalisms can be
used, and a behaviour decomposed into internal states is an effective approach.
Finally, when many agents operate, the social aspects must also be taken into
account. These aspects are expressed as communication through the agent
organization, using message-passing processes.
Sending a message is an agent action and receiving a message is an agent
perception. The previous description, based on the couple of perception and
action, is well adapted to this.
\subsection{Agent Behavior Semi-Distance}
We describe in this section the bases of the genetic algorithm used on the
probabilistic automata, which allows emergent self-organizations to be managed
in the multi-agent simulation.\\
For each agent, we define an evaluation function $e$ of its own
behaviour, returning the matrix $M$ of values such that $M_{i,j}$ is
the output series from all possible successive perceptions when
starting from the initial state $i$ and ending at the final state $j$,
without cycle. It will clearly be $0$ if either $i$ is not an initial
state or $j$ is not a final one, and the matrix $M_{i,j}$ is indeed a matrix of
evaluations \cite{BR} of subseries of
\begin{equation}
M^*:=(\sum_{a\in A} \mu(a)a)^*
\end{equation}
Notice that the
coefficients of this matrix, as defined, are computed whatever the
value of the perception in the alphabet $A$ on each transition of the successful
path\footnote{A {\it successful path} is a path from an initial state to a final
state.}. That means that the contribution of the agent behaviour to
collective organization formation is based here only on the
probabilities of reaching a final state from an initial one.
This preserves individual characteristics in each agent's
behaviour, even when the agent belongs to an organization.\\
Let $x$ and $y$ be two agents, and $e(x)$ and $e(y)$ their respective
evaluations as described above.
We define a semi-distance (or pseudometric, see \cite{Bo1}, ch.~IX)
between the two agents $x$ and $y$ as
$d(x,y)=||e(x)-e(y)||$, a matrix norm of the difference of their
evaluations. Let ${\cal{V}}_x$ be a
neighbourhood of the agent $x$, relative to a specific criterion, for
example a spatial distance or a linkage network.
We define the fitness $f(x)$ of the agent $x$ as:
$$
f(x) =
\left\lbrace
\begin{array}{ll}
\frac{ {\displaystyle card({\cal{V}}_x) } }
{ {\displaystyle \sum\limits_{y_i \in {\cal{V}}_{x}} d(x, y_i)^2} }
\ \ \ \ &\mbox{if } \sum\limits_{y_i \in {\cal{V}}_{x}} d(x, y_i)^2 \neq 0 \\
\infty &\mbox{otherwise}
\end{array}
\right.
$$
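A sketch of this fitness computation (the concrete matrix norm is a choice; here we take, purely for illustration, the maximum absolute difference of matrix entries):

```python
def semi_distance(ex, ey):
    """d(x, y) = ||e(x) - e(y)|| for two evaluation matrices given as
    nested lists; illustratively, the max-absolute-entry norm."""
    return max(abs(a - b) for ra, rb in zip(ex, ey) for a, b in zip(ra, rb))

def fitness(x, neighbourhood, evaluate):
    """f(x) = card(V_x) / sum_i d(x, y_i)^2 over the neighbourhood V_x,
    or infinity when every semi-distance vanishes."""
    ex = evaluate(x)
    s = sum(semi_distance(ex, evaluate(y)) ** 2 for y in neighbourhood)
    return float("inf") if s == 0 else len(neighbourhood) / s
```

An agent whose behaviour is close, in the semi-distance, to all of its neighbours therefore receives a high fitness, which is what lets the genetic algorithm favour locally similar behaviours.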
\subsection{Evolutive Automata for Automatic Emergence of Self-Organized Agent-Based Systems}
In the previous computation, we defined a semi-distance between two agents. This
semi-distance is computed using the matrix representation
of the automaton with multiplicities associated with the agent behaviour. This
semi-distance is based on the computation of successful paths, which
requires initial and final states to be defined on the behaviour automata. For
specific purposes, we can choose to define the initial and final states in some
specific way. This means that we try to compute some specific action
sequences, which are characterized by the way of going from some specific
states (defined here as initial ones) to some specific states (defined here as
final ones).\\
Based on this specific purpose, which leads to the definition of some initial
and final states, we compute a behaviour semi-distance and then the fitness
function defined previously. This fitness function is an indicator which returns
a high value when the evaluated agent is near, in the sense of the behaviour
semi-distance defined previously, to all the other agents belonging to a
predefined neighbourhood.\\
The genetic algorithm makes an agent population evolve through a selective
process. During the computation, the genetic algorithm will make the population
evolve towards a newer one, with agents more and more adapted to the fitness.
The new population will contain agents with better fitness, so the agents of a
population will come nearer to each other in order to improve their fitness.
In this way, the genetic algorithm reinforces the creation of a system which
aggregates agents with similar behaviours, in the specific sense of the
definition of initial and final states on the automata.\\
The genetic algorithm proposed here can be considered as a model of the
feedback of emergent systems, which leads to gathering agents of similar
behaviour; but these formations are dynamical, and we cannot predict what the
set of these aggregations will be, since it depends on the reactions of agents
during the simulation. Moreover, the genetic process has the effect of
generating a feedback of the emergent systems on their own constitutive
elements, in the sense that the fitness improvement brings closer together the
agents which are picked up inside the emergent aggregations.\\
For specific problem solving, we can consider composing the previous fitness
function with another specific one which is able to measure the capability of
the agent to solve a problem. This composition of fitness functions leads to the
creation of emergent systems only when they are of interest, that is, these
systems can develop only if the aggregated agents are able to satisfy some
problem-solving evaluation.
\section{Conclusion}
The aim of this study is to develop a powerful algebraic structure with which to
represent behaviours concerning cooperation-competition processes, and on which
we can add genetic operators. We have explained how we can use these structures
for modeling the adaptive behaviours needed in game theory. Beyond this
application, we have described how we can use such adaptive computations to
automatically detect emergent systems inside interacting networks of entities
represented by agents in a simulation.
\section{Topological Charge in Asymptotically Free Gauge Theories}
\subsection{Topological Susceptibility and Long Range Order in Chern-Simons Currents}
The possibility that the vacuum of pure-glue QCD possesses a ``secret long-range order'' associated with topological
charge was first explored by Luscher \cite{Luscher78}, who pointed out that if the topological susceptibility
$\chi_t$ is nonzero, it implies the presence of a zero-mass pole in the correlator of two Chern-Simons
currents. Let us define the {\it abelian} 3-index Chern-Simons tensor
\begin{equation}
\label{eq:CStensor}
A_{\mu\nu\rho} = -Tr\left(B_{\mu}B_{\nu}B_{\rho}+\frac{3}{2}B_{[\mu}\partial_{\nu}B_{\rho]}\right)
\end{equation}
where $B_{\mu}$ is the Yang-Mills gauge potential. We consider the Chern-Simons current that is dual to this tensor,
\begin{equation}
j_{\mu}^{CS} = \epsilon_{\mu\nu\rho\sigma}A_{\nu\rho\sigma}\,\,.
\end{equation}
Although $j_{\mu}^{CS}$ is not gauge invariant, its divergence is the gauge invariant topological charge density
\begin{equation}
\label{eq:csdiv}
\partial_{\mu}j_{\mu}^{CS} = Tr F\tilde{F} = 32\pi^2 q(x) \,\,.
\end{equation}
Choosing a covariant gauge, $\partial_{\mu}A_{\mu\nu\rho}=0$, the correlator of two Chern-Simons currents
has the form
\begin{equation}
\label{eq:cscorr}
\langle j_{\mu}^{CS}(x)j_{\nu}^{CS}(0)\rangle = \int \frac{d^4p}{(2\pi)^4}\; e^{-ip\cdot x}\; \frac{p_{\mu}p_{\nu}}{p^2} G(p^2) \,\,.
\end{equation}
From (\ref{eq:csdiv}) we see that $G(p^2)$ must have a $p^2=0$ pole whose residue is the topological susceptibility,
\begin{equation}
G(p^2) \sim \frac{\chi_t}{p^2} \,\,.
\end{equation}
Of course, this pole does not imply the existence of a physical massless particle, because the Chern-Simons
current is not gauge invariant.
The gauge invariant topological charge correlator $\langle q(x)q(0)\rangle$ has no pole and
remains short range. Note that the $1/p^2$ pole in $G(p^2)$ gives rise to a contact term
in the topological charge correlator.
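Making the connection between the pole and the susceptibility explicit (with the $32\pi^2$ normalization of (\ref{eq:csdiv}) absorbed into the definition of $G$), the susceptibility is recovered as the zero-momentum limit
\begin{equation}
\chi_t = \int d^4x\, \langle q(x)\,q(0)\rangle = \lim_{p\to 0}\, p^2\, G(p^2) \,\,,
\end{equation}
so the $1/p^2$ pole in $G(p^2)$ is exactly the piece that survives in this limit.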
To clarify the nature of the long-range order in 4D Yang-Mills theory, Luscher \cite{Luscher78} drew on the analogy
with 2-dimensional $CP^{N-1}$ sigma models.
The continuum action for the $CP^{N-1}$ model is
\begin{equation}
S = \beta N \int d^2x \left( D_\mu {\bf z} \right)^\dagger \cdot D_\mu {\bf z} \,\, ,
\end{equation}
where ${\bf z}$ is an $N$-component complex scalar field subject to the
constraint ${\bf z}^\dagger \cdot {\bf z} = 1$, and the covariant derivative is
\begin{equation}
D_\mu = \partial_\mu + i A_\mu \,\, .
\end{equation}
Here $A_{\mu}$ is a $U(1)$ gauge field, but it is an auxiliary field with no kinetic $F_{\mu\nu}^2$ term.
Its equation of motion sets it equal to the flavor-singlet current,
\begin{equation}
\label{eq:London}
A_{\mu} = J_{\mu}
\end{equation}
where
\begin{equation}
\label{eq:current}
J_{\mu} = \frac{1}{2}i\left({\bf z}^{\dagger}\partial_{\mu}{\bf z} -(\partial_{\mu}{\bf z})^{\dagger}{\bf z} \right) \,\,.
\end{equation}
The $A_{\mu}$ field can be integrated out to give a theory of self-interacting $z$-particles. On the other hand,
if we integrate out the $z$'s, the effective low energy
Lagrangian for the gauge field includes a dynamically generated kinetic term which arises from
closed $z$-loops. This dynamically generated $F_{\mu\nu}^2$ term gives rise to a confining potential
between test $U(1)$ charges, and is also the origin of the $p^2=0$ pole in the Chern-Simons current correlator
and hence of the nonzero topological susceptibility.
The continuum topological charge density operator is defined as
\begin{equation}
q(x) \equiv \frac{1}{2\pi}\epsilon_{\mu\nu}\partial_{\mu}A_{\nu}\,\, .
\end{equation}
The $CP^{N-1}$ models possess a $U(1)$ gauge invariance and
have many properties in common with 4D QCD. For example, they are classically scale invariant and have
classical instanton solutions which, like QCD instantons, are of arbitrary radius. Moreover, the $CP^{N-1}$ models undergo
dimensional transmutation via a conformal anomaly, acquiring a mass gap and becoming asymptotically free.
The $CP^{N-1}$ analogy was also used by Witten \cite{Witten79} to support the assertion that, in unbroken, asymptotically
free gauge theories, classical instantons would ``melt'' due to quantum fluctuations and are thus irrelevant to topological
charge structure in QCD. The arguments in both Refs. \cite{Luscher78} and \cite{Witten79}
lead to a picture of the QCD vacuum which is in some respects a four-dimensional generalization of Coleman's original
discussion \cite{Coleman76} of $\theta$-dependence in the massive Schwinger model, in which the $\theta$
parameter appears as a background electric field, and instantons play no role.
For the comparison of topological charge structure in two-dimensional $U(1)$ theories with that
in QCD, Luscher argued that a precise
analogy between the two theories could be made by identifying the Chern-Simons currents,
since in both theories, nonvanishing topological susceptibility implies a $p^2=0$ pole in the $j_{\mu}^{CS}$
correlator. The crucial observation here is that {\it the $U(1)$ gauge potential $A_{\mu}$ in the $CP^{N-1}$ model
should be identified not with the 4D Yang-Mills gauge potential, but rather with the abelian 3-index Chern-Simons tensor
(\ref{eq:CStensor}).} Similarly, a Wilson loop in 2D corresponds not to a Wilson loop in 4D
but rather, a ``Wilson bag'', i.e. an integral of $A_{\mu\nu\rho}$ over the three-dimensional world volume of a
membrane-like surface. The surface of this bag separates regions of
spacetime which have effective values of $\theta$
which differ by $2\pi$ (or by fractions of $2\pi$ for a fractionally charged bag). Here the effective local value
of $\theta$ is the analog of the local background electric field in Coleman's Schwinger model analysis.
Just as the worldline of a charged particle in 2D serves as a domain wall separating vacua with two different
values of background electric field, so too does the Wilson bag surface in QCD separate two ``k-vacua'' with
values of $\theta$ that differ by integer multiples of $2\pi$.
\subsection{Theta-Dependence in QCD from string/gauge duality}
Remarkably, the same physical picture emerges from AdS/CFT duality in the context
of Witten's brane construction of QCD \cite{Witten98}. In this construction,
the Wilson bag surface associated with the Chern-Simons
tensor corresponds to a wrapped 6-brane in type IIA string theory. (The 6-brane is wrapped around a compact $S_4$,
so it looks like a 2-brane or membrane in $3+1$ dimensions.) Starting with the string theory on
$R_4\times S_1\times R_5$, one considers the result of introducing $N$ coincident 4-branes, which are wrapped around
the $S_1$ with supersymmetry breaking boundary conditions. The resulting theory on the branes is thus described (at least
at long distances) by a four-dimensional $SU(N)$ Yang-Mills gauge theory without supersymmetry. The origin of the
$\theta$ term in QCD is a five-dimensional Chern-Simons term on the 4-branes
of the form $a\wedge F\wedge F$, where $a$ is the $U(1)$
gauge field that couples to the Ramond-Ramond charge of IIA string theory. When the radius of the compactified
dimension is small, this reduces to a four-dimensional theta term $\theta F\wedge F$ where $\theta$ is given by the Wilson
line of the $RR$ $U(1)$ field around the compact dimension,
\begin{equation}
\label{eq:wl5}
\theta = \oint a_5 dx_5 \,\,.
\end{equation}
In the brane-induced geometry of the IIA string theory, the compact $S_1$ goes around the circumference of a two-dimensional
disk $D$ which has a black hole singularity at its center. The value of $\theta/2\pi$ given by (\ref{eq:wl5})
determines the number of units of R-R flux that are threaded through the singularity. This picture leads to multiple
``k-vacuum'' states where the local values of $\theta$ differ by integer multiples of $2\pi$.
Adjacent k-vacua are separated by domain walls (to be identified with Wilson bag surfaces) which are the AdS/CFT dual analog of
wrapped 6-branes. The fact that $\theta$ jumps by a multiple of $2\pi$ when crossing a domain wall
(a defining property of the Wilson bag surface integral)
follows in the string theory from the quantization of the R-R charge on the 6-brane.
\begin{figure}
\vspace*{2.0cm}
\special{psfile=laughlin.eps
angle=270 hscale=60 vscale=60
hoffset=0 voffset=120}
\vspace{6.5cm}
\caption{Holographic view of a domain wall in (1+1)-dimensional $CP^{N-1}$ theories from a (3+1)-dimensional perspective.
The long axis of the cylinder becomes the spatial axis of the (1+1)-dimensional theory. Plot is at a fixed time.}
\label{fig:laughlin}
\end{figure}
\subsection{Corbino disks, the integer quantum Hall effect, and thin domain walls}
The interpretation of the 4D theta term as a dimensionally reduced 5D Chern-Simons term is a central feature of the
AdS/CFT view of theta dependence. The general analogy we are pursuing in this paper suggests that we should interpret
a $\theta$ term in the $CP^{N-1}$ model as a dimensionally reduced three-dimensional Chern-Simons term. In fact, this
analogy brings out the deep connection between Witten's description of theta-dependence in Yang-Mills theory \cite{Witten98}
and Laughlin's famous topological
interpretation of the integer quantum Hall effect \cite{Laughlin}. The topology of the (2+1)-dimensional superconductor
considered by Laughlin is realized physically by a
``Corbino disk,'' a 2D disk with a hole in it. The Wilson line integral around the disk measures the magnetic flux
through the hole. For the purpose of dimensionally reducing this to a (1+1)-dimensional theory with a $\theta$
term, it is convenient to consider the topologically equivalent situation of a long thin cylinder, with units
of magnetic flux going down the center of the cylinder, as depicted in Figure \ref{fig:laughlin}. The quantized flow of Hall current down the length of the
cylinder corresponds to a change of $\theta$ by an integer multiple of $2\pi$. A domain wall between two different
k-vacua along the cylinder is represented in the higher dimensional theory by a magnetic monopole at that location
with a Dirac string on one side of it. Note that, in this picture, the transition from, e.g. the $\theta=0$ vacuum
to the $\theta=2\pi$ vacuum takes place over a distance proportional to the radius of the cylinder, so that in the dimensionally
reduced theory, the domain wall becomes infinitely thin. In fact, in the brane construction of QCD, the radius of
the compact spatial dimension can be interpreted as a cutoff, somewhat analogous to the lattice spacing in a lattice
formulation of the 4D theory. This suggests that, if we interpret the coherent sheets observed in the Monte Carlo
simulations as domain wall boundaries between k-vacua, their thickness
should scale to zero linearly with the lattice spacing, i.e. that they should be roughly the same thickness in lattice
units, independent of $\beta$. This is just what is observed in QCD configurations \cite{Horvath05_corr}. As we discuss
in Section IV(B), the thickness of the topological charge structures observed in $CP^{N-1}$ also scales to zero linearly in
the continuum limit.
\subsection{Wilson Lines, Wilson Bags, and Charge Screening}
As emphasized by Witten \cite{Witten79} one should draw a clear distinction
between topological charge structure in a spontaneously broken gauge theory
(e.g. the 2D $U(1)$ Higgs model) where instantons have a fixed size and are expected to
be relevant in the quantum theory, and the structure of an asymptotically free
theory such as $CP^{N-1}$, where the multiple k-vacuum/domain wall picture is expected.
Consider a 2D $U(1)$ gauge theory
in an infinite volume in which $\theta$ is a nonzero constant only on a finite subvolume $V$, surrounded by a region
in which $\theta=0$, i.e. we add a term to the Euclidean action
\begin{equation}
\label{eq:thetaterm}
S\rightarrow S+ \int d^2x \theta(x)\epsilon_{\mu\nu}F^{\mu\nu} \,\,.
\end{equation}
where $\theta(x)=\theta$ inside $V$ and $\theta(x)=0$ outside.
Upon integration by parts, such a $\theta$ term in the path integral
is equivalent to including a Wilson loop around the boundary of $V$, interpretable as the
worldline of a test charge of $\theta/2\pi$ (in units where the charge of the $CP^{N-1}$ field is one),
\begin{equation}
\label{eq:wloop}
S\rightarrow S+\frac{\theta}{2\pi}\oint_{C} A\cdot dx
\end{equation}
where $C=\partial V$.
No matter what the physical mechanism for topological charge fluctuations, we expect the ground state energy to
be periodic under $\theta\rightarrow \theta+2\pi$. However, this periodicity can arise in two physically
distinct ways. In a dilute instanton scenario, the topological charge comes in locally
quantized lumps which are well enough separated from nearby lumps that we can carve out a local
subvolume around a given lump over which the topological charge integrates to an integer $\nu_i$.
(We do not consider models involving highly overlapping instantons.) The local quantization of
topological charge in an instanton model allows periodicity in $\theta$ to be satisfied locally in each small
subvolume, since the $\theta$ term simply multiplies the partition function by periodic factors $e^{i\theta\nu_i}$ for each
instanton. If we change $\theta$ continuously from 0 to $2\pi$, we expect smooth periodic
behavior without any
bulk transition in the vacuum. Expressing the $\theta$ term as a Wilson loop around the boundary, Eq. (\ref{eq:wloop}),
we can assume that, for a dilute instanton gas, as $V$ gets large, the gauge field
in the asymptotic region can be taken to be pure gauge. In this case,
the precise location of the Wilson loop has no physical significance. As long as it is in the asymptotic pure-gauge
region, it merely counts the number of instantons minus antiinstantons inside $V$.
A much different situation occurs in QCD-like theories where the gauge invariance is unbroken and a finite mass gap
arises via quantum effects. In these theories
(QCD, discussed in \cite{Horvath03_struct} and $CP^{N-1}$, discussed here),
Monte Carlo calculations appear to support Witten's arguments
that instantons disappear from the quantum theory and are, in some sense, replaced by domain walls between k-vacua.
Topological charge does not appear in locally quantized lumps but rather in extended coherent structures of codimension 1.
In this situation, the boundary condition that $F_{\mu\nu}=0$ asymptotically is not the correct one.
If a Wilson loop around $V$ is included in the path integral, it will introduce a physical domain
wall separating the $\theta=0$ vacuum outside from the nonzero-$\theta$ vacuum inside. Periodicity in $\theta$
arises in a discontinuous way, involving a ``string-breaking'' or charged pair production process, resulting in
the screening or partial screening of the Wilson loop. This is just the mechanism discussed in
Coleman's original description of $\theta$-dependence in the massive Schwinger
model.
If the topological susceptibility is nonzero, then for generic values of $\theta$, the free energy per unit
volume inside $V$, $E(\theta)$ is greater than $E(0)$, the value outside $V$.
The Wilson loop around $V$ thus satisfies an area law,
\begin{equation}
\label{eq:arealaw}
\langle W(C)\rangle \sim \exp\left[-(E(\theta)-E(0))V\right] \,\,.
\end{equation}
This exhibits the linear, confining Coulomb force between test charges of $\pm\theta/2\pi$
at opposite ends of the box. In the two-dimensional case, confinement of U(1) charge and nonvanishing topological susceptibility
both arise from the massless pole
in the Chern-Simons current correlator. In four-dimensional QCD, the analog of (\ref{eq:arealaw}) is not confinement
of quarks, but rather a ``volume law'' for Wilson bags:
\begin{equation}
\label{eq:bag}
\left\langle \exp\left[i(\theta/2\pi)\int_S A_{\mu\nu\rho}dx^{\mu}dx^{\nu}dx^{\rho}\right]\right\rangle \sim \exp\left[-(E(\theta)-E(0))V\right]
\end{equation}
where $S$ is the surface of a closed bag and $V$ is the enclosed 4-volume.
In the two-dimensional theory, as $\theta$ is increased the constant electric field
between the test charges increases. As $\theta$ crosses $\pi$, the field becomes strong enough to produce a pair of
charged scalars out of the vacuum and send them to opposite ends of the box, screening one unit of electric flux.
In the $CP^{N-1}$ model the gauge field is actually an auxiliary field composed of $z^+z^-$ pairs,
so it is more accurate to describe the screening process as resulting from a collective motion of the
charged $z$-particles in the vacuum which leaves a net charge at the two ends of the box.
At $\theta=\pi$, the screened and unscreened vacua are degenerate, with the two vacua containing a background of
$\pm\frac{1}{2}$ a unit of electric flux. As theta goes through $\pi$ there is a sudden transition from the unscreened
to the screened vacuum. [By contrast, a dilute instanton gas leads to a background electric
field $\propto \sin\theta$, which goes smoothly through zero at $\theta=\pi$.] As $\theta$ is further increased
toward $2\pi$, the energy per unit volume $E(\theta)$ decreases. Finally, at $\theta = 2\pi$ we have $E(2\pi)=E(0)$, and
the area term in the Wilson loop vanishes. The external test charge is completely screened by the polarization of
the vacuum. At this point there is again no net background flux and no force between the test charges.
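This charge-screening picture can be summarized schematically: if the vacuum energy density is quadratic in the residual background flux, then
\begin{equation}
E(\theta) \propto \min_{k} \left(\theta - 2\pi k\right)^2 \,\,,
\end{equation}
where $k$ runs over the integers. This form is periodic in $\theta$ but has a cusp at $\theta=\pi$, where the $k=0$ and $k=1$ branches cross, in contrast with the smooth $E(\theta)\propto 1-\cos\theta$ behavior of a dilute instanton gas.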
A similar discontinuous behavior at $\theta=\pi$ is expected in the case of four-dimensional QCD \cite{Luscher78,Witten79}, where
the Wilson loop that is used to circumscribe the region of nonzero $\theta$ in the two-dimensional theory
is replaced by the Wilson bag,
which does the same thing in four-dimensional Yang-Mills theory.
The force between the bag walls vanishes when the bag has integer charge, i.e. when the
step in $\theta$ across the bag wall is $2\pi$ (or an integer multiple of $2\pi$).
This fully screened bag is the gauge theory version of the wrapped
6-brane of IIA string theory. The vanishing of the force between the walls of the bag for $\theta=2\pi$
is the gauge theory manifestation of the fundamental string theory result, first discovered by Polchinski \cite{Polchinski},
relating the quantization of Ramond-Ramond charge on a D-brane and
the vanishing of the force between two D-branes due to closed string exchange.
In order to discuss topological charge structure in this theoretical framework, we
need to determine what the elementary ``quasiparticle'' excitations of the vacuum
are, and what type of topological charge structure is associated with these excitations.
Note that we are discussing here the excitations in the flavor singlet channel accessed by
the gauge field and topological charge operators, after integrating out the $z$ fields. The
spectroscopy of the $CP^{N-1}$ models also includes a multiplet of light nonsinglet mesons (which
we use here to define the overall mass gap or relative lattice spacing for different $\beta$'s.)
These nonsinglet mesons are the lowest lying physical states in the charge neutral sector.
Unlike the singlet channel, the nonsinglet states can be excited by a local $z_i^*(x)z_j(x)$ operator.
The spectral structure of the flavor-singlet channel can be discussed in terms of the $A_{\mu}$ correlator,
or, equivalently, the Chern-Simons correlator (\ref{eq:cscorr}). [Because of the constraint
${\bf z}^\dagger \cdot {\bf z} = 1$, the lowest dimension operator that is available to access the flavor
singlet channel is the current (\ref{eq:current}), which is proportional to $A_{\mu}$.]
Consider the correlator as an analytic
function of $s= -p^2$. Since the topological charge correlator
$p^2G(p^2)$ is gauge invariant and there is a mass gap in the theory,
it can be asserted that the imaginary part of $p^2G(p^2)$ is zero below
some threshold for real particle production. The threshold is the mass of the lightest flavor-singlet
state, which could be either the mass of a flavor-singlet meson, or (if no stable flavor-singlet meson
exists) $s<4M^2$ where $M$ is the mass of the lightest nonsinglet meson. However, because of the zero mass
pole in the Chern-Simons correlator, the imaginary part of $G(p^2)$ includes a zero mass delta function
$\propto \delta(s)$.
This analytic structure in the gauge correlator for $CP^{N-1}$ models has a close analog in superconductivity theory \cite{Tinkham}.
In that case, the complex conductivity for any nonzero frequency $\omega$ below the mass gap has no absorptive (real)
part, due to the absence of resistance. However, because of the purely accelerative mode of the Cooper pairs,
the dispersive part has a $1/\omega$ pole,
and the absorptive part has a $\delta(\omega)$, representing the DC flow of supercurrent.
Because of the presence of the supercurrent, if a charged electron is inserted
into a superconductor, the quasiparticle that forms is
electrically neutral, corresponding to the electron plus the backflow of the Cooper pairs, which
screen the electron charge \cite{Kivelson}. The excess charge from the electron ends up on the surface of the superconductor.
Thus a quasiparticle excitation in a superconductor is an electrically neutral screened electron.
Following this analogy combined with the preceding discussion, we propose that the coherent topological
charge excitations observed in $CP^{N-1}$ and QCD should be identified with the screened Wilson loop
and the screened Wilson bag,
respectively. In the $CP^{N-1}$ case, the screened Wilson line can be interpreted as the world line of a charged $z$-particle whose
Coulomb field has been cancelled out by the backflow of charge in the vacuum. Thus the gauge field
associated with an elementary excitation is a one-dimensional thread of $A_{\mu}$ flux which is constant along its length and
whose transverse cross-section is a delta-function. On the lattice, this corresponds to the coherent excitation
of a single line of links. Recall that the $A_{\mu}$ field is an auxiliary field whose equation of motion sets
it equal to the $z$-particle current, Eq. (\ref{eq:London}).
Thus a Wilson loop excitation can also be interpreted as a filamentary current flow.
\begin{figure}
\begin{center}
\epsfxsize=.60\textwidth
\epsfbox{qtot_wline20.ps}
\end{center}
\caption{Plot of the overlap topological charge distribution for a single Wilson line excitation.}
\label{fig:wline}
\end{figure}
In Figure \ref{fig:wline} we show the topological charge distribution
calculated by the overlap method for a gauge field which consists of a single straight Wilson line excitation. The
links along the Wilson line are taken to be a constant nonzero phase, with all other links on the lattice set to unity.
The topological charge distribution associated with the Wilson line excitation is a dipole layer, with
oppositely charged one-dimensional coherent regions on either side of the Wilson line. This resembles
the local structure observed in the Monte Carlo distributions.
In a similar way, a Wilson bag excitation of the Chern-Simons tensor which is constant on a 3-surface (and zero
elsewhere) in four-dimensional QCD yields
a topological charge distribution consisting of oppositely charged coherent three-dimensional regions on either
side of the bag surface. Again, this provides a reasonable description of the coherent structure seen in
the QCD Monte Carlo distributions.
\section {Lattice $CP^{N-1}$ Models and Monte Carlo Calculations}
\label{sec:lattice}
\subsection{Lattice Action}
For the lattice formulation of the $CP^{N-1}$ models, we introduce $U(1)$ link fields $U(x,x+\hat{\mu})$ in the usual way and take the action to
consist of gauge invariant nearest-neighbor hopping terms,
\begin{equation}
\label{eq:action}
S = -\beta N\sum_{x,\hat{\mu}}{\bf z}(x)^\dagger \cdot {\bf z}(x+\hat{\mu})U(x,x+\hat{\mu})\,+\,c.c. \,\,.
\end{equation}
The naive ultralocal definition of
topological charge density on the lattice is in terms of the plaquette phase:
\begin{equation}
\label{eq:logplaq}
q_P(x) = \frac{1}{2\pi i}\log U_P(x)
\end{equation}
where $U_P(x)$ is the product of phases around a plaquette at site $x$. Here the principal branch of the log is chosen
so that the charge on a plaquette always lies between $-\frac{1}{2}$ and $+\frac{1}{2}$. With toroidal boundary
conditions, this definition sums to an integer-valued global topological charge.
The analogous construction of topological charge in four-dimensional QCD from gauge variables on the lattice has been given by Luscher
\cite{Luscher82}.
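As an illustration, the ultralocal definition (\ref{eq:logplaq}) and the integer quantization of the global charge can be checked in a few lines. The following is a minimal numpy sketch, not the code used for the simulations in this paper; the function name and the random test field are our own illustrative choices.

```python
import numpy as np

def plaquette_charge(theta):
    """Ultralocal topological charge q_P(x) from U(1) link phases.

    theta[mu, x, y] is the phase of the link U(x, x+mu_hat) on an L x L
    torus, so U = exp(i*theta).  The plaquette phase is reduced to the
    principal branch (-pi, pi], giving -1/2 < q_P(x) <= 1/2.
    """
    t0, t1 = theta  # mu = 0 (x-direction), mu = 1 (y-direction)
    # phase around the plaquette based at (x, y):
    # U_0(x) U_1(x+0^) U_0(x+1^)^* U_1(x)^*
    theta_P = (t0 + np.roll(t1, -1, axis=0)
               - np.roll(t0, -1, axis=1) - t1)
    # principal branch of (1/(2*pi*i)) log U_P
    return np.angle(np.exp(1j * theta_P)) / (2 * np.pi)

rng = np.random.default_rng(0)
L = 8
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))
qP = plaquette_charge(theta)
Q = qP.sum()
# with periodic boundary conditions the total charge is an integer
print(round(Q), abs(Q - round(Q)))
```

On the torus the unreduced plaquette angles cancel link by link, so only the $2\pi$ branch shifts survive in the sum, which is why $Q$ is an integer up to rounding error.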
We have found that, just as in four-dimensional QCD, a much better definition of the local topological charge density is given
in terms of an exactly chiral overlap Dirac operator D \cite{Neuberger}. As shown in \cite{Hasenfratz98}, if $D$ satisfies
Ginsparg-Wilson relations, then the lattice topological charge density operator which appears in the axial U(1)
anomaly equation is
\begin{equation}
\label{eq:overlapq}
q(x) = \frac{1}{2} \mathrm{tr}\,\gamma_5 D(x,x)
\end{equation}
where the trace is over spin indices in $CP^{N-1}$ and over spin and color indices in QCD.
Using the construction of the overlap operator D described in
the next section, we have studied topological charge distributions in $CP^1$, $CP^3$, and $CP^9$.
We find that the density $q(x)$ defined in (\ref{eq:overlapq}) reveals the coherent long-range structure
that was obscured by the short range noise inherent in the ultralocal operator $q_P(x)$. A detailed comparison
of distributions obtained using the overlap $q$ with those obtained using the plaquette operator $q_P$
has been carried out and will be presented elsewhere.
\subsection{Monte Carlo Calculations}
The lattice $CP^{N-1}$ action, Eq. (\ref{eq:action}) was used for Monte Carlo
simulation. The updating of the ${\bf z}$ fields was done by a Cabibbo-Marinari
algorithm consisting of a sequence of $SU(2)$ heat bath updates applied to
all possible pairs of $z$-components. The gauge links were updated by
a multi-hit Metropolis algorithm. Calculations were done for $CP^1, CP^3$, and
$CP^9$. For each value of $\beta$, we determine a value for the mass gap by
studying the exponential falloff of the $z_i^*z_j$ meson correlator for $i\neq j$,
\begin{equation}
\int dx_2\langle z_i^*z_j(x)\,z_j^*z_i(0)\rangle \,\sim\, const.\times e^{-\mu x_1} \,\,.
\end{equation}
The evaluation of $\mu$ determines the mass scale. In Table \ref{tab:massgap} we give the mass gap
$\mu$ in lattice units for various values of $N$ and $\beta$.
The Monte Carlo routine was checked extensively by comparing results with
the strong coupling expansion \cite{Seiberg} to order $\beta^8$ for both the meson correlator and also for the topological
charge correlator for the ultralocal $q_P(x)$ operator.
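The extraction of $\mu$ from the exponential falloff can be illustrated schematically. The sketch below is illustrative code only, run on synthetic correlator data; the analysis in the paper used covariant $\chi^2$ fits with bootstrap errors.

```python
import numpy as np

def fit_mass_gap(corr, fit_range):
    """Extract mu from the exponential falloff of a wall-wall correlator
    by a linear least-squares fit to log C(x1) over fit_range."""
    x = np.arange(*fit_range)
    slope, _ = np.polyfit(x, np.log(corr[x[0]:x[-1] + 1]), 1)
    return -slope

# synthetic correlator with mu = 0.18 (cf. CP^3 at beta = 1.0)
mu_true = 0.18
x1 = np.arange(25)
corr = 0.7 * np.exp(-mu_true * x1) * (1 + 0.001 * np.cos(x1))  # tiny "noise"
mu = fit_mass_gap(corr, (5, 20))
print(mu)  # close to mu_true
```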
\begin{table}[h]
\centering
\caption{Mass gap in lattice units}
\begin{tabular}{||c|c||c|c||c|c||} \hline \hline
$\beta$ & $CP^1$ & $\beta$ & $CP^3$ & $\beta$ & $CP^9$ \\ \hline \hline
1.0 & .438(5) & 0.8 & .554(2) & 0.7 & .406(2) \\ \hline
1.1 & .286(5) & 0.9 & .327(3) & 0.8 & .212(2) \\ \hline
1.2 & .179(3) & 1.0 & .180(1) & 0.9 & .0895(6) \\ \hline
1.3 & .111(1) & 1.1 & .0882(7) & 1.0 & .0579(2) \\ \hline
1.4 & .0696(8) & 1.2 & .0531(3) & 1.1 & .0475(4) \\ \hline \hline
\end{tabular}
\label{tab:massgap}
\end{table}
We study lattice volumes up to $50^2$. For each $N$, values of $\beta$ were chosen to
cover a range of correlation lengths from approximately $\xi=\mu^{-1}=3$ to 20.
Correlator fits were carried out by standard methods, using covariant $\chi^2$
minimization. The statistical errors and autocorrelation
times were determined by a bootstrap algorithm.
\subsection{Overlap Dirac Operator}
\label{sec:overlap}
The overlap construction \cite{Neuberger} provides a prescription for constructing an
exactly chiral Dirac operator $D$ satisfying the GW relations (\ref{eq:GW}). The construction
begins with a suitable ultralocal discretization of the Dirac operator as a kernel. Here we will
use the usual Wilson-Dirac operator as the kernel,
\begin{equation}
D_W = \frac{1}{2} \gamma_\mu (\nabla_\mu + \nabla_\mu^*)-
\frac{1}{2} a \nabla^*_\mu \nabla_\mu \,\, ,
\end{equation}
where $\nabla_\mu$ and $\nabla_\mu^*$ are the forward and backward
lattice derivatives, respectively,
\begin{eqnarray*}
\nabla_\mu \psi(x) &=& \frac{1}{a} \left( U_\mu(x) \psi(x+a{\hat \mu})
- \psi(x) \right) \\
\nabla_\mu^* \psi(x) &=& \frac{1}{a} \left( \psi(x)
- U^{\dagger}_\mu(x-a{\hat \mu}) \psi(x-a{\hat \mu}) \right) \,\, .
\end{eqnarray*}
The overlap operator can be written as
\begin{equation}
D = \frac{1}{a} \left( 1 + \gamma_5 \, \epsilon( H_W(m) ) \right) \,\, ,
\end{equation}
where $H_W(m)=\gamma_5 D_W(-m)$ and the sign function is
\begin{equation}
\epsilon( H_W(m) ) = \frac{H_W(m)}{\sqrt{H^{\dagger}_W(m) H_W(m)}} \,\, .
\end{equation}
This operator has a generalized chiral symmetry given by the
Ginsparg--Wilson relation
\begin{equation}
\gamma_5 \, D + D \gamma_5 = {\bar a} D \, \gamma_5 D \,\,,
\end{equation}
as is easily verified.
The Wilson mass parameter can
be chosen to lie in the range $0<m<2$ with the various values of
$m$ giving the same continuum physics. This range is
allowed, at least in the case of free fields and for sufficiently smooth gauge
field configurations. We have carried out our calculations in the range
$0<m<1$ and found the results to be insensitive to the choice of $m$ in
this range.
For the lattice sizes used in this study (up to $50\times 50$), it was possible to construct the
overlap operator exactly, using a LAPACK singular value decomposition routine.
(See \cite{Rebbi} for a discussion.)
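For small lattices the exact construction is straightforward to reproduce. The following self-contained sketch is our own illustrative code (using a dense eigendecomposition in place of the LAPACK SVD routine): it builds the two-dimensional Wilson-Dirac kernel for a $U(1)$ gauge field, forms the overlap operator, and verifies both the Ginsparg-Wilson relation and the integer quantization of $Q=\sum_x q(x)$.

```python
import numpy as np

# 2D Euclidean gamma matrices: gamma_1 = sigma_1, gamma_2 = sigma_2,
# and the 2D analog of gamma_5 is sigma_3
g = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex)]
g5 = np.array([[1, 0], [0, -1]], complex)

def wilson_dirac(theta, m):
    """Wilson-Dirac matrix D_W(-m) for U(1) links U = exp(i*theta) on an
    L x L torus (lattice spacing a = 1, Wilson parameter r = 1)."""
    L = theta.shape[1]
    V = L * L
    D = np.zeros((V, 2, V, 2), complex)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            D[i, :, i, :] += (2.0 - m) * np.eye(2)
            for mu, (dx, dy) in enumerate(((1, 0), (0, 1))):
                j = ((x + dx) % L) * L + (y + dy) % L
                U = np.exp(1j * theta[mu, x, y])
                D[i, :, j, :] += -0.5 * (np.eye(2) - g[mu]) * U
                D[j, :, i, :] += -0.5 * (np.eye(2) + g[mu]) * np.conj(U)
    return D.reshape(2 * V, 2 * V)

L, m = 4, 1.0
rng = np.random.default_rng(1)
theta = 0.3 * rng.standard_normal((2, L, L))   # a fairly smooth gauge field
G5 = np.kron(np.eye(L * L), g5)
H = G5 @ wilson_dirac(theta, m)                # H_W(m) = g5 D_W(-m), hermitian
w, v = np.linalg.eigh(H)
eps = v @ np.diag(np.sign(w)) @ v.conj().T     # matrix sign function eps(H_W)
D = np.eye(2 * L * L) + G5 @ eps               # overlap operator, a = 1

# Ginsparg-Wilson relation g5 D + D g5 = D g5 D holds to machine precision
gw_violation = np.abs(G5 @ D + D @ G5 - D @ G5 @ D).max()

# topological charge density q(x) = (1/2) tr_spin [g5 D](x,x)
M = (G5 @ D).reshape(L * L, 2, L * L, 2)
q = 0.5 * np.einsum('iaia->i', M).real
Q = q.sum()
print(gw_violation, Q)   # Q is an integer up to rounding
```

Since $\mathrm{Tr}\,\gamma_5=0$, the global charge reduces to $Q=\frac{1}{2}\mathrm{Tr}\,\epsilon(H_W)$, half the spectral asymmetry of $H_W$, which makes the integer quantization manifest.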
\section {Topological Charge Structure in $CP^{N-1}$ models}
\label{sec:Topcharge}
\begin{figure}
\vspace*{1.0cm}
\special{psfile=struct_rnd6.ps
angle=270 hscale=40 vscale=40 hoffset=-45 voffset=30}
\special{psfile=struct_rnd15.ps
angle=270 hscale=40 vscale=40 hoffset=175 voffset=30}
\vspace{6.5cm}
\caption[]{Two typical largest structures for random $q(x)$ distributions on a $50\times 50$ lattice.}
\label{fig:randomstructures}
\end{figure}
\begin{figure}
\vspace*{1.0cm}
\special{psfile=struct_1.2_005300.ps
angle=270 hscale=40 vscale=40 hoffset=-45 voffset=30}
\special{psfile=struct_1.2_007400.ps
angle=270 hscale=40 vscale=40 hoffset=175 voffset=30}
\vspace{6.5cm}
\caption[]{Two typical largest structures for $CP^3$ on a $50\times 50$ lattice at $\beta=1.2$.}
\label{fig:CP3structures}
\end{figure}
\begin{figure}
\vspace{-1.0cm}
\begin{center}
\epsfxsize=0.80\textwidth
\epsfbox{qint_cp3_1.2_000100.ps}
\end{center}
\vspace{-0.5cm}
\caption{Plot of the function $\mathrm{sign}(q(x))$ for a $CP^3$ configuration on a $30\times 30$ lattice at $\beta=1.2$.}
\label{fig:labyrinth}
\end{figure}
Using the overlap definition of the topological charge, visual inspection of the TC distributions
for individual configurations reveals well-defined ``stringy'' patterns in the form of locally
one-dimensional long range sign coherent structures in the two-dimensional space. These structures are completely analogous to the
3D sheets found in four-dimensional QCD, and precisely what is expected from the Chern-Simons tensor analogy.
It is worth pointing out that, when we use the log-plaquette
definition of topological charge, we do not observe any such sign coherent structures.
[It is interesting to note, however, that much of the qualitative structure seen in the overlap $q(x)$
distribution can also be discerned from the $q_P(x)$ distribution after a simple smoothing
procedure, defining the charge at a site to be the average of $q_P$ for the four plaquettes
around the site. This comparison will be discussed in detail elsewhere.]
A direct way to get a qualitative view of the coherent topological charge structure in
$CP^{N-1}$ is to plot the largest connected structure in each configuration. Here we define a connected structure
as a set of nearest-neighbor-contiguous lattice points with the same sign of topological charge.
As a reference, we first study the structure plots for randomly generated distributions.
Two typical largest structures from a random TC distribution on a $50\times 50$ lattice
are shown in Fig. \ref{fig:randomstructures}. These
are to be compared with the structures shown in Fig. \ref{fig:CP3structures}
which are obtained from the overlap TC distribution
for typical $CP^3$ Monte Carlo configurations at $\beta=1.2$ (correlation length $\approx 19$). Overall, the $CP^3$ structures
are much larger in extent (typically as large as the lattice itself) compared with the random structures.
Even more striking is the ``stringiness'' of the $CP^3$ structures. They are characterized by long slender
regions of coherence which are locally one-dimensional. This contrasts with the random structures which
are not only smaller in extent but much more two-dimensional.
Another qualitative feature that can be illustrated graphically is the layered nature of the topological
charge distributions, with alternating sign coherent regions interleaved in a somewhat labyrinthine arrangement.
Figure \ref{fig:labyrinth} shows a plot of $f(x)=\mathrm{sign}(q(x))$ on a $CP^3$ configuration on a $30\times 30$
lattice. As is the case in QCD, the presence of
thin alternating-sign coherent regions of codimension one is in some sense the maximum amount of long range order
allowable by the required (and observed) negativity of the correlator for nonzero separation.
To construct a quantitative measure of coherence, we determine the
inverse participation ratio (IPR) defined as the inverse fraction of the
lattice volume occupied by coherent structures, i.e.
$IPR(n)=V/V(n)$, where $V$ is the total volume and $V(n)$ is the
volume occupied by the $n$ largest coherent structures. (Thus, small localized structures
give a large IPR, while a structure occupying most of the lattice would give an IPR
close to 1.)
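The cluster decomposition and the IPR are simple to implement. A minimal sketch (illustrative code with our own naming, using breadth-first search on the periodic lattice) is:

```python
import numpy as np
from collections import deque

def coherent_structures(q):
    """Nearest-neighbor-connected clusters of equal sign(q) on a periodic
    lattice, returned as lists of sites, largest first."""
    L1, L2 = q.shape
    s = np.sign(q)
    seen = np.zeros(q.shape, bool)
    clusters = []
    for x0 in range(L1):
        for y0 in range(L2):
            if seen[x0, y0]:
                continue
            comp, queue = [], deque([(x0, y0)])
            seen[x0, y0] = True
            while queue:
                x, y = queue.popleft()
                comp.append((x, y))
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = (x + dx) % L1, (y + dy) % L2
                    if not seen[u, v] and s[u, v] == s[x, y]:
                        seen[u, v] = True
                        queue.append((u, v))
            clusters.append(comp)
    return sorted(clusters, key=len, reverse=True)

def ipr(q, n=1):
    """IPR(n) = V / V(n): inverse fraction of the lattice occupied by
    the n largest sign-coherent structures."""
    return q.size / sum(len(c) for c in coherent_structures(q)[:n])

# toy field: a single +1 stripe in a -1 background on an 8 x 8 lattice
q = -np.ones((8, 8))
q[:, 3:5] = 1.0
print(ipr(q, 1))   # largest structure is the 48-site background: 64/48
```

For this toy configuration the two clusters have 48 and 16 sites, so $IPR(1)=64/48\approx 1.33$ and $IPR(2)=1$.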
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{structure_volume.eps}
\caption{Inverse participation ratio (IPR) for the overlap and plaquette
distributions of the topological charge. For comparison, we show the
result for a random distribution of numbers.}
\label{fig:VolComp}
\end{figure}
Fig. \ref{fig:VolComp} shows the results for both the overlap $q(x)$ distribution and the log-plaquette operator $q_P(x)$.
Also shown for comparison is the same
plot for a set of random configurations.
These results are from a large ensemble of $CP^3$
configurations on a $40\times 40$ lattice with $\beta=1.0$
(correlation length $\approx 5$).
We see that the overlap
definition of $q(x)$ exhibits a clear indication of coherence, e.g.
the typical largest structures are much larger than those in a random
configuration. Somewhat surprisingly, the plaquette phase definition actually
exhibits {\it less} structure than the purely random distributions. This is an effect
of the nearest-neighbor anticorrelation for the plaquette phase.
\subsection {Topological Charge Correlator}
In the continuum, the Euclidean topological charge correlator must be negative outside of a
positive contact term at $x=0$. On the lattice, the overlap $q(x)$ is not ultralocal, but it can
be argued that it becomes local in the continuum limit, at least for sufficiently smooth gauge
fields \cite{Luscher78}. Spectral arguments only require the correlator to be negative
when the two operators are non-overlapping. The correlator $\langle q(x)q(0)\rangle$ is shown
in Figure (\ref{fig:tcc_lat}) for $CP^3$ for several values of $\beta$.
We see that the correlator consists of a
positive core at $x\leq \sqrt{2}$, and a negative short-range tail starting at $x=2$.
Note that the correlator is plotted
in lattice units not in physical units. Thus, for example, the location of the minimum of the correlator
is at $x =$ 2 lattice spacings, independent of $\beta$.
Also, in physical units, the y axis of the plot
would be rescaled by a factor of $1/\mu^4$, so the minimum at $x=2$ is in fact getting much deeper at large
$\beta$, indicating the development of a short-distance power law singularity.
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{chi_plusminus.ps}
\caption{Scaling behavior of the positive and negative contributions to the topological susceptibility
for $CP^3$}
\label{fig:chi_pn}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{correlator_cp3.eps}
\caption{Topological charge correlator for $CP^3$ (lattice units).}
\label{fig:tcc_lat}
\end{figure}
In Figure (\ref{fig:chi_pn}) we show the separate scaling behavior of the positive and negative contributions to the topological
susceptibility, i.e. the integral over $\langle q(x)q(0)\rangle$ for $|x|<2$ and for $|x|\geq 2$.
This shows that the contribution of the contact term and the contribution of the negative tail
are separately divergent in the continuum limit, but that the divergence cancels and the topological
susceptibility scales nicely. The spatial extent of the positive core region
is clearly related to the thickness of the coherent regions, while the negative short-range piece
arises from the layered, alternating-sign structure of the configurations.
Figure (\ref{fig:chi_t_scaling}) shows the full topological susceptibility as a function of the mass gap $\mu$
for $CP^1, CP^3$, and $CP^9$. Here $\chi_t$ is plotted in physical units (dividing the lattice value
by $\mu^2$), so that a constant value indicates proper scaling behavior.
We observe that $\chi_t$ appears to be properly scaling with the mass gap
for both $CP^3$ and $CP^9$. On the other hand, for $CP^1$ the topological susceptibility is not
even approximately scaling. The anomalous scaling behavior of $\chi_t$ for $CP^1$ is believed to be a consequence
of the divergent contribution of small instantons with radius of order $a$ \cite{Luscher_CP1}. Because of this odd scaling
behavior for $CP^1$, we have focused most of our structure studies on $CP^3$ and $CP^9$.
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{chi_t_scaling.ps}
\caption{Scaling behavior of $\chi_t/\mu^2$ as a function of inverse correlation length
(= mass gap in lattice units) for $CP^1$, $CP^3$ and $CP^9$}
\label{fig:chi_t_scaling}
\end{figure}
\subsection{Thickness of structures}
To support the assertion that the size of the positive contact term in the correlator is determined by the
thickness of the coherent regions, we can compare this size with a direct measure of the thickness.
We calculate the average thickness of a given coherent structure as follows: (1) Choose a particular point
on the structure; (2) Walking along a straight path in each of the four directions from that point, measure
the length $l_{\min}$ of the shortest path out of the coherent region, i.e. the path to the nearest opposite-sign
point; (3) Average over all points on the structure. This shortest-path length should average
to half the thickness of the structure, so we define the thickness to be $t=2\langle l_{\min}\rangle$.
Now let us define $x=x_c$ to be the crossover point where the correlator turns from positive to negative.
In practice, we have estimated this value by linearly interpolating between the positive value of the
correlator at $x=\sqrt{2}$ and the negative value at $x=2$.
If the positive core of the correlator arises from the coherent structures, we might expect that the
thickness of the structures would be roughly $2x_c$.
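The thickness measurement can be sketched as follows (our own illustrative implementation; walks that never leave the structure, e.g. along the length of a stripe, simply do not contribute to the minimum):

```python
import numpy as np

def thickness(q, structure):
    """t = 2<l_min>: from each site of a sign-coherent structure, walk in
    a straight line in each of the four lattice directions and record the
    number of steps to the first opposite-sign site; take the per-site
    minimum over directions and average over the structure.  We assume
    every site has at least one finite exit path."""
    L1, L2 = q.shape
    s = np.sign(q)
    lmins = []
    for (x, y) in structure:
        best = np.inf
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for step in range(1, max(L1, L2)):
                u, v = (x + step * dx) % L1, (y + step * dy) % L2
                if s[u, v] != s[x, y]:
                    best = min(best, step)
                    break
        lmins.append(best)
    return 2.0 * float(np.mean(lmins))

# width-3 stripe: edge columns have l_min = 1, the middle one l_min = 2,
# so the measured thickness is t = 2 * (1 + 2 + 1)/3 = 8/3
q = -np.ones((9, 9))
q[:, 3:6] = 1.0
stripe = [(x, y) for x in range(9) for y in range(3, 6)]
t = thickness(q, stripe)
print(t)
```

For the width-3 test stripe this gives $t=8/3$, illustrating that the lattice estimator tracks the geometric width up to a discretization offset.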
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{thickness.eps}
\caption{Physical thickness of the structures as a function of lattice
spacing.}
\label{fig:thickness}
\end{figure}
As shown in Figure (\ref{fig:thickness}) the direct estimate of the thickness
$t$, and the value obtained from the core size of the correlator are in approximate agreement. Moreover,
both of these estimates give a thickness which is approximately constant in lattice units and thus scales
to zero linearly in physical units. This agreement leaves little doubt that the positive contact term
in the TC correlator arises from the presence of extended, one-dimensionally coherent topological charge structures
whose thickness scales to zero in the continuum limit.
\subsection{Hausdorff dimension of structures}
To construct another quantitative measure of the effective dimensionality of the largest sign-coherent
structures, we computed their Hausdorff dimension. Starting with each
site on a structure, we measured the number of other sites $N(r)$
on that structure within a radius $r$. By fitting $N(r)\propto r^d$,
we extract the Hausdorff dimension, $d$.
Computing the topological charge using the overlap operator, and measuring
the Hausdorff dimension of the largest structure in each configuration
in the ensemble, we obtain
$d=1.26(6)$, confirming the visual impression that these structures are
approximately one-dimensional. For comparison, we studied spin domains
in the two-dimensional Ising model just above $T_c$, adjusted to give structures of
the same volume as the $CP^{N-1}$ configurations. The Hausdorff dimension
of the Ising spin domains is found to be $d=1.86(5)$.
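The $N(r)\propto r^d$ fit can be sketched as follows. This is illustrative code; the toy point sets and the fit window are our own choices, and small-$r$ discreteness and edge effects limit the accuracy on such small samples.

```python
import numpy as np

def hausdorff_dimension(points, r_max):
    """Fit N(r) ~ r^d, where N(r) is the number of sites of the structure
    within Euclidean distance r of a given site, averaged over sites."""
    pts = np.asarray(points, float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    radii = np.arange(2, r_max + 1)   # skip r = 1 (lattice discreteness)
    N = [(d2 <= r * r).sum(axis=1).mean() for r in radii]
    slope, _ = np.polyfit(np.log(radii), np.log(N), 1)
    return slope

line = [(x, 0) for x in range(100)]                     # 1D chain of sites
block = [(x, y) for x in range(25) for y in range(25)]  # filled 2D block
d1 = hausdorff_dimension(line, 10)
d2dim = hausdorff_dimension(block, 6)
print(d1, d2dim)   # roughly 1 and 2 on these small samples
```

Even on these tiny samples the estimator cleanly separates a one-dimensional chain from a two-dimensional domain.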
\subsection{Inherently Global Nature of the Structures}
\begin{figure}
\centering
\includegraphics[width=4in,angle=270]{inherent_global.eps}
\caption{Variation of $C(f)$ and $L(f)$ as a function of $f$}
\label{fig:length}
\end{figure}
One result of the QCD studies \cite{Horvath05_global} was that the 3D coherent sheets of
topological charge are {\it inherently global} in structure, in the sense that the
topological charge is distributed more or less uniformly throughout the structure
and not in localized lumps.
Close inspection of the $CP^{N-1}$ topological charge distributions reveals that the structures in this case are
also inherently global. The 1D structures are, in fact, composed of continuous chains of
mountains or valleys of almost constant height. In other words, the most
intense points join together to form the structures.
To prove this point quantitatively, we study the variation of the length of the largest
structure and the topological susceptibility as a
function of the fraction $f$ of points included, starting with the most intense points, as ranked by
$|q(x)|$. This type of analysis was applied to the QCD structures in Ref. \cite{Horvath05_global,Horvath05_formalism}.
The cumulative function $C(f)$ of the topological susceptibility \cite{Horvath05_global}
represents the fraction of the total topological susceptibility obtained
when only a fraction $f$ of the most intense points are included in the
calculation. It is defined as:
\begin{equation}
C(f) = \frac {\chi (f)} {\chi (1)}; \quad \chi (f) \equiv \frac {\left< Q^2 (f)\right> -
\left< Q(f) \right>^2} {V}; \quad
Q(f) = \sum_{x \in S(f)} q(x)
\end{equation}
where $V \equiv a^2 N$, $N$ being the total number of points on the lattice,
and $S(f)$ is the set of points above the threshold introduced via $f$.
The length of a structure is defined as the maximal distance between two
points on the structure, i.e. $l(\Gamma) \equiv \max \{ \left| x-y \right| : x,y
\in \Gamma \}$, where $\Gamma$ is the set of points lying on the same structure.
The ratio of the length of the largest structure at any fraction $f$ to
that at $f=1$ is $L(f)$, i.e. $L(f) \equiv \frac {l(f)} {l(1)}$.
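The cumulative function $C(f)$ can be sketched as follows. This is illustrative code using synthetic Gaussian "charge" fields in place of Monte Carlo configurations; the volume factor in $\chi$ cancels in the ratio $C(f)=\chi(f)/\chi(1)$ and is omitted.

```python
import numpy as np

def cumulative_C(ensemble, fractions):
    """C(f) = chi(f)/chi(1): fraction of the topological susceptibility
    recovered when only the fraction f of points with the largest |q(x)|
    is kept.  The list `fractions` must end with f = 1.0, which supplies
    the normalization."""
    chi = []
    for f in fractions:
        Qs = []
        for q in ensemble:
            flat = np.sort(np.abs(q).ravel())[::-1]   # rank points by |q|
            k = int(round(f * flat.size))
            Qs.append(q[np.abs(q) >= flat[k - 1]].sum() if k > 0 else 0.0)
        chi.append(np.asarray(Qs).var())
    chi = np.asarray(chi)
    return chi / chi[-1]

rng = np.random.default_rng(2)
ensemble = [rng.standard_normal((16, 16)) for _ in range(200)]
C = cumulative_C(ensemble, [0.1, 0.25, 0.5, 1.0])
print(C)   # rises toward C(1) = 1 as more points are included
```

For Monte Carlo data one would track $L(f)$, the normalized length of the largest structure, over the same intensity thresholds.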
A plot of $C(f)$ and $L(f)$ (see Fig. \ref{fig:length}) as a function of $f$
shows that $L(f)$ increases
much more rapidly than $C(f)$, and reaches $1$ at $f=0.5$,
when only half of the total points are considered on the basis of the
intensity criterion. This proves that the most intense points are connected together
and the structures are in fact inherently global, in the sense discussed in \cite{Horvath05_global}.
\section{Conclusions and Discussion}
In this paper we have presented Monte Carlo results for topological charge structure in
two-dimensional $CP^{N-1}$ sigma models. These models exhibit long-range
one-dimensionally coherent topological charge structure which is precisely analogous to
the three-dimensional coherent sheets observed in four-dimensional pure glue QCD in Reference \cite{Horvath03_struct}.
The analogy between the two-dimensional U(1) gauge potential $A_{\mu}$ in $CP^{N-1}$ and the {\it abelian} 3-index
Chern-Simons tensor $A_{\mu\nu\sigma}$ in QCD provides a natural framework for interpreting the
long range structure in both theories. In this framework, the elementary topological charge excitations
of QCD are the ``Wilson bags'' first suggested by Luscher \cite{Luscher78}. A closed
Wilson bag in four dimensions is analogous in $CP^{N-1}$ to the creation and annihilation of a $z^+z^-$ pair, forming a
closed Wilson loop.
The alternating-sign layered arrangement of the coherent regions in the Monte Carlo configurations
(cf.\ Fig. (\ref{fig:labyrinth})) is a central feature of the topological charge structure in both
two-dimensional and four-dimensional theories. At a calculational level, it is clear that this layered structure is enforced by
the requirement that, in the continuum, the two-point TC correlator is negative for nonzero separation.
An attractive feature of the Wilson bag (Wilson line) as the fundamental topological excitation of QCD ($CP^{N-1}$)
is that the associated topological charge distribution is a dipole layer, which leads naturally to
the alternating-sign layering that is observed. The physical vacuum is presumably a condensate of
these surfaces. Both the thickness of the surfaces and the average spacing between adjacent surfaces
are going to zero in the continuum limit, leading to the distinctive features of the TC correlator:
(1) A positive, divergent contact term, (2) A negative, divergent short distance term, and (3) A cancellation
between the (separately divergent) positive and negative contributions to the integrated correlator, giving
a finite topological susceptibility which scales properly, $\propto \mu^2$. For the $CP^{N-1}$ case,
we can also associate this structure with the
fact that the gauge field is an auxiliary field representing a coherent oscillation of charged $z$-particles.
The dynamically generated kinetic term for the gauge field, which arises from closed $z$-loops, is in fact
responsible for the $1/q^2$ pole in the CS correlator and hence is the origin of finite topological susceptibility.
Thus, the picture of the gauge field vacuum as consisting of a dense gas of Wilson line excitations in two-dimensional
Euclidean space is apparently compatible with the more familiar view of the $CP^{N-1}$ ground state
as a plasma of flavor-singlet $z^+z^-$ pairs which supports the oscillations of the
gauge field, generates the $1/q^2$ pole via loop effects, and produces finite topological susceptibility.
This latter view is made explicit in the large N solution. Although the model of topological charge excitations
based on screened Wilson lines seems to be generally compatible with the large N analysis, the nature of
charge screening in the model differs significantly from that which appears in the large N solution.
Large N leads to a ``quark model'' view of both singlet and nonsinglet mesons, which are loosely bound but
confined $z^+z^-$ states held together by the linear Coulomb force associated with the dynamically generated
$F_{\mu\nu}^2$ term. There would thus be a constant density of topological charge (electric field) between
the two constituents of the bound state. One might then expect Euclidean topological charge distributions to
be dominated by large coherent two-dimensional bubbles of topological charge, whose thickness was determined
by the confinement scale. Not only is this inconsistent with what is seen in the Monte Carlo, but it also
would produce a positive TC correlator for distances less than the confinement length, violating the spectral requirement.
It has been argued \cite{Rabinovici,Samuel} that the large N solution is actually
misleading in this regard because the large N saddle-point approximation entails a subtle violation of
Elitzur's theorem. This argument suggests that the nature
of the screening of the $U(1)$ charge is more accurately represented by the lattice strong coupling expansion,
where it is easily seen that, because of the absence of a bare gauge kinetic term, screening takes place
{\it ultralocally}, in the sense that the only nonvanishing terms are those in which the net current flowing on
every link is zero. This phenomenon, which has been referred to as ``superconfinement,'' strongly suggests that,
even in the continuum limit, charge screening should take place locally. The fact that we observe essentially
one-dimensional coherent regions of topological charge, which have a typical thickness proportional to the lattice
spacing, appears to support the superconfinement view of charge screening in the $CP^{N-1}$ models. Further Monte Carlo
studies of charge screening in the $CP^{N-1}$ models might shed additional light on this issue.
The view of QCD dynamics provided by AdS/CFT holography constitutes a powerful new framework from which to explore
the structure of the QCD vacuum \cite{Gross}. The issue of topological charge structure and $\theta$-dependence
in QCD lies at the heart of the AdS/CFT correspondence. As discussed by Witten \cite{Witten98}, the string theory
dual of QCD topological charge is Ramond-Ramond charge in IIA string theory. This is the fundamental solitonic,
or ``magnetic'' charge of the theory which is not carried by ordinary string states, but is carried by D-branes.
Comparing Witten's discussion of $\theta$-dependence from AdS/CFT holography with the earlier, purely four-dimensional
discussion of Luscher \cite{Luscher78}, it is clear that the ``Wilson bag'' 3-surface
is holographically dual to a wrapped 6-brane in Witten's description. Both of them have the defining
property that the local value of $\theta$ jumps by $2\pi$ across the surface.
The possibility of directly confronting detailed aspects of the AdS/CFT correspondence with Monte Carlo data is
particularly exciting. Ongoing Monte Carlo experiments on the coherent structures in both $QCD$ and $CP^{N-1}$ should
provide further, more detailed, tests of the Wilson bag interpretation.
We are grateful to P. Arnold, P. Fendley, and Y. Lian for discussions on these and related topics. This work was supported
in part by the Department of Energy under grant DE-FG02-97ER41027.
\begin {thebibliography}{}
\bibitem{Bardeen00} W. A. Bardeen, A. Duncan, E. Eichten and H. Thacker, Phys. Rev. D 62, 114505 (2000).
\bibitem{Bali01} G. Bali, Phys. Rept. 343, 1 (2001).
\bibitem{Seiler} E. Seiler and I. O. Stamatescu, MPI-PAE/PTh 10/87.
\bibitem{Vicari} E. Vicari, Nucl. Phys. B 554, 301 (1999).
\bibitem{Hasenfratz98} P. Hasenfratz, V. Laliena and F. Niedermayer, Phys. Lett. B 427, 125 (1998).
\bibitem{Luscher82} M. Luscher, Commun. Math. Phys. 85, 39 (1982).
\bibitem{Luscher78} M. Luscher, Phys. Lett. B 78, 465 (1978).
\bibitem{Horvath03_struct} I. Horv\'ath et al., Phys. Rev. D 68, 114505 (2003).
\bibitem{Horvath05_global} I. Horv\'ath et al., Phys. Lett. B 612, 49 (2005).
\bibitem{Horvath05_corr} I. Horv\'ath et al., Phys. Lett. B 617, 21 (2005).
\bibitem{Horvath05_reality} A. Alexandru, I. Horv\'ath and J. Zhang, Phys. Rev. D 72, 034506 (2005).
\bibitem{Witten79} E. Witten, Nucl. Phys. B 149, 285 (1979).
\bibitem{Coleman76} S. Coleman, Annals Phys. 101, 239 (1976).
\bibitem{Witten98} E. Witten, Phys. Rev. Lett. 81, 2862 (1998).
\bibitem{Laughlin} R. B. Laughlin, Phys. Rev. B 23, 5632 (1981); Phys. Rev. Lett. 50, 1395 (1983).
\bibitem{Polchinski} J. Polchinski, Phys. Rev. Lett. 75, 4724 (1995).
\bibitem{Tinkham} M. Tinkham, \underline{Superconductivity}, Documents on Modern Physics, Gordon and Breach, NY, 1965.
\bibitem{Kivelson} S. A. Kivelson and D. S. Rokhsar, Phys. Rev. B 41, 11693 (1990).
\bibitem{Neuberger} H. Neuberger, Phys. Rev. Lett. 81, 4060 (1998).
\bibitem{Rebbi} L. Giusti, C. Hoelbling and C. Rebbi, Phys. Rev. D 64, 054501 (2001).
\bibitem{Luscher_CP1} M. Luscher, Nucl. Phys. B 200, 61 (1982).
\bibitem{Horvath05_formalism} I. Horv\'ath, Nucl. Phys. B 710, 464 (2005).
\bibitem{Rabinovici} E. Rabinovici and S. Samuel, Phys. Lett. B 101, 323 (1981).
\bibitem{Samuel} S. Samuel, Phys. Rev. D 28, 2628 (1983).
\bibitem{Seiberg} N. Seiberg, Phys. Rev. Lett. 53, 637 (1984).
\bibitem{Gross} D. Gross and H. Ooguri, Phys. Rev. D 58, 106002 (1998).
\end {thebibliography}
\end {document}
\section{Overview and Motivation}
For almost a decade, the AdS/CFT correspondence \cite{Maldacena:1997re}-\cite{Aharony:1999ti} has provided an extremely elegant tool for exploring various physical properties of strongly coupled (gauge theory) plasmas at sufficiently high temperatures. The hydrodynamic description of such strongly coupled gauge theories has been studied quite successfully by considering asymptotically AdS black holes in the dual gravitational description \cite{Policastro:2001yc}-\cite{Kovtun:2008kx}. The underlying motivation for such analyses is that the Quark Gluon Plasma (QGP) produced at RHIC, Brookhaven, is strongly coupled, so the usual techniques of perturbative Quantum Field Theory (QFT) do not apply.
Apart from being strongly coupled, the other characteristic feature of the QGP produced at RHIC is the anisotropic expansion of the fireball during the very early stages of the collision \cite{Ryblewski:2010bs}-\cite{Martinez:2010sd}, which has therefore attracted a great deal of attention in the context of holography \cite{Mateos:2011ix}-\cite{Cheng:2014qia}. In \cite{Mateos:2011ix}-\cite{Mateos:2011tv}, the authors proposed a systematic anisotropic construction in the context of Einstein-axion-dilaton gravity, where they considered a particular anisotropic ($ \theta $ deformed) version of $ \mathcal{N}=4 $ SYM plasma, namely $ \delta S_{YM}\sim \int \theta(z) Tr F\wedge F $, where the $ \theta $ parameter (which is dual to the axion in the bulk) depends linearly on one of the spatial directions of the brane. The corresponding hydrodynamic analysis of their model was performed in \cite{Rebhan:2011vd}. The key outcomes of that analysis can be summarized as follows: (1) the DC conductivity along the isotropic direction of the brane differs from its value along the anisotropic direction, and, most importantly, (2) the shear viscosity to entropy ($ \eta/s $) ratio corresponding to the longitudinal fluctuations differs significantly from its value computed from the transverse fluctuations. The most significant outcome of their analysis is that one obtains a natural violation of the conjectured lower bound on the $ \eta/s $ ratio solely from anisotropy, even in the context of Einstein gravity \cite{Rebhan:2011vd}.
Even before these analyses were performed, the authors of \cite{Landsteiner:2007bd} had studied the hydrodynamics of a strongly coupled plasma in a slightly different context of anisotropy, driven by the presence of non commutativity along different spatial directions of the $ Dp $ brane in the presence of a background NS B field. Holographically, such theories are supposed to describe non commutative $ \mathcal{N}=4 $ SYM plasma at strong coupling \cite{Seiberg:1999vs}-\cite{Hashimoto:1999ut}. In their analysis \cite{Landsteiner:2007bd}, the authors found that, despite the spatial anisotropy (caused by the distinction between the commutative and the non commutative spatial directions), the shear viscosity to entropy ($ \eta/s $) ratio turns out to be universal for the two different shear channels. The reason that the universality of the bound is still maintained in the non commutative scenario can be understood in terms of the holographic stress tensor, which surprisingly turns out to be the same as that of the commutative theory \cite{Landsteiner:2007bd}.
In summary, from the comparative analysis in the previous two paragraphs one notes that the $ \theta $ deformed $ \mathcal{N}=4 $ SYM differs significantly from the non commutative $ \mathcal{N}=4 $ SYM as far as the hydrodynamic description of the shear channels of the two theories is concerned. However, the comparison remains incomplete, since the analysis of the diffusive modes, in particular the computation of the $ R $ charge diffusion for the non commutative $ \mathcal{N}=4 $ SYM theory, is still lacking in the literature. The purpose of the present article is therefore to fill this gap and make a systematic comparison between the two anisotropic theories at strong coupling. To do so, we turn on $ U(1) $ fluctuations in the bulk and compute the corresponding $ R $ charge diffusion rates along both the commutative and the non commutative directions of the brane. Unlike the case of the shear viscosity \cite{Landsteiner:2007bd}, we observe a significant deviation in the charge transport along the non commutative direction of the brane. On the other hand, the charge diffusion constant along the commutative coordinates of the brane does not receive any non commutative corrections and thereby remains unchanged.
The organization of the paper is as follows: In Section 2, we discuss the geometric construction of the gravitational dual of the non commutative $ \mathcal{N}=4 $ SYM plasma. In Section 3, we explicitly compute the holographic charge diffusion rates along both the commutative and the non commutative directions of the brane and find that, unlike the case of the shear modes, their ratio differs from unity. Finally, we conclude in Section 4.
\section{The dual set up}
We start our analysis with a formal introduction to the geometric construction of the bulk space time that is holographically dual to non commutative $ \mathcal{N}=4 $ SYM theory at strong coupling. It is known from the earlier literature that non commutative gauge theories at strong coupling can be obtained consistently from string theory by taking the so called decoupling limit of a system of $ Dp $ branes in the presence of a background NS B field, which gives rise to a certain scale of non commutativity in the large $ N $ limit \cite{Seiberg:1999vs}-\cite{Hashimoto:1999ut}. To start with, we consider the non commutative $ \mathcal{N}=4 $ SYM theory at finite temperature, whose dual counterpart in the string frame reads \cite{Landsteiner:2007bd},
\begin{eqnarray}
ds_{10}^{2}&=&\mathcal{H}^{-1/2}(-f dt^{2}+dx^{2}+h(dy^{2}+dz^{2}))+\mathcal{H}^{1/2}(f^{-1}dr^{2}+r^{2}d\Omega^{2}_{5})\nonumber\\
f&=& 1- \frac{r_H^{4}}{r^{4}},~~h=\frac{1}{1+\Theta^{2}\mathcal{H}^{-1}},~~\mathcal{H}=\frac{L^{4}}{r^{4}}
\end{eqnarray}
where $ \Theta $ is the so called non commutativity parameter and $ r_H $ is the usual position of the horizon. Following the AdS/CFT prescription, one writes $ L^{4}=4 \pi g^{2}_{YM}N \alpha'^{2} $, which in the decoupling ($ \alpha' \rightarrow 0 $) limit corresponds to a large value of $ N $, where $ N $ is the number of $ D3 $ branes. Finally, setting $ u = r_H^{2}/r^{2} $, the effective five dimensional metric in the Einstein frame can be formally expressed as \cite{Landsteiner:2007bd},
\begin{eqnarray}
ds^{2}&=& h^{-1/4}\mathcal{H}^{-1/2}(-f dt^{2}+dx^{2}+h(dy^{2}+dz^{2}))+\frac{L^{2}h^{-1/4}}{4u^{2}f}du^{2}\nonumber\\
f(u)&=& 1-u^{2},~~h(u)=\frac{u^{2}}{u^{2}+a^{2}},~~\mathcal{H}(u)= \frac{u^{2}}{u_T^{2}},~~u_T = \frac{r_H^{2}}{L^{2}},~~a=\Theta ~ u_T .\label{E1}
\end{eqnarray}
Eq (\ref{E1}) is in fact the starting point of our analysis. In the coordinate system of (\ref{E1}) the horizon is located at $ u=1 $ and the boundary at $ u=0 $. Note that $ (t,x) $ are the usual commutative directions, whereas the other two spatial coordinates $ (y,z) $ exhibit the non commutative nature \cite{Landsteiner:2007bd}.
From (\ref{E1}), it is quite evident that, due to the presence of non commutativity along two of the spatial directions of the brane, the full $ SO(3) $ symmetry of the boundary theory is reduced to $ SO(2) $, leaving rotational invariance only over the $ (y-z) $ plane of the brane. Finally, from (\ref{E1}) it is straightforward to read off the corresponding Hawking temperature, which for the present case turns out to be,
\begin{eqnarray}
T = \frac{1}{\pi u_T L}.
\end{eqnarray}
\section{Charge diffusion}
Based on the original prescription \cite{Policastro:2002se}-\cite{Kovtun:2008kx} for evaluating the retarded Green's function of $ U(1) $ currents ($ J_{\mu} $), the purpose of the present section is first to carry out a systematic analytic investigation of the DC conductivity ($ \sigma_{DC} $) along both the commutative and the non commutative directions of the brane, and then to compute the corresponding $ R $ charge diffusion constants ($ \mathfrak{D} $) using the so called Einstein relation, $ \mathfrak{D}=\sigma_{DC}/ \chi $, where $ \chi $ is the charge susceptibility and $ \sigma_{DC} $ is the DC electrical conductivity, which can be formally expressed as \cite{Policastro:2002se}-\cite{Kovtun:2008kx},
\begin{eqnarray}
\sigma_{DC} &=& - \lim_{\mathfrak{w} \rightarrow 0}\frac{1}{\mathfrak{w}}Im\ \mathcal{G}^{R}_{ii}(\mathfrak{w} , \mathfrak{q}=0)\nonumber\\
\mathcal{G}^{R}_{ii}(\mathfrak{w} , \mathfrak{q}=0) &=& -i \int dt\ d\textbf{x}\ e^{i\mathfrak{w} t}\ \Delta (t)\ \langle[J_i (\textbf{x},t), J_i (0)]\rangle. \label{E13}
\end{eqnarray}
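As a simple numerical illustration of the prescription (\ref{E13}) (and not of the holographic computation itself), one may check how the zero frequency limit extracts $ \sigma_{DC} $ from a mock retarded correlator of relaxation (Drude) type; the functional form and the relaxation time $ \tau $ below are purely illustrative assumptions:

```python
def G_R(w, sigma_dc=2.0, tau=1.0):
    # mock retarded current-current correlator of Drude form (illustrative
    # assumption, not the holographic result): G^R(w) = -i sigma w / (1 - i w tau)
    return -1j * sigma_dc * w / (1.0 - 1j * w * tau)

def sigma_from_kubo(w):
    # sigma_DC = -lim_{w -> 0} Im G^R(w) / w
    return -G_R(w).imag / w

# the estimate approaches sigma_dc = 2.0 as w -> 0
for w in (1e-1, 1e-2, 1e-3, 1e-4):
    print(w, sigma_from_kubo(w))
```

Here $ \sigma_{DC} $ is recovered as the $ \mathfrak{w}\rightarrow 0 $ limit of $ -\mathrm{Im}\,\mathcal{G}^{R}/\mathfrak{w} $, exactly as in (\ref{E13}).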
In order to compute the quantity in (\ref{E13}), and thereby the charge diffusion ($ \mathfrak{D} $), we study the dynamics of vector $ U(1) $ perturbations over the fixed background of the anisotropic black brane (\ref{E1}) \cite{Policastro:2002se}-\cite{Kovtun:2008kx}. The dynamics of these vector perturbations is in general governed by the Maxwell action, namely,
\begin{eqnarray}
S_M = -\frac{1}{4g^{2}_{M}}\int d^{5}x \sqrt{-g}\mathcal{F}_{ab}\mathcal{F}^{ab}\label{E14}
\end{eqnarray}
where, $ g^{2}_{M} $ stands for the Maxwell coupling of the $ U(1) $ theory.
The basic physics behind our analysis is that the infrared behavior of these $ U(1) $ fluctuations in the bulk is governed entirely by \textit{hydrodynamics}: a dispersion relation of the type $ \mathfrak{w}=-i \mathfrak{D}\mathfrak{q}^{2} $ appears naturally as a pole of the Laplace transformed charge density in the complex $ \mathfrak{w} $ plane, and can be interpreted as a consequence of the diffusion of conserved charges. In our analysis, working in the so called hydrodynamic limit $ \mathfrak{q} \ll T $, we study fluctuations of the type $\mathcal{A}_{m}\sim e^{i \mathfrak{q}\cdot x}\mathcal{A}_{m}(t,u) $ over the background (\ref{E1}). Through the equations of motion and the relevant boundary conditions, these fluctuations finally yield a dispersion relation of the above form in the limit $ \mathfrak{q}\rightarrow 0 $.
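The diffusive pole described above can be made concrete with a small boundary-side toy computation: a single Fourier mode of the ordinary diffusion equation $ \partial_t \varrho = \mathfrak{D}\,\partial_x^{2}\varrho $ decays as $ e^{-\mathfrak{D}\mathfrak{q}^{2}t} $, which is precisely the dispersion relation $ \mathfrak{w}=-i\mathfrak{D}\mathfrak{q}^{2} $. The sketch below (plain finite differences; all parameter values are arbitrary illustrative choices) recovers the decay rate $ \mathfrak{D}\mathfrak{q}^{2} $:

```python
import math

def diffusion_decay_rate(D=0.1, n=200, steps=1000, dt=1e-5):
    """Evolve rho = cos(q x) on a periodic grid with explicit Euler steps of
    d rho/dt = D d^2 rho/dx^2 and measure the decay rate of the mode."""
    box = 1.0
    dx = box / n
    q = 2.0 * math.pi / box
    rho = [math.cos(q * i * dx) for i in range(n)]
    def amplitude(r):
        # projection onto the cos(q x) mode
        return sum(r[i] * math.cos(q * i * dx) for i in range(n)) * 2.0 / n
    a0 = amplitude(rho)
    for _ in range(steps):
        rho = [rho[i] + dt * D * (rho[(i + 1) % n] - 2.0 * rho[i] + rho[i - 1]) / dx ** 2
               for i in range(n)]
    return math.log(a0 / amplitude(rho)) / (steps * dt)

rate = diffusion_decay_rate()  # close to D q^2 = 0.1 * (2 pi)^2
```

The measured rate agrees with $ \mathfrak{D}\mathfrak{q}^{2} $ at the sub-percent level, mirroring the pole structure that the bulk computation reproduces holographically.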
\subsection{Charge susceptibility}
The purpose of the present section is to compute the charge susceptibility ($ \chi $) corresponding to non commutative $ \mathcal{N}=4 $ SYM plasma at strong coupling. In the AdS/CFT framework, the dual geometry corresponding to this non commutative plasma (at finite temperature) is essentially described by the five dimensional black hole solution (\ref{E1}) in the bulk space time.
In our computations, we strictly follow the methods proposed in \cite{Kovtun:2008kx}. The bottom line of our analysis is the following: In order to compute the susceptibility ($ \chi $), one needs to systematically solve the temporal gauge field ($ \mathcal{A}_{t} $) in the bulk consistent with the boundary condition at the horizon ($ u=1 $).
The Maxwell equation that directly follows from (\ref{E14}) could be formally expressed as,
\begin{eqnarray}
\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}\mathcal{F}^{\mu\nu})=0.\label{E15}
\end{eqnarray}
The equation of motion corresponding to $ \mathcal{A}_{t} $ that readily follows from (\ref{E15}) could be formally expressed as,
\begin{eqnarray}
\mathcal{A}''_{t}+\frac{\partial_u(\sqrt{-g}g^{tt}g^{uu})}{\sqrt{-g}g^{tt}g^{uu}}\mathcal{A}'_{t}=0.\label{E16}
\end{eqnarray}
The corresponding solution turns out to be,
\begin{eqnarray}
\mathcal{A}_{t}(u)= \mathfrak{C}_{2}+\frac{4\mathfrak{C}_{1} \left(a^2+u^2\right)^{7/8} \left(7 u^2 \, _2F_1\left(1,\frac{3}{2};\frac{13}{8};-\frac{u^2}{a^2}\right)-5 a^2\right)}{15 a^2 u^{3/4}}.\label{E7}
\end{eqnarray}
The coefficient $\mathfrak{C}_{2}$ is uniquely determined by demanding that $ \mathcal{A}_{t} $ vanish at the horizon ($ u=1 $), which yields,
\begin{eqnarray}
\mathfrak{C}_{2}= \frac{4 \mathfrak{C}_{1} \left(a^2+1\right)^{7/8} \left(5 a^2-7 \, _2F_1\left(1,\frac{3}{2};\frac{13}{8};-\frac{1}{a^2}\right)\right)}{15 a^2}.
\end{eqnarray}
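Note that (\ref{E16}) is first order in $ \mathcal{A}'_{t} $ and simply states that $ \sqrt{-g}g^{tt}g^{uu}\mathcal{A}'_{t} $ is constant; for the background (\ref{E1}) one can check that this coefficient is proportional to $ h^{7/8}(u) $, so the solution (\ref{E7}) should obey $ \mathcal{A}'_{t}(u)=\mathfrak{C}_{1}\,h^{-7/8}(u) $. This can be cross-checked numerically; the pure Python sketch below (with the illustrative choices $ \mathfrak{C}_{1}=1 $, $ a=1 $) verifies the relation by finite differences:

```python
def hyp2f1(a, b, c, z, tol=1e-16):
    # Gauss hypergeometric series 2F1(a, b; c; z); converges for |z| < 1
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol:
        term *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
        total += term
        n += 1
    return total

def A_t(u, a=1.0):
    # u-dependent part of the closed-form solution above, with C_1 = 1
    # (the additive constant C_2 drops out of the derivative)
    F = hyp2f1(1.0, 1.5, 13.0 / 8.0, -u * u / (a * a))
    return (4.0 * (a * a + u * u) ** 0.875 * (7.0 * u * u * F - 5.0 * a * a)
            / (15.0 * a * a * u ** 0.75))

def ratio(u, a=1.0, h=1e-6):
    # A_t'(u) by central differences, divided by h(u)^{-7/8} = ((a^2+u^2)/u^2)^{7/8}
    dAdu = (A_t(u + h, a) - A_t(u - h, a)) / (2.0 * h)
    return dAdu / (((a * a + u * u) / (u * u)) ** 0.875)
```

The ratio equals one to high accuracy for any $ 0<u<1 $, confirming that (\ref{E7}) integrates $ \mathcal{A}'_{t}=\mathfrak{C}_{1}h^{-7/8} $.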
On the other hand, the chemical potential is given by,
\begin{eqnarray}
\mu = \mathcal{A}_{t}(u)|_{u\rightarrow \varepsilon}=\mathfrak{C}_{2} -\frac{4\mathfrak{C}_{1} a^{7/4} }{3 \varepsilon^{3/4}}+\mathcal{O}(\varepsilon^{5/4})\label{E9}
\end{eqnarray}
where $ |\varepsilon|\ll 1 $. Clearly the quantity in (\ref{E9}) diverges in the limit $ \varepsilon \rightarrow 0 $. In order to have a finite chemical potential for the boundary theory, we thereby define the renormalized chemical potential as,
\begin{eqnarray}
\mu_{R}= \lim_{\varepsilon \rightarrow 0}\left(\mu +\frac{4 \varepsilon}{3} \frac{\partial \mu}{\partial \varepsilon} \right)= \mathfrak{C}_{2}.\label{E10}
\end{eqnarray}
Finally, using (\ref{E7}) the charge density could be readily obtained as \cite{Kovtun:2008kx},
\begin{eqnarray}
\varrho = \frac{\delta S_{M}}{\delta \mathcal{A}_{t}}|_{u =0}= \frac{2 u_T \mathfrak{C}_{1}}{g^{2}_{M}L}.\label{E22}
\end{eqnarray}
Using (\ref{E10}) and (\ref{E22}), the charge susceptibility finally turns out to be,
\begin{eqnarray}
\chi = \frac{\varrho}{\mu_{R}}=\frac{15 a^{2}u_T}{2 g^{2}_{M}L \left(a^2+1\right)^{7/8} \left(5 a^2-7 \, _2F_1\left(1,\frac{3}{2};\frac{13}{8};-\frac{1}{a^2}\right)\right)}\approx \frac{u_{T}}{2g^{2}_{M}L}.\label{E12}
\end{eqnarray}
Interestingly, here we note that the charge susceptibility ($ \chi $) (almost) does not get corrected by the non commutative parameter ($ a $): corrections appear only beyond fifth order in the perturbation series.
Having computed the charge susceptibility ($ \chi $), our next task is to compute the DC conductivities along both the commutative and the non commutative directions of the brane. We denote by $ \sigma_{\perp} $ the conductivity along the commutative direction of the brane and by $ \sigma_{\parallel} $ the conductivity along the non commutative direction. Our purpose is to make a systematic comparison between these two conductivities and to compare our results with those already existing in the context of anisotropy \cite{Rebhan:2011vd}.
\subsection{Conductivity I: $ \sigma_{\perp} $}
As a first part of our analysis, we compute the DC electrical conductivity along one of the commutative directions of the brane, namely the $ x $ direction.
We consider fluctuations of the form,
\begin{eqnarray}
\mathcal{A}_{m}(u,t)=L \int d\mathfrak{w}e^{-i \mathfrak{w}t}\mathcal{A}_{m}(u).\label{E24}
\end{eqnarray}
Considering $ m =x $ and substituting (\ref{E24}) into (\ref{E15}), we obtain
\begin{eqnarray}
\mathcal{A}_{x}'' +\frac{\partial_u (\sqrt{-g}g^{uu}g^{xx})}{\sqrt{-g}g^{uu}g^{xx}}\mathcal{A}_{x}' -\mathfrak{w}^{2}\frac{g^{tt}}{g^{uu}}\mathcal{A}_{x}=0.\label{E25}
\end{eqnarray}
In order to solve equation (\ref{E25}) in the so called low frequency regime, we choose the following ansatz, namely,
\begin{eqnarray}
\mathcal{A}_{x}=(1-u)^{\alpha} \Psi (u).\label{E26}
\end{eqnarray}
Imposing the so called ingoing wave boundary condition \cite{Policastro:2002se}-\cite{Kovtun:2008kx}, our first task is to study equation (\ref{E25}) in the near horizon limit of the brane, namely $ u \sim 1 $. This essentially enables us to determine the coefficient $ \alpha $ uniquely. Substituting (\ref{E26}) into (\ref{E25}) and imposing the ingoing wave boundary condition near the horizon of the black brane, it is straightforward to show that,
\begin{eqnarray}
\alpha = - \frac{i \mathfrak{w}}{4 \pi u_T^{3/2} T}.
\end{eqnarray}
Our next task is to substitute (\ref{E26}) into (\ref{E25}) and solve for $ \Psi (u) $ perturbatively in the frequency $ \mathfrak{w} $ near the boundary of the space time. This will finally enable us to compute the DC conductivity ($ \sigma_{\perp} $). In order to solve for $ \Psi (u) $ perturbatively in the frequency ($ \mathfrak{w} $), we consider the following perturbative expansion, namely,
\begin{eqnarray}
\Psi (u)= \Psi ^{(0)} + i(\mathfrak{w}/T)\Psi^{(1)}+\mathcal{O}(\mathfrak{w}^{2}/T^{2})
\end{eqnarray}
where each of the individual coefficients satisfies an equation of motion of the following form,
\begin{eqnarray}
\Psi''^{(1)}+\frac{1}{2 \pi u_T^{3/2} (1-u)}\left[ \Psi'^{(0)}+\frac{1}{2(1-u)}\Psi^{(0)}\right] +\frac{\partial_u (\sqrt{-g}g^{uu}g^{xx})}{\sqrt{-g}g^{uu}g^{xx}}\left[\Psi'^{(1)}+\frac{1}{4 \pi u_T^{3/2} (1-u)}\Psi^{(0)} \right] &=&0\nonumber\\
\Psi''^{(0)}+\frac{\partial_u (\sqrt{-g}g^{uu}g^{xx})}{\sqrt{-g}g^{uu}g^{xx}}\Psi'^{(0)}&=&0.\label{E18}\nonumber\\
\end{eqnarray}
In the following we quote the corresponding solutions one by one. Let us first consider the second equation in (\ref{E18}). The corresponding solution turns out to be,
\begin{eqnarray}
\Psi^{(0)}= \frac{4 \mathfrak{d}_{1}\mathfrak{Z}}{195 u^{3/4} \sqrt[8]{a^2+u^2}}+\mathfrak{d}_{2} \label{e19}
\end{eqnarray}
where,
\begin{eqnarray}
\mathfrak{Z}= u^2 \sqrt[8]{\frac{u^2}{a^2}+1} \left(13 \left(3 a^2+7\right) F_1\left(\frac{5}{8};\frac{1}{8},1;\frac{13}{8};-\frac{u^2}{a^2},u^2\right)
-20 u^2 F_1\left(\frac{13}{8};\frac{1}{8},1;\frac{21}{8};-\frac{u^2}{a^2},u^2\right)\right)
\nonumber\\
-65 \left(a^2+u^2\right). \label{E30}
\end{eqnarray}
In the above, we have expressed the solution (\ref{e19}) in terms of Appell functions, where the coefficients $ \mathfrak{d}_{1} $ and $ \mathfrak{d}_{2} $ are related to each other through the condition $ \Psi^{(0)}(1)=0 $. On top of this, one can also impose the asymptotic normalization condition, which for the present case turns out to be $ \Psi^{(0)}(0)=1/L $. These two conditions should in principle be sufficient to determine the unknown coefficients uniquely. However, for the present purpose of our analysis it is sufficient to know the boundary behaviour of the gauge fields, since we will finally be evaluating these quantities near the boundary of the space time. Expanding (\ref{E30}) near the boundary ($ u \sim 0 $) of the space time we note,
\begin{eqnarray}
\Psi^{(0)} \approx \frac{1}{L}\left( 1 - \frac{4 a^{7/4}}{3 \varepsilon^{3/4}}\right)^{-1} \left( 1 - \frac{4 a^{7/4}}{3 u^{3/4}}+\frac{\left(8 a^2+7\right) u^{5/4}}{10 \sqrt[8]{a^2}}\right)+\mathcal{O}(u^{13/4})\label{E20}
\end{eqnarray}
where the numerical prefactor guarantees a normalized mode at the boundary. Note that here $ \varepsilon $ is the UV cutoff mentioned earlier. At the end of our calculations we finally take the $ \varepsilon \rightarrow 0 $ limit in order to extract the finite piece at the boundary.
In the subsequent analysis we drop all terms starting at quadratic order in $ u $. Since $ u $ ranges between zero and one, it is indeed quite logical to truncate the solutions at a certain order in $ u $ and, in particular, to keep those terms that contribute significantly near the boundary of the space time. Using (\ref{E20}), the solution for $ \Psi^{(1)} $ finally turns out to be,
\begin{eqnarray}
\Psi^{(1)} \approx \frac{1}{L}\left( 1 - \frac{4}{3 \varepsilon^{3/4}}\right)^{-1} \left( 1 -\frac{4 }{3 u^{3/4}}+\frac{a^{7/4} \sqrt[4]{u}}{3 \pi u_T^{3/2}}+\frac{a^{7/4} u^{5/4}}{6 \pi u_T^{3/2}}-\frac{u}{4 \pi u_T^{3/2}}\right) + \mathcal{O}(u^{2}).\label{E21}
\end{eqnarray}
Using (\ref{E20}) and (\ref{E21}), the non trivial piece in the DC conductivity (along $ x $- direction) finally turns out to be,
\begin{eqnarray}
\sigma_{\perp}= \frac{u_T}{g^{2}_{M}L T}.\label{eq23}
\end{eqnarray}
Finally, from (\ref{E12}) and (\ref{eq23}) one can easily read off the corresponding charge diffusion coefficient as,
\begin{eqnarray}
\mathfrak{D}_{\bot}=\sigma_{\perp}/\chi \sim \frac{1}{T}\label{E23}
\end{eqnarray}
where we have ignored the overall numerical prefactor. The above result (\ref{E23}) also follows from simple dimensional arguments. For example, it is straightforward to notice from
(\ref{E12}) that $ [\chi]=1/L^{2} $, since the dimension of the Maxwell coupling in five dimensions goes as $ [g^{2}_{M}]=L $ \cite{Kovtun:2008kx}. Following the same line of argument, we note that $ [\sigma_{\perp}]=1/L^{2} $. Using these facts, together with $ [T]=1/L $, it can readily be seen that $ [\mathfrak{D}_{\bot}]=L $.
Eq.(\ref{eq23}) is in fact an important observation in itself. It reveals that the DC conductivity ($ \sigma_{\perp} $) along the commutative direction of the brane does not get modified in the presence of the non commutative parameter ($ \Theta $). The same holds for the corresponding charge diffusion rate ($ \mathfrak{D}_{\bot} $).
\subsection{Conductivity II: $ \sigma_{\parallel} $}
For the sake of completeness as well as clarity, our final task is to compute the DC conductivity along one of the non commutative directions of the brane, say the $ y $ direction, and to make a systematic comparison with the result obtained in the previous section. To do so, we first turn on fluctuations of the type,
\begin{eqnarray}
\mathcal{A}_{y}(u,t)=L \int d\mathfrak{w}e^{-i \mathfrak{w}t}\mathcal{A}_{y}(u)
\end{eqnarray}
which satisfy differential equation of the following form,
\begin{eqnarray}
\mathcal{A}_{y}'' +\frac{\partial_u (\sqrt{-g}g^{uu}g^{yy})}{\sqrt{-g}g^{uu}g^{yy}}\mathcal{A}_{y}' -\mathfrak{w}^{2}\frac{g^{tt}}{g^{uu}}\mathcal{A}_{y}=0.\label{e25}
\end{eqnarray}
To solve (\ref{e25}), we choose the following ansatz,
\begin{eqnarray}
\mathcal{A}_{y}=(1-u)^{\beta}\Phi(u)
\end{eqnarray}
where the coefficient $ \beta $ could be readily obtained from the near horizon data namely,
\begin{eqnarray}
\beta = - \frac{i \mathfrak{w}}{4 \pi u_T^{3/2} T}.
\end{eqnarray}
As in the previous case, the function $ \Phi(u) $ can be solved perturbatively in the frequency $ \mathfrak{w} $, which in the hydrodynamic limit ($\mathfrak{w}/T \ll 1 $) yields the following set of equations, namely,
\begin{eqnarray}
\Phi''^{(1)}+\frac{1}{2 \pi u_T^{3/2} (1-u)}\left[ \Phi'^{(0)}+\frac{1}{2(1-u)}\Phi^{(0)}\right] +\frac{\partial_u (\sqrt{-g}g^{uu}g^{yy})}{\sqrt{-g}g^{uu}g^{yy}}\left[\Phi'^{(1)}+\frac{1}{4 \pi u_T^{3/2} (1-u)}\Phi^{(0)} \right] &=&0\nonumber\\
\Phi''^{(0)}+\frac{\partial_u (\sqrt{-g}g^{uu}g^{yy})}{\sqrt{-g}g^{uu}g^{yy}}\Phi'^{(0)}&=&0.\label{e28}\nonumber\\
\end{eqnarray}
The corresponding solutions turn out to be,
\begin{eqnarray}
\Phi^{(0)} &= & \frac{1}{L}\left[1+ \frac{4 u^{5/4} \sqrt[8]{\frac{u^2}{a^2}+1} F_1\left(\frac{5}{8};\frac{1}{8},1;\frac{13}{8};-\frac{u^2}{a^2},u^2\right)}{5 \sqrt[8]{a^2+u^2}}\right] \nonumber\\
& \approx &\frac{1}{L}\left[ 1+\frac{4 u^{5/4}}{5 \sqrt[8]{a^2}}\right]+\mathcal{O}(u^{13/4}) \nonumber\\
\Phi^{(1)} & \approx & \frac{1}{L}\left[ 1+\frac{4u^{5/4} }{5}-\frac{ u}{4 \pi u_T^{3/2}} \right]+\mathcal{O}(u^{2}).\label{e29}
\end{eqnarray}
Using (\ref{e29}), the corresponding DC conductivity finally turns out to be,
\begin{eqnarray}
\sigma_{\parallel}=\frac{u_T}{g^{2}_{M}LT}(1-\Theta^{1/4}u_T^{1/4}).\label{e31}
\end{eqnarray}
The above result (\ref{e31}) should be interpreted as follows: unlike the previous case, the DC conductivity ($ \sigma_{\parallel} $) along the non commutative direction of the brane is modified due to the presence of the non commutative parameter and, most importantly, the non commutative effects suppress the conductivity relative to its usual value in the commutative case. The same argument holds for the corresponding charge diffusion ($ \mathfrak{D}_{\parallel} $).
Finally, the ratio of the two charge diffusion rates turns out to be,
\begin{eqnarray}
\frac{\mathfrak{D}_{\parallel}}{\mathfrak{D}_{\perp}} =\frac{\sigma_{\parallel}}{\sigma_{\perp}}= 1-\Theta^{1/4}u_T^{1/4}.\label{e32}
\end{eqnarray}
Eq.(\ref{e32}) is a fully non perturbative result in the non commutative parameter ($ \Theta $) and is consistent with the corresponding result in the commutative ($ \Theta\rightarrow 0 $) limit. The crucial observation to be made at this stage is that, unlike the case of the shear viscosity to entropy ($ \eta/s $) ratio \cite{Landsteiner:2007bd}, the charge diffusion rates are different along different directions of the brane. In other words, the charge diffusion is sensitive to the intrinsic anisotropy of the plasma. Finally, before we conclude, it is important to emphasize that similar observations have been made earlier in a different context of anisotropy, where it was observed that $ \sigma_{anisotropy}\neq \sigma_{isotropy} $ \cite{Rebhan:2011vd}.
\section{Summary and final remarks}
Let us now summarize the key findings of our analysis. Working in the so called hydrodynamic limit, we have explored the charge transport phenomena of non commutative $ \mathcal{N}=4 $ SYM plasma at strong coupling. The motivation for the present analysis rests on earlier results on the shear viscosity to entropy ($ \eta/s $) ratio, which was found to be universal despite the intrinsic anisotropy of the $ Dp $ brane \cite{Landsteiner:2007bd}. In our analysis, however, we observe that, unlike the $ \eta/s $ ratio, the charge diffusion rates are indeed different along the two different directions of the brane. In particular, we observe that the holographic DC conductivity gets significantly modified (\textit{only}) along the non commutative directions of the brane, and its value in fact turns out to be lower than its commutative counterpart. We may therefore conclude that, from the point of view of charge transport, the $ \theta $ deformed and the non commutative $ \mathcal{N}=4 $ SYM theories exhibit some similarity, whereas they differ quite significantly when compared with respect to their shear channels. Finally, it is noteworthy that our result smoothly matches the corresponding commutative result in the limit of vanishing $ \Theta $.
\\ \\
{\bf {Acknowledgements :}}
The author would like to acknowledge the financial support from CHEP, Indian Institute of Science, Bangalore.\\
In 1975 Nancy Lynch~\cite{L} proved that for every computable decision problem not decidable in polynomial time there exists an infinite computable set of instances,
$X$, such that the problem cannot be decided in polynomial time on any infinite subset of $X$. Such an $X$ is called a complexity core for the decision problem.
Lynch's result attracted the attention of several other authors, who considered decision problems in the form of membership problems for subsets $S\subseteq \{0,1\}^*$. Cores of non-sparse density with membership decidable in subexponential time are investigated in~\cite{DB, OS}. If a core exists, then so does a proper core~\cite{ESY}, i.e., $X\subseteq S$. Proper cores are necessarily $\bf P$-immune, and $\{0,1\}^*$ is itself a core if and only if $S$ is $\bf P$-bi-immune~\cite{BS}. Generalizations to complexity classes beyond $\bf P$ are given in~\cite{BD1, BD2, ESY}. These results are reviewed in~\cite[Chapter 6]{BDG}.
Lynch's construction of cores involves enumeration of all Turing machines, and in general the membership problem for cores is superpolynomial. In~\cite{OS} the authors observe that as all known cores are more or less artificially constructed (when the core is $\{0,1\}^*$, it is the $\bf P$-bi-immune set $S$ which is artificially constructed), it would be extremely interesting to find natural examples of cores. Theorem~\ref{thm} exhibits such a core, albeit with respect to a slight variation of Lynch's original definition of cores.
\subsection*{Notation} For any finite set $\Sigma$, $\Sigma^*$ is the set of all words over $\Sigma$, and $\hat\Sigma$ is the union of $\Sigma$ with a disjoint set of formal inverses.
\begin{theorem}\label{thm}
There exists a finitely presented group $G =\langle \Sigma \mid R\rangle$ and a nonempty subset $\Delta\subset \Sigma$ such that if $D$ is the domain of convergence for any (correct) partial algorithm deciding the word problem, then
\begin{equation}\label{eq}
\lim_{n\to\infty}\frac{|D\cap \hat\Delta^n|}{|\hat\Delta^n|} =0
\end{equation}
where $\hat\Delta^n$ denotes the set of all words of length at most $n$ in $\hat\Delta^*$.
\end{theorem}
The word problem for $G$ (with respect to the given presentation) is to decide whether an arbitrary word over $\hat\Sigma$ represents the identity in $G$. Theorem~\ref{thm} says that every partial algorithm for the word problem fails on virtually all words from $\hat\Delta^*$. Thus $\hat\Delta^*$ is a readily available set of provably hard instances. Clearly membership in $\hat\Delta^*$ is decidable in linear time, and $\hat\Delta^*$ can be sampled in linear time. In addition $\hat\Delta^*$ is a complexity core in the sense of Lynch except that the set of inputs from $\hat\Delta^*$ on which a partial algorithm succeeds is not finite, as in Lynch's definition of a core, but rather of asymptotic density zero in the sense of Equation~\ref{eq}.
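The counting behind Equation~\ref{eq} can be illustrated with a toy example (an analogy only, not the set $D$ of Theorem~\ref{thm}): over a four letter alphabet, such as $\hat\Delta$ with $|\Delta|=2$, the language of words using only two fixed letters has asymptotic density zero among all words of length at most $n$.

```python
def density(n, sub=2, full=4):
    # fraction of words of length <= n (over an alphabet of size `full`)
    # that only use a fixed sub-alphabet of size `sub`
    count_sub = sum(sub ** k for k in range(n + 1))
    count_all = sum(full ** k for k in range(n + 1))
    return count_sub / count_all

# the ratio decays geometrically, like (sub/full)**n up to a constant factor
for n in (5, 10, 20):
    print(n, density(n))
```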
Theorem~\ref{thm} is an immediate consequence of Theorem~\ref{alg}, a recent result from combinatorial group theory.
\section{Background and Proof}
Recall that in a finite presentation $\langle \Sigma \mid R\rangle$, $\Sigma$ is a finite set of generators and $R$ is a finite set of relators, i.e., of words over $\hat\Sigma$. Also an arbitrary word over $\hat\Sigma$ represents the identity of $G$ if and only if it can be reduced to the empty word by inserting and deleting words from $R$ and their inverses, along with the trivial words $aa^{-1}, a^{-1}a$ for $a\in\Sigma$.
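In the degenerate case $R=\varnothing$ (the free group on $\Sigma$), only the trivial relations $aa^{-1}, a^{-1}a$ are available, and the word problem is decidable in linear time by free reduction. A minimal Python sketch, using the (purely illustrative) convention that a lowercase letter denotes a generator and the corresponding uppercase letter its formal inverse:

```python
def free_reduce(word):
    # cancel adjacent pairs a a^{-1} and a^{-1} a with a stack;
    # lowercase = generator, uppercase = its formal inverse (e.g. 'A' = a^{-1})
    stack = []
    for ch in word:
        if stack and stack[-1] == ch.swapcase():
            stack.pop()
        else:
            stack.append(ch)
    return ''.join(stack)

def represents_identity_in_free_group(word):
    # a word represents the identity of the free group iff it freely reduces
    # to the empty word
    return free_reduce(word) == ''
```

For nonempty $R$ no such uniform procedure exists in general; groups with unsolvable word problem, such as the group $G$ of Theorem~\ref{thm}, are precisely the obstruction.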
\begin{definition}[\cite{MO}] A finitely generated group $H$ is algorithmically finite if every infinite computably enumerable subset of words in the generators and their inverses contains two words which represent the same element of $H$.
\end{definition}
All finite groups are algorithmically finite. The interesting fact is that infinite algorithmically finite groups exist.
\begin{theorem} [\cite{MO} Theorems 1.1 and 1.3]\label{alg}
Infinite recursively presented algorithmically finite groups exist. Any partial algorithm for the word problem of such a group converges only on a set of asymptotic density zero.
\end{theorem}
\subsection*{Proof of Theorem~\ref{thm}} Let $H$ be an infinite finitely generated recursively presented algorithmically finite group. By the well known Higman Embedding Theorem $H$ is a subgroup of a finitely presented group $G$. Without loss of generality the generators $\Sigma$ of $G$ can be augmented to include generators, $\Delta$, of $H$. By Theorem~\ref{alg} any partial algorithm for the word problem of $G$ fails everywhere on $\hat\Delta^*$ except on a subset of asymptotic density $0$ in $\hat\Delta^*$.
\section{Conclusion}
Infinite algorithmically finite groups are a new kind of group with unsolvable word problem. The proof of Theorem~\ref{alg} does not employ Turing machines; instead, Golod--Shafarevich presentations and analogs of simple sets from computability theory are used.
The construction of $H$ is not natural in our sense, as it involves enumeration of all recursively enumerable subsets of $\hat\Sigma^*$. However $G$ itself is specified by a straightforward finite presentation. Since the proof of the Higman Embedding Theorem is constructive~\cite{R}, as is the construction of the recursive presentation for $H$, one could in principle compute this finite presentation.
The convergence of the limit in Theorem~\ref{thm} can be made to occur exponentially fast. See~\cite{MO} Corollary 1.4.
Sampleability of hard instances is of interest in cryptography. The word problem of the group $G$ from Theorem~\ref{thm} is unsolvable and thus not useful as a cryptoprimitive. It seems unlikely that our approach could produce a useful cryptoprimitive, but examples of sampleable cores at lower complexity levels might provide some useful insights.
Consider a gas of particles so dilute that binary collisions are dominant.
The collision between two particles with thermal momenta $p_{th}=\hbar k_{th} =2\pi\hbar/\lambda_{dB}$
is said to be in the ultracold regime if the range of the accompanying interaction
is smaller than the de Broglie wave length $\lambda_{dB}$. Under such conditions, as they collide, the particles approach each other more closely than their wavelength, and details of the interaction become blurred. That is, ultracold collisions are expected to be determined by few parameters.
Ultracold collisions may take place in the degenerate limit of a gas of particles for which any particle is permanently within a wavelength of other particles, \emph{i.e.} $\lambda_{dB} > n^{-1/3}$, with $n$ the density of the gas.
The search\cite{old}, generation \cite{KCC, KCC2, KCC3, KCC4} and further study \cite{FST1,FST2} of atomic quantum degenerate gases have led naturally to the analysis of collisions between ultracold atoms.
The purpose of this manuscript is to find analytic expressions for the $s$-wave parameters that describe such collisions for a potential that exhibits some of the main features of an atom-atom interaction: the Morse potential\cite{morse}
\begin{equation}
V(r) = D((1 - e^{-\beta(r-r_0)})^2 -1), \label{eq:morse-pot}
\end{equation}
where $D$, $\beta$, $r_0$ are positive. This potential is repulsive for short distances, exhibits a local minimum with depth $D$, width determined by $\beta$, located at $r_0$ and is slightly attractive at long distances. Since its proposal it has been extensively used to describe anharmonic features of the vibrational spectra of diatomic molecules.
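A numerical preview of the $s$-wave scattering parameters studied below can be obtained directly from Eq.~(\ref{eq:morse-pot}) by integrating the zero-energy radial equation $u''(r)=(2\mu/\hbar^2)V(r)u(r)$ outward and reading off $a$ from the asymptotic linear form $u\propto (r-a)$. The sketch below works in units $2\mu/\hbar^2=1$ with the illustrative (weak well) values $D=10^{-3}$, $\beta=1$, $r_0=1$, for which the Born approximation $a_B=(2\mu/\hbar^2)\int_0^{\infty}V(r)r^2\,dr$ provides an independent check:

```python
import math

def morse(r, D=1e-3, beta=1.0, r0=1.0):
    # Morse potential of Eq. (1); illustrative weak-well parameters
    return D * ((1.0 - math.exp(-beta * (r - r0))) ** 2 - 1.0)

def scattering_length(V, R=25.0, h=5e-3):
    """Integrate u'' = V(r) u (units 2 mu / hbar^2 = 1) from u(0)=0, u'(0)=1
    by RK4 and read off a = r - u/u' once V has become negligible."""
    def f(r, y):
        return (y[1], V(r) * y[0])
    r, y = 0.0, (0.0, 1.0)
    for _ in range(int(round(R / h))):
        k1 = f(r, y)
        k2 = f(r + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(r + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(r + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        r += h
    return r - y[0] / y[1]

a_num = scattering_length(morse)
# Born value for these parameters: D * (e^{2 beta r0}/(4 beta^3) - 4 e^{beta r0}/beta^3)
a_born = 1e-3 * (math.exp(2.0) / 4.0 - 4.0 * math.exp(1.0))
```

For this weak well the two estimates agree at the percent level; deeper wells require the full treatment developed in the following sections.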
Simple $s$-wave analytical solutions for the potential can be found for bound\cite{morse,sage} and unbound\cite{matsumoto} states. The key is to use an auxiliary mathematical problem where the radial coordinate $r$, that is physically constrained to the interval $[0,\infty)$,
is allowed to vary in $(-\infty, \infty)$. In this work we analyze the consequences of using this auxiliary problem instead of the original one derived directly from the Schr\"odinger equation.
In general, for a spherically symmetric potential $V(r)$ and elastic collisions, the scattering effects at any relative momenta $p = \hbar k$ are contained in the partial wave phase shifts $\delta_\ell(k)$.
It can be shown that as $k \rightarrow 0$, the $s$-wave phase shift $\delta_{\ell=0}(k)$ can be expanded as\cite{blatt,joachain}
\begin{equation}
k\cot\delta_0(k) = -\frac{1}{a}+\tfrac{1}{2}r_ek^2+\cdots;
\label{eq:low_k_exp}
\end{equation}
$a$ is known as the scattering length and $r_e$ as the effective range. For other partial waves
$\delta_\ell(k)/k$ goes to zero as $k\rightarrow 0$, so at low energies only $s$-wave collisions contribute to the scattering between bosons and between distinguishable particles. As a consequence, in those cases, collisions in the ultracold regime are expected to be isotropic and characterized by the scattering length.
In this article, expressions for $a$ and $r_e$ are obtained for the Morse Hamiltonian.
We begin in Section \secref{sec:radial} by briefly reviewing the bound and unbound eigenfunctions of the Morse Hamiltonian that vanish as
$r\rightarrow -\infty$. From those unbound functions, the phase shift $\delta_0(k)$ is explicitly calculated and the scattering parameters $a$ and $r_e$ are written in an analytical closed form. In Section \secref{sec:cost}, we study
the bound and unbound eigenfunctions of the Morse Hamiltonian
with the boundary condition that nullifies $u$ as $r\rightarrow 0$, which is compatible with a radial coordinate restricted to the interval $ [0,\infty)$.
In an analogous way as for the auxiliary problem, the phase shift $\delta_0(k)$ can be calculated and the scattering parameters are implicitly found.
A comparison between the auxiliary and the physical system results is then performed.
\section{Radial solutions for the auxiliary problem\label{sec:radial}}
The Schr\"odinger equation
\begin{equation}
\left[\frac{\hbar^2}{2\mu} \frac{d^2}{dr^2}+E-V(r)\right]u(r) = 0
\label{eq:radial}
\end{equation}
can be related to the stationary dynamics of a one dimensional collision of two
particles with reduced mass $\mu$ and Hamiltonian eigenenergy $E$, or to a three dimensional $s$-wave problem for which the radial wavefunction has been written in the form $R(r) =u(r)/r$. Taking $V(r)$ as the Morse potential, Eq.~\eqref{eq:morse-pot}, and
introducing the variables $d=\sqrt{2\mu D}/\hbar\beta$, $b = \sqrt{2\mu E}/\hbar\beta$ and $z = 2de^{-\beta(r-r_0)}$ a direct calculation shows that the general solution to Eq.~\eqref{eq:radial} is
\begin{eqnarray}
u_b(z) &=& e^{-z/2}C_1z^{+ib}M(\tfrac{1}{2}+ib-d,1+2ib,z) \nonumber \\
&+&e^{-z/2}C_2z^{-ib}M(\tfrac{1}{2}-ib-d,1-2ib,z),
\end{eqnarray}
where $C_1$ and $C_2$ are constants to be determined and
\begin{equation}
M(p,q,z)=\sum_{n=0}^\infty\frac{(p)_nz^n}{(q)_n n!},
\end{equation}
is Kummer's function\cite{Abramowitz1964} with $(p)_n$ the Pochhammer symbol. It will be useful to know that
\begin{equation}
M(p,q,z) = \frac{\Gamma(q) }{\Gamma(p) }e^{z}z^{p-q}\left[1+O\left(|z|^{-1}\right)\right]
\label{eq:M_z_lejos}
\end{equation}
when the real part of $z$ is positive.
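As a numerical aside, the series can be summed with a term-by-term recursion (avoiding overflow of the separate factorials) and compared against an independent implementation and against the asymptotic form~\eqref{eq:M_z_lejos}; the Python sketch below uses arbitrary evaluation points:

```python
import math
from scipy.special import hyp1f1, gamma

def kummer_M(p, q, z, nmax=400):
    """Kummer's function M(p, q, z), summing its defining series with the
    term recursion t_{n+1} = t_n * (p + n) * z / ((q + n) * (n + 1))."""
    term, total = 1.0, 1.0
    for n in range(nmax):
        term *= (p + n) * z / ((q + n) * (n + 1))
        total += term
    return total

p, q = 0.5, 1.5
print(abs(kummer_M(p, q, 2.0) - hyp1f1(p, q, 2.0)))   # essentially zero

# Asymptotic form for large positive z, relative error of order 1/z:
z = 40.0
asym = gamma(q) / gamma(p) * math.exp(z) * z**(p - q)
print(abs(kummer_M(p, q, z) / asym - 1.0))            # small, O(1/z)
```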
\subsection{Bound States}
The bound states are determined by having $E<0$. In this case $b = i\sqrt{2\mu |E|}/\hbar\beta=i|b|$,
and
\begin{eqnarray}
u_b(z) &=& e^{-z/2}C_1z^{-|b|}M(\tfrac{1}{2}-|b|-d,1-2|b|,z) \nonumber \\
&+&e^{-z/2}C_2z^{+|b|}M(\tfrac{1}{2}+|b|-d,1+2|b|,z).
\end{eqnarray}
Since $u_b$ should not diverge when $z\ll 1$ ($\beta r\gg1$), $C_1$ must be zero.
We now need to apply a second boundary condition, which will determine the quantization. Solving the 3D radial equation would require demanding $u_b\left (z(r)\right ) = 0$ at $r=0$; however, by applying the condition as $r\rightarrow -\infty$, the wave functions and eigenvalues take a much simpler form which is analytically tractable\cite{morse}. We will analyze the consequences of using this method in Section~\ref{sec:cost}.
When $r\rightarrow -\infty$, $z\rightarrow\infty$ and by using Equation~\eqref{eq:M_z_lejos} it is found that
\begin{equation}
u_b(z)=C_2 e^{z/2}z^{-\frac{1}{2}-d}\frac{\Gamma\left(1+2|b|\right)}{\Gamma\left(\frac{1}{2}+|b|-d\right)} \left[1+O\left(|z|^{-1}\right)\right].
\end{equation}
It is worth noting that $u_b(z)$ grows exponentially as $z$ grows unless $1/2+|b|-d$ is a nonpositive integer. Since $u_b(z)$ should not grow exponentially in that region, we define $-n=1/2+|b|-d$, where $n$ is a nonnegative integer. This condition determines the quantization of the energy levels, $b_n=|b|=d-n-1/2$, $n\in\{0,1,2,\dots\}$, or
$E_n = -D+\hbar\beta\sqrt{2D/\mu}\left(n+1/2\right)-(\hbar^2\beta^2/2\mu)\left(n+1/2\right)^2$. Since $b_n$ is always positive then $n$ can only take a finite number of values for a given $d$. This means that the Morse potential can only hold a finite number of bound states. Since the first argument of $M$ turns out to be an integer, the solution can be rewritten in terms of Laguerre polynomials as
\begin{equation}
u_n(z) = \left( \frac{\beta n!\, 2|b_n|}{\Gamma\left(2|b_n|+n+1\right)}\right)^{1/2} e^{-z/2} z^{|b_n|} L_n^{(2|b_n|)}(z),
\label{eq:sol_ligada}
\end{equation}
with which the bound solutions for $l=0$ are fully determined.
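In units where $\hbar=\mu=\beta=1$ (so that $D=d^{2}/2$), the spectrum reduces to $E_n=-b_n^{2}/2$ and can be sketched in a few lines of Python; the value $d=3.2$ below is illustrative:

```python
import math

def morse_levels(d):
    """Bound-state energies E_n of the auxiliary Morse problem in units
    hbar = mu = beta = 1, where D = d**2/2 and b_n = d - n - 1/2 > 0."""
    D = d**2 / 2.0
    levels = []
    n = 0
    while d - n - 0.5 > 0.0:
        levels.append(-D + math.sqrt(2.0 * D) * (n + 0.5) - 0.5 * (n + 0.5)**2)
        n += 1
    return levels

levels = morse_levels(3.2)
print(len(levels))  # 3: for d in [n + 1/2, n + 3/2) there are n + 1 bound states
print(all(abs(E + 0.5 * (3.2 - n - 0.5)**2) < 1e-10
          for n, E in enumerate(levels)))  # True: E_n = -b_n**2 / 2
```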
\subsection{Unbound States}
The unbound states are determined by having $E>0$. In this case $b = \sqrt{2\mu |E|}/\hbar \beta=|b|$,
and
\begin{eqnarray}
u_b(z) &=& e^{-z/2}C_1z^{+i|b|}M(\tfrac{1}{2}+i|b|-d,1+2i|b|,z) \nonumber \\
&+&e^{-z/2}C_2z^{-i|b|}M(\tfrac{1}{2}-i|b|-d,1-2i|b|,z).
\end{eqnarray}
Again, we apply a boundary condition as $r\rightarrow -\infty$, which means $z\rightarrow \infty$, where we require that \mbox{$u_b(z)\rightarrow 0$}. Using Equation~\eqref{eq:M_z_lejos} it is found that
\begin{eqnarray}
u_b(z) &=& e^{z/2}z^{-\frac{1}{2}-d} \nonumber \\
&\times & \left[C_1 \frac{\Gamma(1+2i|b|)}{\Gamma(\tfrac{1}{2}+i|b|-d)}+C_2 \frac{\Gamma(1-2i|b|)}{\Gamma(\tfrac{1}{2}-i|b|-d)}\right] \nonumber\\
&\times & \left(1+O(|z|^{-1})\right).
\end{eqnarray}
As in the bound case, $u_b(z)$ grows exponentially with $z$, and we can only adjust $C_1$ and $C_2$ to satisfy the condition $u_b(z(r))\rightarrow 0$ as $r\rightarrow -\infty$. For this we look for the relationship between $C_1$ and $C_2$ that nullifies the factor in square brackets and find that
\begin{equation}
\frac{C_1}{C_2} = \frac{\Gamma(-2i|b|)}{\Gamma(\tfrac{1}{2}-i|b|-d)}\overline{\left(\frac{\Gamma(\tfrac{1}{2}-i|b|-d)}{\Gamma(-2i|b|)}\right)},
\end{equation}
where $\overline{s}$ means the complex conjugate of $s$. Therefore we define $A(b) = \Gamma(-2i|b|)/\Gamma(\tfrac{1}{2}-i|b|-d)$, so we can satisfy the condition with $C_1 = \tilde{C}_b A(b)$ and $C_2 = \tilde{C}_b \overline{A}(b)$, where $\tilde{C}_b$ is a normalization factor that can depend on $b$. In this manner, the solution has the form\cite{matsumoto}
\begin{eqnarray}
u_b(z) &=& 2 e^{-z/2} \tilde{C}_b \nonumber \\
&\times & \Re\left\{A(b)z^{i|b|}M (\tfrac{1}{2}+i|b|-d,1+2i|b|,z )\right\},
\label{eq:sol_libre}
\end{eqnarray}
where $\Re$ means the real part.
It is important to analyze the asymptotic behavior of the solutions, since the scattering phase shift depends on it. When $r \rightarrow \infty$, $ z \rightarrow 0 $, so that Equation~\eqref{eq:sol_libre} simplifies to
\begin{equation}
u_b(z) \mathop{\rightarrow}_{z\rightarrow 0} 2 \tilde{C}_b \Re\left\{A(b)z^{i|b|}\right\}.
\end{equation}
Writing it in terms of $r$ we get that the asymptotic behavior is given by
\begin{eqnarray}
u_b(z) &\approx& 2 \tilde{C}_b\Re\left\{A(b)\left(2de^{\beta r_0}\right)^{i|b|}\right\}\cos(kr) \nonumber \\
&+&2 \tilde{C}_b\Im\left\{A(b)\left(2de^{\beta r_0}\right)^{i|b|}\right\}\sin(kr),
\label{eq:asymptotic}
\end{eqnarray}
where $k=|b|\beta$ is the asymptotic wave number and $\Im$ means the imaginary part.
In absence of a potential, the normalized radial wave function has the form $u_k(r)= \sqrt{2/\pi} \sin(kr)$.
The presence of the Morse potential also produces an asymptotic sinusoidal solution as seen in Equation~\eqref{eq:asymptotic}. However, the cosine term results in an $s$-wave phase shift for the auxiliary problem $\delta_0^{(aux)}(k)$ which satisfies
\begin{equation}
\tan\delta_0^{(aux)}(k) = \frac{\Re\left\{A(k/\beta)\left(2de^{\beta r_0}\right)^{ik/\beta}\right\}}{\Im\left\{A(k/\beta)\left(2de^{\beta r_0}\right)^{ik/\beta}\right\}}.
\end{equation}
On the other hand $\Re\{s\}/\Im\{s\} = \tan(\arg(i\overline{s}))$, so
\begin{eqnarray}
\delta_0^{(aux)}(k) &=& \arg\left(i\overline{\left(A(k/\beta)\left(2de^{\beta r_0}\right)^{ik/\beta}\right)}\right) \nonumber \\
&=& \frac{\pi}{2}-\arg A(k/\beta)-\frac{k}{\beta}\ln(2d)-kr_0
\end{eqnarray}
modulo $\pi$. Moreover,
\begin{equation}
\arg A\left (\tfrac{k}{\beta}\right ) = \arg\Gamma\left(-i\tfrac{k}{\beta}\right)-\arg\Gamma\left(\tfrac{1}{2}-i\tfrac{k}{\beta}-d\right).
\end{equation}
Using an expansion for $\arg\Gamma(x+iy)$\cite{Abramowitz1964} we finally get the phase shift
\begin{equation}
\delta_0^{(aux)}(k) = -\frac{k}{\beta}\left(\gamma+\ln(2d)+\beta r_0\right)+\Xi,
\label{eq:phaseshift}
\end{equation}
where $\gamma$ is the Euler-Mascheroni constant and
\begin{equation}
\Xi = \sum_{n=1}^\infty\frac{k}{\beta n}-\arctan\frac{2k}{\beta n}+\arctan\frac{k}{\beta\left(n-d-\frac{1}{2}\right)}.
\label{eq:phaseshift_sum}
\end{equation}
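Since the tail of the sum~\eqref{eq:phaseshift_sum} decays only as $1/n^{2}$, a direct evaluation of Eq.~\eqref{eq:phaseshift} requires many terms. The Python sketch below (with illustrative values of $d$ and $r_0\beta$, and $k$ measured in units of $\beta$) checks the expected small-$k$ linear behavior, whose slope is the combination $2\gamma+\ln(2d)+\beta r_0+\psi(\tfrac{1}{2}-d)$ identified below as the scattering length:

```python
import numpy as np
from scipy.special import digamma

def delta0_aux(k, d, r0beta, nterms=500_000):
    """Phase shift of Eq. (phaseshift), with k in units of beta and the sum
    Xi truncated after nterms terms (tail of order k*(d + 1/2)/nterms)."""
    n = np.arange(1.0, nterms + 1.0)
    Xi = np.sum(k / n - np.arctan(2.0 * k / n) + np.arctan(k / (n - d - 0.5)))
    return -k * (np.euler_gamma + np.log(2.0 * d) + r0beta) + Xi

# Small-k check: delta0 ~ -k * (2*gamma + ln(2d) + beta*r0 + psi(1/2 - d)).
d, r0beta, k = 2.2, 4.15, 0.01
slope = 2.0 * np.euler_gamma + np.log(2.0 * d) + r0beta + digamma(0.5 - d)
print(abs(delta0_aux(k, d, r0beta) + k * slope))  # O(k^3) plus truncation error
```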
At this point we have full knowledge of the $s$-wave scattering phase shift from which, in principle, we can extract all the $s$-wave scattering information for the Morse potential, as we will exemplify when we calculate the scattering length and effective range.
We will now proceed by calculating the normalization factor. Using the scattering phase shift, we write the asymptotic behavior as
\begin{equation}
u_k(r)\mathop{\rightarrow}_{r \rightarrow \infty } 2 \tilde{C}_b |A(k/\beta)|\sin\left(kr+\delta_0(k) \right).
\end{equation}
Following Bethe,\cite{bethe} the normalization of continuum states is defined by their asymptotic behavior in which we require that
\begin{equation}
u_k(r) \mathop{\rightarrow}_{r \rightarrow \infty } \sqrt{\frac{2}{\pi } } \sin(kr+\delta_0(k)).
\end{equation}
Therefore, the normalization factor $\tilde{C}_b$ is given by
\begin{eqnarray}
\tilde{C}_b &=& \frac{1}{\pi}\left(\frac{|b|\sinh 2\pi |b|}{\nu^2+|b|^2}\right)^{1/2}e^{\gamma \nu} \nonumber \\
&\times& \prod_{n=1}^{\infty}\left[\left(1-\frac{\nu}{n}\right)^2+\left(\frac{|b|}{ n}\right)^2 \right]^{-1/2}e^{-\nu/n},
\end{eqnarray}
where $\nu=d-1/2$.
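The product formula for $\tilde{C}_b$ can be cross-checked numerically against the matching condition $2\tilde{C}_b|A(b)|=\sqrt{2/\pi}$, evaluating the Gamma functions at complex argument; the values of $d$ and $b$ in the Python sketch below are illustrative:

```python
import numpy as np
from scipy.special import loggamma

d, b = 1.3, 0.6                    # illustrative values
nu = d - 0.5

# Direct route: matching the asymptotic amplitude gives
# C_tilde = 1/(sqrt(2*pi)*|A(b)|), A(b) = Gamma(-2ib)/Gamma(1/2 - ib - d).
logA = loggamma(-2j * b) - loggamma(0.5 - 1j * b - d)
C_direct = 1.0 / (np.sqrt(2.0 * np.pi) * np.exp(logA.real))

# Product formula quoted in the text, truncated after many factors:
n = np.arange(1.0, 200_001.0)
log_prod = np.sum(-0.5 * np.log((1.0 - nu / n)**2 + (b / n)**2) - nu / n)
C_prod = (np.sqrt(b * np.sinh(2.0 * np.pi * b) / (nu**2 + b**2)) / np.pi
          * np.exp(np.euler_gamma * nu + log_prod))

print(abs(C_prod / C_direct - 1.0))   # small truncation error
```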
To finalize this section, we analyze the low energy scattering behavior of the Morse potential. As the particles' energy tends to zero, the $s$-wave scattering amplitude determined by $\delta_0$ becomes dominant. For low energy, Eq.~\eqref{eq:low_k_exp} defines $a$, the \emph{scattering length}, and $r_e$, the \emph{effective range}. In order to find expressions for them, we will first calculate the low energy behavior of $\delta_0(k)$ and afterwards write the expansion~\eqref{eq:low_k_exp}. By identifying the coefficients of the expansion we will find the parameters we seek.
We begin by using the fact that $\arctan x = x-\frac{x^3}{3}+O(x^5)$ to rewrite the series~\eqref{eq:phaseshift_sum} when $k/\beta\ll 1$ as
\begin{eqnarray}
\Xi &=& -\left[\psi(\tfrac{1}{2}-d) + \gamma\right]\frac{k}{\beta} \nonumber \\
&+&\left[\tfrac{1}{6}\psi^{(2)}(\tfrac{1}{2}-d)+\tfrac{8}{3} \zeta(3) \right]\left( \frac{k}{\beta}\right)^3+O\left(k^5\right ),
\end{eqnarray}
where $\psi^{(n)}$ is the polygamma function\cite{Abramowitz1964} and $\psi=\psi^{(0)}$. Defining two variables
\begin{equation}
\eta = \frac{1}{\beta}\left ( 2\gamma+\ln(2d) +\beta r_0+\psi(\tfrac{1}{2}-d)\right )
\end{equation}
and
\begin{equation}
\xi = \frac{1}{\beta^3}\left (\tfrac{1}{6}\psi^{(2)}(\tfrac{1}{2}-d)+\tfrac{8}{3}\zeta(3)\right ),
\end{equation}
the phase shift is rewritten as
\begin{equation}
\delta^{(aux)}_0(k) =-k\eta+k^3\xi +O\left(k^5 \right).
\end{equation}
On the other hand, using the Maclaurin expansion of $\cot\delta_0$ we write,
\begin{equation}
k\cot\delta_0(k) = \frac{k}{\delta_0(k)}-\frac{k\delta_0(k)}{3}+O\left(\delta_0^3 \right),
\end{equation}
which yields
\begin{equation}
k\cot\delta_0(k) = -\frac{1}{\eta}+k^2\left( \frac{\eta}{3}-\frac{\xi}{\eta^2}\right) +O\left(k^3\right ).
\end{equation}
Identifying the terms in the previous expression with the ones in Eq.~\eqref{eq:low_k_exp} we find that the scattering length is given by
\begin{equation}
a = r_0+\frac{1}{\beta}\left[ 2\gamma+\ln(2d)+\psi(\tfrac{1}{2}-d)\right],\label{eq:alpha}
\end{equation}
while the effective range is
\begin{equation}
r_e = \frac{2}{3}a-\frac{\psi^{(2)}(\tfrac{1}{2}-d)+16\zeta(3)}{3\beta^ 3a^ 2} \label{eq:re}.
\end{equation}
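Equations~\eqref{eq:alpha} and~\eqref{eq:re} are straightforward to evaluate; one caveat is that SciPy's \texttt{polygamma} may not handle the negative arguments required here, so in the Python sketch below $\psi^{(2)}$ is summed from its series representation (the parameter values are illustrative):

```python
import numpy as np
from scipy.special import digamma, zeta

def psi2(z, nterms=200_000):
    """psi''(z) = -2 * sum_{m>=0} 1/(m + z)**3, valid away from the poles."""
    m = np.arange(nterms, dtype=float)
    return -2.0 * np.sum(1.0 / (m + z)**3)

def scattering_params(d, r0beta):
    """Scattering length a and effective range r_e, in units of 1/beta."""
    a = r0beta + 2.0 * np.euler_gamma + np.log(2.0 * d) + digamma(0.5 - d)
    re = 2.0 * a / 3.0 - (psi2(0.5 - d) + 16.0 * zeta(3)) / (3.0 * a**2)
    return a, re

# a changes sign across the zero-energy resonance at d = 3/2 (negative
# just below, positive just above, where a new bound state appears):
a_lo, _ = scattering_params(1.45, 4.15)
a_hi, re_hi = scattering_params(1.55, 4.15)
print(a_lo < 0.0 < a_hi)   # True
print(np.isfinite(re_hi))  # True
```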
The scattering length and effective range as a function of the depth of the potential are illustrated in Figure~\figref{fig:resultado} for the case $r_0\beta=4.15$, which could correspond to two ${}^6\mathrm{Li}$ atoms colliding in the ${}^3\Sigma_u^+$ electronic state when $D=40$~meV \cite{lithium}.
One notices that for $d\approx n+1/2$, $n=0,1,2,\dots$ the scattering length is not well defined since
its limiting value from the left is negative and diverging, while from the right it is positive and diverging.
In the nomenclature of scattering theory that condition is known as the unitarity limit or the zero energy resonance\cite{joachain}. For those values of
$d$ the Morse potential is about to support a new bound state.
As for the effective range, it is always positive except when $d\ll 1$. This condition is not shared by other potentials, such as the square well, which admit positive and negative values of $r_e$ over extended regions of the potential depth.
We also observe that the $r_e$ resonances are located at the zeros of $a$, to the left of the $a$ resonances.
\begin{figure}
\includegraphics[width=0.7\linewidth]{fig1.pdf}
\caption{(Color online) The scattering length and effective range obtained from the auxiliary problem as a function of $d=\sqrt{2\mu D}/\hbar\beta$ for $r_0\beta=4.15$.}
\label{fig:resultado}
\end{figure}
\section{Radial solutions for the physical problem: consequences of including $r<0$ in the auxiliary problem\label{sec:cost}}
An auxiliary mathematical problem was used to find the analytical results shown in the previous sections. The purpose of this section is to understand better the trade-offs of replacing the physical problem by the auxiliary one.
\subsection{Bound States}
First of all, the general methodology described at the beginning of the last section, when applied to the auxiliary problem, yields simple analytical solutions. Nevertheless, if we allow $r$ to vary only in the interval $[0,\infty)$ and demand $u_b(z(r))|_{r=0} = 0$, that methodology also yields analytical solutions, but they do not reduce to the simple expression~\eqref{eq:sol_ligada}.
These solutions are
\begin{equation}
u_j(z) = C e^{-z/2}z^{+|b_j|}M(\tfrac{1}{2}+|b_j|-d,1+2|b_j|,z).
\end{equation}
Here $\vert b_j\vert $, which determine the eigenenergies, are the positive roots of the equation in $b$ given by
\begin{equation}
M\left(\tfrac{1}{2}+|b|-d,1+2|b|,2de^{\beta r_0}\right ) =0.
\end{equation}
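The roots can be bracketed and refined numerically. The Python sketch below uses the illustrative parameters $d=1.2$, $r_0\beta=4.15$ (a single bound state); consistently with Figure~\figref{fig:energias}, for $r_0\beta\sim 4$ the physical root is numerically indistinguishable from the auxiliary value $b_0=d-1/2$:

```python
import numpy as np
from scipy.special import hyp1f1
from scipy.optimize import brentq

d, r0beta = 1.2, 4.15            # illustrative parameters, one bound state
z0 = 2.0 * d * np.exp(r0beta)

def f(b):
    """Left-hand side of the quantization condition M(1/2+b-d, 1+2b, z0) = 0."""
    return hyp1f1(0.5 + b - d, 1.0 + 2.0 * b, z0)

b_root = brentq(f, 0.6, 0.8)     # f changes sign across b = d - 1/2 = 0.7
print(abs(b_root - (d - 0.5)))   # ~0 to double precision
```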
In Figure~\figref{fig:energias} we compare the energy eigenvalues that result in the physical and auxiliary problems
for several values of the product $r_0 \beta$. As noticed before, if $d$ is in the interval $[n+1/2, n+3/2)$ for
$n=0,1,2,\dots$, then precisely $n+1$ bound states are supported by the auxiliary Morse problem. For the physical problem, the bound states appear
at greater values of $d$. This effect is more evident for small values of $d$ and $r_0\beta$. For instance,
for $r_0\beta \sim 1$ the physical Morse problem supports no bound states until $d>0.6$. For $r_0\beta \sim 4$, the first bound state is
found for $d>0.5$, which coincides (modulo the limited double precision of the computer calculation) with the auxiliary result.
\begin{figure}
\includegraphics[width=0.7\linewidth]{fig2.pdf}
\caption{(Color online) Comparison between the energy values that result in the physical and auxiliary problems \mbox{($\Delta E = E^{(aux)}-E$)} as a function of
the scaled potential depth $d=\sqrt{2\mu D}/\hbar\beta$ and several values of the product $r_0 \beta$, with $\beta$ the inverse of the potential range and
$r_0$ the equilibrium distance of the potential.}
\label{fig:energias}
\end{figure}
\subsection{Unbound States}
For small potential depths, $d \rightarrow 0$, the scattering length evaluated using Eq.~\eqref{eq:alpha}
exhibits a logarithmic divergence and the free-particle expressions are not recovered. In order to test the physical reliability of this property,
$a$ must be evaluated considering the physical boundary condition $u_b(z(r=0))=0$.
Imposing it, a direct calculation shows that the radial functions now take the form
\begin{equation}
u_b(z) = 2i\tilde C_b e^{-z/2} \Im\left\{\tilde{A}(b)z^{i|b|}M\left(\tfrac{1}{2}+i|b|-d,1+2i|b|, z\right) \right\},
\end{equation}
with
\begin{equation}
\tilde{A}(b) = z_0^{-i|b|}M\left(\tfrac{1}{2}-i|b|-d,1-2i|b|, z_0\right),
\end{equation}
and $ z_0 = 2de^ {\beta r_0}$.
In a similar way as we obtained the phase shift before we now get
\begin{eqnarray}
\delta_0(b) &=& -\arg\left[z_0^{i|b|}\tilde{A}(b)\right]\nonumber\\
& =& -\arg M\left(\tfrac{1}{2}-i|b|-d,1-2i|b|, z_0\right).
\label{eq:fas_cortar_en_cero}
\end{eqnarray}
Notice that, due to the structure of $\tilde A (b)$, the factor $z_0^{i|b|}$ that appears in the phase shift $\delta_0$ (which
gives rise to the divergence of $a$ in Eq.~\eqref{eq:alpha}) is now canceled. From Eq.~\eqref{eq:fas_cortar_en_cero},
$a$ can be calculated by performing numerically the limit $k\rightarrow 0$ of $k\cot\delta_0(k)$, Eq.~\eqref{eq:low_k_exp}. In Fig.~\figref{fig:efecto_alfa} the resulting scattering lengths are illustrated, and one can see that the divergence as $d\rightarrow 0$ is removed for the physical problem. As for the effective range,
it now vanishes as $d\rightarrow 0$, and for larger values of $d$ the differences between the values of $r_e$ for the physical and auxiliary problems turn out to be less than one percent in all the studied cases.
\begin{figure}
\includegraphics[width=0.7\linewidth]{fig3.pdf}
\caption{(Color online) The scattering length as a function of $d=\sqrt{2\mu D}/\hbar\beta$ for $r_0\beta=4.15$ for the physical and auxiliary problems.}
\label{fig:efecto_alfa}
\end{figure}
\section{Conclusions}
In this work, analytic expressions have been obtained that solve the eigenvalue problem of the Morse Hamiltonian under two different
boundary conditions. This Hamiltonian is widely used to model the $s$-wave anharmonic vibrations of nuclei in diatomic molecules
and supports a finite number of bound states.
It was shown that the eigenvalue of the highest excited bound state derived from the boundary condition $u(z(r\rightarrow-\infty))=0$
differs significantly from that derived from the condition $u(z(r=0))=0$ for potentials with a range similar to the equilibrium position,
$\beta r_0\approx 1$, at the unitarity limit, \emph{i.e.} with a potential depth $D$ close to the values that
yield the possibility for the Hamiltonian to support a new bound state. Outside this limit, the difference between the energy eigenvalues for the auxiliary
and the physical boundary condition becomes small. This is congruent with using the former in the standard analysis of molecular vibrations.
We also derived analytical expressions for the phase shift in binary collisions both for the auxiliary and the physical problem.
From them, the most important parameters necessary to describe an ultracold collision, that is, the scattering length $a$ and
effective range $r_e$ were evaluated. A divergence of $a$ predicted for very small potential depths $d=\sqrt{2\mu D}/\hbar\beta \ll 1$
for the auxiliary problem was removed by imposing the physical boundary condition.
This analysis illustrates the fact that, even though the scattering length is a property that summarizes the asymptotic
behavior of a wave function at $r\rightarrow \infty$,
it is highly influenced by its behavior at the origin. It is important to mention that precisely this observation is
the basis of the theories that use effective potentials to incorporate scattering effects. Perhaps the best-known example of the latter
is the Gross-Pitaevskii equation \cite{gross,pitaevskii} that models an ultracold gas of bosons.
{\bf Acknowledgement.} We acknowledge partial support by DGAPA-UNAM through the project IN111109.
\section{Introduction}
The experimental observation of neutrino oscillations, reported, e.g.,
in Ref.~\cite{Ago18}, confirmed that neutrinos are massive particles
having nonzero mixing between different flavors. Among various types
of neutrino oscillations, we distinguish neutrino spin oscillations~\cite{GiuStu15},
which are the transitions between different helicity states within
one neutrino type. If a left polarized neutrino changes its polarization,
it cannot be observed since right neutrinos are sterile in the standard
model. It will result in the effective reduction of the initial neutrino
flux.
It is known that external backgrounds, e.g., the neutrino interaction
with matter~\cite{MalSmi16}, can modify the process of neutrino
oscillations. The gravitational interaction was found in Ref.~\cite{AhlBur96}
to influence flavor oscillations of neutrinos. Neutrino spin oscillations
in various external fields in curved spacetime were studied in Refs.~\cite{Dvo06,Dvo13,Dvo19},
where we considered both static metrics and time dependent backgrounds,
such as a gravitational wave. Note that the evolution of the fermion
spin in curved spacetime was analyzed in Refs.~\cite{ObuSilTer09,ObuSilTer17}
using both quasiclassical and quantum approaches.
The gravity induced neutrino spin oscillations, studied in Refs.~\cite{Dvo06,Dvo13,SorZil07,AlaNod15,Cha15},
were analyzed for neutrinos orbiting a massive object, e.g., a black
hole (BH). However, in this situation, even if neutrino spin oscillations
can be quite intense, it is rather difficult to understand what kind
of observational effects one can expect since a particle is gravitationally
captured by BH, or a neutrino falls to the BH surface. That is why
it is interesting to study spin effects, or neutrino spin oscillations,
e.g., in the neutrino gravitational scattering, when one can control
the helicities of both incoming and outgoing particles.
This research is inspired by the recent observation of the shadow
of a supermassive BH (SMBH)~\cite{Aki19}, which provides the unique
test of the general relativity in the strong field limit. A bright
ring around a BH shadow is formed by photons, which are emitted by
an accretion disk and then experience strong lensing in the gravitational
field of BH~\cite{GraHolWal19}. However, besides photons, a significant
flux of neutrinos was found in Ref.~\cite{CabMcLSur12} to be emitted
by an accretion disk. These particles are subject to neutrino oscillations.
In this work, we shall examine how a strong gravitational field of
BH and the neutrino interaction with an accretion disk can modify
the helicity of scattered particles.
The neutrino gravitational scattering was studied recently~\cite{Cor15},
mainly in connection with the determination of the BH shadow produced
by these particles~\cite{StuSch19}. In our work, we shall focus
on the analysis of spin oscillations in the neutrino gravitational
scattering, which effectively reduce the flux of neutrinos measured
in a detector.
Photons, which form the ring around the BH shadow, interact both with
its gravitational field and with plasma which surrounds BH. The interaction
with plasma can modify the size and the form of the BH shadow (see
Ref.~\cite{CunHer18} for a review). In the present work, we shall
study how the neutrino interaction with background matter, e.g., with
an accretion disk, can influence the observed flux of gravitationally
scattered neutrinos.
In this work, we continue our studies of neutrino spin oscillations
in Refs.~\cite{Dvo06,Dvo13,Dvo19}. We start in Sec.~\ref{sec:GRAV}
with the analysis of the neutrino spin evolution when a particle gravitationally
scatters off a nonrotating BH. We find the expressions in quadratures
for the transition and survival probabilities for ultrarelativistic
neutrinos and analyze them for different impact parameters. Then,
in Sec.~\ref{sec:MATT}, we formulate the effective Schr\"{o}dinger equation
for neutrino spin oscillations in the scattering off BH surrounded
by background matter. We study astrophysical applications in Sec.~\ref{sec:APPL}.
In particular, we consider the effect of spin oscillations on the
measured neutrino fluxes when particles scatter off SMBH with a realistic
accretion disk. Finally, in Sec.~\ref{sec:DISC}, we discuss our
results. In Appendix~\ref{sec:PARTM}, we recall how a scalar particle
moves in the Schwarzschild metric.
\section{Neutrino spin evolution in scattering off BH\label{sec:GRAV}}
In this section, we study how the spin of a neutrino evolves when
a particle scatters off a Schwarzschild BH. We solve the spin evolution
equation in quadratures and analyze the solution for ultrarelativistic
neutrinos. The transition and survival probabilities for neutrino
spin oscillations are derived.
We study the neutrino motion in the vicinity of a nonrotating BH.
Using the spherical coordinates $(r,\theta,\phi)$, the interval in
this case has the form~\cite[p.~284]{LanLif71},
\begin{equation}
\mathrm{d}\tau^{2}=A^{2}\mathrm{d}t^{2}-A^{-2}\mathrm{d}r^{2}-r^{2}(\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}),\label{eq:intschw}
\end{equation}
where $A=\sqrt{1-r_{g}/r}$, $r_{g}=2M$ is the gravitational radius,
and $M$ is the BH mass. Since the Schwarzschild metric in Eq.~(\ref{eq:intschw})
is spherically symmetric, we can take that a neutrino moves in the
equatorial plane with $\theta=\pi/2$, i.e. $\mathrm{d}\theta=0$.
In Refs.~\cite{Dvo06,Dvo13}, we found that the neutrino invariant
spin $\bm{\zeta}$, defined in a locally Minkowskian frame, evolves
as
\begin{equation}
\frac{\mathrm{d}\bm{\zeta}}{\mathrm{d}t}=\frac{2}{\gamma}(\bm{\zeta}\times\bm{\Omega}_{g}),\label{eq:spinevgen}
\end{equation}
where $\gamma=\mathrm{d}t/\mathrm{d}\tau$. If a neutrino interacts
with a Schwarzschild BH, the vector $\bm{\Omega}_{g}$ in Eq.~(\ref{eq:spinevgen})
has only one nonzero component~\cite{Dvo06},
\begin{equation}
\bm{\Omega}_{g}=(0,\Omega_{2},0),\quad\Omega_{2}=\frac{1}{2}\frac{\mathrm{d}\phi}{\mathrm{d}t}\left(-A+\frac{\gamma}{\left(1+\gamma A\right)}\frac{r_{g}}{2r}\right),\label{eq:Omega2}
\end{equation}
where $\mathrm{d}\phi/\mathrm{d}t=LA^{2}/Er^{2}$ is the angular velocity,
which can be obtained using Eq.~(\ref{eq:eqmtr}), $L$ is the conserved
angular momentum of a neutrino, and $E$ is the neutrino energy. The
parameter $\gamma$ in Eqs.~(\ref{eq:spinevgen}) and~(\ref{eq:Omega2})
has the form, $\gamma=E/mA^{2}$.
We are interested in neutrino spin oscillations, i.e. in the change
of the neutrino helicity, $h=(\bm{\zeta}\mathbf{u})/|\mathbf{u}|$,
where $\mathbf{u}$ is the spatial part of the neutrino four velocity
in the locally Minkowskian frame. Therefore, besides the study of
the neutrino spin in Eq.~(\ref{eq:spinevgen}), we should account
for the evolution of $\mathbf{u}$.
The expression for $\mathbf{u}$ in the Schwarzschild metric has the
form~\cite{Dvo06},
\begin{align}
\mathbf{u}= & \left(\frac{\mathrm{d}r}{\mathrm{d}\tau}A^{-1},0,r\frac{\mathrm{d}\phi}{\mathrm{d}\tau}\right)\nonumber \\
& =\left(\pm\frac{1}{m}\left[E^{2}-m^{2}A^{2}\left(1+\frac{L^{2}}{m^{2}r^{2}}\right)\right]^{1/2},0,\frac{L}{mr}\right),\label{eq:uexpl}
\end{align}
where the signs $\pm$ stand for outgoing and incoming neutrinos, respectively
(see Eq.~(\ref{eq:eqmtr})). At $r\to\infty$, $\mathbf{u}\to\mathbf{u}_{\pm\infty}=\left(\pm\left[E^{2}-m^{2}\right]^{1/2}/m,0,0\right)$,
i.e. the asymptotic neutrino motion happens along the first axis in
the locally Minkowskian frame. In this frame, an incoming neutrino
propagates opposite to the first axis, while an outgoing particle moves along
it.
Since only $\Omega_{2}\neq0$, the nonzero neutrino spin components
are $\zeta_{1,3}\neq0$, and $\zeta_{2}=0$. It is convenient to represent
\begin{equation}
\zeta_{1}=\cos\alpha,\quad\zeta_{3}=\sin\alpha,\label{eq:zeta13alpha}
\end{equation}
where $\alpha$ is the rotation angle of the spin from its initial
direction.
Now we have to specify the initial condition for Eq.~(\ref{eq:spinevgen}).
We suppose that, initially, at $r\to\infty$ and $\phi\to0$, an incoming
neutrino is left polarized, i.e. the helicity is negative, $h_{-\infty}=(\bm{\zeta}_{-\infty}\mathbf{u}_{-\infty})/|\mathbf{u}_{-\infty}|=-1$.
Accounting for the expression for $\mathbf{u}_{-\infty}$ above, we
get that $\zeta_{-\infty1}=1$ and $\zeta_{-\infty3}=0$, or $\alpha_{-\infty}=0$
in Eq.~(\ref{eq:zeta13alpha}).
The helicity of an outgoing neutrino has the form, $h_{+\infty}=(\bm{\zeta}_{+\infty}\mathbf{u}_{+\infty})/|\mathbf{u}_{+\infty}|$,
where $\bm{\zeta}_{+\infty}=(\cos\alpha_{+\infty},0,\sin\alpha_{+\infty})$
and $\mathbf{u}_{+\infty}$ is given above. Using Eq.~(\ref{eq:zeta13alpha}),
we get that $h_{+\infty}=\cos\alpha_{+\infty}$. The transition $P_{\mathrm{LR}}$
and survival $P_{\mathrm{LL}}$ probabilities for neutrino spin oscillations
are
\begin{equation}
P=\frac{1}{2}(1\pm h_{+\infty}),\label{eq:Pgen}
\end{equation}
where the upper sign stays for $P_{\mathrm{LR}}$ and the lower one
for $P_{\mathrm{LL}}$.
The angle $\alpha$ corresponds to the spin projection on the $x$-axis
in the neutrino rest frame. Because of the Lorentz contraction, this projection is a factor of $m/E$
shorter for a nonmoving observer, who measures the neutrino polarization.
It means that the observed angle
should be rescaled by the factor $E/m$: $\alpha\to\alpha E/m$, which is
equivalent to the replacement $\Omega_{2}\to\Omega_{2}E/m$.
Now we should find $\alpha_{+\infty}$. Using Eqs.~(\ref{eq:spinevgen}),
(\ref{eq:Omega2}), (\ref{eq:zeta13alpha}) and~(\ref{eq:eqmtr}),
we get that the angle $\alpha$ obeys the equation,
\begin{align}
\frac{\mathrm{d}\alpha}{\mathrm{d}r}= & F,\quad F(r)=\pm\frac{AL}{mr^{2}}\frac{\frac{E}{m}\left(\frac{3r_{g}}{2r}-1\right)-A^{3}}{\frac{E}{m}+A}\left[\frac{E^{2}}{m^{2}}-A^{2}\left(1+\frac{L^{2}}{m^{2}r^{2}}\right)\right]^{-1/2},\label{eq:F}
\end{align}
where the signs $\pm$ stand for outgoing and incoming neutrinos. Then
we should account for the initial condition $\alpha_{-\infty}=0$
and the fact that, by symmetry, $\alpha_{+\infty}$ is twice the angle accumulated up
to the point of minimal distance between the neutrino and the BH. We express the
final result for ultrarelativistic neutrinos, with $E\gg m$, as
\begin{equation}
\alpha_{+\infty}=y\int_{x_{m}}^{\infty}\frac{\mathrm{d}x}{x^{2}}\frac{(3-2x)\sqrt{x-1}}{\sqrt{x^{3}-y^{2}(x-1)}},\label{eq:aobslim}
\end{equation}
where $y=b/r_{g}$, $b=L/E$ is the impact parameter, and $x_{m}$
is the maximal root of the equation
\begin{equation}
x^{3}-y^{2}(x-1)=0.\label{eq:eqtosolve}
\end{equation}
Note that $y>y_{0}=3\sqrt{3}/2$ for a neutrino not to fall to BH
(see Appendix~\ref{sec:PARTM}).
The expression for roots $x_{1,2,3}$ of Eq.~(\ref{eq:eqtosolve})
for the arbitrary $y$ has the form,
\begin{equation}
x_{k}=\frac{2y}{\sqrt{3}}\cos\left[\frac{1}{3}\arccos\left(-\frac{3\sqrt{3}}{2y}\right)-\frac{2\pi}{3}(k-1)\right],\quad k=1,2,3,\label{eq:roots}
\end{equation}
where $x_{1}\equiv x_{m}$ is the maximal root. First, let us analyze
Eq.~(\ref{eq:aobslim}) in the case $y\gg y_{0}$. Using Eq.~(\ref{eq:roots})
and keeping only the leading terms, we get that the roots have the
form, $x_{1}=y-\tfrac{1}{2}-\tfrac{3}{8y}+\mathcal{O}(y^{-3})$, $x_{2}=1+\mathcal{O}(y^{-4})$,
and $x_{3}=-y-\tfrac{1}{2}+\tfrac{3}{8y}+\mathcal{O}(y^{-3})$. In
this case, we get that
\begin{equation}
\alpha_{+\infty}=8y\int_{a}^{\infty}\frac{\mathrm{d}x}{(2x-1)^{2}}\frac{(1-x)}{\sqrt{x^{2}-a^{2}}}\approx-\pi-\frac{\pi}{4y^{2}},\label{eq:abigy}
\end{equation}
where $a=y-\tfrac{3}{8y}$. The transition and survival probabilities
in Eq.~(\ref{eq:Pgen}) take the form,
\begin{equation}
P_{\mathrm{LR}}=\frac{1}{2}\left[1-\cos\frac{\pi}{4y^{2}}\right]\approx\frac{\pi^{2}}{64y^{4}},\quad P_{\mathrm{LL}}=\frac{1}{2}\left[1+\cos\frac{\pi}{4y^{2}}\right]\approx1-\frac{\pi^{2}}{64y^{4}}.\label{eq:Plim}
\end{equation}
One can see that $P_{\mathrm{LR}}\to0$ (and $P_{\mathrm{LL}}\to1$)
if $y\gg y_{0}$. This is expected since, at $y\gg y_{0}$, a neutrino
propagates far away from the BH, where the gravitational interaction
causing the spin flip is weak. Thus, the neutrino polarization remains
practically unchanged.
Now we discuss the situation when $y\to y_{0}$. Then, Eq.~(\ref{eq:eqtosolve})
has the following roots: $x_{1}=x_{2}=3/2$ and $x_{3}=-3$. The spin
rotation angle takes the value
\begin{equation}
\alpha_{+\infty}=-3\sqrt{3}\int_{3/2}^{\infty}\frac{\mathrm{d}x}{x^{2}}\frac{\sqrt{x-1}}{\sqrt{x+3}}=-\frac{2\pi}{3},
\end{equation}
which is finite even though such a neutrino asymptotically approaches
the BH. The corresponding probabilities are $P_{\mathrm{LR}}=0.25$ and
$P_{\mathrm{LL}}=0.75$ for such neutrinos. We present the transition
and survival probabilities for arbitrary $y$ in Sec.~\ref{sec:APPL},
where we study some possible astrophysical applications.
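As a numerical cross-check (not in the original text), the limiting integral can be computed directly; one can also note that the substitution $u=1/x$ reduces it to $\int_{0}^{2/3}\sqrt{(1-u)/(1+3u)}\,\mathrm{d}u$. Quadrature confirms the value $-2\pi/3$ and the limiting probabilities:

```python
import numpy as np
from scipy.integrate import quad

# the y -> y0 limit of the spin rotation angle, with roots x_{1,2} = 3/2, x_3 = -3
val, _ = quad(lambda x: np.sqrt(x - 1.0) / (x**2 * np.sqrt(x + 3.0)), 1.5, np.inf)
alpha0 = -3.0 * np.sqrt(3.0) * val            # should equal -2*pi/3
p_lr = 0.5 * (1.0 - np.cos(alpha0 + np.pi))   # transition probability, 0.25
p_ll = 1.0 - p_lr                             # survival probability, 0.75
```
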
\section{Neutrino gravitational scattering accounting for the matter interaction\label{sec:MATT} }
In this section, we formulate the neutrino spin evolution equations
in background matter under the influence of a gravitational field
when a neutrino scatters off BH. Then, we derive the effective Schr\"{o}dinger
equation for scattered neutrinos.
Using the forward scattering approximation, one gets that the neutrino
interaction with background matter is described by the following effective
Lagrangian in Minkowski spacetime~\cite{MohPal04}:
\begin{equation}
\mathcal{L}_{m}=-\sqrt{2}G_{\mathrm{F}}\bar{\nu}\gamma^{\mu}(1-\gamma^{5})\nu G_{\mu},\label{eq:Largmat}
\end{equation}
where $\nu$ is the neutrino bispinor, $\gamma^{\mu}$ and $\gamma^{5}$
are the Dirac matrices, and $G_{\mathrm{F}}=1.17\times10^{-5}\,\text{GeV}^{-2}$
is the Fermi constant. The four vector $G^{\mu}$ is the linear combination
of the hydrodynamic currents and polarizations of background fermions.
It depends on the chemical composition of matter and the type of the
neutrino. The explicit form of $G^{\mu}$ can be found in Ref.~\cite{DvoStu02}.
Based on Eq.~(\ref{eq:Largmat}), the influence of the neutrino interaction
with background matter on its spin evolution in curved spacetime was
studied in Refs.~\cite{Dvo13,Dvo19}. The matter interaction results in
the appearance of additional components of the vector $\bm{\Omega}_{g}$
in Eq.~(\ref{eq:Omega2}):
$\bm{\Omega}_{g}\to\bm{\Omega}=\bm{\Omega}_{g}+\bm{\Omega}_{m}$.
If we study the neutrino interaction with nonmoving and unpolarized
background fermions in curved spacetime, the vector $\bm{\Omega}_{m}$
has the form,
\begin{equation}
\bm{\Omega}_{m}=\frac{G_{\mathrm{F}}}{\sqrt{2}}\frac{g^{0}}{\gamma}\mathbf{u}=\frac{G_{F}}{\sqrt{2}}n_{\mathrm{eff}}\frac{1}{\gamma}\left(\frac{\mathrm{d}r}{\mathrm{d}\tau},0,Ar\frac{\mathrm{d}\phi}{\mathrm{d}\tau}\right),\label{eq:Omegam}
\end{equation}
where $g^{0}=e_{\,\mu}^{0}G^{\mu}=AG^{0}$, $e_{\,\mu}^{0}=(A,0,0,0)$
is the vierbein vector in the Schwarzschild metric (see Ref.~\cite{Dvo06}),
and $G^{0}=n_{\mathrm{eff}}$ is the invariant effective density of
background matter. We use Eq.~(\ref{eq:uexpl}) to derive Eq.~(\ref{eq:Omegam}).
If we study spin oscillations of electron neutrinos in electrically
neutral hydrogen plasma, then $n_{\mathrm{eff}}=n_{e}$, where $n_{e}$
is the electron number density. The expressions for $n_{\mathrm{eff}}$
for other neutrino oscillation channels and various types of background
fermions can be found in Ref.~\cite{DvoStu02}.
Instead of dealing with Eq.~(\ref{eq:spinevgen}) for the spin precession,
it is convenient to study the neutrino polarization density matrix,
$\rho=\tfrac{1}{2}[1+(\bm{\sigma\zeta})]$, which obeys the equation,
$\mathrm{i}\dot{\rho}=[H,\rho]$, where $H=-(\bm{\sigma\Omega})$
and $\bm{\Omega}$ includes both the gravity and matter contributions
in Eqs.~(\ref{eq:Omega2}) and~(\ref{eq:Omegam}).
Since the Liouville\textendash von Neumann equation for the density
matrix is rather cumbersome to analyze, we can use the Schr\"{o}dinger
equation, $\mathrm{i}\dot{\psi}=H\psi$, instead. As we mentioned in Sec.~\ref{sec:GRAV},
neutrinos move along the first axis in the locally Minkowskian frame
at $r\to\infty$. Hence, it is convenient to use this axis for the
spin quantization. It means that we should transform the Hamiltonian,
$H\to U_{2}HU_{2}^{\dagger}$, where $U_{2}=\exp(\mathrm{i}\pi\sigma_{2}/4)$.
This procedure gives a transparent meaning to the effective wave function $\psi$.
As in Eq.~(\ref{eq:F}), it is convenient to rewrite the Schr\"{o}dinger
equation using the radial coordinate $r$,
\begin{equation}
\mathrm{i}\frac{\mathrm{d}\psi}{\mathrm{d}r}=H_{r}\psi,\quad H_{r}=-U_{2}(\bm{\sigma\Omega}_{r})U_{2}^{\dagger},\label{eq:Schr}
\end{equation}
where
\begin{equation}
\bm{\Omega}_{r}=\frac{\mathrm{d}t}{\mathrm{d}r}\bm{\Omega}=\left(\frac{G_{F}}{\sqrt{2}}n_{\mathrm{eff}},\frac{F}{2},Ar\frac{\mathrm{d}\phi}{\mathrm{d}r}\frac{G_{F}}{\sqrt{2}}n_{\mathrm{eff}}\right).\label{eq:Omegar}
\end{equation}
Here, $F$ is given in Eq.~(\ref{eq:F}).
Equation~(\ref{eq:Schr}) should be supplemented with the initial condition
$\psi_{-\infty}^{\mathrm{T}}=(1,0)$, which means that all incoming
neutrinos are left polarized. Since the neutrino velocity reverses
its direction at $t\to+\infty$, the transition probability reads
$P_{\mathrm{LR}}=|\psi_{+\infty}^{(1)}|^{2}$, and, correspondingly,
the survival probability is $P_{\mathrm{LL}}=|\psi_{+\infty}^{(2)}|^{2}$,
where $\psi_{+\infty}^{\mathrm{T}}=(\psi_{+\infty}^{(1)},\psi_{+\infty}^{(2)})$
is the asymptotic solution of Eq.~(\ref{eq:Schr}).
The solution of Eqs.~(\ref{eq:Schr}) and~(\ref{eq:Omegar}) can
be found only numerically because of the nontrivial dependence of
$\bm{\Omega}_{r}$ on $r$. Moreover, in Sec.~\ref{sec:APPL}, we
discuss the situation when $n_{\mathrm{eff}}=n_{\mathrm{eff}}(r)$,
which makes the analysis more complicated.
We also mention that we cannot integrate Eqs.~(\ref{eq:Schr}) and~(\ref{eq:Omegar})
up to the turning point $r_{m}$ and then automatically reconstruct $\psi_{+\infty}$,
as we did in Sec.~\ref{sec:GRAV} to find $\alpha_{+\infty}$.
In the presence of background matter, the dynamics of the neutrino
polarization is nonabelian. Moreover, the term $\tfrac{\mathrm{d}\phi}{\mathrm{d}r}$
in $\bm{\Omega}_{r}$ changes sign at $r_{m}$ (see Eq.~(\ref{eq:eqmtr})).
Thus, to obtain $\psi_{+\infty}$, one should integrate Eqs.~(\ref{eq:Schr})
and~(\ref{eq:Omegar}) first in the interval $+\infty>r>r_{m}$
and then for $r_{m}<r<+\infty$, with the solutions stitched together
at $r_{m}$. This requirement significantly reduces the accuracy of the
numerical simulation compared to Sec.~\ref{sec:GRAV}.
\section{Astrophysical applications\label{sec:APPL}}
In this section, we present the numerical solutions of Eqs.~(\ref{eq:Schr})
and~(\ref{eq:Omegar}) for the neutrino scattering off a SMBH surrounded
by an accretion disk. We discuss different orientations of neutrino
trajectories with respect to the disk plane. Measurable neutrino fluxes
are obtained.
First, we note that standard model neutrinos are produced as left-polarized
particles. If they interact gravitationally with a BH, some of the
incoming left-polarized neutrinos become right polarized after the
scattering. A neutrino detector can observe only left-polarized neutrinos.
Hence, the observed flux of neutrinos is $F_{\nu}=P_{\mathrm{LL}}F_{0}$,
where $F_{0}$ is the flux of scalar particles. The value of $F_{0}$
is proportional to the differential cross section, $F_{0}\sim\mathrm{d}\sigma/\mathrm{d}\varOmega$,
which is studied in Appendix~\ref{sec:PARTM}.
We assume that the neutrino beam scatters off a SMBH surrounded by
an accretion disk; for example, such a SMBH can reside at the center
of a Seyfert galaxy. We take the plasma density in the disk to scale
as $n_{e}\propto r^{-\beta}$. The value of $\beta$ is strongly model
dependent; for example, $\beta\approx0.5$ in the advection dominated
accretion disk studied in Ref.~\cite{Igu00}. If the mass of the SMBH
in question is $M\sim10^{8}M_{\odot}$, the plasma density in the vicinity
of the SMBH can be up to $n_{e}\sim10^{18}\,\text{cm}^{-3}$~\cite{Jia19}.
Thus, the dimensionless effective potential, $V(r)=G_{F}n_{e}(r)r_{g}/\sqrt{2}$,
reads $V(x)=V_{\mathrm{max}}x^{-\beta}$, where $x=r/r_{g}$.
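As a consistency check (our own estimate, using $r_{g}\approx2.95\,\text{km}\times M/M_{\odot}$ and $\hbar c\approx1.97\times10^{-14}\,\text{GeV}\,\text{cm}$), the quoted density indeed corresponds to $V_{\mathrm{max}}\approx0.1$:

```python
import numpy as np

hbar_c = 1.97327e-14          # GeV cm
G_F = 1.17e-5                 # Fermi constant, GeV^-2
r_g = 2.95e5 * 1.0e8          # cm; Schwarzschild radius for M = 10^8 M_sun
n_e = 1.0e18                  # cm^-3, peak plasma density near the SMBH

n_e_gev3 = n_e * hbar_c**3                                 # density in GeV^3
V_max = (G_F / np.sqrt(2.0)) * n_e_gev3 * (r_g / hbar_c)   # dimensionless, ~0.1

def V(x, beta):
    """Effective potential profile V(x) = V_max x^(-beta), with x = r / r_g."""
    return V_max * x**(-beta)
```
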
One can consider various neutrino trajectories with respect to an
accretion disk. However, to highlight the effect of the neutrino interaction
with matter we study two extreme cases: the neutrino motion in the
plane perpendicular to an accretion disk, marked by the symbol $\perp$,
and the neutrino propagation in the plane of an accretion disk, labeled
by the symbol $\parallel$. The effect of the neutrino matter interaction
is maximal in the latter situation since we assume that the disk is
slim.
First, we discuss the case $\perp$, when only gravity contributes
to the neutrino scattering off the BH. The transition and survival
probabilities, as functions of the dimensionless impact parameter $y=b/r_{g}$,
are shown in Figs.~\ref{fig:perpscat}(a) and~\ref{fig:perpscat}(b).
Although we show the probabilities for $y_{0}<y<11y_{0}$ (see also
Figs.~\ref{fig:paralscatLR} and~\ref{fig:paralscatLL} below),
we take $y<30y_{0}$ in our simulations. One can see in Figs.~\ref{fig:perpscat}(a)
and~\ref{fig:perpscat}(b) that $P_{\mathrm{LR}}^{(\perp)}\to0.25$
($P_{\mathrm{LL}}^{(\perp)}\to0.75$) at $y\to y_{0}$ and $P_{\mathrm{LR}}^{(\perp)}\to0$
($P_{\mathrm{LL}}^{(\perp)}\to1$) at $y\gg y_{0}$, which is in agreement
with the results in Sec.~\ref{sec:GRAV}.
\begin{figure}
\centering
\subfigure[]
{\label{1a}
\includegraphics[scale=.35]{V01s02fig3.eps}}
\hskip-.6cm
\subfigure[]
{\label{1b}
\includegraphics[scale=.35]{V01s02fig4.eps}}
\\
\subfigure[]
{\label{1c}
\includegraphics[scale=.35]{V01s02fig6.eps}}
\protect
\caption{(a)~The transition probability of spin oscillations $P_{\mathrm{LR}}^{(\perp)}$,
when a neutrino moves perpendicularly to an accretion disk, versus
the dimensionless impact parameter $y$. (b)~The survival probability
of spin oscillations $P_{\mathrm{LL}}^{(\perp)}$, when a neutrino
moves perpendicularly to an accretion disk, as a function of the dimensionless
impact parameter $y$. (c)~The ratio of the measured fluxes of neutrinos
and scalar particles, when they move perpendicularly to an accretion
disk, versus the scattering angle $\chi$ normalized by $\pi$.\label{fig:perpscat}}
\end{figure}
The probabilities as functions of the impact parameter, shown in Figs.~\ref{1a}
and~\ref{1b}, are not directly measurable quantities. In
Fig.~\ref{1c}, we show the measured flux of neutrinos,
moving perpendicularly to an accretion disk, $F_{\nu}^{(\perp)}$,
normalized by the flux of scalar particles, versus the scattering
angle $\chi$. These fluxes are proportional to the differential cross
section, $F\sim\mathrm{d}\sigma/\mathrm{d}\varOmega$, where $\mathrm{d}\varOmega=2\pi\sin\chi\mathrm{d}\chi$.
The flux of scalar particles (or the cross section), $F_{0}$, is
given in Appendix~\ref{sec:PARTM} (see also Ref.~\cite{DolDorLas06}).
One can see in Fig.~\ref{1c} that spin effects in the
neutrino gravitational scattering off BH significantly reduce the
observed flux of neutrinos compared to the case of scalar particles.
The influence of spin oscillations is maximal for neutrinos scattered
backwards. The reduction of the flux can be more than 20\% in this
situation.
Now we turn to the discussion of the neutrino interaction with both
gravity and an accretion disk, i.e., the case $\parallel$.
The transition and survival probabilities are shown in Figs.~\ref{fig:paralscatLR}
and~\ref{fig:paralscatLL} for various $V_{\mathrm{max}}$, or maximal
plasma density $n_{e}$, and $\beta$.
One can see that the closest agreement between the $\perp$ and $\parallel$
cases takes place for $V_{\mathrm{max}}=0.1$ and $\beta=0.5$;
cf. Figs.~\ref{1a} and~\ref{2a}, as well as Figs.~\ref{1b} and~\ref{3a}.
Indeed, this situation corresponds to a low density accretion disk
with $n_{e}=10^{18}\,\text{cm}^{-3}$ and a relatively rapid density
decrease (large $\beta=0.5$), i.e., the influence of the neutrino
matter interaction is minimal. The opposite case is presented in
Figs.~\ref{2d} and~\ref{3d}, where the influence of matter on neutrino
spin oscillations is maximal since the matter density is higher,
$n_{e}=2\times10^{18}\,\text{cm}^{-3}$. Moreover, the density profile
is less steep (small $\beta=0.2$), i.e., a neutrino stays longer inside
such a disk.
\begin{figure}
\centering
\subfigure[]
{\label{2a}
\includegraphics[scale=.35]{V01s05fig1.eps}}
\hskip-.6cm
\subfigure[]
{\label{2b}
\includegraphics[scale=.35]{V02s05fig1.eps}}
\\
\subfigure[]
{\label{2c}
\includegraphics[scale=.35]{V01s02fig1.eps}}
\hskip-.6cm
\subfigure[]
{\label{2d}
\includegraphics[scale=.35]{V02s02fig1.eps}}
\protect
\caption{The transition probability of spin oscillations $P_{\mathrm{LR}}^{(\parallel)}$,
when a neutrino interacts with an accretion disk, versus the dimensionless
impact parameter $y$. (a)~$V_{\mathrm{max}}=0.1$ ($n_{e}=10^{18}\,\text{cm}^{-3}$)
and $\beta=0.5$; (b)~$V_{\mathrm{max}}=0.2$ ($n_{e}=2\times10^{18}\,\text{cm}^{-3}$)
and $\beta=0.5$; (c)~$V_{\mathrm{max}}=0.1$ and $\beta=0.2$;
(d)~$V_{\mathrm{max}}=0.2$ and $\beta=0.2$.\label{fig:paralscatLR}}
\end{figure}
\begin{figure}
\centering
\subfigure[]
{\label{3a}
\includegraphics[scale=.35]{V01s05fig2.eps}}
\hskip-.6cm
\subfigure[]
{\label{3b}
\includegraphics[scale=.35]{V02s05fig2.eps}}
\\
\subfigure[]
{\label{3c}
\includegraphics[scale=.35]{V01s02fig2.eps}}
\hskip-.6cm
\subfigure[]
{\label{3d}
\includegraphics[scale=.35]{V02s02fig2.eps}}
\protect
\caption{The survival probability of spin oscillations $P_{\mathrm{LL}}^{(\parallel)}$,
when a neutrino interacts with an accretion disk, versus the dimensionless
impact parameter $y$ for different $V_{\mathrm{max}}$ and $\beta$.
(a)~$V_{\mathrm{max}}=0.1$ ($n_{e}=10^{18}\,\text{cm}^{-3}$) and
$\beta=0.5$; (b)~$V_{\mathrm{max}}=0.2$ ($n_{e}=2\times10^{18}\,\text{cm}^{-3}$)
and $\beta=0.5$; (c)~$V_{\mathrm{max}}=0.1$ and $\beta=0.2$;
(d)~$V_{\mathrm{max}}=0.2$ and $\beta=0.2$.\label{fig:paralscatLL}}
\end{figure}
We also mention that the analysis of neutrino spin oscillations in
accretion disks with a constant density, studied in Ref.~\cite{Jia19},
is problematic because of difficulties in the numerical solution of
Eqs.~(\ref{eq:Schr}) and~(\ref{eq:Omegar}) in the limit $\beta\to0$.
We have already mentioned above that the probabilities of spin oscillations
versus the impact parameter cannot be measured in an experiment. In
Fig.~\ref{fig:Fparal}, we show the fluxes of neutrinos $F_{\nu}^{(\parallel)}$
scattered off BH and interacting with an accretion disk. These fluxes
are normalized to the flux of scalar particles. As mentioned above,
the closest agreement between the $\perp$ and $\parallel$ cases takes
place for $V_{\mathrm{max}}=0.1$ and $\beta=0.5$; cf. Figs.~\ref{4a}
and~\ref{1c}.
\begin{figure}
\centering
\subfigure[]
{\label{4a}
\includegraphics[scale=.35]{V01s05fig5.eps}}
\hskip-.6cm
\subfigure[]
{\label{4b}
\includegraphics[scale=.35]{V02s05fig5.eps}}
\\
\subfigure[]
{\label{4c}
\includegraphics[scale=.35]{V01s02fig5.eps}}
\hskip-.6cm
\subfigure[]
{\label{4d}
\includegraphics[scale=.35]{V02s02fig5.eps}}
\protect
\caption{The fluxes of neutrinos, normalized by the flux of scalar particles,
for particles interacting with background matter of an accretion disk.
Here we represent the dependence of $F_{\nu}^{(\parallel)}$ on $\chi$
for different $V_{\mathrm{max}}$ and $\beta$. (a)~$V_{\mathrm{max}}=0.1$
($n_{e}=10^{18}\,\text{cm}^{-3}$) and $\beta=0.5$; (b)~$V_{\mathrm{max}}=0.2$
($n_{e}=2\times10^{18}\,\text{cm}^{-3}$) and $\beta=0.5$; (c)~$V_{\mathrm{max}}=0.1$
and $\beta=0.2$; (d)~$V_{\mathrm{max}}=0.2$ and $\beta=0.2$.\label{fig:Fparal}}
\end{figure}
Now we explicitly compare $\perp$ and $\parallel$ cases by plotting
the ratios of the corresponding fluxes in Fig.~\ref{fig:ratflux}.
First, we mention that $F_{\nu}^{(\perp)}<F_{\nu}^{(\parallel)}$.
Indeed, if a neutrino interacts only with gravity, spin oscillations
are at resonance (see Refs.~\cite{Dvo06,Dvo13}). The interaction
with matter makes the survival probability greater. This fact explains
the observed feature.
\begin{figure}
\centering
\subfigure[]
{\label{5a}
\includegraphics[scale=.35]{V01s05fig7.eps}}
\hskip-.6cm
\subfigure[]
{\label{5b}
\includegraphics[scale=.35]{V02s05fig7.eps}}
\\
\subfigure[]
{\label{5c}
\includegraphics[scale=.35]{V01s02fig7.eps}}
\hskip-.6cm
\subfigure[]
{\label{5d}
\includegraphics[scale=.35]{V02s02fig7.eps}}
\protect
\caption{The ratio $F_{\nu}^{(\perp)}/F_{\nu}^{(\parallel)}$ versus the scattering
angle $\chi$ for different $V_{\mathrm{max}}$ and $\beta$. (a)~$V_{\mathrm{max}}=0.1$
($n_{e}=10^{18}\,\text{cm}^{-3}$) and $\beta=0.5$; (b)~$V_{\mathrm{max}}=0.2$
($n_{e}=2\times10^{18}\,\text{cm}^{-3}$) and $\beta=0.5$; (c)~$V_{\mathrm{max}}=0.1$
and $\beta=0.2$; (d)~$V_{\mathrm{max}}=0.2$ and $\beta=0.2$.\label{fig:ratflux}}
\end{figure}
The difference between $F_{\nu}^{(\perp)}$ and $F_{\nu}^{(\parallel)}$
can reach almost 20\%; see Fig.~\ref{5d}. It means that, if high energy
astrophysical neutrinos experience gravitational lensing by a BH surrounded
by an accretion disk, the observed flux depends on the orientation of
the neutrino trajectory with respect to the disk plane. This maximal
difference between $F_{\nu}^{(\perp)}$ and $F_{\nu}^{(\parallel)}$
occurs for backscattered neutrinos.
\section{Discussion\label{sec:DISC}}
In the present work, we have considered spin effects in the neutrino
scattering off a nonrotating BH. The neutrino spin evolution in curved
spacetime has been treated quasiclassically, based on the approach
developed in Refs.~\cite{Dvo06,Dvo13}. We have studied the neutrino
scattering off a SMBH surrounded by an accretion disk and considered
some astrophysical applications.
In Sec.~\ref{sec:GRAV}, we have studied the neutrino spin evolution
in gravitational scattering in the Schwarzschild metric. Supposing
that all incoming neutrinos are ultrarelativistic and left polarized,
we have obtained that the transition probability $P_{\mathrm{LR}}$
for outgoing particles can reach 25\% if the impact parameter is close
to the critical one $b\approx b_{0}=3\sqrt{3}r_{g}/2$. Note that
the fact that the helicity of ultrarelativistic (massless) particles
can be changed under the influence of a gravitational field was mentioned
earlier in Ref.~\cite{SinMobPap04}. We also mention that our calculation
of the probabilities in the limit $y\gg y_{0}$ in Eq.~(\ref{eq:Plim})
is consistent with the result of Ref.~\cite{Mer95}, where the neutrino
helicity flip in the idealized gravitational field was studied.
Then, in Sec.~\ref{sec:MATT}, we have derived the effective Schr\"{o}dinger
equation for a neutrino scattering off BH surrounded by background
matter with a nonuniform density. In the case of purely gravitational
scattering, studied in Sec.~\ref{sec:GRAV}, it was possible to obtain
the transition and survival probabilities analytically for some impact
parameters. If, besides gravity, a neutrino interacts with background
matter, the probabilities can be obtained only from the numerical
solution of Eqs.~(\ref{eq:Schr}) and~(\ref{eq:Omegar}).
In Sec.~\ref{sec:APPL}, we have considered the astrophysical applications
of our results. In particular, we have studied the effect of spin
oscillations on the neutrino scattering off a SMBH surrounded by an
accretion disk. We have taken the parameters of the accretion disk,
such as the number density and the mass distribution, close to the
values resulting from observations and hydrodynamics simulations.
Using the numerical solution of Eqs.~(\ref{eq:Schr}) and~(\ref{eq:Omegar}),
we have found the transition and survival probabilities, as well as
the observed fluxes of outgoing neutrinos, for different orientations
of the particle trajectories with respect to the accretion disk.
As one can see in Figs.~\ref{1c} and~\ref{fig:Fparal},
there is no deviation of the fluxes for the forward neutrino scattering
at $\chi=0$ if one compares them with the fluxes of scalar particles.
It means that neutrino spin oscillations do not affect the size of
the BH shadow. The major effect of spin oscillations occurs for the
backward neutrino scattering at $\chi=\pi$. Thus, the intensity of
the glory flux for neutrinos is almost 20\% less than for scalar particles;
cf. Fig.~\ref{4a}.
The influence of the plasma interaction on the gravitational scattering
of scalar particles (photons) was extensively studied (see, e.g.,
Ref.~\cite{CunHer18} for a review). For example, the propagation of
photons in plasma surrounding a nonrotating BH was examined in Ref.~\cite{PerTsu17}.
The form of the BH shadow was found to be unchanged, but its size
can be magnified. In Fig.~\ref{fig:ratflux}, we predict the asymmetry
in the observed neutrino fluxes depending on the orientation of the
neutrino trajectory with respect to the accretion disk. This asymmetry
is maximal for the backward neutrino scattering. Although neutrinos
interact with plasma much more weakly than photons do, the asymmetry
can reach almost 20\% for a realistic accretion disk; cf. Fig.~\ref{5d}.
Since the effects of spin oscillations on the neutrino gravitational
scattering persist for ultrarelativistic particles, the results
obtained in the present work are of importance for the rapidly developing
area of neutrino astronomy~\cite{Gal18}, where significant success
has been achieved in the detection of ultrahigh energy (UHE) cosmic
neutrinos. The detection of neutrinos with energies in the PeV range
was reported in Ref.~\cite{Aar13}. Moreover, there are sizable efforts
to identify the sources of UHE neutrinos with astronomical objects
such as active galactic nuclei~\cite{Aar19}. In our work, we have
demonstrated that, if the incoming flux of cosmic neutrinos experiences
gravitational lensing, the observed flux can be reduced by up to 20\%
compared to its initial value because of neutrino spin oscillations.
\section*{Acknowledgments}
I am thankful to J.~Jiang, Y.~N.~Obukhov, and A.~F.~Zakharov
for useful comments. This work was performed within the government
assignment of IZMIRAN. I am also thankful to RFBR (Grant No.~18-02-00149a)
and DAAD for partial support.
\vspace*{-1pt}
In Section 3.2, I cited the well-known Hansen, Madow and Tepping (HMT)
example illustrating the dangers of using model-dependent methods with
fair\-ly large samples even under minor model misspecifications. Sedransk
argues in his discussion that new advances in model diagnostics, such as
model averaging, might remedy the difficulty noted by HMT and provide
improvements over the ``straw man, the usual ratio estimator.'' I agree
with Sedransk that it
would be worthwhile analyzing this example and other examples to show
how one can make valid model-dependent inferences routinely with fairly
large domain samples that can provide significant improvements over the
design-based (possibly model-assisted) methods, particularly in the
context of official statistics with many variables of interest. If this
goal can be achieved, then I believe model-dependent methods
(frequentist or Bayesian) will have significant impact on practice,
similar to their current use in small area estimation with small domain
samples. The HMT example showed the importance of using design weights
under their design with deep stratification by size and disproportional
sample allocation. The usual design unbiased weighted estimator is
almost as efficient as the usual combined weighted ratio estimator under
the HMT design because of deep stratification by size, so I~do not agree
with Sedransk's comment on the importance of ratio estimator in the HMT
example. It is interesting to note that under proportional sample
allocation, the BLUP estimator (unweighted ratio estimator) under the
incorrectly specified ratio model is identical to the combined weighted
ratio estimator and hence it performs well because it is design
consistent, unlike under disproportional sample allocation. The HMT
example demonstrated the importance of design consistency, and in fact
as noted in Section 3.2, Little (\citeyear{LIT83}) proposed restricting attention to
models that hold for the sample and for which the corresponding BLUP
estimator is design consistent. I have noted some limitations of this
proposal in Section 3.2. It should be noted that the HMT illustration of
the poor performance of the BLUP estimator used the repeated sampling
design-based approach to evaluate confidence interval coverage. On the
other hand, model-based inference is based on the distribution induced
by the model conditional on the particular sample that has been drawn.
However, Rao (\citeyear{Rao97}) showed that the HMT conclusions still hold in the
conditional framework because of the effective use of size information
through size stratification.
\section*{Role of Design Weights}
I will now turn to Meeden's useful
comments on the role of design weights and the use of Polya posterior
(PP) for making inferences after the sample is observed. As noted in
Section 4.2, the PP approach when applicable permits routine interval
estimation for any finite population parameter of interest through
simulation of many finite populations from PP and this general interval
estimation feature of PP is indeed attractive. Meeden notes in his
discussion that an R package is also available for simulating many
complete populations. However, so far the PP methodology considered only
simple designs that may satisfy the assumption that the unsampled units
are like the sampled units (exchangeability), which limits its
applicability in practice. Meeden agrees with my comment that the PP
approach needs extension to more complex designs before it becomes
attractive to users. Even for the simple designs where it is applicable,
it would be useful to identify scenarios where the PP can perform
significantly better than the routine design-based methods in terms of
confidence interval coverage, especially in cases where the traditional
methods do not perform well; for example, the Woodruff interval on
quantiles under size stratification noted in Section 1. Meeden notes the
work of Lazar, Meeden and Nelson (\citeyear{l-m-n08}) on the constrained PP, which incorporates
known population information about auxiliary variables without any model
assumptions about how the auxiliary variables are related to the
variables of interest, similar to calibration estimation. It appears
that the constraints allowed by this method are more flexible than those
in the usual calibration estimation, such as the population median falls
in some known interval, and this feature might prove attractive to the
user, especially due to the availability of an R package. However, the
constrained PP could run into problems when the number of population
constraints is large, similar to traditional calibration estimation.
In his concluding remarks, Meeden says that one should not focus on
estimating the variance of an estimator, but this is a customary
practice as it allows reporting the estimated coefficient of variation (CV)
of the estimator as a quality measure, and the user can compute a
confidence interval from this variance estimator for any desired
confidence level using the normal approximation. Meeden also expresses
concerns that the frequentist practice is often ``obscured by the
prominent and unnecessary role played by the design weights after the
sample has been selected.'' But design weights or calibration weights
are needed for asymptotically valid design-based inferences, although it
is often necessary to modify the weights to handle special situations,
such as outlier weights. In fact, the PP-based estimators of a
population mean are often close to the traditional weighted estimators,
for example under stratified random sampling.
\section*{Calibration Estimators}
Slud and I seem to agree on the limitations of model-dependent
approaches (frequentist or Baye\-sian) when the sample size in a domain of
interest is sufficiently large: possible design inconsistency of the
resulting estimators under minor model misspecifications, leading to
erroneous inferences. In Section~3.1 I noted the popularity of
model-free calibration estimators in the large-scale production of
official statistics from complex surveys because of their ability to
produce common calibration weights and accommodate an arbitrary number
user-specified calibration constraints. In practice, design weights are
adjusted first for unit nonresponse and then calibrated to known
user-specified totals. The calibration weights are often modified to
satisfy specified range restrictions and calibration constraints
simultaneously, but there is no guarantee that such modified weights can
be found. Rao and Singh (\citeyear{RAOSIN}, \citeyear{RAOSIN09}) proposed a ``ridge shrinkage''
approach (assuming complete response) to get around the latter problem
by relaxing some calibration constraints incrementally while satisfying
the range restrictions. Slud mentions a new method he has developed
recently (Slud and Thibaudeau, \citeyear{SluThi}) that can do simultaneous weight
adjustment for nonresponse, calibration and weight compression. This
method looks very interesting and his empirical results are encouraging.
But a solution satisfying specified range restrictions on the weights
may not exist and it would be interesting to extend the Rao--Singh
approach to handle simultaneous nonresponse adjustment and calibration.
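To make the mechanics concrete, the following Python sketch illustrates plain chi-square-distance (GREG-type) calibration on hypothetical data; it is neither the Rao--Singh ridge method nor the Slud--Thibaudeau procedure, only the basic calibration step that both of them modify:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 5000
d = np.full(n, N / n)                          # design weights for a hypothetical SRS
X = np.column_stack([np.ones(n),
                     rng.gamma(2.0, 3.0, n)])  # intercept plus one auxiliary variable
T = np.array([N, N * 6.0])                     # assumed known totals: size, total of x

# chi-square distance calibration: w_i = d_i (1 + x_i' lam), with lam chosen so
# that the calibration constraints sum_i w_i x_i = T hold exactly
lam = np.linalg.solve(X.T @ (d[:, None] * X), T - X.T @ d)
w = d * (1.0 + X @ lam)
```

Nothing in this closed form keeps the weights within range restrictions such as $w_{i}>0$, which is precisely the difficulty that the ridge-shrinkage and related proposals address.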
I agree with Slud that if the weights and calibration totals are
correctly specified, the resulting calibration estimator is design
consistent even if the underlying working linear regression model uses
an incorrect or incomplete set of predictor variables, as in the example
of Section 3.1. The effect of gross misspecification of the working
model is on the coverage performance of the associated confidence
intervals and hence it is ``more subtle than design-consistency'' as
noted by Slud. Incidentally, Dorfman (\citeyear{autokey2}) used this example to
question the contention of Hansen and Tepping (\citeyear{HANTEP90}) that ``design-based
estimators that happen to incorporate a model are inferentially
satisfactory, despite failure of the model'' and concluded that the
results on coverage for the linear regression estimator calibrated on
the population size $N$ and the population total $X$ ``dramatically call
this contention into question.'' Dorfman's statement may be correct in
regard to calibration estimators based solely on user-specified
totals $Z$, but as noted in Section 3.1 a model-assisted approach based
on a working model obtained after some model checking to eliminate
gross
misspecification of the working model can lead to good confidence
interval coverage in the Dorfman example.
\section*{Analysis of Survey Data}
Section 3.3 of my paper on the analysis
of complex survey data is somewhat brief due to my focus on estimating
totals and means, but I should have mentioned goodness-of-fit tests that
take account of survey design. I am thankful to Slud for pointing this
out and making reference to my own work (Rao and Scott, \citeyear{RaoSco84}) on
goodness-of-fit chi-squared tests for cross-classified survey data based
on log-linear models. I might add that Roberts, Rao and Kumar (\citeyear{RobRaoKum87})
considered goodness-of-fit tests of logistic regression models with
categorical predictor variables and binary response. Graubard, Korn and
Midthune (\citeyear{GRA}) extended the well-known Hosmer and Lemeshow (\citeyear{HOSLEM80})
grouping method of goodness-of-fit for logistic regression to complex
survey data. Roberts, Ren and Rao (\citeyear{ROB}) studied goodness-of-fit tests
for mean specification in marginal models for longitudinal survey data
and obtained an adjusted Hosmer and Lemeshow test using Rao--Scott
corrections as well as a quasi-score test obtained by extending the
method of Horton et al. (\citeyear{HORetal99}) to survey data.
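For readers unfamiliar with the Hosmer--Lemeshow grouping idea that these survey-design extensions build on, the classical (non-survey) version can be sketched as follows; the decile grouping and chi-squared form below are the textbook ingredients, and the toy data are illustrative only.

```python
import numpy as np

def hosmer_lemeshow(y, p, g=10):
    """Classical (non-survey) Hosmer-Lemeshow statistic: sort by fitted
    probability, split into g groups, and compare observed with expected
    success counts; under a correct model it is roughly chi-squared
    with g - 2 degrees of freedom."""
    order = np.argsort(p)
    stat = 0.0
    for idx in np.array_split(order, g):
        n, obs, exp = len(idx), y[idx].sum(), p[idx].sum()
        pbar = exp / n
        stat += (obs - exp) ** 2 / (n * pbar * (1.0 - pbar))
    return stat

# toy data generated from the fitted model itself, so the fit is "correct"
rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, 2000)
y = (rng.uniform(size=2000) < p).astype(float)
stat = hosmer_lemeshow(y, p)
print(stat)
```

The survey-weighted versions discussed above replace the raw counts by weighted estimates and adjust the reference distribution accordingly.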
Multilevel models for analysis of survey data are more complex than the
marginal models for estimating regression parameters because of the
presence of random effects in the models. Goodness-of-fit methods for
two-level models, when the model holds for the sample, are available in
the literature (e.g., Pan and Lin, \citeyear{PanLin05}) but very little is known for
survey data in the presence of sample selection bias. I am presently
studying model-checking methods for two-level models taking account of
the survey design.
\section*{Small Area Estimation}
Turning now to small area estimation, Slud notes ``But one serious
objection is that each response variable would require its own Bayesian
model'' unlike direct calibration estimators using common weights. Yet
model-dependent small area methods (either HB or EB) are gaining
acceptability because direct calibration estimators are unreliable due
to small sample sizes. However, practitioners often prefer benchmarking
the small area estimators to agree with a~reliable direct calibration
estimator at a higher level.
Sedransk notes that ``almost all of the
applications use an area-level model'' even though it makes strong
assumptions such as known sampling variances, as noted in Section 5. I
agree with him that the quality of the smoothing methods used in
practice to get around the assumption of known sampling variances is
questionable although smoothed sampling variance estimates may be
satisfactory for point estimation. However, as noted in Section 5,
area-level models remain attractive because the sampling design is taken
into account through the direct estimators, and the direct estimators
and the associated area-level covariates are more readily available to
the users than the corresponding unit-level sample data. Also, in using
unit-level models one needs to ensure that the population model holds for
the sample and this could be problematic, although more complex methods
have been proposed recently to handle sample selection bias in
unit-level models (Pfeffermann and Sverchkov, \citeyear{PfeSve07}). Nevertheless,
I~agree with Sedransk that unit-level models should receive more attention
in the future.
Turning to HB model diagnostics, I have noted in Section 5 some
difficulties with the commonly used posterior predictive $p$-value (PPP)
for checking goodness-of-fit of a model because of ``double use'' of
data. Alternative methods that have been proposed to avoid double use of
data are more difficult to implement, especially in the context of small
area models as noted. Sedransk mentioned three additional references
(Yan and Sedransk, \citeyear{YANSED06}, \citeyear{YanSed07}, \citeyear{YANSED10}) that studied alternative measures
in the context of detecting unknown hierarchical structures under
somewhat simplified assumptions. In particular, Yan and Sedransk
demonstrated that the unit-specific PPP-values act like uniformly
distributed random variables under the simple mean null model (without
random area effects) and hence a Q--Q plot should reveal departures from
the model. They assumed normality and absence of outliers in their
study, but it would be interesting to see if their unit-specific
PPP-values can in fact detect nonnormality of random effects, studied by
Sinharay and Stern (\citeyear{SinSte03}). The use of unit-specific PPP-values might be
more attractive than using the traditional PPP-function because it does
not require the selection of an appropriate checking function, but
further work is needed including the detection of nonnormality as noted
above. Yan and Sedransk showed that the PPP-function, based on the
F-statistic as the checking function, is very effective for detecting
hierarchical structure when the true model is correctly guessed as the
mean model with random area effects. This seems to imply that the
PPP-function is chosen to reject the null model and yet Sedransk
criticizes the frequentist goodness-of-fit tests by saying that ``such
tests are constructed to \textit{reject} null hypotheses whereas one
would like to accept a postulated model if the data are concordant with
it.'' In the simulation study of Yan and Sedransk (\citeyear{YanSed07}) the F-statistic
based PPP-value detected even small correlations when the sample size is
large and the corresponding frequentist test would also lead to similar
results. I do not agree with Sedransk that global frequentist
goodness-of-fit tests necessarily reject the null model when the data
are concordant with the model. In fact, many published papers have
identified models from real data, using frequentist tests. For example,
Datta, Hall and Mandal (\citeyear{DATHALMAN}) developed a frequentist model selection
method by testing for the presence of small area random effects and
applied the method to two real data sets involving 13 and 23 areas,
respectively. Their test is based on simple bootstrap methods and it is
free of normality assumption. The null model in both applications is a
regression model without random area effects and they showed that the
frequentist $p$-value is as large as 0.2, suggesting that the data are
concordant with the simpler null model. Slud mentioned the work of
Jiang, Lahiri and Wu (\citeyear{JiaLahWu01}) and Jiang (\citeyear{Jia01}) on mixed linear model
diagnostics in the frequentist framework. I personally prefer using
prior-free frequentist methods for model checking because they can
handle a variety of model deviations including selection of variables
and random effects selection in linear or generalized linear mixed
models (e.g., Jiang et al., \citeyear{Jiaetal08}) and detection of outliers in
multilevel models (Shi and Chen, \citeyear{ShiChe08}). A model selected by the
frequentist methods can be further subjected to Bayesian selection
methods if necessary before using HB methods for inference. Slud notes
difficulties with model checking in the context of SAIPE for sample
counties where no poor children were seen. This is also the case for
counties or areas not sampled. Model checking in those cases is indeed
challenging.
Finally, Slud makes an important observation on goodness-of-fit tests
when the primary interest is prediction: ``excellent predictions can be
provided through estimating models which are too simple to pass
goodness-of-fit checks.'' Slud notes that this observation ``has not yet
been formulated with mathematical care'' and that both frequentists and
Bayesians will benefit by characterizing ``which target parameters and
which combinations of true and oversimplified models could work in this
way.'' In this context, the recent work of Jiang, Nguyen and Rao (\citeyear{JIANGURAO})
on best predictive small area estimation is relevant. This paper
develops a new prediction procedure, called observed best prediction
(OBP), and shows that it can significantly outperform the traditional
EBLUP.
\section*{Acknowledgments}
Again, I am thankful to the discussants for their insightful comments. I
also wish to thank the guest editor, Partha Lahiri, for inviting me to
submit this paper to \textit{Statistical Science}. This work was
supported by a research grant from the Natural Sciences and Engineering
Research Council of Canada.
\section{Small net for noise attenuated linear juntas}~\label{sec:noise-attenuated}
In this section, we are going to prove the following theorem which essentially shows the existence of a small cover for noise stable linear juntas.
{To state this theorem, we will require one crucial fact about noise attenuated functions (due to Bakry and Ledoux~\cite{Bakry:94})
\begin{lemma}~\label{prop:gradient-bound}
Let $f: \mathbb{R}^n \rightarrow [-1,1]$. Then, $P_t f$ is $C_t$-Lipschitz for $C_t = O(t^{-1/2})$.
\end{lemma}
For the rest of this section, we are going to use $C_t$ to denote this quantity.
}
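As a quick numerical illustration of this rate (my own toy check, not part of the argument): for the one-dimensional sign function, $P_t\,\mathrm{sign}$ has an explicit form under the Ornstein--Uhlenbeck semigroup, and its Lipschitz constant indeed scales like $t^{-1/2}$.

```python
import math

def lipschitz_of_smoothed_sign(t):
    # Under the Ornstein-Uhlenbeck semigroup, P_t sign(x) equals
    # 2*Phi(e^{-t} x / sqrt(1 - e^{-2t})) - 1, whose Lipschitz constant
    # is its derivative at x = 0.
    return math.sqrt(2.0 / math.pi) * math.exp(-t) / math.sqrt(1.0 - math.exp(-2.0 * t))

# C_t * sqrt(t) should stay bounded as t -> 0 if C_t = O(t^{-1/2})
for t in [0.01, 0.04, 0.16]:
    print(t, lipschitz_of_smoothed_sign(t) * math.sqrt(t))
```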
We can now state the main theorem of this section.
\begin{theorem}~\label{thm:net}
For any error parameter $\delta>0$, noise parameter $t>0$ and $k \in \mathbb{N}$, there is a set of functions $\mathsf{Cover}(t,k,\delta)$ (mapping $\mathbb{R}^k$ to $[-1,1]$) such that the following holds:
\begin{enumerate}
\item Let $f: \mathbb{R}^n \rightarrow [-1,1]$ and let $W$ be a $k$-dimensional space such that $P_t f$ is $\delta$-close to a $W$-junta. Further, let $(w_1, \ldots, w_k)$ be any orthonormal basis of $W$. Then, $P_t f$ is $3\delta$-close to $h(\langle w_1, x \rangle, \ldots, \langle w_k, x \rangle)$ for some $h \in \mathsf{Cover}(t,k,\delta)$.
\item Every function in $\mathsf{Cover}(t, k, \delta)$ is $2C_t$-Lipschitz.
\item $\log |\mathsf{Cover}(t, k, \delta)| \le \left(\frac{C \sqrt k \log^2(1/\delta)}{\delta \sqrt t}\right)^k$.
\end{enumerate}
\end{theorem}
The proof of this theorem relies on the following two lemmas.
\begin{lemma}~\label{lem:net-1}
For any $L>0$, error parameter $\delta>0$ and $k \in \mathbb{N}$, there is a set $\mathsf{Cover}_{k,L,\delta}$ consisting of functions mapping $\mathbb{R}^k \mapsto [-1,1]$ such that the following holds:
\begin{enumerate}
\item For every $g: \mathbb{R}^k \rightarrow [-1,1]$ which is $L$-Lipschitz, there is a function $h \in \mathsf{Cover}_{k,L,\delta}$ such that $\mathbf{E}[|g(x) - h(x)|] \leq \delta$.
\item Every function in $\mathsf{Cover}_{k,L,\delta}$ is $2L$-Lipschitz.
\item $\log |\mathsf{Cover}_{k,L,\delta}| \le \left(\frac{C L \sqrt k \log^2(1/\delta)}{\delta}\right)^k$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\mathcal{B} = \{x: \Vert x \Vert_2 \le \sqrt{k} \cdot \log
(100/\delta)\}$. Let $\mathcal{A}$ be a maximal $\delta/(2L)$-packing of
$\mathcal{B}$ (that is, a maximal subset of $\mathcal{B}$ such that any two
distinct points in $\mathcal{A}$ are at least $\delta/(2L)$ apart).
It is well-known (see, e.g.,~\cite{LedouxTalagrand}) that $\mathcal{A}$ is a $\delta/L$-net
of $\mathcal{B}$ and that $|\mathcal{A}| \le (C L \sqrt k \log (1/\delta)/\delta)^k$
(the $\sqrt k \log (1/\delta)$ term comes from the diameter of $\mathcal{B}$).
For $f: \mathbb{R}^k \rightarrow [-1,1]$, we now define $f_{\mathsf{int}}: \mathcal{A} \to [-1, 1]$ by
simply rounding $f$ to the nearest integer multiple of $\delta/100$. To check the Lipschitz
constant of $f_\mathsf{int}$, note that if $x, y \in \mathcal{A}$ then
\[
|f_{\mathsf{int}}(x) - f_{\mathsf{int}}(y)| \le |f(x) - f(y)| + \delta/50
\le L \|x - y\| + \frac{L}{25} \|x - y\|,
\]
where the last inequality used the fact that $f$ is $L$-Lipschitz and that
every pair of points in $\mathcal{A}$ is $\delta/(2L)$-separated.
In particular, $f_{\mathsf{int}}$ is $2L$-Lipschitz.
Let $\mathsf{Cover}'$ be the set of all functions $f_{\mathsf{int}}$ obtained
in this way. Then the size of $\mathsf{Cover}'$ is at most $\exp((C L \sqrt k \log^2(1/\delta) \delta^{-1})^k)$,
because there are at most $C/\delta$ choices for the value of each point, and there are $|\mathcal{A}|$ points.
Finally, we construct $\mathsf{Cover}_{k,L,\delta}$ by extending each function in $\mathsf{Cover}'$
to a function $\mathbb{R}^k \to [-1, 1]$. McShane's Lemma~\cite{mcshane34} implies that this extension can be done
without increasing its Lipschitz constant. Hence, properties 2 and 3 hold.
To check property 1, note that if $x \in \mathcal{B}$ and $y \in \mathcal{A}$ is the closest point to $x$
then
\[
|f(x) - f_{\mathsf{int}}(x)|
\le |f(x) - f(y)| + |f(y) - f_{\mathsf{int}}(y)| + |f_{\mathsf{int}}(y) - f_{\mathsf{int}}(x)|
\le 3L \|x - y\| + \delta/100 \le 4\delta.
\]
It then follows that
\[
\mathbf{E}[|f(x) - f_{\mathsf{int}}(x)|] \le 2 \cdot \Pr[x \not \in \mathcal{B}] + \max_{x \in \mathcal{B}} [|f(x) - f_{\mathsf{int}}(x)|] \le \delta + 4 \delta \le 5 \delta.
\]
The last inequality just follows from the fact that a $k$-dimensional standard Gaussian lies in the ball of radius $\sqrt{k} \log (100/\delta)$ with probability at least $1-\delta/2$. This proves property 1 modulo the constant $5$, which can be dropped by redefining $\delta$.
\end{proof}
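The maximal-packing construction at the heart of the proof is easy to carry out greedily; the following toy sketch (in $\mathbb{R}^2$, with illustrative constants of my choosing) checks the covering property that maximality buys.

```python
import itertools
import numpy as np

def maximal_packing(points, sep):
    """Greedily select a maximal subset with pairwise distances >= sep.
    By maximality, every input point is within sep of a selected one,
    so the packing is automatically a sep-net."""
    chosen = []
    for p in points:
        if all(np.linalg.norm(p - q) >= sep for q in chosen):
            chosen.append(p)
    return chosen

# candidate points: a grid inside the ball of radius R in R^2
R, sep = 2.0, 0.5
grid = [np.array(p) for p in itertools.product(np.arange(-R, R + 0.1, 0.1), repeat=2)
        if np.linalg.norm(p) <= R]
net = maximal_packing(grid, sep)

# every grid point is covered within distance sep
covered = all(min(np.linalg.norm(p - q) for q in net) < sep for p in grid)
print(len(net), covered)
```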
\begin{lemma}~\label{lem:Lip-1}
Let $f: \mathbb{R}^n \rightarrow [-1,1]$ be a $C$-Lipschitz function. Further, for $\kappa>0$, let $g: \mathbb{R}^n \rightarrow [-1,1]$ be a $W$-junta such that $f$ is $\kappa$-close to $g$. Then, there is a function $f_W:\mathbb{R}^n \rightarrow [-1,1]$ which is $C$-Lipschitz, is a $W$-junta, and is $2\kappa$-close to $f$.
\end{lemma}
\begin{proof}
Reorient the axes so that $W$ is the space spanned by the first $\ell$ axes. Define the $W$-junta $f_{W}: \mathbb{R}^n \rightarrow [-1,1]$ by
\[
f_W(x) = \mathbf{E}_{y_{\ell+1}, \ldots, y_n} [f(x_1, \ldots, x_\ell, y_{\ell+1}, \ldots, y_n)].
\]
For any fixed choice of $x_1, \ldots, x_\ell$, we have
\[
\mathbf{E}_{x_{\ell+1}, \ldots, x_n}[|f(x) - f_W(x)|] \leq \mathbf{E}_{x_{\ell+1}, \ldots, x_n}[|f(x) - g(x)|] +|g(x) - f_W(x)|.
\]
However, the second term can be bounded as
\[
|g(x) - f_W(x)| = \big| g(x) - \mathbf{E}_{x_{\ell+1}, \ldots, x_n}[f(x_1, \ldots,x_\ell, x_{\ell+1} , \ldots, x_n)] \big| \le \mathbf{E}_{x_{\ell+1}, \ldots, x_n} \big[ \big| g(x) - f(x) \big|\big].
\]
The last inequality is simply Jensen's inequality. Combining these two, we get
\begin{equation}~\label{eq:junta-diff}
\mathbf{E}_{x_{\ell+1}, \ldots, x_n}[|f(x) - f_W(x)|] \leq2 \cdot \mathbf{E}_{x_{\ell+1}, \ldots, x_n}[|f(x) - g(x)|].
\end{equation}
This in turn implies that
\begin{equation}~\label{eq:junta-diff-1}
\mathbf{E}_{x_{1}, \ldots, x_n}[|f(x) - f_W(x)|] \leq2 \cdot \mathbf{E}_{x_{1}, \ldots, x_n}[|f(x) - g(x)|] \leq 2\cdot \kappa.
\end{equation}
Finally, we see that
\begin{eqnarray*}
|f_W(x) - f_W(y)| &=& \big| \mathbf{E}_{x_{\ell+1}, \ldots, x_n}[f(x_1, \ldots, x_\ell, x_{\ell+1} , \ldots, x_n) - f(y_1, \ldots, y_\ell, x_{\ell+1},\ldots, x_n)] \big| \\
&\leq& \mathbf{E}_{x_{\ell+1}, \ldots, x_n} \big[ \big| f(x_1, \ldots, x_\ell, x_{\ell+1} , \ldots, x_n) - f(y_1, \ldots, y_\ell, x_{\ell+1},\ldots, x_n) \big| \big] \\
&\le& \mathbf{E}_{x_{\ell+1}, \ldots, x_n} [ C \cdot \Vert (x_1, \ldots, x_\ell) - (y_1, \ldots, y_\ell) \Vert_2] \le C \Vert x -y\Vert_2.
\end{eqnarray*}
This finishes the proof.
\end{proof}
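The key step of the proof, replacing $f$ by its average over the coordinates outside $W$, can be checked numerically: averaging with a common set of samples never increases the empirical Lipschitz ratio. The test function below is my own choice, not from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# a 0.5-Lipschitz function of three variables: the inner linear form has
# gradient (1, 0.5, 1)/3 of norm 0.5, and tanh is 1-Lipschitz
def f(x):
    return np.tanh((x[..., 0] + 0.5 * x[..., 1] + x[..., 2]) / 3.0)

# f_W averages out the last coordinate: f_W(x1, x2) = E_y f(x1, x2, y)
y = rng.standard_normal(20000)
def f_W(x12):
    pts = np.stack([np.full_like(y, x12[0]), np.full_like(y, x12[1]), y], axis=-1)
    return f(pts).mean()

# the empirical Lipschitz ratio of f_W never exceeds that of f
worst = 0.0
for _ in range(50):
    a, b = rng.standard_normal(2), rng.standard_normal(2)
    worst = max(worst, abs(f_W(a) - f_W(b)) / np.linalg.norm(a - b))
print(worst)
```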
With these two lemmas, we can now finish the proof of Theorem~\ref{thm:net}.
{\begin{proofof}{Theorem~\ref{thm:net}}
First, we apply Lemma~\ref{prop:gradient-bound} to obtain that $P_t f$ is $C_t$-Lipschitz for $C_t=O(t^{-1/2})$. Since $P_t f$ is $\delta$-close to a $W$-junta, Lemma~\ref{lem:Lip-1} yields a $W$-junta $g$ which is $C_t$-Lipschitz and $2\delta$-close to $P_t f$. Let $ \mathsf{Cover}(t,k,\delta)=\mathsf{Cover}_{k,C_t,\frac{\delta}{2}}$ (constructed in Lemma~\ref{lem:net-1}). By a rotation of the coordinates, it follows from the definition of $ \mathsf{Cover}(t,k,\delta)$ that there exists $h \in\mathsf{Cover}(t,k,\delta)$ such that $h( \langle w_1, x \rangle, \ldots, \langle w_k, x \rangle)$ is $\frac{\delta}{2}$-close to $g$, and hence $3\delta$-close to $P_t f$. The required properties now follow from Lemma~\ref{lem:net-1}.
\end{proofof}}
\section{Small net for juntas with bounded surface area}
\begin{proposition}~\label{prop:sing-value}
Let $A \in\mathbb{R}^{\ell \times \ell}$ be a matrix such that for any $1 \le j\le \ell$, $\mathrm{dist}(a_j, A_{j-1}) \ge \delta$, where $a_j$ is the $j^{th}$ column of $A$ and $A_j$ is the column span of the first $j$ columns. Then, for {\color{red}{$\eta = \delta^{-k}$}}\anote{this probably needs to change}, given the inner products $\langle a_i, a_j \rangle$ for all $(i,j)$ up to additive error $\eta$,
the algorithm \textsf{Robust-linear-independence} has the following guarantee:
\begin{itemize}
\item If the matrix $A$ satisfies the above conditions,
then the algorithm outputs \textsf{yes}.
\item If the algorithm outputs \textsf{yes}, then $\mathsf{dist}(a_j, A_{j-1}) \ge \delta/2$ for all $1 \le j \le \ell$.
\end{itemize}
\end{proposition}
\begin{proof}
{\color{red}This proposition is supposed to basically say that with good enough accuracy, we can check whether the vectors we have gotten are $\eta$-linearly independent}
\end{proof}
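Although the proposition above is only sketched, the quantity it monitors is computable from the (noisy) inner products alone: if $G$ is the Gram matrix of the columns and $G = LL^{T}$ is its Cholesky factorization, then $\mathrm{dist}(a_j, A_{j-1})$ equals $L_{jj}$. A minimal sketch of such a check (ignoring the error bookkeeping, and assuming the estimated $G$ is still positive definite):

```python
import numpy as np

def residual_distances(G):
    """Distances dist(a_j, span(a_1, ..., a_{j-1})) recovered from the
    Gram matrix G[i, j] = <a_i, a_j> via Cholesky: they are the diagonal
    entries of the factor L in G = L L^T."""
    return np.diag(np.linalg.cholesky(G))

def robust_linear_independence(G, delta):
    # accept iff every column is at least delta/2 away from the span of
    # the previous columns, as certified by the (noisy) Gram matrix
    return bool(np.all(residual_distances(G) >= delta / 2))

# sanity check with exact inner products
A = np.array([[1.0, 0.0, 0.3],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])   # columns a_1, a_2, a_3
G = A.T @ A
print(residual_distances(G))     # (1, 1, 1): a_3 is distance 1 from span(a_1, a_2)
print(robust_linear_independence(G, 0.5))
```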
\subsection{Small-covers for Lipschitz functions}
\begin{lemma}~\label{lem:Lipschitz-cover}
For any error parameter $\epsilon>0$, surface area $s$ and $k \in \mathbb{N}$, there is a net $\mathsf{Net}_{s,k,\epsilon}$ consisting of functions mapping $\mathbb{R}^k \rightarrow [-1,1]$ such that the following holds: Let $g: \mathbb{R}^n \rightarrow [-1,1]$ and let $P_{t_1} g$ be $\epsilon$-close to a $W$-junta where $\mathsf{dim}(W)=k$.
Let $v_1, \ldots, v_k$ be an orthonormal basis of $W$. Then, there exists $h \in \mathsf{Net}_{s,k,\epsilon}$ such that $P_{t_1} g$ is $O(\epsilon)$-close to $h(\langle v_1, x\rangle, \ldots, \langle v_k ,x \rangle)$.
\end{lemma}
\begin{proof}
\end{proof}
\begin{lemma}~\label{lem:apx-ortho}
For $f: \mathbb{R}^n \rightarrow [-1,1]$ and $t>0$, let $(y_1, \ldots, y_\ell)$ be $\gamma$-linearly independent for $h = P_t f$. Let $v_i = D_{y_i} h(y_i)$ and let $V = \mathsf{span}(v_1, \ldots, v_\ell)$.
For any error parameter $\tau>0$ and for $T = T(\tau, t, \gamma)$ defined as
\[
T(\tau, t, \gamma) = \mathsf{poly} \bigg( \frac{1}{t}, 2 \frac{k}{\tau \cdot \sqrt{t}} \cdot \big( \frac{k}{2\cdot \gamma \cdot \sqrt{t}}\big)^{3k+3}\bigg)
\]
we can make $T$ queries to oracle for $f$ and obtain numbers
$\{\alpha_{i,j}\}_{1\le i,j \le \ell}$ such that the following holds:
\begin{enumerate}
\item For $\xi ( t, \gamma)$ defined as
\[
\xi ( t, \gamma) = \bigg(\frac{2 k }{\gamma \cdot \sqrt{t}}\bigg)^{\frac{k+1}{2}} \cdot k^{3/2} \cdot \frac{1}{\sqrt{t}},
\]
we have
$|\alpha_{i,j}| \le \xi ( t, \gamma)$.
\item There is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(Dh_{y_1}(y_1), \ldots, Dh_{y_\ell}(y_\ell))$ such that $$\Vert w_{i} - \sum_{j}\alpha_{i,j} Dh_{y_j}(y_j) \Vert_2 \le \tau.$$
\end{enumerate}
\end{lemma}
\begin{proof}
We first invoke Lemma~\ref{lem:inner-product-1} to obtain that with $T$ queries, we can compute numbers $\beta_{i,j}$ with the following guarantee:
\[
\big| \beta_{i,j} - \langle D_{y_i} h(y_i) , D_{y_j} h(y_j) \rangle \big| \le 2 \frac{\tau \cdot \sqrt{t}}{k } \cdot \bigg( \frac{\gamma \cdot \sqrt{t}}{2\cdot k }\bigg)^{3k+3}
\]
Now, observe that because $(y_1, \ldots, y_\ell)$ are $\gamma$-linearly independent, hence as vectors
$(Dh_{y_1}(y_1), \ldots, Dh_{y_\ell}(y_\ell))$ are
$(t^{-1/2}, \gamma)$-linearly independent. We can now apply Proposition~\ref{prop:linear} to obtain the numbers $\{\alpha_{i,j}\}$ promised here.
\end{proof}
\begin{proposition}~\label{prop:linear}
Let $v_1, \ldots, v_\ell$ be $(\eta, \gamma)$-linearly independent vectors. Then, for $\epsilon>0$ and $\lambda = \lambda(\epsilon,\eta, \gamma)$ defined as
$$
\lambda= 2 \frac{\epsilon}{\ell \cdot \eta} \cdot \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{3\ell+3},
$$ given numbers $\{\beta_{i,j}\}_{1\le i,j \le \ell}$ such that $|\beta_{i,j} - \langle v_i, v_j \rangle| \le \lambda$, we can compute numbers $\{\alpha_{i,j}\}_{1\le i,j \le \ell}$ such that:
\begin{enumerate}
\item For $\xi ( \eta, \gamma)$ defined as
\[
\xi ( \eta, \gamma) = \bigg(\frac{2 \ell \eta}{\gamma}\bigg)^{\frac{\ell+1}{2}} \cdot \ell^{3/2} \cdot \eta,
\]
we have
$|\alpha_{i,j}| \le \xi ( \eta, \gamma)$.
\item There is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(v_1, \ldots, v_\ell)$ such that $\Vert w_{i} - \sum_{j}\alpha_{i,j} v_j \Vert_2 \le \epsilon$ for each $i$.
\end{enumerate}
\end{proposition}
\begin{proof}
Consider the symmetric matrix $\Sigma\in\mathbb{R}^{\ell \times \ell}$ defined as $\Sigma_{i,j} = \langle v_i, v_j \rangle$. By Proposition~\ref{prop:sing-1}, $\Sigma$ is non-singular. Define the matrix $\Gamma = \Sigma^{-1/2}$. It is easy to see that the columns of
$V \cdot \Sigma^{-1/2}$ form an orthonormal basis of $\mathsf{span}(v_1, \ldots, v_\ell)$. Here $V = [v_1 | \ldots | v_\ell]$. Of course, we cannot compute the matrix $\Sigma$ exactly and consequently, we cannot compute the matrix $\Sigma^{-1/2}$ either.
Instead, we can compute a
$\widetilde{\Sigma}$ (which is also symmetric)
such that $\Vert \widetilde{\Sigma} - \Sigma \Vert_F\le \ell \cdot \lambda$. Now, for an error parameter $\delta$ to be fixed, assume that
$$
\ell \cdot \lambda \le \delta \cdot \sigma_{\min}(\Sigma) = \delta \cdot \sigma_{\min}^2(V) \le \delta \cdot \bigg(\frac{ \gamma}{2 \cdot \ell \cdot \eta}\bigg)^{2\ell+2}.
$$
Here the second inequality, uses Proposition~\ref{prop:sing-1}. Now, we apply the matrix perturbation bound (Corollary~\ref{corr:mat-perturb})
(i.e., here we set $c = \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{2\ell+2}$)
to obtain that
\[
\Vert \Sigma^{-1/2} - \widetilde{\Sigma}^{-1/2} \Vert \leq \frac{\delta}{2 \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{\ell+1}}.
\]
Now, set $\delta = 2 \frac{\epsilon}{\ell \cdot \eta} \cdot \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{\ell+1}$. Define $\alpha_{i,j} = \widetilde{\Sigma}^{-\frac12}(i,j)$, the second item now follows immediately. For the first item, observe that by Weyl's inequality (Lemma~\ref{lem:Weyl}), $\sigma_{\min}(\widetilde{\Sigma}) \ge (1-\delta) \cdot \sigma_{\min}({\Sigma})$. Thus, $$\Vert \widetilde{\Sigma}^{-1} \Vert_F \le \bigg(\frac{2 \ell \eta}{\gamma}\bigg)^{\ell+1} \cdot \ell \cdot \eta.$$
Finally, since $\widetilde{\Sigma}^{-1/2}$ is also Hermitian, it is easy to see that
\[
\Vert \widetilde{\Sigma}^{-1/2} \Vert_F \le \sqrt{\ell} \cdot \sqrt{\Vert \widetilde{\Sigma}^{-1} \Vert_F}.
\]
This puts an upper bound on $|\alpha_{i,j}|$ finishing the proof.
\end{proof}
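The whitening step in the proof, forming $V\widetilde{\Sigma}^{-1/2}$ from an approximate Gram matrix, can be sketched directly; the vectors and noise level below are illustrative choices of mine, not values from the proof.

```python
import numpy as np

rng = np.random.default_rng(2)

# linearly independent columns v_1, v_2, v_3 in R^5
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# noisy, symmetrized Gram matrix, standing in for the estimated inner products
E = 1e-6 * rng.standard_normal((3, 3))
Sigma_tilde = V.T @ V + (E + E.T) / 2

# Sigma_tilde^{-1/2} via the spectral decomposition (Sigma_tilde is symmetric PD)
w, U = np.linalg.eigh(Sigma_tilde)
Alpha = U @ np.diag(w ** -0.5) @ U.T   # the coefficients alpha_{i,j}

W = V @ Alpha                           # columns w_i = sum_j alpha_{i,j} v_j
err = np.linalg.norm(W.T @ W - np.eye(3))
print(err)   # nearly orthonormal: error on the order of the Gram-matrix noise
```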
\begin{proposition}~\label{prop:sing-1}
Let $v_1, \ldots, v_\ell$ be $(\eta, \gamma)$-linearly independent vectors. Let $V = [v_1 | \ldots | v_\ell]$. Then, the smallest singular value of $V$
is at least $(\frac{ \gamma}{2 \cdot \ell \cdot \eta})^{\ell+1}$.
\end{proposition}
\begin{proof}
Let $\kappa>0$ be a parameter whose precise value will be fixed later.
Now, note that the smallest singular value of $V$ satisfies $\sigma_{\min} = \inf_{x : \Vert x \Vert_2=1} \Vert V \cdot x \Vert_2$. In order to lower bound this, observe that $V \cdot x = \sum_{1 \le i \le \ell} v_i \cdot x_i$.
Now, let $j$ be the largest coordinate such that $|x_j| \ge \kappa^{j}$ (note that there has to be such a $j$ since $x$ is a unit vector). Define $w = \sum_{i \le j} v_i x_i$. Then, observe that its component in the direction orthogonal to the span of $\{v_1, \ldots, v_{j-1}\}$ is at least $\gamma \cdot \kappa^j$ in magnitude. On the other hand, $\Vert \sum_{i > j} v_i x_i \Vert_2 \le \kappa^{j+1} \cdot \ell \cdot \eta$. Now, as long as $\kappa \le \frac{\gamma}{2 \cdot \ell \cdot \eta}$, we obtain that
\[
\Vert \sum_{i} v_i x_i \Vert_2 \ge \Vert \sum_{i \le j} v_i x_i \Vert_2 - \Vert \sum_{i > j} v_i x_i \Vert_2 \ge \gamma \cdot \kappa^j - \ell \cdot \eta \cdot \kappa^{j+1} \ge \frac{\gamma \cdot \kappa^j}{2}.
\]
Choosing $\kappa = \frac{\gamma}{2 \cdot \ell \cdot \eta}$ and using $j \le \ell$ now yields the claimed lower bound on $\sigma_{\min}$. This finishes the proof.
\end{proof}
\begin{lemma}~\label{lem:test-one}
There is a routine \textsf{Test-closeness-one} such that given oracle access to $f: \mathbb{R}^n \rightarrow [-1,1]$, $(y_1, \ldots, y_\ell)$ which are $\gamma$-linearly independent for $P_t f$ and access to
$g \in \mathsf{Cover}(t,\ell,\epsilon)$, has the following guarantee:
\begin{enumerate}
\item For $\tau = \epsilon^2/(100 \cdot \ell^{3/2})$, it makes $T(\tau, t,\gamma) \cdot \log(1/\xi)$ queries to $f$ (where $T(\cdot, \cdot, \cdot)$
is the function in Lemma~\ref{lem:apx-ortho}).
\item There is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(Dh_{y_1}(y_1), \ldots, Dh_{y_\ell}(y_\ell))$ (which depends only on $f$ and $y_1, \ldots, y_\ell$) such that
with probability $1-\xi$, the algorithm outputs an $\epsilon/100$ accurate estimate of $\mathbf{E}_{x \sim \gamma_n}[|P_t f(x) - g (\langle w_1, x\rangle, \ldots ,\langle w_\ell, x\rangle)|]$.
\end{enumerate}
\end{lemma}
\begin{proof}
First, we run the procedure in Lemma~\ref{lem:apx-ortho} with error parameter $\tau$, noise rate $t$ and parameter $\gamma$. Note that with $T(\tau, t, \gamma)$ queries, we are able to obtain coefficients $\{\alpha_{i,j}\}$ such that
\begin{equation}~\label{eq:bound-tau}
\Vert \sum_{j} \alpha_{i,j} Dh_{y_j}(y_j) -w_i \Vert_2 \le \tau.
\end{equation}
Let $K = \sum_{i,j} |\alpha_{i,j}|$. Set the parameter $\eta= \frac{\epsilon^2}{K \cdot \ell}$.
Let us now define a point $x \in \mathbb{R}^n$ to be \emph{good} if the following holds:
\begin{enumerate}
\item For all $1 \le i \le \ell$, the function $f_{\partial, \eta, t, y_i}$ defined in Lemma~\ref{lem:compute-derivative-x} satisfies
$$
\big| f_{\partial, \eta, t, y_i} (x) - \langle D_{y_i} (P_tf)(y_i), x \rangle \big| \le \frac{\ell \cdot \eta}{\epsilon}.
$$
\item For all $1 \le i \le \ell$,
$$
\big| \sum_{j} \alpha_{i,j} \langle Dh_{y_j}(y_j),x \rangle - \langle w_i, x\rangle \big| \le \frac{\epsilon}{100 \ell}.
$$
\end{enumerate}
The crucial point is that for a randomly chosen $x \sim \gamma_n$, Lemma~\ref{lem:compute-derivative-x} guarantees that the first item is satisfied with probability at least $1 - \frac{\epsilon^2}{\ell \cdot \eta^2}$. Likewise, from (\ref{eq:bound-tau}), for $x \sim \gamma_n$, the second item is satisfied with probability at least $1 - \epsilon/(100\ell)$. Thus, we get that a point $x \sim \gamma_n$ is \emph{good} with probability $1-\epsilon/\ell$. The algorithm \textsf{Test-closeness-one} is now defined as follows:
\begin{enumerate}
\item Sample $s = 1/\epsilon^2 \cdot \log(1/\xi)$ points $x_1, \ldots, x_s$.
\item For each of the points $x_i$, do the following:
\item \hspace*{10pt} Compute $f_{\partial, \eta, t, y_j}(x_i)$ for $1 \le j \le \ell$ up to error $\frac{\epsilon}{K \cdot \ell}$.
\item \hspace*{10pt} Compute $\tilde{\beta}_{i,x} = \sum_{j} \alpha_{i,j}f_{\partial, \eta, t, y_j}(x_i)$.
\item \hspace*{10pt} Compute $g(\tilde{\beta}_{1,x} ,\ldots, \tilde{\beta}_{\ell,x})$.
\item Output $\frac{1}{s} \sum_{i=1}^s |P_tf(x_i) - g(\tilde{\beta}_{1,x} ,\ldots, \tilde{\beta}_{\ell,x})|$.
\end{enumerate}
The analysis of this algorithm is as follows:
\begin{eqnarray*}
&& \big|\mathbf{E}_{x \sim \gamma_n}\big[ |P_tf(x) - g(\tilde{\beta}_{1,x} ,\ldots, \tilde{\beta}_{\ell,x})| \big]-\mathbf{E}_{x \sim \gamma_n}\big[ |P_tf(x) - g(\langle w_1,x\rangle ,\ldots, \langle w_\ell,x\rangle)| \big] \big| \\ &\le& \mathbf{E}_{x \sim \gamma_n} \big[ |g(\tilde{\beta}_{1,x} ,\ldots, \tilde{\beta}_{\ell,x})- g(\langle w_1,x\rangle ,\ldots, \langle w_\ell,x\rangle)| \big].
\end{eqnarray*}
Now, note that because $g$ takes values in $[-1,1]$, the term inside the expectation is bounded by $2$. Further, if a point $x$ is \emph{good}, then for every $1 \le i \le \ell$,
$$
\big|\tilde{\beta}_{i,x} - \langle w_i, x \rangle \big| \le \frac{\epsilon}{50 \ell}.
$$
Now, this immediately implies that
\[
\mathbf{E}_{x \sim \gamma_n} \big[ |g(\tilde{\beta}_{1,x} ,\ldots, \tilde{\beta}_{\ell,x})- g(\langle w_1,x\rangle ,\ldots, \langle w_\ell,x\rangle)| \big] \le \frac{\epsilon}{50} + 2\Pr[x \textrm{ is not good}] \le \frac{\epsilon}{2}.
\]
Item 2 now follows immediately.
\end{proof}
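To make the final sampling step concrete, here is a minimal numpy sketch of the estimator computed in the last step of \textsf{Test-closeness-one}, under the simplifying (hypothetical) assumption that we can evaluate $P_tf$ and the exact projections $\langle w_i, x\rangle$ directly, rather than through the derivative estimates $\tilde{\beta}_{i,x}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_closeness(F, g, W, n, s):
    """Monte-Carlo estimate of E_{x ~ N(0, I_n)} |F(x) - g(<w_1,x>, ..., <w_l,x>)|.

    F, g and W (rows w_i) are stand-ins for P_t f, the candidate junta and the
    orthonormal basis; s is the number of Gaussian samples.
    """
    xs = rng.standard_normal((s, n))          # x_1, ..., x_s ~ gamma_n
    proj = xs @ W.T                           # coordinates <w_i, x_j>
    return np.mean(np.abs(F(xs) - g(proj)))   # empirical L1 distance

# toy check: F is itself a junta on w_1, so the estimate should be 0
W = np.zeros((1, 5)); W[0, 0] = 1.0
F = lambda xs: np.sign(xs[:, 0])
g = lambda p: np.sign(p[:, 0])
err = estimate_closeness(F, g, W, n=5, s=2000)
```

By the Chernoff-bound argument above, $s = O(\epsilon^{-2}\log(1/\xi))$ samples suffice for an $\epsilon$-accurate estimate with probability $1-\xi$.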
\section{Algorithm to find hidden linear invariant structure}~\label{aff:inv}
In this section, we will prove the following main theorem.
\begin{theorem}~\label{thm:main-affine-invariant}
Let $f: \mathbb{R}^n \rightarrow \{-1,1\}$ be a linear-$k$-junta with surface area $s$. Then, there is an algorithm \textsf{Find-invariant-structure} which for any error parameter $\epsilon>0$,
makes $O(s \cdot k /\epsilon)^{O(k)}$ queries to $f$
and with probability $1-\epsilon$ outputs (for some $\ell \le k$) a function $g: \mathbb{R}^\ell \rightarrow [-1,1]$ so that the following holds: there is an orthonormal set of vectors $w_1, \ldots, w_\ell \in \mathbb{R}^n$ such that
$$
\mathbf{E}[|f(x) - g(\langle w_1, x\rangle, \ldots, \langle w_\ell, x \rangle)|] = O(\epsilon).
$$
{Further, there is a set $V = \{v_1, \ldots, v_k\}$ of orthonormal vectors such that for $1 \le j \le \ell$, $v_j = w_j$ and $\mathsf{span}\{v_1, \ldots, v_k\}$ is a relevant subspace of $f$.}
\end{theorem}
Our algorithm is quite na\"ive. First, we ``identify'' -- in some implicit
sense -- the $k$-dimensional subspace on which the linear $k$-junta acts. We
take a fine net of functions defined on that space, and we test them all until
we find the best one. Obviously, this algorithm is not computationally
efficient, and it is also not particularly efficient in terms of the query
complexity. However, the crucial feature of this algorithm
is that its query complexity does not depend on the
ambient dimension $n$. The main difficulty in constructing and analyzing this algorithm
is that we cannot explicitly identify even a single vector in the interesting
$k$-dimensional subspace -- that would require a number of queries that depends on $n$.
One consequence of this is that we do not know how to apply an off-the-shelf
learning algorithm (such as the one from~\cite{KOS:08}).
\begin{definition}~\label{def:vector-independence}
A set of vectors $v_1, \ldots, v_\ell \in \mathbb{R}^n$ is said to be $(\eta,\gamma)$-linearly independent if the following conditions hold:
\begin{enumerate}
\item For all $1 \le i \le \ell$,
$\Vert v_i \Vert_2 \le \eta$.
\item For all $1 < i \le \ell$, $\mathsf{dist}(v_i, \mathsf{span}(v_1, \ldots, v_{i-1})) \ge \gamma$.
\end{enumerate}
\end{definition}
\begin{definition}
For $f: \mathbb{R}^n \rightarrow [-1,1]$ and $t>0$, we say that a set of directions $(y_1, \ldots, y_\ell)$ is
$\gamma$-linearly independent if the following holds:
for $1 \le i \le \ell$, let $v_i = DP_tf(y_i)$; then for all $1 \le i \le \ell$, $\mathsf{dist}(v_i, \mathsf{span}(v_1, \ldots, v_{i-1})) \ge \gamma$.
\end{definition}
By Proposition~\ref{prop:derivative-bound}, it is immediate that as long as $t \le 1/4$, $\Vert DP_t f (y) \Vert_2 \le t^{-1/2}$. Thus, if $(y_1, \ldots, y_\ell)$ is $\gamma$-linearly independent, then the vectors $(v_1,\ldots, v_\ell)$ are $(t^{-1/2}, \gamma)$-linearly independent.
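When the vectors are given explicitly (which the algorithm of course cannot assume), the two conditions of Definition~\ref{def:vector-independence} can be checked directly; the following numpy sketch, with hypothetical helper names, computes the distance to a span via least squares:

```python
import numpy as np

def dist_to_span(v, basis):
    """Euclidean distance from v to span(rows of basis), via least squares."""
    if basis.shape[0] == 0:
        return float(np.linalg.norm(v))
    coeffs, *_ = np.linalg.lstsq(basis.T, v, rcond=None)
    return float(np.linalg.norm(v - basis.T @ coeffs))

def is_eta_gamma_independent(vs, eta, gamma):
    """Check the two conditions of (eta, gamma)-linear independence."""
    for i, v in enumerate(vs):
        if np.linalg.norm(v) > eta:                       # condition 1: norms bounded by eta
            return False
        if i > 0 and dist_to_span(v, np.array(vs[:i])) < gamma:
            return False                                  # condition 2: distance at least gamma
    return True

vs = [np.array([1.0, 0.0, 0.0]), np.array([0.5, 1.0, 0.0])]
ok = is_eta_gamma_independent(vs, eta=2.0, gamma=0.5)
```

The point of the next algorithm is precisely to certify this property for the inaccessible vectors $DP_tf(y_i)$ using only inner-product estimates.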
\begin{figure}[h]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Inputs}}
\vspace{5 pt}
\begin{tabular}{ccl}
$t$ &:=& noise parameter \\
$y_1, \ldots, y_\ell$ &:=& $\frac{\gamma}{2}$-linearly independent directions \\
$\{\beta_{i,j}\}$ &:=& $\lambda$-accurate estimates of $\langle DP_tf(y_i), DP_tf(y_j)\rangle$ where \\
&& $\lambda=\lambda(\ell,\nu, t^{-\frac12}, \gamma/2)$ and $\nu = \frac{\gamma^2 \cdot t}{100 \ell^2}$ (from Lemma~\ref{prop:linear})
\\
$y_{\ell+1}$ &:=& candidate direction in $\mathbb{R}^n$.
\end{tabular}
\vspace{5 pt}
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Find the numbers $\{\alpha_{i,j}\}_{1 \le i, j \le \ell}$ from Lemma~\ref{prop:linear}.
\item Estimate $\langle DP_tf(y_{\ell+1}) , DP_tf(y_{\ell+1}) \rangle$ up to $\pm \frac{\gamma^2}{50}$. Call the estimate $\tilde{\beta}_{\ell+1, \ell+1}$.
\item Estimate $\langle DP_tf(y_{\ell+1}) , DP_tf(y_{j}) \rangle$ (for $1 \le j \le \ell$) up to accuracy
$\frac{1}{\xi(\ell, t^{-1/2}, \gamma/2)} \cdot \frac{\gamma^2 \cdot \sqrt{t}}{100\ell^3}$ (using Lemma~\ref{lem:inner-product-1}) where $\xi$ is the function from Lemma~\ref{prop:linear}. Call the estimates $\tilde{\beta}_{j, \ell+1}$.
\item Compute quantity $\zeta_i = \sum_{1 \le j \le \ell} \alpha_{i,j} \cdot \tilde{\beta}_{j,\ell+1}$ for all $1 \le i \le \ell$.
\item If the quantity $\tilde{\beta}_{\ell+1, \ell+1} - \sum_{i=1}^\ell \zeta_i^2 > (\frac{3 \gamma}{4})^2$, then output \textsf{yes}. Else output \textsf{no}.
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the algorithm \textsf{Test-candidate-direction}}
\label{fig:tlin-1}
\end{figure}
\begin{lemma}~\label{lem:test-candidate}
The algorithm \textsf{Test-candidate-direction} described in Figure~\ref{fig:tlin-1} has the following properties: For noise parameter $t$, directions $y_1, \ldots, y_\ell \in \mathbb{R}^n$, $\{\beta_{i,j} \}$ and candidate direction $y_{\ell+1}$ (where $y_1, \ldots, y_\ell$ as well as $\{ \beta_{i,j} \}$ meet the requirements described in Figure~\ref{fig:tlin-1}), the algorithm satisfies
\begin{enumerate}
\item The query complexity of the algorithm is
$T_{tc}(t, \gamma, \ell) = \big( \frac{\ell}{\sqrt{t} \cdot \gamma} \big)^{O(\ell)}$.
\item If the Euclidean distance of $DP_tf(y_{\ell+1})$ from the subspace $\mathsf{span}(DP_tf(y_{1}), \ldots, DP_tf(y_{\ell}))$ is at least $\gamma$, then the algorithm outputs \textsf{yes}. Conversely, if the algorithm outputs \textsf{no}, then this distance must be less than $\frac{\gamma}{2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The query complexity bound is immediate from Lemma~\ref{lem:inner-product-1} and plugging in the value of $\xi(\ell, t^{-1/2}, \gamma)$ from Lemma~\ref{prop:linear}. To prove the second guarantee,
let us use $v_j$ to denote $DP_tf(y_j)$. Since $(v_1, \ldots, v_\ell)$ are $(t^{-1/2}, \frac{\gamma}{2})$-linearly independent, by Lemma~\ref{prop:linear} we obtain that there are orthonormal vectors $(w_1, \ldots, w_\ell)$ (which span $v_1, \ldots, v_\ell$)
such that
$$
\Vert w_i - \sum_{j} \alpha_{i,j} v_j \Vert_2 \le \frac{\gamma^2 \cdot {t}}{100 \ell^2}.
$$
This implies that if we let $v_{\ell+1} = DP_tf(y_{\ell+1})$ (using $\Vert v_{\ell+1} \Vert \le t^{-1/2}$), then
$$
\big| \langle w_i, v_{\ell+1} \rangle - \sum_{j} \alpha_{i,j}
\langle v_j , v_{\ell+1} \rangle \big| \le \frac{\gamma^2 \sqrt{t}}{100 \ell^2}.
$$
Consequently, we have
\begin{eqnarray*}
\big| \langle w_i, v_{\ell+1} \rangle - \sum_{j} \alpha_{i,j} \cdot \tilde{\beta}_{j,\ell+1} \big| &\le& \frac{\gamma^2 \cdot \sqrt{t}}{100 \ell^2} + \sum_{j}|\alpha_{i,j}| \cdot |\langle v_j , v_{\ell+1} \rangle - \tilde{\beta}_{j,\ell+1} | \\
&\le& \frac{\gamma^2\cdot \sqrt{t}}{100 \ell^2} + \sum_{j} \xi (\ell, t^{-1/2}, \gamma/2) \cdot \frac{1}{\xi (\ell, t^{-1/2}, \gamma/2)}\cdot \frac{\gamma^2\sqrt{t}}{100 \ell^3} \le \frac{\gamma^2\sqrt{t}}{50 \ell^2}.
\end{eqnarray*}
The penultimate inequality follows from the bound on $|\alpha_{i,j}|$ from Lemma~\ref{prop:linear} and the accuracy of estimates $\tilde{\beta}_{j,\ell+1}$.
This implies that for any $i$,
\begin{equation}~\label{eq:bound-diff1}
\big|\big| \langle w_i, v_{\ell+1} \rangle \big|^2- \big|\sum_{j} \alpha_{i,j} \cdot \tilde{\beta}_{j,\ell+1} \big|^2\big| \le \frac{\gamma^2\sqrt{t}}{50 \ell^2} \cdot \big| \langle w_i, v_{\ell+1} \rangle + \sum_{j} \alpha_{i,j} \cdot \tilde{\beta}_{j,\ell+1} \big| \le \frac{\gamma^2\sqrt{t}}{50 \ell^2} \cdot 2 \cdot t^{-\frac12} = \frac{\gamma^2}{25 \ell^2}.
\end{equation}
The second inequality uses the fact that $w_i$ is a unit vector whereas $\Vert v_{\ell+1} \Vert_2 \le t^{-\frac12}$. Thus,
\begin{eqnarray*}
\mathrm{dist}^2\big(DP_tf(y_{\ell+1}), \mathsf{span}(DP_tf(y_{1}), \ldots, DP_tf(y_{\ell}))\big) &=& \Vert DP_tf(y_{\ell+1})\Vert_2^2 - \sum_{j=1}^\ell \langle DP_tf(y_{\ell+1}), w_j\rangle^2 \\
&=& \Vert DP_tf(y_{\ell+1})\Vert_2^2 - \sum_{j=1}^\ell \zeta_j^2 + \theta
\end{eqnarray*}
where $|\theta| \le \frac{\gamma^2}{25\ell}$ (from (\ref{eq:bound-diff1})). Using the fact that $ |\tilde{\beta}_{\ell+1, \ell+1}- \Vert D P_tf(y_{\ell+1})\Vert_2^2 | \le \frac{\gamma^2}{50}$, we can conclude that
$$
\big|\mathrm{dist}^2\big(DP_tf(y_{\ell+1}), \mathsf{span}(DP_tf(y_{1}), \ldots, DP_tf(y_{\ell}))\big)- \big(\tilde{\beta}_{\ell+1, \ell+1} - \sum_{i=1}^\ell \zeta_i^2\big)\big| \le \frac{\gamma^2}{25}.
$$
Item 2 in the claim is now an immediate consequence.
\end{proof}
\begin{figure}[tb]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Inputs}}
\vspace{5 pt}
\begin{tabular}{ccl}
$s$ &:=& surface parameter \\
$\epsilon$ &:=& error parameter\\
\end{tabular}
~\\
~\\
\underline{\textsf{Parameters}}
\vspace{5 pt}
\begin{tabular}{ccl}
$t$ &:=& $\frac{\epsilon^4}{900 s^2}$ \\
$\gamma$ &:=& $\frac{\epsilon^2}{8}$\\
$\lambda$ &=& $\lambda(k, \nu, t^{-\frac12}, \gamma)$ (where $\lambda(\cdot)$ is the function from Lemma~\ref{prop:linear}) and $\nu = \frac{\gamma^2 \cdot t}{100k^2}$. \\
$\tau_{\mathsf{succ}}$ &:=& $\frac{\epsilon^6}{s^2}$ \\
$T_{\mathsf{succ}}$ &:=& $\frac{1}{\tau_{\mathsf{succ}}} \cdot \log (10k/\epsilon)$.\\
\end{tabular}
~\\
~\\
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Initialize $S$ to be the empty set.
\item Initialize $\mathsf{count}=0$.
\item If $\mathsf{count} =k$, exit;
\item else set $S =\{y_1, \ldots, y_\ell\}$ and compute $\{\beta_{i,j}\}$ as $\lambda$-accurate estimates of $\langle DP_tf(y_i), DP_tf(y_j) \rangle$ (Lemma~\ref{lem:inner-product-1}).
\item Repeat $T_{\mathsf{succ}}$ times
\item \hspace{7pt} Choose $z \sim \gamma_n$.
\item \hspace{7pt} Run \textsf{Test-candidate-direction} with $S=\{y_1, \ldots,y_\ell\}$, candidate direction $z$, $\gamma, t$ as defined in \textsf{Parameters} and $\{\beta_{i,j}\}$ \hspace{3pt} as computed above.
\item \hspace{7pt} If \textsf{Test-candidate-direction} outputs \textsf{yes},
add $z$ to $S$; $\mathsf{count}+=1$;
go to step 3;
\item If the size of $S$ does not increase in $T_{\mathsf{succ}}$ steps, then exit;
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the algorithm \textsf{Find-candidate-directions}}
\label{fig:flin-1}
\end{figure}
We now give an algorithm which finds directions $\{y_1, \ldots, y_\ell\}$ such that for $t$ defined before (as $t : = \frac{\epsilon^4}{900 s^2}$), $P_t f$ is close to a junta on $\mathsf{span}(DP_tf(y_1), \ldots, DP_tf(y_\ell))$.
\begin{lemma}~\label{lem:find-dirs}
The algorithm \textsf{Find-candidate-directions} described in Figure~\ref{fig:flin-1} has the following properties: For noise parameter $t$, error parameter $\epsilon$ and surface area parameter $s$, if the function $f: \mathbb{R}^n \rightarrow [-1,1]$ has surface area $s$ and is a linear $k$-junta, then with probability $1-\epsilon$, the algorithm outputs vectors $y_1, \ldots, y_\ell \in \mathbb{R}^n$ ($\ell \le k$) such that, for $v_i = DP_tf(y_i)$, the function $P_tf$ is $\epsilon$-close to
a junta on $\mathsf{span}(v_1, \ldots, v_\ell)$. Further, the directions $(y_1, \ldots, y_\ell)$ are
$\gamma/2$-linearly independent, where $\gamma/2 = \frac{\epsilon^2}{16}$.
The query complexity of this algorithm is $T_{fc} (s,k,\epsilon) =\big( \frac{s \cdot k}{\epsilon}\big)^{O(k)}$.
\end{lemma}
\begin{proof}
The bound on the query complexity of this algorithm is immediate by plugging in the query complexity of the routine \textsf{Test-candidate-direction} (Lemma~\ref{lem:test-candidate}) and the query complexity of Step 4~(Lemma~\ref{lem:inner-product-1}).
Next, observe that by the guarantee of \textsf{Test-candidate-direction}, the set $S$ output by the algorithm consists of $\gamma/2$-linearly independent
directions.
Finally, assume that $f$ is a $W$-junta where $\mathsf{dim}(W) \le k$. Then, note that for any $y \in \mathbb{R}^n$, $DP_tf(y) \in W$. Now, there are two possibilities: (For the rest of this proof, we will use $v_i$ as a shorthand for $DP_tf(y_i)$)
\begin{itemize}
\item[(a)] If $\mathsf{count}=k$, then note that we have found $k$ directions $y_1, \ldots, y_k$ such that $v_i\in W$. Further, the directions $(v_1, \ldots, v_k)$ are $(t^{-1/2}, \gamma/2)$-linearly independent. Thus, $\mathsf{span}(v_1, \ldots, v_k)= W$. So, in this case, $P_tf$ is indeed a junta on $\mathsf{span}(v_1, \ldots, v_k)$ (where $S= \{y_1, \ldots, y_k\}$).
\item[(b)] If $\mathsf{count}<k$, then we are in one of two situations: either $P_tf$ is $\epsilon$-close to a junta on $\mathsf{span}(v_1, \ldots, v_\ell)$ where $S=\{y_1, \ldots, y_\ell\}$, in which case we are already done. If not, then we apply Lemma~\ref{lem:subspace-escape} and obtain that with probability at least $\tau_{\mathsf{succ}}$, a randomly chosen direction $z$ will be at least $\gamma=\epsilon^2/8$-far from the subspace $\mathsf{span}(v_1, \ldots, v_\ell)$ and will thus pass the algorithm \textsf{Test-candidate-direction}. Thus, over $T_{\mathsf{succ}}$ trials, with probability at least $1- \frac{\epsilon}{10k}$, the set $S$ will increase in size and we can continue inductively.
Since the outer loop (i.e., the loop over $\mathsf{count}$) runs at most $k$ times, the total probability that $P_t f$ is not $\epsilon$-close to a $W$-junta for $W = \mathsf{span}(v_1, \ldots, v_\ell)$ but the algorithm terminates is at most $\frac{\epsilon}{10}$. This finishes the proof.
\end{itemize}
\end{proof}
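The greedy structure of \textsf{Find-candidate-directions} can be illustrated by the following sketch, in which a hypothetical function $D(y)$ stands in for the inaccessible derivative $DP_tf(y)$, so that \textsf{Test-candidate-direction} degenerates to an exact distance check:

```python
import numpy as np

rng = np.random.default_rng(1)

def dist_to_span(v, basis):
    """Distance from v to span(basis), computed via least squares."""
    if len(basis) == 0:
        return float(np.linalg.norm(v))
    B = np.array(basis)
    c, *_ = np.linalg.lstsq(B.T, v, rcond=None)
    return float(np.linalg.norm(v - B.T @ c))

def find_candidate_directions(D, n, k, gamma, trials):
    """Greedy outer loop of Find-candidate-directions.

    Returns directions y_i whose derivatives D(y_i) are gamma-linearly
    independent, stopping after `trials` consecutive failures or k successes.
    """
    S, V = [], []
    while len(S) < k:
        for _ in range(trials):
            z = rng.standard_normal(n)             # candidate z ~ gamma_n
            if dist_to_span(D(z), V) >= gamma:     # Test-candidate-direction says yes
                S.append(z); V.append(D(z))
                break
        else:
            return S                               # no new direction found in `trials` attempts
    return S

# D projects onto a 2-dimensional relevant subspace, so only 2 directions survive
P = np.zeros((6, 6)); P[0, 0] = P[1, 1] = 1.0
S = find_candidate_directions(lambda y: P @ y, n=6, k=4, gamma=0.1, trials=50)
```

In the sketch, all derivatives lie in a $2$-dimensional subspace, so the loop saturates at two directions, mirroring case (b) of the proof.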
With the aid of the algorithm \textsf{Find-candidate-directions}, we are able to implicitly find directions $\{v_1, \ldots, v_\ell\}$ such that $P_t f$ is close to a junta on $\mathsf{span}(v_1, \ldots, v_\ell)$. In the next subsection, we essentially perform hypothesis testing over a set of functions which forms a cover for all juntas on $\mathsf{span}(v_1, \ldots, v_\ell)$.
\subsection{Hypothesis testing against subspace juntas}
The following lemma shows how, given the directions $y_1, \ldots, y_\ell$ and an error parameter $\tau$, we can implicitly find directions which form an orthonormal
basis of $\mathsf{span}(v_1,\ldots, v_\ell)$ (as before, we use $v_1, \ldots, v_\ell$ as shorthand for $DP_tf(y_1), \ldots, DP_tf(y_\ell)$ respectively). All the symbols below have the same values as in Lemma~\ref{lem:find-dirs} unless mentioned otherwise.
\begin{lemma}~\label{lem:orthogonalize}
Choose any error parameter $\tau>0$ and let $y_1, \ldots, y_\ell$ be $\gamma/2$-linearly independent directions for $P_t f$. Then, there is a procedure \textsf{Compute-ortho-transform} which makes $T_{\mathsf{ortho}} =\mathsf{poly}(1/\tau) \cdot \big( \frac{\ell}{\gamma \cdot t}\big)^{O(\ell)}$ queries to $f$ and obtains numbers $\{\alpha_{i,j} \}_{1 \le i,j \le \ell}$ such that the following holds:
\begin{enumerate}
\item For $\Lambda(\ell, t, \gamma) = (\frac{\ell}{t \gamma})^{O(\ell)}$, all the numbers $|\alpha_{i,j}| \le \Lambda(\ell, t, \gamma)$.
\item There exists an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(v_1, \ldots, v_\ell)$ such that for all $1 \le i \leq \ell$,
\[
\Vert w_i - \sum_{j} \alpha_{i,j} v_j \Vert_2 \le \tau.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\lambda(\cdot)$ be the function defined in Lemma~\ref{prop:linear}. Now, observe that
\[
\lambda(\ell, \tau, t^{-1/2}, \gamma) = \tau \cdot \bigg(\frac{\gamma \cdot t}{2 \cdot \ell } \bigg)^{O(\ell)}.
\]
Thus, using Lemma~\ref{lem:inner-product-1}, we can use
$T_{\mathsf{ortho}}$ queries to $f$ to obtain numbers $\{\beta_{i,j}\}_{1 \le i,j \le \ell}$ such that
\[
\big| \beta_{i,j} - \langle D_{y_i} h(y_i) , D_{y_j} h(y_j) \rangle \big| \le \lambda(\ell, \tau, t^{-1/2}, \gamma).
\]
As $(y_1, \ldots, y_\ell)$ are $\gamma/2$-linearly independent, the vectors $(v_1, \ldots, v_\ell)$ are $(t^{-\frac12}, \gamma/2)$-linearly independent.
With this, we can now apply Lemma~\ref{prop:linear} to obtain numbers $\{\alpha_{i,j}\}$ such that there is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(DP_tf(y_1), \ldots, DP_tf(y_\ell))$ with the property that (a)
$
\Vert w_i - \sum_{j} \alpha_{i,j} v_j \Vert_2 \le \tau$
and (b)
$|\alpha_{i,j}| \le \Lambda(\ell, t,\gamma)$ where $\Lambda(\ell, t, \gamma) = (\frac{\ell}{t \gamma})^{O(\ell)}$.
\end{proof}
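Concretely, the coefficients $\{\alpha_{i,j}\}$ play the role of an (approximate) inverse Cholesky factor of the Gram matrix $\langle v_i, v_j\rangle$: if $G = LL^{\top}$, then the rows of $L^{-1}V$ are orthonormal. A sketch with exact inner products (whereas the algorithm only has the $\lambda$-accurate estimates $\beta_{i,j}$):

```python
import numpy as np

def ortho_coefficients(G):
    """Coefficients alpha with w_i = sum_j alpha[i, j] * v_j orthonormal.

    G is the Gram matrix <v_i, v_j> of linearly independent vectors v_j.
    Writing G = L L^T (Cholesky), the rows of L^{-1} V are orthonormal,
    so alpha = L^{-1}.
    """
    L = np.linalg.cholesky(G)
    return np.linalg.inv(L)

V = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])        # linearly independent, not orthonormal
alpha = ortho_coefficients(V @ V.T)
W = alpha @ V                          # rows w_i = sum_j alpha[i, j] v_j
gram_W = W @ W.T                       # should be (close to) the identity
```

The $(t^{-1/2}, \gamma)$-linear independence of the $v_j$ is exactly what keeps the entries of $L^{-1}$, and hence the $|\alpha_{i,j}|$, bounded by $\Lambda(\ell, t, \gamma)$.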
\begin{figure}[tb]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Inputs}}
\vspace{5 pt}
\begin{tabular}{ccl}
$s$ &:=& surface parameter \\
$\epsilon$ &:=& error parameter\\
$y_1, \ldots, y_\ell$ &:=& $\frac{\gamma}{2}$-linearly independent directions for $P_t f$\\
\end{tabular}
~\\
~\\
\underline{\textsf{Parameters}}
\vspace{5 pt}
\begin{tabular}{ccl}
$t$ &:=& $\frac{\epsilon^4}{900 s^2}$ \\
$\gamma$ &:=& $\frac{\epsilon^2}{8}$\\
$\tau$ &=& $\frac{\epsilon^2 \cdot \sqrt{t}}{100 \cdot \ell^{3/2}}$ \\
$\delta$ &:=& $\frac{\epsilon}{10}$ \\
$K$ &:=& $\ell^2 \cdot \Lambda(\ell,t,\gamma)$ where $\Lambda(\cdot)$ is defined in Lemma~\ref{lem:orthogonalize}. \\
$\xi$ &:=& $\frac{\epsilon^2 \cdot \sqrt{t}}{K \cdot \ell^3 }$\\
$\mu$ &:=& $\frac{\epsilon}{|\mathsf{Cover}(t,\ell,\delta)|}$ where $\mathsf{Cover}(\cdot, \cdot, \cdot)$ is the set from Theorem~\ref{thm:net}. \\
$J$ &:=& $\frac{10}{\epsilon^2} \cdot \log(1/\mu)$ \\
\end{tabular}
~\\
~\\
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Run the procedure \textsf{Compute-ortho-transform} with directions $(y_1,\ldots, y_\ell)$ and $\gamma$, $t$ and $\tau$ as set above.
\item Let the output be parameters $\{\alpha_{i,j}\}_{1 \le i, j \le \ell}$.
\item Sample $J$ points from $\gamma_n$. Call the points $x_1$, $\ldots$, $x_J$.
\item For each of the points $x_i$ and each direction $y_j$,
\item \hspace{7pt} Compute the function $f_{\partial, \xi, t, y_j}(x_i)$ (up to error $\xi$) using Lemma~\ref{lem:compute-derivative-x}. Call this $\zeta_{i,j}$.
\item \hspace{7pt} Compute $\overline{x}_{i,j'} = \sum_{j}\alpha_{j',j} \cdot \zeta_{i,j}$.
\item For all $g \in \mathsf{Cover}(t, \ell, \delta)$,
compute $\mathcal{O}_g= \frac{1}{J} \cdot \sum_{i=1}^J |P_tf(x_i) - g(\overline{x}_{i,1}, \ldots, \overline{x}_{i,\ell})|$.
\item Return the $g$ which has the smallest value of $\mathcal{O}_g$.
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the algorithm \textsf{Estimate-closest-hypothesis}}
\label{fig:hyp}
\end{figure}
Let us now again set the parameters $t$ and $\gamma$ exactly as in Lemma~\ref{lem:find-dirs}. Namely, we set $t= \frac{\epsilon^4}{900s^2}$ and $\gamma=\frac{\epsilon^2}{8}$. With this setting of parameters, we state the following lemma.
\begin{lemma}~\label{lem:test-hypothesis}
There is an algorithm \textsf{Estimate-closest-hypothesis} (described in Figure~\ref{fig:hyp}) which takes as input oracle access to $f: \mathbb{R}^n \rightarrow \{-1,1\}$, directions $(y_1, \ldots, y_\ell)$ which are $\gamma/2$-linearly independent, error parameter $\epsilon$,
surface area parameter $s$. The algorithm has the following guarantee:
\begin{enumerate}
\item It makes $O\big( \frac{s \cdot \ell}{\epsilon}\big)^{O(\ell)}$ queries to $f$.
\item There is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(DP_tf(y_1), \ldots, DP_tf(y_\ell))$ (which is independent of $g$) such that with probability $1-\epsilon$, the algorithm outputs a function $g: \mathbb{R}^\ell \rightarrow [-1,1]$ with the following guarantee: Let $\mathsf{Cover}(t,\ell, \delta)$ be the set of functions from Theorem~\ref{thm:net} where the parameters $t, \delta$ are set as in Figure~\ref{fig:hyp}. Then,
\[
\mathbf{E}[|P_tf(x) - g(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] \le \min_{g^\ast \in \mathsf{Cover}(t,\ell,\delta)} \mathbf{E}[|P_tf(x) - g^\ast(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] + 5\epsilon.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
As usual, the query complexity of the procedure is easily seen to be $O\big( \frac{s \cdot \ell}{\epsilon}\big)^{O(\ell)} $ by just plugging in the values of the parameters along with the guarantees on the query complexity of \textsf{Compute-ortho-transform} (Lemma~\ref{lem:orthogonalize}) as well Lemma~\ref{lem:compute-derivative-x}.
To analyze the algorithm, let us now define a point $x \in \mathbb{R}^n$ to be \emph{good} if the following two conditions hold:
\begin{enumerate}
\item For $1 \le i \le \ell$,
\[
\big|f_{\partial, \xi, t, y_i}(x) - \langle DP_tf (y_i), x\rangle \big| \le \frac{\ell \cdot \xi}{\epsilon}.
\]
\item For all $1 \le i \le \ell$,
\[
\big| \sum_j \alpha_{i,j} \langle DP_tf(y_j), x\rangle - \langle w_i, x\rangle \big| \le \frac{\epsilon \cdot \sqrt{t}}{100 \ell^2}.
\]
\end{enumerate}
\begin{claim}
For $x \sim \gamma_n$, $\Pr[x \textrm{ is good}] \ge 1-\frac{2\epsilon^2}{\ell}$.
\end{claim}
\begin{proof}
Lemma~\ref{lem:compute-derivative-x} guarantees that for any specific choice of $i$,
$\Pr[\big|f_{\partial, \xi, t, y_i}(x) - \langle DP_tf (y_i), x\rangle \big| > \frac{\ell \cdot \xi}{\epsilon}] \le \frac{\epsilon^2}{\ell^2}$. Thus, by a union bound over the $\ell$ choices of $i$, item $1$ holds for $x \sim \gamma_n$ with probability $1-\frac{\epsilon^2}{\ell}$. Likewise, notice that
$$
\Vert \sum_j \alpha_{i,j} DP_tf(y_j)- w_i \Vert_2 \le \tau.
$$
Thus, for $x \sim \gamma_n$ and any fixed $i$, item 2 holds with probability $1- \frac{\epsilon^2}{\ell^2}$. By a union bound, it holds for all $1 \le i\le \ell$ simultaneously with probability $1-\frac{\epsilon^2}{\ell}$. This proves the claim.
\end{proof}
Next, observe that if a point $x_i$ is \emph{good}, then the following holds for every $j'$:
\begin{eqnarray}
\big|\overline{x}_{i,j'} - \langle w_{j'}, x_i \rangle\big| &\leq& \sum_{j} \big| \alpha_{j',j} \cdot \zeta_{i,j} - \alpha_{j',j} \cdot f_{\partial, \xi, t, y_j}(x_i) \big| + \big| \langle w_{j'}, x_i \rangle - \sum_{j} \alpha_{j',j} \cdot f_{\partial, \xi, t, y_j}(x_i) \big| \nonumber \\
&\le& \xi \cdot \sum_{j} |\alpha_{j',j}| + \big| \langle w_{j'}, x_i \rangle - \sum_{j} \alpha_{j',j} \cdot f_{\partial, \xi, t, y_j}(x_i) \big| \nonumber\\ &\le& \frac{\epsilon^2\sqrt{t}}{2 \ell^4 }+ \big| \langle w_{j'}, x_i \rangle - \sum_{j} \alpha_{j',j} \cdot f_{\partial, \xi, t, y_j}(x_i) \big| \nonumber \\
&\le& \frac{\epsilon^2\sqrt{t}}{2 \ell^4 } +
\big| \langle w_{j'}, x_i \rangle - \sum_{j} \alpha_{j',j} \langle DP_tf(y_j), x_i \rangle \big| +
\sum_{j} |\alpha_{j',j}| \cdot \big| \langle DP_tf(y_j), x_i \rangle - f_{\partial, \xi, t, y_j}(x_i) \big|
\nonumber \\
&\le& \frac{\epsilon^2\sqrt{t}}{2 \ell^4 } + \frac{\epsilon^2 \sqrt{t}}{100 \cdot \ell^2} + \frac{\epsilon \sqrt{t}}{100 \cdot \ell^2} \le \frac{\epsilon \cdot \sqrt{t}}{\ell^2}. ~\label{eq:good}
\end{eqnarray}
The last two inequalities follow from the condition that $x_i$ is \emph{good} and the choice of parameters. Now, observe that
\begin{eqnarray}
&& \big| \mathbf{E}_{x\sim \gamma_n} [|P_tf(x) - g(\overline{x}_1,\ldots, \overline{x}_\ell)|] - \mathbf{E}_{x\sim \gamma_n} [|P_tf(x) - g(\langle w_1,x \rangle, \ldots, \langle w_\ell, x \rangle)|] \big| \nonumber \\ &\le& \mathbf{E}_{x\sim \gamma_n} [|g(\overline{x}_1,\ldots, \overline{x}_\ell)-g(\langle w_1,x \rangle, \ldots, \langle w_\ell, x \rangle)|]
\end{eqnarray}
Now, observe that by definition, the term inside the expectation is uniformly bounded by $2$. On the other hand, if a point $x$ is good, then by (\ref{eq:good}) and the fact that $g$ is $t^{-1/2}$-Lipschitz, we have $|g(\overline{x}_1,\ldots, \overline{x}_\ell)-g(\langle w_1,x \rangle, \ldots, \langle w_\ell, x \rangle)| \le \epsilon$. Since the fraction of good points is at least $1-\frac{\epsilon^2}{\ell}$, we get that for any $g \in \mathsf{Cover}(t, \ell, \delta)$,
$$
\big| \mathbf{E}_{x\sim \gamma_n} [|P_tf(x) - g(\overline{x}_1,\ldots, \overline{x}_\ell)|] - \mathbf{E}_{x\sim \gamma_n} [|P_tf(x) - g(\langle w_1,x \rangle, \ldots, \langle w_\ell, x \rangle)|] \big| \le 2\epsilon.
$$
Now a standard Chernoff bound implies that for any $g \in \mathsf{Cover}(t, \ell, \delta)$, $\mathcal{O}_g$ is within $\pm \epsilon/2$ of $\mathbf{E}[|P_tf(x) - g (\langle w_1, x\rangle, \ldots, \langle w_\ell, x \rangle)|]$ with probability $1- \frac{\epsilon}{10 \cdot |\mathsf{Cover}(t, \ell, \delta)|}$. Thus, by a union bound, with probability $1-\frac{\epsilon}{10}$, this holds simultaneously for all $g \in \mathsf{Cover}(t, \ell, \delta)$. This finishes the proof.
\end{proof}
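Steps 3--8 of the algorithm amount to empirical risk minimization over the cover. The following sketch uses exact projections $\langle w_i, x\rangle$ in place of the estimates $\overline{x}_{i,j}$, and a toy three-function stand-in for $\mathsf{Cover}(t,\ell,\delta)$:

```python
import numpy as np

rng = np.random.default_rng(2)

def closest_hypothesis(F, cover, W, n, J):
    """Empirical risk minimization over a cover of candidate juntas.

    F stands in for P_t f and W (rows w_i) for the orthonormal basis.
    Returns the index of the g in `cover` minimizing the empirical L1
    loss O_g on J Gaussian samples.
    """
    xs = rng.standard_normal((J, n))       # x_1, ..., x_J ~ gamma_n
    proj = xs @ W.T                        # coordinates <w_i, x_j>
    losses = [np.mean(np.abs(F(xs) - g(proj))) for g in cover]
    return int(np.argmin(losses))

W = np.zeros((1, 4)); W[0, 0] = 1.0
F = lambda xs: np.sign(xs[:, 0])
cover = [lambda p: np.sign(p[:, 0]),       # the true junta
         lambda p: -np.sign(p[:, 0]),      # its negation
         lambda p: np.zeros(len(p))]       # the constant 0 function
best = closest_hypothesis(F, cover, W, n=4, J=500)
```

The union bound over $|\mathsf{Cover}(t,\ell,\delta)|$ hypotheses is what dictates the choice $J = \frac{10}{\epsilon^2}\log(1/\mu)$ in the figure.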
We are now ready to prove Theorem~\ref{thm:main-affine-invariant}.
\begin{proofof}{Theorem~\ref{thm:main-affine-invariant}}
Set $t = \frac{\epsilon^4}{900 s^2}$ (this is the same setting as Lemma~\ref{lem:find-dirs} and Lemma~\ref{lem:test-hypothesis}). Observe that with this choice of $t$, since $f$ has surface area bounded by $s$, Proposition~\ref{prop:noise-stab-surf-1} gives
$$
\mathbf{E}[|P_tf(x) - f(x)|] \le \sqrt{\mathbf{E}[|P_tf(x) - f(x)|^2]} \le \frac{\epsilon}{\sqrt{5}}.
$$
We now run the algorithm \textsf{Find-candidate-directions} with noise parameter $t$, error parameter $\epsilon$ and surface area parameter $s$. We are guaranteed that with probability $1-\epsilon$, we will get $\ell \le k$ directions $y_1, \ldots, y_\ell$ which are $\gamma/2$-linearly independent and $P_tf$ is $\epsilon$-close to a junta on the subspace $\mathsf{span}(v_1, \ldots, v_\ell)$ where $v_i = DP_tf(y_i)$ (call this event $\mathcal{E}_1$). The query complexity of this (from Lemma~\ref{lem:find-dirs}) is $(s \cdot k /\epsilon)^{O(k)}$.
Next, we run the routine \textsf{Estimate-closest-hypothesis} with the directions $y_1, \ldots, y_\ell$, surface area parameter $s$, error parameter $\epsilon$.
Observe that the query complexity of \textsf{Estimate-closest-hypothesis} is also $(s \cdot k /\epsilon)^{O(k)}$. Thus, the total query complexity remains $(s \cdot k /\epsilon)^{O(k)}$.
By guarantee of \textsf{Estimate-closest-hypothesis}, we have the following: there is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(DP_tf(y_1), \ldots, DP_tf(y_\ell))$ such that
\[
\mathbf{E}[|P_tf(x) - g(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] \le \min_{g^\ast \in \mathsf{Cover}(t,\ell,\delta)} \mathbf{E}[|P_tf(x) - g^\ast(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] + 5\epsilon.
\]
However, conditioned on $\mathcal{E}_1$, $P_tf $ is $\epsilon$-close to a junta on $\mathsf{span}(DP_t f(y_1), \ldots, DP_t f(y_\ell))$. By Theorem~\ref{thm:net}, this implies that the quantity $\min_{g^\ast \in \mathsf{Cover}(t,\ell,\delta)} \mathbf{E}[|P_tf(x) - g^\ast(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] \le 3 \epsilon$. This means that if we output the function $g$, then
$\mathbf{E}[|P_tf(x) - g(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] = O(\epsilon)$; combined with $\mathbf{E}[|f(x) - P_tf(x)|] \le \epsilon/\sqrt{5}$, this yields $\mathbf{E}[|f(x) - g(\langle w_1,x\rangle, \ldots, \langle w_\ell,x\rangle)|] = O(\epsilon)$. {Consider the subspace $V$ spanned by the vectors $\{DP_tf(y)\}_{y \in \mathbb{R}^n}$. Note that $\mathsf{dim}(V) \le k$ and $V$ is a relevant subspace for $f$. Thus, $w_1, \ldots, w_\ell$ can be extended to a basis for $V$, finishing the proof.
}
\end{proofof}
{
\begin{remark}~\label{rem:gaussian}
A crucial point about the routine \textsf{Find-invariant-structure}, which will be useful in the next section, is the following: The marginal distribution of all the queries is distributed as the standard $n$-dimensional Gaussian distribution $\gamma_n$. To see this, note that
\begin{enumerate}
\item In the routine \textsf{Find-candidate-directions}, each of the directions $y_i$ is sampled from $\gamma_n$. Further, for $y_i$ and $y_j$ which are i.i.d. samples from $\gamma_n$, the queries made to the oracle for $f$ in computing $\langle DP_tf(y_i), DP_tf(y_j) \rangle$ are also distributed as $\gamma_n$ (see Lemma~\ref{lem:inner-product-1}).
\item In the routine \textsf{Estimate-closest-hypothesis}, the points $x_i$ are sampled from $\gamma_n$ as are the directions $y_j$ (which are output of \textsf{Find-candidate-directions}). With this, the queries made to the oracle for $f$ for computing $f_{\partial, \xi, t, y_j}(x_i) $ are distributed as $\gamma_n$ (see Lemma~\ref{lem:compute-derivative-x}).
\item One minor subtlety is that while each sampled $y_j$ comes from $\gamma_n$,
as stated, our algorithm \textsf{Find-invariant-structure} is adaptive. Consequently, the above two items do not imply that the marginal distribution of all queries is coming from $\gamma_n$.
The cause of adaptivity is that in the routine \textsf{Find-candidate-directions}, while we sample each $y_j$ from $\gamma_n$, we subsequently use only a subset of the sampled $y_j$'s (namely, the subset $S$). However, we can easily make this algorithm non-adaptive with no asymptotic increase in the query complexity. This is because the number of candidate directions sampled by the procedure \textsf{Find-candidate-directions} is at most $k \cdot T_{\mathsf{succ}} = \mathsf{poly}(k \cdot s/\epsilon)$. We can run the subsequent routines
namely \textsf{Compute-ortho-transform} and \textsf{Estimate-closest-hypothesis} with all the $y_j$'s instead of just those in set $S$ but only use those which are part of the set $S$ output by \textsf{Find-candidate-directions}. This will only increase the query complexity by a factor of $\mathsf{poly}(k \cdot s/\epsilon)$.
\end{enumerate}
\end{remark}}
\subsubsection*{Finding the linear-invariant structure}
Given the previous theorem, it is natural to ask for more, i.e., not just to test whether the function is a linear junta but also to find the junta using a number of queries that depends only on $k$ and $s$ (but not on $n$).
In other words, can we output $g: \mathbb{R}^k \rightarrow \{-1,1\}$ such that there exists a {projection} matrix $A: \mathbb{R}^n \rightarrow \mathbb{R}^k$ with
$f$ close to $g(A x)$, using query complexity independent of $n$? We give an affirmative answer to this question:
\begin{theorem*}
Let $f:\mathbb{R}^n \rightarrow \{-1,1\}$ be a linear $k$-junta with surface area at most $s$. Then, there is an algorithm \textsf{Find-invariant-structure} which on error parameter $\epsilon>0$, makes $(s \cdot k/\epsilon)^{O(k)}$ queries and outputs $g: \mathbb{R}^k \rightarrow [-1,1]$ so that the following holds: there exists an orthonormal set of vectors $w_1, \ldots, w_k \in \mathbb{R}^n$ such that
$$
\mathbf{E}[|f(x) - g(\langle w_1, x\rangle, \ldots, \langle w_k, x \rangle)|] = O(\epsilon).
$$
Moreover, for some $g^{\ast} : {\mathbb{R}}^k \to {\mathbb{R}}$:
$$
f(x) = g^{\ast}(\langle w_1, x\rangle, \ldots, \langle w_k, x \rangle).
$$
\end{theorem*}
Informally, the theorem states that it is possible to find the ``linear-invariant" structure (i.e., the structure up to unitary transformation) of $f$ in a number of queries that depends only on $s$ and $k$. Of course, one cannot hope to output the relevant directions $w_1, \ldots, w_k$ explicitly, as even describing these directions requires $\omega(n)$ bits of information and thus at least that many queries. We note that the number of functions in $k$ dimensions with $O(1)$ surface area (even up to a unitary rotation) is $\exp (\exp (k))$, and thus even our output has to be $\exp(k)$ bits long. Thus, it is not possible to significantly improve on our $\exp(k \log k)$ query complexity in finding the linear-invariant structure.
{
\subsubsection*{Testability of linear invariant families of linear $k$-juntas}
Our ability to find the linear-invariant structure of linear $k$-juntas additionally allows us to test subclasses of linear $k$-juntas which are closed under rotation.
\begin{definition}
Let $\mathcal{C}$ be any collection of functions mapping $\mathbb{R}^k$ to $\{-1,1\}$. For any $n \in \mathbb{N}$ let:
\[
\mathsf{Ind}(\mathcal{C})_n= \{f : \exists g \in \mathcal{C} \ \textrm{and orthonormal vectors } w_1, \ldots, w_k \textrm{ such that } f(x) = g(\langle w_1, x\rangle, \ldots, \langle w_k,x \rangle)\}.
\]
Define $\mathsf{Ind}(\mathcal{C}) = \cup_{n=k}^\infty \mathsf{Ind}(\mathcal{C})_n$ and call it the \emph{induced class of $\mathcal{C}$}.
\end{definition}
The two key properties of $\mathsf{Ind}(\mathcal{C})$ are (i) each function $f \in \mathsf{Ind}(\mathcal{C})$ is a linear $k$-junta, (ii) the class $\mathsf{Ind}(\mathcal{C})$ is closed under unitary
transformations.
The definition is a continuous analogue of
the so-called ``induced subclass of $k$-dimensional functions" from \cite{gopalan2009testing} (that paper was about testing functions over $\mathsf{GF}(2)^n$).
The following theorem shows that for any $\mathcal{C}$, $\mathsf{Ind}(\mathcal{C})$ is testable without any dependence on the ambient dimension.
\begin{theorem*}
Let $\mathcal{C}$ be a collection of functions mapping $\mathbb{R}^k$ to $\{-1,1\}$. Further, for every $f \in \mathsf{Ind}(\mathcal{C})$, $\mathsf{surf}(f) \le s$. Then, there is an algorithm \textsf{Test-structure-$\mathcal{C}$} which has the following guarantee: Given oracle access to $f: \mathbb{R}^n \rightarrow \{-1,1\}$ and an error parameter $\epsilon>0$, the algorithm makes $(s \cdot k/\epsilon)^{O(k)}$ queries and distinguishes between the cases (i) $f \in \mathsf{Ind}(\mathcal{C})$ and (ii) $f$ is $\epsilon$-far from every function $g \in \mathsf{Ind}(\mathcal{C})$.
\end{theorem*}
A particularly important instantiation of the
above theorem is the following: Let $\mathcal{C}_{B}$ be any collection of functions mapping $\{-1,1\}^k \rightarrow
\{-1,1\}$ and let $\mathcal{C}$ be defined as
\[
\mathcal{C} = \{g: x \mapsto h(\langle w_1, x \rangle - \theta_1, \ldots, \langle w_k, x \rangle - \theta_k) | \ w_1,\ldots, w_k \in \mathbb{R}^k, \ \theta_1, \ldots, \theta_k \in \mathbb{R}, \ h \in \mathcal{C}_B \}.
\]
Note that $\mathcal{C}$ defined above is the set of functions obtained by composing a function from $\mathcal{C}_B$ with $k$-dimensional halfspaces. Consequently, $\mathsf{Ind}(\mathcal{C})$ is the class of all functions which can be obtained by composing a function from $\mathcal{C}_B$ with halfspaces. As an example, if $\mathcal{C}_B$ consists of the $\mathsf{AND}$ function on $k$ or fewer bits, then $\mathsf{Ind}(\mathcal{C})$ is the class of ``intersections of $k$-halfspaces". Since the surface area of any Boolean function of $k$ halfspaces is bounded by $O(k)$, it follows that this class is testable with $(k/\epsilon)^{O(k)}$ queries.
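As a small concrete illustration of this composition (with hypothetical weights and thresholds, not tied to any routine in the paper), one can write out a member of $\mathsf{Ind}(\mathcal{C})$ for $\mathcal{C}_B = \{\mathsf{AND}\}$ directly:

```python
import numpy as np

def halfspace(w, theta):
    """The {-1,1}-valued halfspace x -> sign(<w, x> - theta)."""
    w = np.asarray(w, dtype=float)
    return lambda x: 1 if np.dot(w, x) - theta >= 0 else -1

def AND(*bits):
    """AND on {-1,1}-valued bits: +1 iff every bit is +1."""
    return 1 if all(b == 1 for b in bits) else -1

def intersection_of_halfspaces(ws, thetas):
    """Compose AND with k halfspaces: a member of Ind(C) for C_B = {AND}."""
    hs = [halfspace(w, t) for w, t in zip(ws, thetas)]
    return lambda x: AND(*(h(x) for h in hs))

# Intersection of two halfspaces in R^2: the (closed) first quadrant.
f = intersection_of_halfspaces([[1, 0], [0, 1]], [0.0, 0.0])
inside = f([1.0, 2.0])    # both constraints hold
outside = f([1.0, -2.0])  # second constraint fails
```

The same recipe with a different $h \in \mathcal{C}_B$ yields any other Boolean combination of halfspaces.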
Roughly speaking, the algorithm \textsf{Test-structure-$\mathcal{C}$} works as follows:
we first run the routine \textsf{Test-linear-junta} -- if the target function $f$ passes this test, we are guaranteed that it is (very close to) a linear $k$-junta with surface area $s$. We then run the routine \textsf{Find-invariant-structure}. If the output of this step is $g$, then we can check whether $g$ is close to some function in $\mathsf{Ind}(\mathcal{C})_k$ and accept accordingly. We crucially note here that the last step, namely checking whether $g$ is close to a function in $\mathsf{Ind}(\mathcal{C})_k$ makes no queries to $f$. While the overall intuition of this procedure is obvious, the precise proof is more delicate and is given in Section~\ref{aff:inv}.
}
\begin{comment}
the algorithm \textsf{Test-structure-$\mathcal{C}$} proceeds as follows:
\begin{enumerate}
\item Run the routine \textsf{Test-linear-junta} with rank parameter $k$, surface area parameter $s$ and error parameter $\delta>0$ (where $\delta \approx (\epsilon / (s \cdot k))^{O(k)}$). If the test passes, go to Step~2.
\item Run the routine \textsf{Find-invariant-structure}
with surface area parameter $2s$, rank parameter $k$ and error parameter $\epsilon>0$. Let $g: \mathbb{R}^\ell \rightarrow \{-1,1\}$ be the output of this routine.
\item If the function $g: \mathbb{R}^{\ell} \rightarrow \{-1,1\}$ is $\epsilon$-close to a class $\mathcal{C}$, then accept. Else, reject.
\end{enumerate}
\end{comment}
\subsection{Related Work}
\paragraph{Testing Boolean juntas}
As we have already mentioned, the problem of testing juntas on $\{-1, 1\}^n$
has already been well-studied. For example, it is known~\cite{blais2009testing,Chen:2017:SQC} that $\tilde \Theta(k^{3/2})$
queries are necessary and sufficient for non-adaptively testing $k$-juntas
with respect to the uniform distribution, while $\tilde \Theta(k)$ queries
are necessary and sufficient in the adaptive setting~\cite{blais2012property}.
It even turns out to be possible to test $k$-juntas with respect to an
unknown distribution~\cite{CLSSX18}, although in that setting the non-adaptive
query complexity becomes exponential in $k$.
{{We emphasize that while the problem of junta testing inspires the problems considered in this paper, junta testing algorithms have no bearing on the problem of testing linear juntas
-- e.g., unlike~\cite{CLSSX18}, there is no reason to believe that distribution-free testing of
linear juntas on ${\mathbb{R}}^n$ is even possible, given that the space of probability
measures on ${\mathbb{R}}^n$ is much richer than the space of probability measures on $\{-1, 1\}^n$.}}
\paragraph{Learning juntas of half-spaces.}
{There has been extensive work on {\em learning} intersections and other functions of $k$ half-spaces~\cite{BlumKannan:97, vempala2010random, VX13, KOS:08}.
Note that these algorithms (necessarily) require time polynomial in $n$ (whereas our \emph{raison d'etre is a query complexity independent of $n$}). In particular,
\cite{BlumKannan:97} provided conditions under which intersections of halfspaces can be learnt under the uniform distribution on the ball.
Vempala~\cite{vempala2010random} extended their result to arbitrary log-concave distributions.
In terms of the expressivity of the function class, \cite{VX13} explicitly considered the problem of learning linear $k$-juntas (they called it subspace juntas) and showed that a linear $k$-junta of the form
$g(\langle w_1, x \rangle, \ldots, \langle w_k, x \rangle)$ is learnable in polynomial time if the function $g$ is identified by low moments and robust to small rotations in $\mathbb{R}^n$. Along a related but different axis, \cite{KOS:08} showed that functions of bounded surface area in the Gaussian space are learnable in polynomial time. Finally, we remark that there also has been work in learning intersections and other functions of halfspaces over the Boolean hypercube as well~\cite{KOS:02, gopalan2012learning}.
}
\ignore{learned some functions $g$ of $k$ half-spaces in polynomial time if the functions $g$ are identified by low moments and robust to small rotations in ${\mathbb{R}}^n$, while \cite{KOS:08} learned functions of bounded Gaussian surface area.}
\paragraph{Linearly Invariant Testing over Finite Fields}
We note that the set of linear-juntas is linearly invariant. If $f$ is a linear $k$-junta and $B$ is any $n \times n$ matrix then $x \mapsto f(Bx)$ is also a linear $k$-junta.
Over finite fields, \cite{KaufmanSudan:08} studied general criteria for when a linearly invariant property is testable, see also
\cite{bhattacharyya2013}. In particular, \cite{gopalan2009testing} gave a $2^{O(k)}$ query complexity algorithm to test linear juntas over finite fields. Moreover, they also show that an exponential dependence on $k$ is necessary.
This should be contrasted with our result which shows that linear juntas over the Gaussian space can be tested with $\mathsf{poly}(k)$ queries.
{\paragraph{Testing (functions) of halfspaces}
The question of testing halfspaces was first considered in \cite{MORS:10} who showed that in the Gaussian space (as well as the Boolean space), halfspaces are testable with $O(1)$ queries. Subsequently, the second and third authors (Mossel and Neeman~\cite{mossel2015robust}) gave a different testing algorithm for a single halfspace in the Gaussian space. In fact, Harms~\cite{harms19} recently showed that halfspaces over any rotationally invariant distribution can be tested with a sublinear number of queries.
However, as far as we are aware, prior to our work, no non-trivial bounds were known even for testing the intersection of two halfspaces. As remarked earlier, it follows from our work that for any $k$, intersections of $k$-halfspaces can be tested in the Gaussian space with $\exp(k \log k)$ queries.}\\
\subsection{Techniques}
A major difference between linear juntas over finite fields and linear juntas over Gaussian space is the ``infinitesimal geometry" that can be used in the latter and does not exist in the former.
In particular, the linear part $\mathcal{W}_1(f)$
of the Hermite expansion of $f$ is approximately given by
$e^{-t} (P_t f - \operatorname{{\bf E}}[f])$ for large $t$. Here $P_t$ is the Ornstein-Uhlenbeck operator.
Both the quantities, $\operatorname{{\bf E}}[f]$ and $P_t f$ can be approximated by sampling a small number of points from the Gaussian distribution and evaluating $f$ at those points.
Moreover, if $f(x) = g(\langle u_1, x \rangle, \ldots, \langle u_k, x \rangle)$ is a linear junta, then the linear part of its Hermite expansion, $\mathcal{W}_1(f)$, lies in the span of $u_1,\ldots,u_k$.
We would like to obtain ``many more directions" that lie in the span of $u_1,\ldots,u_k$.
We do so by considering functions of the form $f_{t,y}(x) = f(e^{-t} y + \sqrt{1-e^{-2 t}} x)$, for randomly chosen $y$ and an appropriate value of $t$
(the experts will recognize $f_{t,y}$ as part of the definition of the Ornstein-Uhlenbeck operator). Note that $f_{t,y}$ is also a linear junta defined by the same directions $u_1,\ldots,u_k$, and therefore the linear part of the Hermite expansion of $f_{t,y}$ is also in the span of $u_1,\ldots,u_k$.
It is now natural to propose the following algorithm to test if a function is a linear $k$-junta: choose points $y_i$ at random and ``compute''
$\mathcal{W}_1(f_{t,y_i})$ at these points. Then if the rank of the matrix spanned by
$(\mathcal{W}_1(f_{t,y_i}))_i$ is at most $k$, then output YES; otherwise, output NO.
Of course, actually computing $\mathcal{W}_1(f_{t,y})$ requires $\mathrm{poly}(n) \gg \mathrm{poly}(k)$ samples.
Instead we will approximately compute the Gram matrix
\[
A_{i,j} = \langle \mathcal{W}_1(f_{t,y_i}), \mathcal{W}_1(f_{t,y_j}) \rangle
\]
and test if it is close or far from a matrix of rank $k$.
One advantage of using the Gram matrix is that we can evaluate the entries $A_{i,j}$ by sampling random inputs to evaluate the expected values
\[
\operatorname{{\bf E}}[\mathcal{W}_1(f_{t,y_i})(x) \mathcal{W}_1(f_{t,y_j})(x)].
\]
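The following numerical sketch illustrates the phenomenon underlying this test. It is not the query-efficient algorithm itself (it estimates each $\mathcal{W}_1(f_{t,y})$ coordinate-wise by Monte Carlo, which costs $\mathrm{poly}(n)$ samples); the target function, directions, and sample sizes are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, t = 6, 2, 0.5
# Hidden orthonormal directions u_1, u_2 spanning the relevant subspace.
U = np.linalg.qr(rng.standard_normal((n, k)))[0]

def f(x):
    """A linear 2-junta: the intersection of two halfspaces in span(u_1, u_2)."""
    p = x @ U  # accepts a batch of points (one per row)
    return np.where((p[..., 0] >= 0.1) & (p[..., 1] >= -0.2), 1.0, -1.0)

def W1(g, m=40000):
    """Monte Carlo estimate of the degree-1 Hermite coefficients E[g(x) x]."""
    x = rng.standard_normal((m, n))
    return (g(x)[:, None] * x).mean(axis=0)

# W1 of the shifted functions f_{t,y}(x) = f(e^{-t} y + sqrt(1-e^{-2t}) x).
a, b = np.exp(-t), np.sqrt(1 - np.exp(-2 * t))
ys = rng.standard_normal((5, n))
V = np.array([W1(lambda x, y=y: f(a * y + b * x)) for y in ys])

# Each estimated W1 vector lies (up to sampling noise) in span(u_1, u_2),
# so the matrix of W1 vectors has numerical rank at most k = 2.
residual = np.linalg.norm(V - (V @ U) @ U.T, axis=1).max()
third_sv = np.linalg.svd(V, compute_uv=False)[k]
```

The small third singular value is exactly the rank signal that the Gram-matrix test looks for.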
How do we know that $\mathcal{W}_1(f_{t,y_i})(x)$ are not very close to $0$?
If $f$ has a bounded surface area then $f$ is close to the noise stable function $P_t f$. For such noise stable functions, we prove that with good probability at a random point $x$,
$\mathcal{W}_1(f_{t,y_i})(x)$ will be of non-negligible size. In fact, one of our main technical lemmas (Lemma \ref{lem:subspace-escape}) proves much more. It shows that if $f$ is $\epsilon$-far from every linear $k$-junta, then for any subspace $W$ with co-dimension at most $k$, the following holds: for a random $y$, with probability at least $\mathrm{poly}(\epsilon)$, the projection of $\mathcal{W}_1(f_{t,y})$ into $W$ will have norm at least
$\mathrm{poly}(\epsilon)$. This result is later combined with a perturbation argument to show that if $f$ is $\epsilon$-far from a linear $k$-junta then indeed the Gram matrix will have $k+1$ large eigenvalues.
Since our analysis relies on the function $f$ having surface area at most $s$, the first stage of the algorithm uses the algorithm by the third author \cite{neeman2014testing} to test if the function of interest is of bounded surface area.
The algorithm to identify the linear invariant structure of $f$ builds on the ideas in the algorithm to test linear $k$-juntas. More precisely, we can show that if $f$ is a linear $k$-junta with surface area $s$,
\begin{enumerate}
\item we can find directions $y_1,\ldots, y_\ell$ such that
$f$ is close to a function on the space spanned by the directions $\mathcal{W}_1(f_{t,y_1}),\ldots,\mathcal{W}_1(f_{t,y_\ell})$ (for some $\ell \le k$).
\item While we cannot find $\mathcal{W}_1(f_{t,y_j})$ explicitly for any $j$, we can evaluate $\langle \mathcal{W}_1(f_{t,y_j}), x\rangle$ at any point $x$ up to good accuracy.
\item With the above observation, the high level idea is to \emph{try out all smooth functions} on the subspace spanned by $\{\langle\mathcal{W}_1(f_{t,y_1}), x\rangle ,\ldots,\langle \mathcal{W}_1(f_{t,y_\ell}),x \rangle\}$. Perform \emph{hypothesis testing} for each such function against $f$ and output the most accurate one.
\end{enumerate}
The crucial part in the above argument is that even if we have $\mathcal{W}_1(f_{t,y_1}),\ldots,\mathcal{W}_1(f_{t,y_\ell})$ implicitly, the space of ``all smooth functions" on $\mathsf{span}(\langle\mathcal{W}_1(f_{t,y_1}), x\rangle ,\ldots,\langle \mathcal{W}_1(f_{t,y_\ell}),x \rangle)$ has a cover whose size is independent of $n$. This lets us identify the linear invariant function defining $f$ with query complexity just dependent on $k$ and $s$.
In order to prove lower bounds in terms of surface area, we construct a distribution over linear $1$-juntas
with large surface area by splitting ${\mathbb{R}}^2$ into many very thin parallel strips (oriented in a random direction)
and assign our function a random $\pm 1$ value on each strip. (Note that the surface area of such a function
is proportional to the number of strips.) The intuition is
that no algorithm that makes non-adaptive queries can tell that such a random
function is a 1-junta, because in order to ``see'' one of these strips, the
algorithm would need to have queried multiple far-away points in a single
strip. But if the number of queries is small relative to the number of strips
then this is impossible -- with high probability every pair of far-away query points
will end up in different strips.
In order to make this intuition rigorous, we also introduce a distribution on linear $2$-juntas
by randomly ``cutting'' the thin strips once in the orthogonal direction. We show that
for any non-adaptive set of queries, the two distributions induce almost identical query distributions,
and Yao's minimax lemma implies that no algorithm can distinguish between our random $1$-juntas and our
random $2$-juntas.
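The striped construction is easy to realize concretely. The sketch below (with arbitrary strip count, width, and seed) builds such a random linear $1$-junta in $\mathbb{R}^2$ and checks its defining property: moving along a strip, i.e., orthogonally to the hidden direction, never changes the value.

```python
import numpy as np

rng = np.random.default_rng(1)
num_strips, width = 200, 0.05           # strips covering [-5, 5]
                                        # surface area grows with num_strips
u = rng.standard_normal(2)
u /= np.linalg.norm(u)                  # random strip orientation
signs = rng.choice([-1, 1], size=num_strips)  # random +-1 label per strip

def f(x):
    """Random striped linear 1-junta: the value depends only on <u, x>."""
    idx = int(np.clip((np.dot(u, x) + 5) // width, 0, num_strips - 1))
    return int(signs[idx])

x = rng.standard_normal(2)
v_perp = np.array([-u[1], u[0]])        # direction orthogonal to u
same = f(x) == f(x + 3.0 * v_perp)      # moving along the strip preserves f
```

Randomly cutting each strip once in the direction $v_\perp$ gives the matching family of $2$-juntas used in the indistinguishability argument.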
\section{Some useful results from linear algebra}
The next lemma states that for any $v_1, \ldots, v_\ell$ which are $(\eta,\gamma)$-linearly independent, we can
find a set of vectors $(w_1, \ldots, w_\ell)$ (expressed as linear combinations of $v_1, \ldots, v_\ell$) which is close to being an orthonormal basis of $\mathsf{span}(v_1, \ldots, v_\ell)$, provided we have sufficiently good approximations of $\{\langle v_i, v_j \rangle\}_{1 \le i,j \le \ell}$. Modulo the \emph{quantitative estimates}, this is essentially just a consequence of a procedure such as Gram-Schmidt orthogonalization. However, the complexity of our testing algorithm depends on the quantitative estimates, so we work out the linear algebra here.
\begin{lemma}~\label{prop:linear}
Let $v_1, \ldots, v_\ell$ be $(\eta, \gamma)$-linearly independent vectors. Then, for any error parameter $\nu>0$ and $\lambda =\lambda(\ell,\nu,\eta, \gamma)$ defined as
$$
\lambda= 2 \frac{\nu}{\ell^2 \cdot \eta} \cdot \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{3\ell+3},
$$ given numbers $\{\beta_{i,j}\}_{1\le i,j \le \ell}$ such that $|\beta_{i,j} - \langle v_i, v_j \rangle| \le \lambda$, we can compute numbers $\{\alpha_{i,j}\}_{1\le i,j \le \ell}$ such that:
\begin{enumerate}
\item For $\xi (\ell, \eta, \gamma)$ defined as
\[
\xi ( \ell, \eta, \gamma) =\sqrt{2\ell} \cdot \bigg(\frac{2 \ell \cdot \eta}{\gamma} \bigg)^{\ell+1},
\]
we have
$|\alpha_{i,j}| \le \xi ( \ell,\eta, \gamma)$.
\item There is an orthonormal basis $(w_1, \ldots, w_\ell)$ of $\mathsf{span}(v_1, \ldots, v_\ell)$ such that $\Vert w_{i} - \sum_{j}\alpha_{i,j} v_j \Vert_2 \le \nu$ for every $i$.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider the symmetric matrix $\Sigma\in\mathbb{R}^{\ell \times \ell}$ defined as $\Sigma_{i,j} = \langle v_i, v_j \rangle$. By Proposition~\ref{prop:sing-1}, $\Sigma$ is non-singular. Define the matrix $\Gamma = \Sigma^{-1/2}$. It is easy to see that the columns of
$V \cdot \Sigma^{-1/2}$ form an orthonormal basis of $\mathsf{span}(v_1, \ldots, v_\ell)$. Here $V = [v_1 | \ldots | v_\ell]$. Of course, we cannot compute the matrix $\Sigma$ exactly and consequently, we cannot compute the matrix $\Sigma^{-1/2}$ either. Instead, if we define the matrix $\widetilde{\Sigma}$ as $\widetilde{\Sigma}(i,j) = \beta_{i,j}$, then observe that $\widetilde{\Sigma}$ is symmetric. Next, observe that by Proposition~\ref{prop:sing-1}, we have that
$$
\sigma_{\min}(\Sigma) = \sigma_{\min}^2(V) \geq \bigg(\frac{\gamma}{2 \cdot \ell \cdot \eta} \bigg)^{2 \ell +2}.
$$
Define a parameter $\rho$ as
$$
\rho = \frac{2 \nu}{\ell \cdot \eta} \cdot \bigg( \frac{\gamma}{2 \cdot \ell \cdot \eta}\bigg)^{\ell+1}.
$$
Now, with this setting, observe that
$$
\ell \cdot \lambda = \rho \cdot \bigg( \frac{\gamma}{2 \cdot \ell \cdot \eta} \bigg)^{2\ell +2} \le \rho \cdot \sigma_{\min} (\Sigma).
$$
Further, since $\Sigma$ and $\widetilde{\Sigma}$ differ entrywise by at most $\lambda$, we have $\Vert \widetilde{\Sigma} - \Sigma \Vert_F\le \ell \cdot \lambda$. First, by Weyl's inequality (Lemma~\ref{lem:Weyl}), we have that
\begin{equation}~\label{eq:sigma-min}
\sigma_{\min}(\widetilde{\Sigma}) \ge \sigma_{\min}({\Sigma})- \Vert \Sigma-\widetilde{\Sigma} \Vert_F \ge (1- \rho) \cdot \sigma_{\min}(\Sigma).
\end{equation}
Thus, $\widetilde{\Sigma}$ is also positive definite.
Now, we apply the matrix perturbation bound to matrices $\Sigma$ and $\widetilde{\Sigma}$ (Corollary~\ref{corr:mat-perturb} with parameter $c = \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{2\ell+2}$)
to obtain that
\[
\Vert \Sigma^{-1/2} - \widetilde{\Sigma}^{-1/2} \Vert \leq \frac{\rho}{2 \big( \frac{\gamma}{2\cdot \ell \cdot \eta}\big)^{\ell+1}} = \frac{\nu}{\ell \cdot \eta} \le \frac{2\nu}{\ell \cdot \eta}.
\]
We now define $\alpha_{i,j} = \widetilde{\Sigma}^{-\frac12}(j,i)$ and $\alpha^{\ast}_{i,j} = {\Sigma}^{-\frac12}(j,i)$. Note that the vectors $w_i = \sum_{j} \alpha^{\ast}_{i,j} v_j$ form an orthonormal basis. As the matrices $\Sigma^{-\frac12}$ and $\widetilde{\Sigma}^{-\frac12}$ are $\frac{2\nu}{\ell \cdot \eta}$-close in operator norm, this immediately implies item 2. To get item 1, we recall the following basic inequality for the Frobenius norm of an inverse matrix: for a symmetric invertible matrix $A \in \mathbb{R}^{\ell \times \ell}$,
$\sigma_{\min}(A) \cdot \Vert A^{-1} \Vert_F \le \sqrt{\ell}$. Thus,
\[
\Vert \widetilde{\Sigma}^{-1/2} \Vert_F \le \frac{\sqrt{\ell}}{\sigma_{\min}(\widetilde{\Sigma}^{1/2})} = \sqrt{\frac{\ell}{\sigma_{\min}(\widetilde{\Sigma})}} \le
\sqrt{2\ell} \cdot \bigg(\frac{2 \ell \cdot \eta}{\gamma} \bigg)^{\ell+1}.
\]
The last inequality uses (\ref{eq:sigma-min}) and the fact that $\rho \le \frac12$. This immediately implies
the first item.
\end{proof}
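The exact ($\lambda = 0$) version of the construction in the proof can be checked numerically: the columns of $V \Sigma^{-1/2}$ are orthonormal, and the coefficients $\alpha_{i,j} = \Sigma^{-1/2}(j,i)$ express them in terms of the $v_j$'s. (The dimensions and seed below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, l = 8, 3
V = rng.standard_normal((n, l))     # columns v_1, ..., v_l (generically independent)

Sigma = V.T @ V                     # Gram matrix of inner products <v_i, v_j>
# Sigma^{-1/2} via the eigendecomposition of the SPD matrix Sigma.
evals, evecs = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Columns w_1, ..., w_l; each w_i = sum_j alpha_{i,j} v_j with
# alpha_{i,j} = Sigma^{-1/2}(j, i), as in the lemma.
W = V @ Sigma_inv_sqrt
ortho_err = np.abs(W.T @ W - np.eye(l)).max()
```

The lemma's work lies in showing this map is stable when $\Sigma$ is only known up to entrywise error $\lambda$.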
\begin{proposition}~\label{prop:sing-1}
Let $v_1, \ldots, v_\ell$ be $(\eta, \gamma)$-linearly independent vectors. Let $V = [v_1 | \ldots | v_\ell]$. Then, the smallest singular value of $V$
is at least $(\frac{ \gamma}{2 \cdot \ell \cdot \eta})^{\ell+1}$.
\end{proposition}
\begin{proof}
Let us set a parameter $\rho = \frac{\gamma}{2\ell \eta}$. Recall that if $\sigma_{\min}(V)$ is the smallest singular value of $V$, then
\[
\sigma_{\min}(V) = \inf_{x : \Vert x \Vert_2=1} \Vert V \cdot x \Vert_2
\]
Let us try to lower bound the right hand side. To do this, let $x \in \mathbb{R}^n$ be any unit vector and note that $V \cdot x = \sum_{1 \le i \le \ell} v_i \cdot x_i$.
Now, let $j$ be the largest index such that $|x_j| \ge \rho^{j}$ (note that there has to be such a $j$ since $x$ is a unit vector and $\rho<1/2$). Define $w = \sum_{i \le j} v_i x_i$. Then, observe that its component in the direction orthogonal to the span of $\{v_1, \ldots, v_{j-1}\}$ is at least $\gamma \cdot \rho^j$ in magnitude. On the other hand, $\Vert \sum_{i > j} v_i x_i \Vert_2 \le \rho^{j+1} \cdot \ell \cdot \eta$. By triangle inequality, we obtain that
\[
\Vert \sum_{i} v_i x_i \Vert_2 \ge \Vert \sum_{i \le j} v_i x_i \Vert_2 - \Vert \sum_{i > j} v_i x_i \Vert_2 \ge \gamma \cdot \rho^j - \ell \cdot \eta \cdot \rho^{j+1} \ge \frac{\gamma \cdot \rho^j}{2}.
\]
The last inequality uses the value of $\rho$. This finishes the proof.
\end{proof}
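The bound can be sanity-checked numerically. The definition of $(\eta,\gamma)$-linear independence is not restated in this section; the check below assumes the natural reading ($\Vert v_i \Vert_2 \le \eta$, and each $v_i$ has a component of norm at least $\gamma$ orthogonal to the span of its predecessors), with hand-picked vectors:

```python
import numpy as np

eta, gamma, l = 1.0, 0.3, 3
# Columns v_1, v_2, v_3: each has norm <= eta = 1, and each v_i has a
# component of norm gamma = 0.3 orthogonal to the span of the previous ones
# (the assumed definition of (eta, gamma)-linear independence).
V = np.array([[1.0, 0.5, 0.5],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

sigma_min = np.linalg.svd(V, compute_uv=False)[-1]
bound = (gamma / (2 * l * eta)) ** (l + 1)   # the proposition's lower bound
```

Here the true $\sigma_{\min}$ is far above the (deliberately conservative) bound.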
\subsubsection{Oracle computation}
We now list several useful claims which all fit the same motif: Given oracle access to $f: \mathbb{R}^n \rightarrow \mathbb{R}$, what \emph{interesting} quantities can be computed?
\begin{lemma}~\label{lem:oracle-access-1}
Given oracle access to $f:\mathbb{R}^n \rightarrow [-1,1]$, error parameter $\eta>0$, there is a function $f_{\partial,\eta}: \mathbb{R}^{n} \rightarrow \mathbb{R}$ such that the following holds for every $\lambda \ge 1$,
\[
\mathop{\Pr}_{x \sim \gamma_n} \big[ \big|f_{\partial,\eta}(x) - \widehat{f}_1(x) \big| > \lambda \cdot \eta \big] \le \lambda^{-2}.
\]
Further, for any $x \in \mathbb{R}^n$, we can compute $f_{\partial,\eta}(x)$ to additive error $\pm \epsilon$ with confidence $1-\delta$ by making $\mathsf{poly}(1/\eta, 1/\epsilon, \log (1/\delta))$ queries to the oracle for $f$.
\end{lemma}
\begin{proof}
Observe that for any $t>0$, $P_t f = \sum_{q \ge 0} e^{-tq} \widehat{f}_q(x)$. This implies that
\[
\frac{P_t f - \mathbf{E}[f]}{e^{-t}} = \widehat{f}_1(x) + \sum_{q>1}e^{-t(q-1)} \widehat{f}_q(x).
\]
Set $t$ so that $e^{-t} = \eta$ and let us define $f_{\partial,\eta}$ as
$
f_{\partial,\eta} = \frac{P_t f - \mathbf{E}[f]}{e^{-t}}.
$ Now, observe that for $h(x)=\sum_{q>1}e^{-t(q-1)} \widehat{f}_q(x)$, $\mathbf{E}[h(x)]=0$ and $\mathsf{Var}[h(x)] \le \eta^2$. We now apply Chebyshev's inequality to obtain
\[
\mathop{\Pr}_{x \sim \gamma_n} \big[ \big|f_{\partial,\eta}(x) - \widehat{f}_1(x) \big| > \lambda \cdot \eta \big] \le \lambda^{-2}.
\]
Next, observe that both $P_tf(x)$ and $\mathbf{E}[f(x)]$ can be computed to error $\pm \epsilon \cdot \eta$ with confidence $1-\frac{\delta}{2}$ using $\mathsf{poly}(1/\eta, 1/\epsilon, \log (1/\delta))$ queries to the oracle for $f$. This immediately implies that $f_{\partial,\eta}$ can be computed to error $\pm \epsilon$ using $\mathsf{poly}(1/\eta, 1/\epsilon, \log (1/\delta))$ queries to the oracle for $f$.
\end{proof}
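A Monte Carlo sketch of this estimator for a single halfspace $f(x) = \mathrm{sign}(x_1)$, whose degree-$1$ part is $\widehat{f}_1(x) = \sqrt{2/\pi}\, x_1$ (the sample sizes and evaluation point below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, eta = 3, 0.1                     # eta = e^{-t}
m = 500_000

def f(x):
    """Halfspace f(x) = sign(x_1); its degree-1 part is sqrt(2/pi) * x_1."""
    return np.sign(x[..., 0])

x0 = np.array([1.0, -0.5, 2.0])     # point at which to evaluate hat{f}_1

# Monte Carlo estimates of P_t f(x0) and E[f]; then
# f_partial = (P_t f(x0) - E[f]) / e^{-t}, as in the proof.
y = rng.standard_normal((m, n))
Ptf = f(eta * x0 + np.sqrt(1 - eta**2) * y).mean()
Ef = f(rng.standard_normal((m, n))).mean()
f_partial = (Ptf - Ef) / eta

exact = np.sqrt(2 / np.pi) * x0[0]  # hat{f}_1(x0) for this halfspace
```

The residual combines the lemma's $O(\eta)$ bias from higher-degree terms with the sampling noise, both small here.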
\begin{lemma}~\label{lem:inner-products}
Given oracle access to functions $f,g : \mathbb{R}^n \rightarrow [-1,1]$, error parameter $\epsilon >0$ and confidence parameter $\delta>0$, there is an algorithm which makes $\mathsf{poly}(1/\epsilon,\log(1/\delta))$ queries to $f,g$ and computes $\langle \widehat{f}_1, \widehat{g}_1 \rangle$ up to error $\epsilon$ with confidence $1-\delta$.
\end{lemma}
\begin{proof}
Consider the function
\[
h(x) = e^{2t} (P_t f(x) - \operatorname{{\bf E}}[f]) (P_t g(x) - \operatorname{{\bf E}}[g]).
\]
Writing out the Fourier expansions of $P_t f$ and $P_t g$, note that $P_t f = \sum_{q \ge 0} e^{-tq} \widehat f_q(x)$,
and so
\[
h(x) = \widehat f_1(x) \widehat g_1(x) + \sum_{\substack{q, r \ge 1 \\ q + r \ge 3}} e^{-t(q + r - 2)} \widehat f_q(x) \widehat g_r(x).
\]
Since $\widehat f_1$ and $\widehat g_1$ are linear functions, $\operatorname{{\bf E}}[\widehat f_1(x) \widehat g_1(x)] = \widehat f_1 \cdot \widehat g_1$. On the other hand, $\operatorname{{\bf E}}[\sum_{q \ge 0} \widehat f_q^2(x)] = \operatorname{{\bf E}}[f^2] \le 1$, and so the Cauchy-Schwarz inequality implies that
\[
\operatorname{{\bf E}}\Big[\sum_{\substack{q, r \ge 1 \\ q + r \ge 3}} e^{-t(q + r - 2)} \widehat f_q(x) \widehat g_r(x)\Big]
\le e^{-t}.
\]
Hence, $|\operatorname{{\bf E}}[h(x)] - \hat f_1 \cdot \hat g_1| \le e^{-t}$. If we choose $t$ so that $e^{-t} = \epsilon/2$, then it
only remains to show that we can estimate $\operatorname{{\bf E}}[h(x)]$ within additive error $\epsilon/2$ with confidence $1 - \delta$.
Let $y$ and $z$ be standard Gaussian random vectors, independent of $x$, and write $P_t f(x) = \operatorname{{\bf E}}_y[f(e^{-t} x + \sqrt{1-e^{-2t}} y)]$ and $P_t g(x) = \operatorname{{\bf E}}_z[g(e^{-t} x + \sqrt{1-e^{-2t}} z)]$. In particular, we can express $\operatorname{{\bf E}}[h(x)]$ in the form $\operatorname{{\bf E}}[J(x, y, z)]$ where
\[
J(x,y,z) = e^{2t} (f(e^{-t} x + \sqrt{1-e^{-2t}} y) - f(y)) (g(e^{-t} x + \sqrt{1-e^{-2t}}z) - g(z)).
\]
Recalling that $e^{2t} = 4/\epsilon^2$, it follows that $J$ takes values in $[-16/\epsilon^2, 16/\epsilon^2]$,
and it follows from Hoeffding's inequality that we can approximate $\operatorname{{\bf E}}[J]$ to additive error $\epsilon/2$
with confidence $1 - \delta$ using $\mathsf{poly}(1/\epsilon, \log(1/\delta))$ samples of $J$. Moreover,
each sample of $J$ can be computed using two oracle queries to $f$ and two oracle queries to $g$.
\end{proof}
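The estimator from the proof can be simulated directly. For the two (hypothetical) halfspaces below, the surviving higher-degree cross terms are tiny, so the empirical mean of $J$ lands close to $\langle \widehat{f}_1, \widehat{g}_1 \rangle = (2/\pi)\langle e_1, w\rangle$:

```python
import numpy as np

rng = np.random.default_rng(5)
m = 1_000_000
et = 0.25                               # e^{-t}; the estimator's bias is at most e^{-t}
w = np.array([1.0, 1.0]) / np.sqrt(2)

def f(x):
    return np.sign(x[..., 0])           # halfspace; degree-1 part sqrt(2/pi) * x_1

def g(x):
    return np.sign(x @ w)               # halfspace in the rotated direction w

# One sample of J per row: J = e^{2t} (f(e^-t x + s y) - f(y)) (g(e^-t x + s z) - g(z)).
x = rng.standard_normal((m, 2))
y = rng.standard_normal((m, 2))
z = rng.standard_normal((m, 2))
s = np.sqrt(1 - et**2)
J = (1 / et**2) * (f(et * x + s * y) - f(y)) * (g(et * x + s * z) - g(z))
estimate = J.mean()

exact = (2 / np.pi) * w[0]              # <hat f_1, hat g_1> for these halfspaces
```

Note the $e^{2t}$ blow-up in the range of $J$, which is exactly why the sample complexity scales polynomially in $1/\epsilon$.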
\begin{definition}
A function $f: \mathbb{R}^n \rightarrow \mathbb{R}$
is said to be a linear $k$-junta if there are at most $k$ orthonormal vectors $u_1, \ldots, u_k \in \mathbb{R}^n$ and a function $g: \mathbb{R}^k \rightarrow \mathbb{R}$ such that
\[
f(x) = g(\inp{u_1}{x}, \ldots, \inp{u_k}{x}).
\]
Further, if $u_1, \ldots, u_k \in W$ (a linear subspace of $\mathbb{R}^n$), then $f$ is said to be a $W$-junta.
\end{definition}
\subsection{Derivatives of functions}
{We will use $D$ to denote the derivative operator. When two sets of variables are involved, we will explicitly indicate the variable with respect to which we are taking the derivative.}
\begin{definition}
For $f: \mathbb{R}^n \rightarrow \mathbb{R}$ ($f \in \mathcal{C}^{\infty}$) and $t \ge 0$, define the function $f_t: \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$,
\[
f_t(y,x) = f(e^{-t} y + \sqrt{1-e^{-2t}} x).
\]
Further, in the same setting as above, we let
$f_{t,y}: \mathbb{R}^n \rightarrow \mathbb{R}$,
\[
f_{t,y}(x) = f(e^{-t} y + \sqrt{1-e^{-2t}} x).
\]
\end{definition}
Let $D_x$ denote the derivative operator with respect to $x$ and let $D_y$ denote the derivative operator with respect to $y$. Then, it is easy to observe that
\begin{equation}~\label{eq:derivative-y-1}
\sqrt{e^{2t}-1} \cdot D_y f_{t}(y,x) = D_x f_{t}(y,x).
\end{equation}
Next, for a function $g: \mathbb{R}^n \rightarrow \mathbb{R}$, define $\mathcal{W}_{1}(g) \in \mathbb{R}^n$ as the vector of degree-$1$ Hermite coefficients of $g$. In other words, the $i^{th}$ coordinate of $\mathcal{W}_1(g)$ is
\[
\mathcal{W}_{1}(g)[i] = \mathbf{E}[g(x) \cdot x_i],
\]
where $x \sim \gamma_n$, the standard $n$-dimensional Gaussian measure. With respect to our earlier definition of $\widehat{g}_1$, observe that we have:
$
\widehat{g}_1(x) = \inp{\mathcal{W}_{1}(g)}{x}.
$
We next prove the following important lemma, which connects the gradient of $P_t f$ at $y$ with $\mathcal{W}_{1}(f_{t,y})$.
\begin{lemma}~\label{lem:derivative-shift}
\[
\mathcal{W}_{1}(f_{t,y})= \sqrt{e^{2t}-1} \cdot D (P_t f)(y).
\]
\end{lemma}
\begin{proof}
First of all, observe that for any function $g: \mathbb{R}^n \rightarrow \mathbb{R}$ with bounded derivatives, and for any $i \in [n]$
\begin{eqnarray*}
\mathbf{E}_{x \sim \gamma_n} \bigg[ \frac{\partial g(x)}{\partial x_i} \bigg] = \int_{x} \frac{\partial g(x)}{\partial x_i}\gamma_n(x) dx = \int_{x} x_i g(x) \gamma_n(x) dx = \mathbf{E}_{x \sim \gamma_n} [x_i \cdot g(x)].
\end{eqnarray*}
While the first and last equalities are trivial, the middle is a consequence of integration by parts. Assuming that
$f$ has bounded derivatives, we may apply this identity to $g = f_{t,y}$, yielding
\begin{eqnarray*}
\mathcal{W}_1(f_{t,y}) &=& \mathbf{E}_x [D_xf_{t}(y,x)]
\\ &=& \sqrt{e^{2t}-1} \cdot \mathbf{E}_x [D_y f_{t}(y,x)] \ \ \textrm{(applying (\ref{eq:derivative-y-1}))}
\\
&=& \sqrt{e^{2t}-1} \cdot D_y (\mathbf{E}_x [f_{t}(y,x)]) = \sqrt{e^{2t}-1} \cdot D_y (P_t f)(y).
\end{eqnarray*}
This proves the lemma in the case that $f$ has bounded derivatives. In the general case, we
choose a sequence of functions that have bounded derivatives and approximate $f_{t,y}$ in $L_2(\gamma)$.
Applying the lemma to these functions and taking the limit proves the general case.
\end{proof}
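The Gaussian integration-by-parts identity $\mathbf{E}[\partial g/\partial x_i] = \mathbf{E}[x_i \cdot g(x)]$ at the heart of this proof is easy to check by simulation (the smooth test function below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 400_000
x = rng.standard_normal((m, 2))

g = np.tanh(x[:, 0] + 0.5 * x[:, 1])   # smooth bounded test function
dg_dx0 = 1 - g**2                      # d/dx_0 of tanh(x_0 + 0.5 x_1)

lhs = dg_dx0.mean()                    # Monte Carlo E[dg/dx_0]
rhs = (x[:, 0] * g).mean()             # Monte Carlo E[x_0 * g(x)]
```

Both estimates converge to the same limit, as integration by parts against the Gaussian density guarantees.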
\begin{lemma}~\label{lem:inner-product-1}
Given oracle access to $f$, noise parameter $t>0$, error parameter $\epsilon>0$, confidence parameter $\delta>0$ and $y_1, y_2 \in \mathbb{R}^n$, there is an algorithm which makes $\mathsf{poly}(1/\epsilon, 1/\delta, 1/t)$ queries to $f$ and computes $\langle D(P_t f)(y_1),D(P_t f)(y_2)\rangle$ up to error $\epsilon$ with confidence $1-\delta$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:derivative-shift}, we have
\[
\langle D(P_t f)(y_1),D(P_t f)(y_2)\rangle = \frac{1}{e^{2t}-1} \cdot \langle \mathcal{W}_1(f_{t,y_1}), \mathcal{W}_1(f_{t,y_2})\rangle.
\]
We can now apply Lemma~\ref{lem:inner-products} to finish the proof.
\end{proof}
\begin{proposition}~\label{prop:derivative-bound}
For any $f: \mathbb{R}^n \rightarrow [-1,1]$, $\Vert D(P_t f)(y) \Vert_2 \le (e^{2t} -1)^{-\frac12}$.
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:derivative-shift}, we have $\Vert \mathcal{W}_1(f_{t,y}) \Vert_2 = \sqrt{e^{2t}-1} \cdot \Vert D(P_t f)(y) \Vert_2$. Now, observe that the range of $f_{t,y}$ is $[-1,1]$ and thus, $\Vert \mathcal{W}_1(f_{t,y}) \Vert_2\le 1$, implying the stated upper bound.
\end{proof}
\begin{lemma}~\label{lem:compute-derivative-x}
Given oracle access to $f: \mathbb{R}^n \rightarrow [-1,1]$, $y \in \mathbb{R}^n$, noise parameter $t>0$,
error parameter $\eta>0$, there is a function
$f_{\partial,\eta,t,y}: \mathbb{R}^n \rightarrow \mathbb{R}$ such that the following holds for every $\lambda \ge 1$,
\[
\Pr_{x \sim \gamma_n} [|f_{\partial,\eta,t,y}(x) - \langle D(P_t f)(y),x\rangle | > \lambda \cdot \eta] \le \lambda^{-2}.
\]
Further, for an error parameter $\epsilon>0$ and a confidence parameter $\delta>0$, we can compute
$f_{\partial,\eta,t,y}$ to additive error $\pm \epsilon$ with confidence $1-\delta$ using $\mathsf{poly}(1/t, 1/\eta, 1/\epsilon, \log(1/\delta))$ queries to $f$.
\end{lemma}
\begin{proof}
We first use Lemma~\ref{lem:derivative-shift} and obtain that
$$
D P_tf(y) = \frac{1}{\sqrt{e^{2t}-1}} \cdot \mathcal{W}_1(f_{t,y}).
$$
Consequently, we have that
\[
\langle DP_tf(y), x\rangle = \frac{1}{\sqrt{e^{2t}-1}} \cdot \widehat{f_{t,y}}_1(x).
\]
The claim now follows from Lemma~\ref{lem:oracle-access-1}.
\end{proof}
\subsection{Some useful inequalities concerning noise stability}
\begin{lemma}~\label{lem:Poincare} \textbf{[Poincar\'{e} inequality]} Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a $\mathcal{C}^1$ function. Then, $\mathsf{Var}[f] \le \mathbf{E}[\Vert Df \Vert_2^2]$.
\end{lemma}
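For intuition, here is a one-dimensional numerical check of the inequality (not used elsewhere), with $f = \sin$: both sides have closed forms, $\mathsf{Var}[\sin g] = (1-e^{-2})/2$ and $\mathbf{E}[\cos^2 g] = (1+e^{-2})/2$, so the check is exact up to sampling error.

```python
import numpy as np

# One-dimensional check of the Gaussian Poincare inequality for f(x) = sin(x):
# Var[f] = (1 - e^{-2})/2 and E[(f')^2] = E[cos^2 g] = (1 + e^{-2})/2.
rng = np.random.default_rng(0)
g = rng.standard_normal(500_000)
var_f = np.var(np.sin(g))
energy = np.mean(np.cos(g) ** 2)
print(var_f, energy)
```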
\begin{definition}~\label{def:surface-area}
For a Borel set $A \subseteq \mathbb{R}^n$, we define its Gaussian surface area $\Gamma(A)$ to be
\[
\Gamma(A) = \liminf_{\delta \rightarrow 0} \frac{\mathsf{vol}(A_{\delta} \setminus A)}{\delta},
\]
provided the limit exists. Here, for any body $K$, $\mathsf{vol}(K)$ denotes the Gaussian volume of $K$, i.e., $\int_{x \in K} \gamma_n(x) dx$. Further, $A_{\delta} = \{x : d(x,A) \le \delta\}$ where $d(x,A)$ denotes the Euclidean distance of $x$ from $A$.
For a function $f : \mathbb{R}^n \rightarrow \{-1,1\}$, we denote its surface area $\Gamma(f) = \Gamma(A_f)$ where $A_f = \{x: f(x)=1\}$.
\end{definition}
Ledoux~\cite{Ledoux:94} (and implicitly Pisier~\cite{Pisier:86}) proved the following connection between noise sensitivity and surface area of functions.
\begin{lemma}~\label{lem:Ledoux}[Ledoux~\cite{Ledoux:94}]
For any $t \ge 0$ and $f : \mathbb{R}^n \rightarrow \{-1,1\}$, $x,y \sim \gamma_n$, we have \[
\Pr_{x,y} [f(x) \not = f(e^{-t} x+ \sqrt{1-e^{-2t}} y)] \le \frac{2\sqrt{t}}{\sqrt{\pi}} \cdot \Gamma(f)\]
\end{lemma}
The following proposition is an immediate consequence of the above lemma.
\begin{proposition}~\label{prop:Ledoux}
Let $f: \mathbb{R}^n \rightarrow \{-1,1\}$, $t \ge 0$ and $\Gamma(f) \le s$. Then,
\begin{enumerate}
\item $\mathbf{E}[(f(x) - P_tf (x))^2] \le 8 s \sqrt{t}$.
\item For any $\epsilon >0$ and $T = O(s^2/\epsilon^2)$, $\sum_{q \ge T} \mathbf{E}[\widehat{f}_q^2] \le \epsilon$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\mathcal{E}_1(x,y)$ denote the event that
$f(x) \not = f(e^{-t} x+ \sqrt{1-e^{-2t}} y)$.
To prove the first item, observe that for any $x$,
\[
(f(x) - P_t f(x))^2 = (2\mathop{\mathbf{E}}_{y \sim \gamma_n}[\mathbf{1}(\mathcal{E}_1(x,y))])^2 =4 \big(\mathop{\mathbf{E}}_{y \sim \gamma_n}[\mathbf{1}(\mathcal{E}_1(x,y))]\big)^2 \le 4 \big(\mathop{\mathbf{E}}_{y \sim \gamma_n}[\mathbf{1}(\mathcal{E}_1(x,y))]\big)
\]
Thus, we obtain that $$\mathbf{E}[(f(x) - P_tf(x))^2] \le 4\mathop{\mathbf{E}}_{x, y \sim \gamma_n}[\mathbf{1}(\mathcal{E}_1(x,y))] \le 8 s\sqrt{t}, $$
where the last inequality is an application of Lemma~\ref{lem:Ledoux}. The second item is the same as Theorem~15 (full version) of~\cite{KOS:08}, so we do not prove it here.
\end{proof}
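The first item can be checked numerically for a halfspace through the origin, whose Gaussian surface area is $1/\sqrt{2\pi}$ and whose noise sensitivity has the closed form $\arccos(e^{-t})/\pi$ (Sheppard's formula). The Monte Carlo sketch below (illustrative only) verifies that the Ledoux bound dominates it:

```python
import numpy as np

rng = np.random.default_rng(1)
t, n, N = 0.5, 2, 300_000

x = rng.standard_normal((N, n))
y = rng.standard_normal((N, n))
x_noisy = np.exp(-t) * x + np.sqrt(1 - np.exp(-2 * t)) * y

f = lambda z: np.sign(z[:, 0])         # halfspace; Gamma(f) = 1/sqrt(2*pi)
ns = np.mean(f(x) != f(x_noisy))       # empirical noise sensitivity at time t

exact = np.arccos(np.exp(-t)) / np.pi  # Sheppard's formula for this halfspace
ledoux = (2 * np.sqrt(t) / np.sqrt(np.pi)) / np.sqrt(2 * np.pi)
print(ns, exact, ledoux)
```

As $t \to 0$ the two quantities agree to first order in $\sqrt{t}$, so the Ledoux bound is asymptotically tight for halfspaces.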
\subsection{Inequalities for matrix perturbation}
We will require some basic results on matrix perturbations. For this, we adopt the following notation: for a Hermitian matrix $A \in \mathbb{C}^{n \times n}$, let $\sigma_1(A) \ge \ldots \ge \sigma_n(A)$ denote its singular values in non-increasing order.
\begin{lemma}~\label{lem:Weyl}[Weyl's inequality]
Let $A, E \in \mathbb{R}^{n \times n}$ be real symmetric matrices.
Then for any $j$, $$|\sigma_j(A+E) - \sigma_j(A)| \le \Vert E \Vert_F. $$
\end{lemma}
\begin{fact}~\label{lem:mat-1-bound}~\cite{schmitt1992perturbation}
Let $A_1, A_2$ be two psd matrices. Let $\sigma_{\min}(A_1), \sigma_{\min}(A_2) \ge c$. Then,
\[
\Vert A_2^{1/2} - A_1^{1/2} \Vert_2 \le \Vert A_2 - A_1 \Vert_2 \cdot \frac{1}{2 \sqrt{c}}.
\]
\end{fact}
\begin{fact}~\label{lem:mat-2-bound}\cite{stewart1973introduction}
Let $A_1, A_2$ be two psd matrices. Let $\sigma_{\min}(A_1)\ge c$ and $\Vert A_2 - A_1 \Vert_2 \le c/100$. Then,
\[
\Vert A_2^{-1} - A_1^{-1} \Vert_2 \le \Vert A_2 - A_1 \Vert_2 \cdot \frac{1}{ c^2}.
\]
\end{fact}
Combining these two facts, we have the following corollary.
\begin{corollary}~\label{corr:mat-perturb}
Let $0<c<1$ and let $A_1$ be a psd matrix such that $\sigma_{\min}(A_1) \ge c$. Let $A_2 - A_1$ be a real symmetric matrix such that $\Vert A_2 - A_1 \Vert_2 \le \xi \cdot c$ for $|\xi| \le 1/100$. Then,
$\Vert A_{1}^{-1/2} - A_2^{-1/2} \Vert_2 \le \frac{\xi}{2\sqrt{c}}$.
\end{corollary}
\begin{proof}
We first apply Fact~\ref{lem:mat-1-bound} to obtain that $$
\Vert A_2^{1/2} - A_1^{1/2} \Vert_2 \le \frac{\xi c^{1/2}}{2}.
$$
Observe that $\sigma_{\min}(A_1^{1/2}) \ge \sqrt{c}$. Since
$c<1$ and $|\xi| \le \frac{1}{100}$, we apply
Fact~\ref{lem:mat-2-bound} to obtain that
$$
\Vert A_2^{-1/2} - A_1^{-1/2} \Vert_2 \le \frac{\xi}{2\sqrt{c}}.
$$
This finishes the proof.
\end{proof}
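A small numerical check of Lemma~\ref{lem:Weyl} and Corollary~\ref{corr:mat-perturb} (the constants are illustrative; the spectrum is kept in $[1.2c, 2]$ so that the check has slack relative to the worst case):

```python
import numpy as np

def inv_sqrt(A):
    # A^{-1/2} for a symmetric positive definite matrix, via eigendecomposition.
    w, V = np.linalg.eigh(A)
    return (V / np.sqrt(w)) @ V.T

rng = np.random.default_rng(2)
n, c, xi = 6, 0.5, 1e-3

# A_1 with spectrum in [1.2c, 2] (so sigma_min(A_1) >= c with room to spare),
# plus a symmetric perturbation of spectral norm exactly xi * c.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A1 = (Q * rng.uniform(1.2 * c, 2.0, size=n)) @ Q.T
S = rng.standard_normal((n, n))
S = S + S.T
E = (xi * c / np.linalg.norm(S, 2)) * S
A2 = A1 + E

# Weyl: eigenvalues move by at most ||E||_F; the corollary bounds the
# movement of the inverse square roots by xi / (2 sqrt(c)).
shift = np.max(np.abs(np.linalg.eigvalsh(A2) - np.linalg.eigvalsh(A1)))
lhs = np.linalg.norm(inv_sqrt(A1) - inv_sqrt(A2), 2)
print(shift, lhs)
```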
\section{A lower bound in terms of surface area} \label{sec:lb}
The query complexity of our testing algorithm depends
on the surface area of the set being tested. In this section,
we prove that a polynomial dependence on the surface area is
necessary for non-adaptive testers, by proving a lower bound for distinguishing
linear $1$-juntas from functions that are far from them in two dimensions.
In particular, we show the following theorem.
\begin{theorem}~\label{thm:lb}
Any non-adaptive algorithm which distinguishes between $1$-juntas with surface area at most $s$ and functions that are $\Omega(1)$-far from every linear $1$-junta must make at least $s^{\frac{1}{10}}$ queries.
\end{theorem}
To prove this theorem, as is standard, we will use Yao's minimax lemma. More specifically, we will describe a distribution $D_1$
over 1-juntas with surface area at most $\Theta(s)$ and a distribution
$D_2$ over functions that are far from 1-juntas and have surface area $\Theta(s)$, such that
for any choice of $x_1, \dots, x_n \in {\mathbb{R}}^2$ with $n = O(s^{1/10})$,
if $f \sim D_1$ and $g \sim D_2$ then $(f(x_1), \dots, f(x_n))$
and $(g(x_1), \dots, g(x_n))$ have almost the same distribution.
We begin with the description of $f \sim D_1$: let $\theta \in {\mathbb{R}}^2$
be a uniformly random unit vector. Choose $a_1, \dots, a_{s-1}$
uniformly from $[-1, 1]$, and then put them in increasing order.
We also set $a_0 = -1$ and $a_{s} = 1$.
Then choose independent
random bits $b_1, \dots, b_{s}$
and define $f$ by
\[
f(x) = \begin{cases}
b_i &\text{if $a_{i-1} < \langle x, \theta \rangle \le a_i$ for some $i \in \{1, \dots, s\}$} \\
1 &\text{otherwise}.
\end{cases}
\]
Clearly, such a function $f$ is a 1-junta,
and its surface area is at most $s+1$
because the boundary of $\{f = 1\}$ is a collection of at most $s+1$
lines, and each line has surface area at most $1/\sqrt{2\pi}$.
To describe the construction of $g \sim D_2$, we begin with
the same collection of random variables as before (i.e., $\theta$,
$a_1, \dots, a_{s-1}$, $b_1, \dots, b_s$).
Let $\theta^\perp$ be a $90^\circ$ clockwise rotation of $\theta$,
choose $z \in [-1, 1]$ independent of the other random variables,
and define $g$ by
\[
g(x) = \begin{cases}
b_i \mathrm{sign}(\langle x, \theta^\perp \rangle - z) &\text{if $a_{i-1} < \langle x, \theta \rangle \le a_i$ for some $i \in \{1, \dots, s\}$} \\
1 &\text{otherwise}.
\end{cases}
\]
Note that the boundary of $\{g = 1\}$ is contained in at most $s+2$
lines, and so it has surface area at most $s+2$.
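For concreteness, the two constructions can be sampled as follows (a vectorized sketch; the helper \textsf{sample\_pair} and its interface are ours, not part of the proof):

```python
import numpy as np

def sample_pair(s, rng):
    # Coupled draws f ~ D_1 and g ~ D_2, sharing theta, the a_i and the b_i.
    phi = rng.uniform(0, 2 * np.pi)
    theta = np.array([np.cos(phi), np.sin(phi)])
    theta_perp = np.array([theta[1], -theta[0]])   # 90-degree clockwise rotation
    a = np.concatenate([[-1.0], np.sort(rng.uniform(-1, 1, s - 1)), [1.0]])
    b = rng.choice([-1.0, 1.0], size=s)            # random strip labels b_1..b_s
    z = rng.uniform(-1.0, 1.0)

    def evaluate(x, with_split):
        p = x @ theta
        inside = (p > -1.0) & (p <= 1.0)
        i = np.clip(np.searchsorted(a, p, side="left") - 1, 0, s - 1)
        vals = b[i]
        if with_split:   # g flips sign across the line <x, theta_perp> = z
            vals = vals * np.sign(x @ theta_perp - z)
        return np.where(inside, vals, 1.0)

    return (lambda x: evaluate(x, False)), (lambda x: evaluate(x, True)), theta_perp

rng = np.random.default_rng(3)
f, g, theta_perp = sample_pair(50, rng)
x = rng.standard_normal((1000, 2))
print(np.mean(f(x) == f(x + 2.5 * theta_perp)),    # f is constant along theta_perp
      np.mean(g(x) == g(x + 2.5 * theta_perp)))    # g is not
```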
We will prove below that (with high probability) functions drawn from $D_2$ are far from 1-juntas.
Then the following theorem demonstrates that testing 1-juntas with surface area $\Theta(s)$
requires $s^{\Omega(1)}$ queries.
\begin{theorem}\label{thm:surface-area-lower-bound}
For any query set $x_1, \dots, x_n$ with $n \le s^{1/10}$, if $f \sim D_1$ and $g \sim D_2$
then the distributions of $(f(x_1), \dots, f(x_n))$ and $(g(x_1), \dots, g(x_n))$ are $C s^{-1/10}$-close
in total variation distance.
\end{theorem}
In order to study the distinguishability of $D_1$ and $D_2$, we give a slightly different
description of $f \sim D_1$ and $g \sim D_2$:
for $i = 1, \dots, s$ set
\begin{eqnarray*}
S_i^+ &=&\{x: a_{i-1} < \langle x, \theta \rangle \le a_i \text{ and } \langle x, \theta^\perp \rangle \ge z\} \\
S_i^- &=&\{x: a_{i-1} < \langle x, \theta \rangle \le a_i \text{ and } \langle x, \theta^\perp \rangle < z\} \\
S_i &=& S_i^- \cup S_i^+,
\end{eqnarray*}
and note that $f$ was defined by independently assigning a random $\pm 1$ value on each set $S_i$,
while $g$ was defined by independently assigning opposite random $\pm 1$ values on each pair
$S_i^+$, $S_i^-$.
Also, $f$ and $g$ are both identically one on ${\mathbb{R}}^2 \setminus \bigcup S_i$.
Let $x_1, \dots, x_n$ be the set of query points, and consider the event
that for every $i$, at least one of $S_i^+$ or $S_i^-$ contains no point in $x_1, \dots, x_n$;
call this event $A$.
Then $A$ depends on $x_1, \dots, x_n$, $\theta$, and $a_1, \dots, a_{s-1}$, but not on $b_1,\dots,b_s$.
Thanks to the description of $f$ and $g$ above, conditioned on
$A$ the random variables $(f(x_1), \dots, f(x_n))$ and $(g(x_1), \dots, g(x_n))$
have the same distribution. In particular, we can couple $f$ and $g$ so that
$(f(x_1), \dots, f(x_n)) = (g(x_1), \dots, g(x_n))$ with probability at least $\Pr[A]$,
and so we will prove Theorem~\ref{thm:surface-area-lower-bound} by showing that for
any choice of $x_1, \dots, x_n$ with $n \le s^{1/10}$,
$\Pr[\neg A] \le C s^{-1/10}$.
To do this, we will divide the pairs $(x_i, x_j)$ into
``close'' pairs and ``far'' pairs: we say that $x_i$ and $x_j$ are $\delta$-close if
$|x_i - x_j| \le \delta$, and $\delta$-far otherwise.
The following lemma will complete the proof of Theorem~\ref{thm:surface-area-lower-bound}, because
it implies that with high probability no pair of points lies in the same strip $S_i$
but on different sides of the line $\{x: \langle x, \theta^\perp \rangle = z\}$.
\begin{lemma}\label{lem:close-and-far}
Suppose that $n \le s^{1/10}$ and set $\delta = s^{-1/3}$. For any set $x_1, \dots, x_n$, with probability
at least $1 - C s^{-1/10}$:
\begin{enumerate}
\item no pair of points $x_i, x_j$ that are $\delta$-far belongs to the same set $S_k$,
for any $k \in \{1, \dots, s\}$.
\item every pair of points $x_i, x_j$ that are $\delta$-close
lie on the same side of the line $\{x: \langle x, \theta^\perp\rangle = z\}$.
\end{enumerate}
\end{lemma}
The first step of Lemma~\ref{lem:close-and-far} is the simple observation that far points remain
reasonably far even after projecting them in the direction $\theta$.
\begin{lemma}\label{lem:random-inner-product}
For all sufficiently small $\delta$ and any $x \in {\mathbb{R}}^2$, $\Pr(|\langle \theta, x\rangle| \le \delta |x|) \le \delta$.
\end{lemma}
\begin{proof}
If $\phi$ is the angle between $\theta$ and $x$ then $|\langle \theta, x\rangle| \le \delta |x|$
exactly when $|\cos \phi| \le \delta$, which has probability $\frac{\cos^{-1}(-\delta) - \cos^{-1}(\delta)}{\pi}$.
Since $\cos^{-1}$ has derivative $-1$ at zero, this is approximately $\frac{2}{\pi} \delta$ for small $\delta$.
In particular, if $\delta > 0$ is sufficiently small then this probability is at most $\delta$.
\end{proof}
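A Monte Carlo check of the lemma in the plane (illustrative only; the exact probability is $\frac{2}{\pi}\arcsin(\delta)$, which is at most $\delta$ for small $\delta$):

```python
import numpy as np

rng = np.random.default_rng(4)
delta, N = 0.1, 400_000

phi = rng.uniform(0, 2 * np.pi, N)
theta = np.stack([np.cos(phi), np.sin(phi)], axis=1)   # uniform unit vectors in R^2
x = np.array([1.0, 0.0])                               # take |x| = 1 w.l.o.g.

p_small = np.mean(np.abs(theta @ x) <= delta)
print(p_small, (2 / np.pi) * np.arcsin(delta))         # exact value for comparison
```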
\begin{proof}[Proof of Lemma~\ref{lem:close-and-far}]
Let $\ell_k = \ell_k(\theta, a_k)$ be the line $\{x: \langle x, \theta \rangle = a_k\}$.
By Lemma~\ref{lem:random-inner-product} applied to $x_i - x_j$, if $x_i$ and $x_j$ are $\delta$-far
then with probability at least $1 - \delta$, $|\langle \theta, x_i - x_j \rangle| \ge \delta^2$.
By a union bound, with probability at least $1 - n^2 \delta$,
$|\langle \theta, x_i - x_j \rangle| \ge \delta^2$ for every $\delta$-far pair $x_i, x_j$; from now
on, we will condition on this event (call it $\Omega_1$) occurring.
Now, if either $\langle \theta, x_i \rangle$
or $\langle \theta, x_j \rangle$ lies outside of the interval $[-1, 1]$ then $x_i$ and $x_j$
do not both lie in any single $S_k$. On the other hand, if both $\langle \theta, x_i \rangle$
and $\langle \theta, x_j \rangle$ lie in $[-1, 1]$, then each line $\ell_k$ has (independently)
probability at least $|\langle \theta, x_i - x_j \rangle|/2 \ge \delta^2/2$ to ``split'' $x_i$ from $x_j$.
Hence, with probability at least $1 - (1 - \delta^2/2)^{s-1} \ge 1 - \exp(-\delta^2(s-1)/2)$, there will
be a line $\ell_k$ that splits $x_i$ from $x_j$, and so they will not belong to any single set $S_k$.
Taking a union bound over all pairs $x_i, x_j$, we see that (conditioned on $\Omega_1$) with probability
at least $1 - n^2 \exp(-\delta^2(s-1)/2)$, no pair of $\delta$-far points lands in the same $S_k$.
Removing the conditioning on $\Omega_1$ changes the probability bound to
$1 - n^2 \delta - n^2 \exp(-\delta^2(s-1)/2)$, which with our choice of parameters is at least
$1 - C s^{-1/10}$.
If $x_i$ and $x_j$ are $\delta$-close then $|\langle \theta^\perp, x_i - x_j\rangle| \le \delta$, and hence
the probability that they land on opposite sides of the line
$\{x: \langle x, \theta^\perp \rangle = z\}$ is at most $O(\delta)$. By a union bound over all pairs,
with probability at least $1 - C n^2 \delta \ge 1 - C s^{-1/10}$,
every pair of $\delta$-close $x_i, x_j$ land on the same side of that line.
\end{proof}
\subsection{$D_2$ is far from a 1-junta}
So far, we have shown that one cannot distinguish $D_1$ from $D_2$ with few queries. It remains to
show that functions drawn
from $D_2$ are (with high probability) far from 1-juntas; it will then follow that one cannot test
1-juntas with $O(s)$ surface area using fewer than $s^{1/10}$ queries.
\begin{theorem}\label{thm:far-from-1-junta}
There is a constant $c > 0$ such that with probability at least $1 - \mathsf{poly}(1/s)$
over $g \sim D_2$, $g$ is $c$-far from every 1-junta.
\end{theorem}
Now recall that the construction of $D_1$ and $D_2$ involved dividing up the
strip $\{x: \langle \theta, x \rangle \in (-1, 1]\}$ into $s$ strips $S_1,
\dots, S_s$ and assigning random values on each strip.
Since both the construction of $D_2$ and the notion of distance to a 1-junta are rotationally
invariant, we will assume from now on that $\theta = e_1$, which means that the strips
$S_1, \dots, S_s$ are vertically oriented.
Let $U^+ = \bigcup_{i: b_i = 1} (a_{i-1}, a_{i}]$ and let $U^- = [-1, 1] \setminus U^+$.
\begin{definition}
Let $I \subset [-1, 1]$ be an interval. We say that $I$ is \emph{$\delta$-balanced} if
both $|I \cap U^+|$ and $|I \cap U^-|$ are at least $\delta |I|$, where $|\cdot|$ denotes the one-dimensional
Lebesgue measure.
We say that $I$ is \emph{wide} if $|I| \ge \frac{1}{s}$.
We extend these definitions to strips in two dimensions: say that $I \times {\mathbb{R}}$ is $\delta$-balanced (resp. wide)
if $I$ is $\delta$-balanced (resp. wide).
\end{definition}
\begin{definition}
For any line $\ell \subset {\mathbb{R}}^2$, we say that $\ell$ is $\delta$-balanced (with respect to the function $g$ under consideration) if both
\[
\int_{\ell \cap \{g = 1\}} e^{-|x|^2/2} \, dx \quad \text{and} \quad
\int_{\ell \cap \{g = -1\}} e^{-|x|^2/2} \, dx
\]
are at least
\[
\delta \int_{\ell} e^{-|x|^2/2}\, dx.
\]
\end{definition}
We will now describe the outline of the proof of Theorem~\ref{thm:far-from-1-junta}: note that if $h$ is a $1$-junta
then $h(x) = \tilde h(\langle \phi^\perp, x\rangle)$ for some unit vector $\phi$ (where, as before, $\phi^\perp$ denotes its $90^\circ$ rotation).
Now, Fubini's theorem implies that
\[
2\pi \|h - g\|_1
= \int_{{\mathbb{R}}^2} e^{-|x|^2/2} |h - g| \, dx
= \int_{\mathbb{R}} \int_{\{x: \langle x, \phi^\perp \rangle = a\}} e^{-|x|^2/2} |\tilde h(a) - g(x)| \, dx \, da.
\]
Now, whenever the line $\{x: \langle x, \phi^\perp\rangle = a\}$ is $\delta$-balanced, the inner integral is
at least $\delta \int_{\{x: \langle x, \phi^\perp\rangle = a\}} e^{-|x|^2/2} \, dx$. Therefore, in order to prove Theorem~\ref{thm:far-from-1-junta},
it suffices to show that there is a constant $\delta$ such that at least a constant fraction of the lines
$\{x: \langle x, \phi^\perp \rangle = a\}$ are $\delta$-balanced.
To be precise, let $L(\phi)$ be the set of lines of the form $\{x: \langle
\phi^\perp, x\rangle = a\}$ for $a \in [-10, 10]$. Since every line in $L(\phi)$ passes within distance $10$ of the origin, its total Gaussian weight $\int_\ell e^{-|x|^2/2}\,dx$ is bounded from below by a constant, so
it suffices to show that there is a constant $\delta > 0$ such that with high probability,
for every $\phi$, a constant fraction of $\ell \in L(\phi)$ are $\delta$-balanced. For the remainder of the
section, we will focus on proving the preceding statement.
We will consider two cases depending on $\phi$: if the lines in $L(\phi)$ are
``steep,'' then these lines will be balanced because a constant fraction of them will
cross the horizontal line $\{x: x_2 = z\}$ near the middle of a strip. Since the value of $g$ on a strip changes sign at
that horizontal line,
this will imply that such a line is balanced.
On the other hand, if the lines are not steep, then they will be balanced because they cross many
strips, and $g$ will tend to take different values on different strips.
We will first deal with the case of steep lines. In this case, the required balance holds
deterministically.
\begin{lemma}\label{lem:wide-strips}
At least half of the points on the line segment from $(-1, 1/2)$ to $(1, 1/2)$ are
in a wide strip $S_i$.
\end{lemma}
\begin{proof}
There are $s$ strips in total, and so the narrow ones can take up at most a total width of 1,
which is only half of the width of the line segment in question.
\end{proof}
\begin{lemma}
There is a constant $c > 0$ such that if the absolute value of the slope of $\{x: \langle \phi^\perp, x \rangle = 0\}$
is at least $s$ then a $c$-fraction of $\ell \in L(\phi)$ are $c$-balanced.
\end{lemma}
\begin{proof}
We may assume without loss of generality that $z \le 0$.
By Lemma~\ref{lem:wide-strips}, at least a constant fraction of $\ell \in L(\phi)$
intersect the line $\{x: x_2 = 1/2\}$ in the middle third of a wide strip $S_k$. In this case,
$\ell$ belongs to $S_k^+$ for a distance of at least 1/3, and to $S_k^-$ for a distance
of at least 1/3, and it follows that $\ell$ is $c$-balanced for a constant $c$
depending on the minimum and maximum values of $e^{-|x|^2/2}$ for $x \in [-1, 1]^2$.
\end{proof}
For the remainder of the section we will deal with lines that are not steep. For $k$
with $2^{-k} \ge 1/s$, consider intervals of the form $[j 2^{-k}, (j+1) 2^{-k}] \subset [-1, 1]$;
let $D_k$ be the set of all such intervals.
\begin{lemma}\label{lem:balanced-dyadics}
There is a constant $C$ such that
with probability at least $1 - \mathsf{poly}(1/s)$,
for every $k$ for which $2^{-k} \ge 1/s$, at least a $\frac{1}{C}$-fraction of the intervals $I \in D_k$ are $\frac{1}{C}$-balanced.
\end{lemma}
\begin{proof}
For technical convenience, we will consider a slightly different way of generating the strips $S_i$.
Instead of dividing $[-1, 1] \times {\mathbb{R}}$ using exactly $s-1$ vertical lines, we will take a Poisson number
(with mean $s-1$) of vertical lines. We will prove the claim for this modified model, with a probability
estimate of at least $1 - \exp(-\Omega(\sqrt s))$, and since a Poisson random variable is equal to its mean
with probability $\mathsf{poly}(1/s)$, the claim will also follow for the original model.
Our first claim is that for $2^{-k} \le 1/\sqrt s$, each interval in $D_k$ has a constant
probability of being $\Omega(1)$-balanced.
First, consider the largest such $k$, so that $1/s \le 2^{-k} < 2/s$. In this case, the width of each $I \in D_k$
is within a factor 2 of $1/s$ (we will call such an interval a
\emph{primitive} interval).
It is easy to verify that for each $I \in D_k$, there is a constant probability that $I$ will intersect
exactly two strips, each taking up at least 1/3 of the width of $I$, and that these two strips
will receive different labels $b_i$. Hence, there is a constant probability that $I$ is $1/3$-balanced.
Now consider $k$ for which $2^{-k} \le 1 / \sqrt s$. Every $I \in D_k$ is made up of
$\Theta(s 2^{-k})$ primitive intervals, each of which has a constant probability of being balanced.
Moreover (thanks to our Poissonized model) the events that different primitive intervals are balanced
are independent. By Chebyshev's inequality, there is a constant probability that at least a constant
fraction of $I$'s primitive intervals are $1/3$-balanced, and so $I$ has a constant probability of being
$\Omega(1)$-balanced.
This proves our first claim (that for each $2^{-k} \le 1/\sqrt s$, each interval in $D_k$ has a constant
probability of being $\Omega(1)$-balanced). Now, for each such $k$ there are at least $\Omega(\sqrt s)$
such intervals, and so a Chernoff bound implies that with probability at least $1 - \exp(-\Omega(\sqrt s))$,
at least a constant fraction of these intervals are balanced. Taking a union bound over $k$
proves the claim whenever $2^{-k} \le 1/\sqrt s$.
For smaller $k$, we claim that with high probability, \emph{every} $I \in D_k$ is balanced.
Indeed, such $I \in D_k$ contain at least $\sqrt s$ primitive intervals, and so a Chernoff bound
implies that with probability $1 - \exp(-\Omega(\sqrt s))$, at least a constant fraction of those
primitive intervals are balanced, and so $I$ is balanced also. We can take a union bound over all $k$
and all $I$.
\end{proof}
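The Poissonized model in this proof is easy to simulate. The sketch below generates the strips, grids each primitive dyadic interval, and checks that a constant fraction of them are (approximately) $1/3$-balanced; the grid resolution and the thresholds in the assertions are illustrative choices, not constants from the proof.

```python
import numpy as np

rng = np.random.default_rng(5)
s = 2048
w = 1.0 / 2048                       # primitive dyadic width: 1/s <= w < 2/s

# Poissonized strips: a Poisson(s-1) number of uniform boundaries in [-1, 1].
m = rng.poisson(s - 1)
bounds = np.concatenate([[-1.0], np.sort(rng.uniform(-1, 1, m)), [1.0]])
labels = rng.choice([-1.0, 1.0], size=m + 1)   # one label b_i per strip

# Grid each primitive interval and look up the label of the containing strip.
grid_per = 12
n_int = int(2 / w)                   # number of primitive intervals in [-1, 1]
pts = -1 + w * (np.arange(n_int)[:, None]
                + (np.arange(grid_per)[None, :] + 0.5) / grid_per)
lab = labels[np.clip(np.searchsorted(bounds, pts, side="left") - 1, 0, m)]

plus = (lab > 0).mean(axis=1)        # approximate fraction of each interval in U^+
balanced = (plus >= 1 / 3) & (plus <= 2 / 3)
print(balanced.mean())
```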
To complete the proof of Theorem~\ref{thm:far-from-1-junta}, it remains to show that with high probability,
a constant fraction of the lines in every non-steep direction are balanced.
\begin{lemma}
There is a constant $c > 0$ such that if the absolute value of the slope of $\{x: \langle \phi^\perp, x \rangle = 0\}$
is at most $s$ then with probability at least $1 - \mathsf{poly}(1/s)$, a $c$-fraction of $\ell \in L(\phi)$ are $c$-balanced.
\end{lemma}
\begin{proof}
Choose $k$ so that the absolute value of the (common) slope of the lines in $L(\phi)$ lies between $2^{k-1}$ and $2^{k}$.
Consider a rectangle of the form $Q = [j 2^{-k}, (j+1) 2^{-k}] \times [-2, -1]$, where
the interval $[j 2^{-k}, (j+1) 2^{-k}]$ is balanced. Since the absolute value of the slope of each $\ell \in L(\phi)$
is at most $2^{k}$, if the line $\ell$ intersects the rectangle $Q$ then it
crosses the entire vertical strip $[j 2^{-k}, (j+1) 2^{-k}] \times {\mathbb{R}}$ within the horizontal strip $\{x : x_2 \in [-3, 0]\}$.
Since the interval $[j 2^{-k}, (j+1) 2^{-k}]$ is balanced, it follows that the line $\ell$ is also balanced.
(We're assuming here, without loss of generality, that $z \ge 0$).
Finally, it is easy to verify that if a constant fraction of the intervals $[j 2^{-k}, (j+1) 2^{-k}]$
are balanced then a constant fraction of $\ell \in L(\phi)$ intersect with some rectangle of the form above.
By Lemma~\ref{lem:balanced-dyadics}, this completes the proof.
\end{proof}
\section{Introduction}
\input{intro_merged}
\vspace{-0.2cm}
\section{Preliminaries}
\input{prelim}
\input{test-rank}
\input{finding-struct}
\input{surface-area-lower-bound}
\section{Testing linear invariant subclasses of linear $k$-juntas} \label{sec:test-invariant}
In this section, we prove the following theorem.
\begin{theorem}~\label{thm:test-invariant-struct}
Let $\mathcal{C}$ be a collection of functions mapping $\mathbb{R}^k$ to $\{-1,1\}$. Further, for every $f \in \mathsf{Ind}(\mathcal{C})$, $\mathsf{surf}(f) \le s$. Then, there is an algorithm \textsf{Test-structure-$\mathcal{C}$} which has the following guarantee: Given oracle access to $f: \mathbb{R}^n \rightarrow \{-1,1\}$ and an error parameter $\epsilon>0$, the algorithm makes $(s \cdot k/\epsilon)^{O(k)}$ queries and distinguishes between the cases (i) $f \in \mathsf{Ind}(\mathcal{C})$ and (ii) $f$ is $\epsilon$-far from every function $f' \in \mathsf{Ind}(\mathcal{C})$.
\end{theorem}
The algorithm \textsf{Test-structure-$\mathcal{C}$} is described in Figure~\ref{fig:tsi}. We now proceed with the proof of Theorem~\ref{thm:test-invariant-struct}. We begin with the following fact.
\begin{figure}[tb]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Inputs}}
\vspace{5 pt}
\begin{tabular}{ccl}
$s$ &:=& surface area parameter \\
$\epsilon$ &:=& error parameter \\
$k$ &:=& rank parameter\\
\end{tabular}
\underline{\textsf{Parameters}}
\vspace{5 pt}
\begin{tabular}{ccl}
$T_{inv}(s,k,\epsilon)$ &:=& query complexity of \textsf{Find-invariant-structure} with parameters $s$, $k$ and $\epsilon$. \\
$\delta$ &:=& $(\epsilon / (s \cdot k))^{O(k)}$ such that $\delta \cdot T_{inv}(2s,k,\epsilon/4) \le \epsilon$. \\
$k$ &:=& rank parameter\\
\end{tabular}
\vspace{5 pt}
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Run algorithm \textsf{Test-rank} with surface area parameter $s$, rank parameter $k$ and error parameter $\delta$. If \textsf{Test-rank} outputs no, output no.
\item Otherwise, run routine \textsf{Find-invariant-structure} with surface area parameter $s$, rank parameter $k$ and error parameter $\epsilon/4$.
\item Let $g: \mathbb{R}^\ell \rightarrow \{-1,1\}$ be the output of Step~2. Extend it to $\mathbb{R}^k$ by letting $g$ act trivially on the last $k-\ell$ coordinates. Output yes if $g$ is $\epsilon$-close to some function in $\mathsf{Ind}(\mathcal{C})_k$. Otherwise, output no.
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the algorithm \textsf{Test-Structure-$\mathcal{C}$}}
\label{fig:tsi}
\end{figure}
\begin{fact}~\label{fact:unitary}
Let $f: \mathbb{R}^n \rightarrow \{-1,1\}$ be in $\mathsf{Ind}(\mathcal{C})$ defined as $f = g(\langle w_1, x \rangle, \ldots, \langle w_k, x\rangle)$ for $ g \in \mathcal{C}$ and orthonormal vectors $w_1, \ldots, w_k \in \mathbb{R}^n$. If $v_1, \ldots, v_k$ is some other orthonormal basis of $\mathsf{span}(w_1, \ldots, w_k)$, then $f = \tilde{h}(\langle v_1, x \rangle, \ldots, \langle v_k ,x\rangle)$, for $\tilde{h} \in \mathsf{Ind}(\mathcal{C})_k$.
\end{fact}
\begin{proof}
Observe that one can go from the basis $(w_1, \ldots, w_k)$ to the basis $(v_1, \ldots, v_k)$ by means of an orthogonal transformation $U \in \mathbb{R}^{k \times k}$. Thus, $\tilde{h} = g \circ U$, and the function $g \circ U$ lies in $\mathsf{Ind}(\mathcal{C})_k$, which finishes the proof.
\end{proof}
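Fact~\ref{fact:unitary} can also be verified numerically; in the sketch below the function $g$ is an arbitrary illustrative choice, $W$ and $V = UW$ are two orthonormal bases of the same subspace, and $\tilde h = g \circ U^\top$ reproduces $f$ in the second basis.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 8, 3

# Orthonormal rows W (one basis of a k-dimensional subspace) and V = U W
# (another basis of the same subspace), with U a k x k orthogonal matrix.
W, _ = np.linalg.qr(rng.standard_normal((n, k)))
W = W.T                                   # k x n, orthonormal rows
U, _ = np.linalg.qr(rng.standard_normal((k, k)))
V = U @ W

g = lambda y: np.sign(np.sin(y.sum(axis=-1)) + 0.5)   # arbitrary g on R^k

x = rng.standard_normal((500, n))
f_w = g(x @ W.T)               # f(x) = g(<w_1,x>, ..., <w_k,x>)
h_tilde = lambda y: g(y @ U)   # h~(y) = g(U^T y), written for row vectors
f_v = h_tilde(x @ V.T)         # the same f expressed in the basis v_1..v_k
print(np.mean(f_w == f_v))
```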
\begin{claim}~\label{clm:test-invariant-complete}
Assume that $f : \mathbb{R}^n \rightarrow \{-1,1\}$ is in $\mathsf{Ind}(\mathcal{C})_n$. Further, $\mathsf{surf}(f) \leq s$. Then, $f$ passes \textsf{Test-Structure-$\mathcal{C}$} with probability $1-\epsilon$.
\end{claim}
\begin{proof}
If $f \in \mathsf{Ind}(\mathcal{C})_n$ with $\mathsf{surf}(f) \le s$, then \textsf{Test-rank} outputs yes with probability $1-\delta$. Let $w_1, \ldots, w_k$ be the implicit basis found in Step~2 of \textsf{Test-Structure-$\mathcal{C}$}, and let $g$ be the corresponding output, so that
\[
\mathbf{E}[|g(\langle w_1, x \rangle, \ldots, \langle w_k, x \rangle) - f(x)|] \le \frac{\epsilon}{4}.
\]
As $f \in \mathsf{Ind}(\mathcal{C})$, there is an orthonormal set of vectors $(v_1, \ldots, v_k)$ and $h \in \mathcal{C}$ such that (i) $f = h(\langle v_1, x \rangle, \ldots, \langle v_k ,x \rangle)$ and (ii) $\mathsf{span}(w_1, \ldots, w_k) = \mathsf{span}(v_1, \ldots, v_k)$. By Fact~\ref{fact:unitary}, we get that there is a function $\tilde{h} \in \mathsf{Ind}(\mathcal{C})_k$ such that
$f = \tilde{h}(\langle w_1, x\rangle, \ldots, \langle w_k, x\rangle)$. This implies that
\[
\mathbf{E}[|g(\langle w_1, x\rangle, \ldots, \langle w_k, x\rangle)- \tilde{h}(\langle w_1, x\rangle, \ldots, \langle w_k, x\rangle)|] \le \frac{\epsilon}{4}.
\]
Since $w_1, \ldots, w_k$ are orthonormal vectors, we get that $\mathbf{E}_{x \sim \gamma_k}[|g(x) - \tilde{h}(x)|] \le \frac{\epsilon}{4}$. This finishes the proof of the claim.
\end{proof}
\begin{claim}~\label{clm:test-invariant-sound}
Assume that $f : \mathbb{R}^n \rightarrow \{-1,1\}$ is $\epsilon$-far from any function $f' \in \mathsf{Ind}(\mathcal{C})$. Then, the test \textsf{Test-Structure-$\mathcal{C}$} rejects with probability at least $0.9$.
\end{claim}
\begin{proof}
First of all, if $f$ passes \textsf{Test-rank} with probability at least $0.1$, then $f$ is $O(\delta)$-close to a linear $k$-junta $\tilde{f}$ with $\mathsf{surf}(\tilde{f}) \le (1+\delta) \cdot s$; in particular, the surface area of $\tilde{f}$ is at most $2s$. Now, by Remark~\ref{rem:gaussian}, the marginal of each query made by the routine \textsf{Find-invariant-structure} is distributed as $\gamma_n$, so each query hits a point where $f$ and $\tilde{f}$ differ with probability $O(\delta)$.
Since $\delta \cdot T_{inv}(2s,k,\epsilon/4) \le \epsilon$, with probability $1-O(\epsilon)$ we can assume that all the queries made by \textsf{Find-invariant-structure} are effectively made to $\tilde{f}$. Then, with probability $1-2\epsilon$, Step~2 outputs a function $g$ such that there is an orthonormal set of vectors $w_1, \ldots, w_k$ and $h: \mathbb{R}^k \rightarrow \{-1,1\}$ with the following conditions: (i) $\mathbf{E}_{x \sim \gamma_n}[|g(\langle w_1, x\rangle, \ldots, \langle w_k , x\rangle) - h(\langle w_1, x\rangle, \ldots, \langle w_k , x\rangle)|] \le \frac{\epsilon}{4}$ and (ii)
$\tilde{f} (x) = h(\langle w_1, x\rangle, \ldots, \langle w_k , x\rangle)$. Let $g_0 \in \mathsf{Ind}(\mathcal{C})_k$ such that $g$ and $g_0$ are $\epsilon/4$-close. Because $w_1, \ldots, w_k$ are orthonormal, this also implies that
$
\mathbf{E}_{x \sim \gamma_n}[|g(\langle w_1, x\rangle, \ldots, \langle w_k , x\rangle) - g_0(\langle w_1, x\rangle, \ldots, \langle w_k , x\rangle)|] \le \frac{\epsilon}{4}.
$ By applying the triangle inequality, we get that
\[
\mathbf{E}_{x\sim \gamma_n} [ |f(x) - g_0 (\langle w_1, x\rangle, \ldots, \langle w_k, x \rangle)|] \le \frac{\epsilon}{2} + O(\delta) \le\epsilon.
\]
However, $g_0(\langle w_1, x\rangle, \ldots, \langle w_k, x \rangle) \in \mathsf{Ind}(\mathcal{C})_n$, contradicting the assumption that $f$ is $\epsilon$-far from every function in $\mathsf{Ind}(\mathcal{C})$.
\end{proof}
}
\section{Algorithm to test $k$-juntas}~\label{sec:test-rank}
In this section, we will prove the following theorem.
\begin{theorem}~\label{thm:main1}
There is an algorithm \textsf{Test-linear-junta} which has the following guarantee: Given oracle access to $f: \mathbb{R}^n \rightarrow \{-1,1\}$, rank parameter $k$, surface area parameter $s$ and error parameter $\epsilon>0$, it makes $\mathsf{poly}(s,\epsilon^{-1}, k)$ queries and
\begin{enumerate}
\item If $f$ is a linear $k$-junta with $\mathsf{surf}(f) \le s$, then the algorithm outputs \textsf{yes} with probability at least $0.9$.
\item If $f$ is $O(\epsilon)$-far from any linear $k$-junta $g$ with {$\mathsf{surf}(g) \leq (1+\epsilon) \cdot s$}, then the algorithm outputs \textsf{no} with probability at least $0.9$.
\end{enumerate}
\end{theorem}
\begin{remark}{A convention that we shall adopt (to avoid a proliferation of parameters) is to sometimes ignore the confidence parameter of the testing algorithm. Typically, whenever we can estimate a parameter within $\pm \epsilon$ using $T$ queries with confidence $2/3$, we can do the usual ``median trick'' and get the same accuracy with confidence $1-\delta$ at a multiplicative $O(\log(1/\delta))$ overhead in the query complexity. Since we only need to succeed with probability $0.9$ in the final algorithm, it is sufficient for each of the individual subroutines to succeed with probability sufficiently close to $1$. So, unless it is crucial, at some places, we shall ignore the confidence parameter in the theorem statements and many of the calculations. It will be implicit that the confidence parameter is sufficiently close to $1$.
}
\end{remark}
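For concreteness, here is the ``median trick'' on a toy heavy-tailed estimator (the estimator and all constants are illustrative, not taken from our algorithms): a single run is accurate only with probability roughly $2/3$, while the median of $O(\log(1/\delta))$ runs fails far less often.

```python
import numpy as np

rng = np.random.default_rng(7)
mu, eps = 1.0, 0.5

def base_estimator(rng, n=12):
    # A crude estimator of mu: accurate to +-eps only with probability ~2/3.
    return mu + rng.standard_t(df=3, size=n).mean()

def median_trick(rng, reps=15):
    # Repeat O(log(1/delta)) times and take the median of the runs.
    return np.median([base_estimator(rng) for _ in range(reps)])

trials = 2000
fail_single = np.mean([abs(base_estimator(rng) - mu) > eps for _ in range(trials)])
fail_median = np.mean([abs(median_trick(rng) - mu) > eps for _ in range(trials)])
print(fail_single, fail_median)
```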
The algorithm \textsf{Test-linear-junta} is described in Figure~\ref{fig:tlj}. The algorithm invokes two different subroutines, \textsf{Test-surface-area} and \textsf{Test-rank} whose guarantees we state now. To
do this, we first define the notion of $(\epsilon, s)$ smooth function.
\begin{definition}~\label{def:smooth-perturb}
A function $f: \mathbb{R}^n \rightarrow \{-1,1\}$ is said to be $(\epsilon, s)$-smooth if there is a function $g: \mathbb{R}^n \rightarrow \{-1,1\}$ such that $\mathbf{E}[|f-g|]\le\epsilon$ and $\mathsf{surf}(g) \le s (1+\epsilon)$.
\end{definition}
In other words, a function $f$ is $(\epsilon,s)$ smooth if $f$ is $\epsilon$-close to some other function $g$ (in $\ell_1$ distance) and $g$ has surface area which is essentially bounded by $s$. With this definition, we can now state the guarantee of the routine \textsf{Test-surface-area}
(due to Neeman~\cite{neeman2014testing}).
\begin{theorem}~\label{thm:neeman-testing}
There is an algorithm \textsf{Test-surface-area} which given oracle access to a function $f: \mathbb{R}^n \rightarrow \{-1,1\}$ and error parameter $\epsilon>0$ makes $T_{\mathsf{test}} = \mathsf{poly}(s/\epsilon)$ queries and has the following guarantee:
\begin{enumerate}
\item If $f$ is a function with surface area at most $s$, then the algorithm outputs \textsf{yes} with probability at least $1-\epsilon$.
\item {Any function $f$ which passes the test with probability $0.1$ is $(\epsilon,s)$-smooth.}
\end{enumerate}
\end{theorem}
Next, we state the guarantee of the routine \textsf{Test-rank}.
\begin{lemma}~\label{lem:far-k-junta}
The routine \textsf{Test-rank} has a query complexity of $\mathsf{poly}(k,s, \epsilon^{-1})$. Further, we have
\begin{enumerate}
\item If the function $f$ is a linear-$k$-junta, then the algorithm \textsf{Test-rank} outputs \textsf{yes} with probability $1-\epsilon$.
\item
If $f : \mathbb{R}^n \rightarrow \{-1,1\}$ is a $((\epsilon/30)^2,s)$-smooth function which is $\epsilon$-far from a linear $k$-junta,
then the algorithm \textsf{Test-rank} outputs \textsf{no} with probability $1-\epsilon$.
\end{enumerate}
\end{lemma}
{In order to prove Theorem~\ref{thm:main1}, we will need the following claim, which shows that closeness to a linear $k$-junta and closeness to a smooth function can be certified simultaneously by a single function.
\begin{lemma}~\label{lem:dual-closeness}
For a function $f: {\mathbb{R}}^n \to \{-1, 1\}$, suppose that there is a linear $k$-junta $g: {\mathbb{R}}^n \to \{-1, 1\}$
and a function $h: {\mathbb{R}}^n \to \{-1, 1\}$ of surface area at most $s$ such that both $g$ and $h$
are $\epsilon$-close to $f$. Then there is a function $\tilde h: {\mathbb{R}}^n \to \{-1, 1\}$ that is a linear $k$-junta \emph{and}
has surface area at most $s(1 + \sqrt \epsilon)$, and which is $O(\sqrt{\epsilon})$-close to $f$.
\end{lemma}
}
{\begin{proofof}{Theorem~\ref{thm:main1}}
If $f$ is a linear $k$-junta with surface area at most $s$, then it passes each of \textsf{Test-surface-area} and \textsf{Test-rank} with probability $1-\epsilon$. Thus, any linear $k$-junta with surface area at most $s$ passes with probability at least $1-2\epsilon$ (so as long as $\epsilon \le 0.05$, the test succeeds with probability $0.9$).
On the other hand, suppose $f$ passes \textsf{Test-linear-junta} with probability $0.9$. Then, applying Theorem~\ref{thm:neeman-testing}, $f$ is $((\epsilon/30)^4, s)$-smooth.
In other words, there is a function $h$ such that $\mathsf{surf}(h) \le (1+(\epsilon/30)^4) \cdot s$ which is $O(\epsilon^4)$-close to $f$.
Further, since $f$ passes \textsf{Test-rank} with probability $0.9$, Lemma~\ref{lem:far-k-junta} implies that
$f$ is $\epsilon^2$-close to some linear $k$-junta $g$. We now apply Lemma~\ref{lem:dual-closeness} to conclude that $f$ is $O(\epsilon)$-close to some function $\tilde{h}: \mathbb{R}^n \rightarrow \{-1,1\}$ which is a linear $k$-junta and satisfies $\mathsf{surf}(\tilde{h}) \le (1+O(\epsilon)) s$. This concludes the proof.
\end{proofof}}
We now turn to describing the routine \textsf{Test-rank} and prove Lemma~\ref{lem:far-k-junta}.
\begin{figure}[tb]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Inputs}}
\vspace{5 pt}
\begin{tabular}{ccl}
$s$ &:=& surface area parameter \\
$\epsilon$ &:=& error parameter \\
$k$ &:=& rank parameter\\
\end{tabular}
\vspace{5 pt}
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Run algorithm \textsf{Test-surface-area} with surface area parameter $s$ and error parameter $(\epsilon/30)^4$.
\item If \textsf{Test-surface-area} outputs \textsf{yes}, then run the algorithm \textsf{Test-rank}
with rank parameter $k$, surface area parameter $s$ and error parameter $\epsilon$.
\item If \textsf{Test-rank} outputs \textsf{yes}, then output \textsf{yes}. If \textsf{Test-rank} outputs \textsf{no}, output \textsf{no}.
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the algorithm \textsf{Test-linear-junta}}
\label{fig:tlj}
\end{figure}
\begin{figure}[tb]
\hrule
\vline
\begin{minipage}[t]{0.98\linewidth}
\vspace{10 pt}
\begin{center}
\begin{minipage}[h]{0.95\linewidth}
{\small
\underline{\textsf{Input}}
\vspace{5 pt}
\begin{tabular}{ccl}
$k$ &:=& rank parameter \\
$s$ &:=& surface area parameter \\
$\epsilon$ &:=& error parameter
\end{tabular}
\underline{\textsf{Parameters}}
\vspace{5 pt}
\begin{tabular}{ccl}
$t$ &:=& $\frac{\epsilon^4}{900 s^2}$ \\
$r$ &:=& $\frac{k \cdot s^2}{\epsilon^7}$\\
$\kappa$ &:=& $\frac{\epsilon^2}{40 r}$ \\
\end{tabular}
\vspace{5 pt}
\underline{\textsf{Testing algorithm}}
\begin{enumerate}
\item Sample directions $y_1,\ldots, y_r \sim \gamma_n$.
\item Let $A_{i,j}=\langle D_{} P_{t} f(y_i) , D_{} P_{t} f(y_j) \rangle$.
\item For all $1 \le i,j \le r$, compute $A_{i,j}$ up to error $\kappa$ using Lemma~\ref{lem:inner-product-1}. Call the estimates $B_{i,j}$.
\item For the matrix $B \in \mathbb{R}^{r \times r}$, compute the top $k+1$ singular values of $B$.
\item Output \textsf{yes} if and only if the $(k+1)^{st}$ singular value is at most $\frac{\epsilon^2}{16}$.
\end{enumerate}
\vspace{5 pt}
}
\end{minipage}
\end{center}
\end{minipage}
\hfill \vline
\hrule
\caption{Description of the \textsf{Test-rank} algorithm}
\label{fig:trj}
\end{figure}
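To make steps 2--5 of \textsf{Test-rank} concrete, the following Python sketch simulates the spectral test on synthetic data: we bypass the query model entirely and generate gradients lying in a $k$-dimensional subspace, as they would for a linear $k$-junta. All sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, k = 50, 30, 3
eps = 0.5
kappa = eps**2 / (40 * r)

# For a linear k-junta, every gradient D P_t f(y_i) lies in a fixed
# k-dimensional subspace; we simulate such gradients directly.
basis = rng.standard_normal((n, k))
M = basis @ rng.standard_normal((k, r))          # columns = simulated gradients
A = M.T @ M                                      # exact Gram matrix (rank <= k)
B = A + rng.uniform(-kappa, kappa, size=(r, r))  # entrywise estimation error

sigma = np.linalg.svd(B, compute_uv=False)       # descending order
# A has rank <= k, so Weyl's inequality keeps the (k+1)-st singular value tiny.
assert sigma[k] <= eps**2 / 16
```

Here `sigma[k]` is the $(k+1)$-st singular value; the entrywise error of at most $\kappa$ keeps $\Vert A - B\Vert_F \le r\kappa = \epsilon^2/40$, well below the acceptance threshold $\epsilon^2/16$.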
\begin{proofof}{Lemma~\ref{lem:far-k-junta}}
The bound on the query complexity in Lemma~\ref{lem:far-k-junta} is immediate from the settings of our parameters and the query complexity of Lemma~\ref{lem:inner-product-1}.
The first item (i.e., the completeness of \textsf{Test-rank}) follows from the fact that if $f$ is a linear $k$-junta, then $P_t f$ is also a linear $k$-junta. Consequently, $A$ is a matrix of rank at most $k$ and thus has at most $k$ non-zero singular values; i.e., if $\sigma_1 \ge \sigma_2 \ge \ldots$ are the singular values of $A$ (in order), then $\sigma_{k+1}=0$. Since each of the $r^2$ entries of $B$ is within $\kappa = \epsilon^2/(40r)$ of the corresponding entry of $A$, we have $\Vert A - B \Vert_F \le r\kappa = \epsilon^2/40$. By invoking Weyl's inequality (Lemma~\ref{lem:Weyl}), the $(k+1)^{st}$ singular value of $B$ is at most $\epsilon^2/40 \le \epsilon^2/16$, so the algorithm outputs \textsf{yes}. This finishes the proof of the first item.
The proof of the second item (i.e., the soundness of \textsf{Test-rank}) is more involved. In particular, we can restate the second item as proving the following lemma.
\begin{lemma}~\label{lem:far-k-junta-1}
Let $f : \mathbb{R}^n \rightarrow \{-1,1\}$ be a $((\epsilon/30)^2,s)$-smooth function which is $\epsilon$-far from a linear $k$-junta,
then the algorithm \textsf{Test-rank} outputs \textsf{no} with probability $1-\epsilon$.
\end{lemma}
The task of proving this lemma shall be the agenda for the rest of this section.
\end{proofof}
In order to prove Lemma~\ref{lem:far-k-junta-1}, we will need a few preliminary lemmas. The following lemma says that if a function's gradient almost always (approximately) lies in a subspace $V$, then the function is close to a $V$-junta.
\begin{lemma}~\label{lem:gradient-subspace}
Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be a $\mathcal{C}^1$ function, let $V$ be a subspace of dimension $k$, and let $W = V^{\perp}$. Let us assume that $\mathbf{E}[\Vert (Df)_W \Vert_2^2] = \epsilon$. Then there is a $V$-junta $g: \mathbb{R}^n \rightarrow \mathbb{R}$ such that $\mathbf{E}[(g(x) - f(x))^2]\le \epsilon$.
\end{lemma}
\begin{proof}
Let us rotate the space so that $V = \{(x_1, \ldots, x_k, 0, \ldots, 0): x_1, \ldots, x_k \in \mathbb{R}\}$.
Let us now define $g: \mathbb{R}^n \rightarrow \mathbb{R}$ as
\[
g(x) = \mathop{\mathbf{E}}_{z \sim \gamma_{n-k}} [f(x_1, \ldots, x_k, z_1, \ldots, z_{n-k})].
\]
Observe that $g$ is a $V$-junta. Now, for every choice $X = (x_1, \ldots, x_k)$, consider the function $h_X: \mathbb{R}^{n-k} \rightarrow \mathbb{R}$ as
\[
h_X(z_1, \ldots, z_{n-k}) = f(x_1, \ldots, x_k, z_1, \ldots, z_{n-k}) - g(x).
\]
Observe that $\mathbf{E}_{(z_1, \ldots, z_{n-k}) \sim \gamma_{n-k}} [h_X(z_1, \ldots, z_{n-k})]=0$.
By applying Lemma~\ref{lem:Poincare},
\[
\mathbf{E}[h_X^2(z_1, \ldots, z_{n-k})] = \mathsf{Var}[h_X(z_1, \ldots, z_{n-k})] \leq \mathbf{E}[\Vert Dh_X \Vert_2^2].
\]
Observe that $Dh_X(z_1, \ldots, z_{n-k}) =Df(x_1,\ldots, x_k, z_1, \ldots, z_{n-k})_W$. Thus, we get
\begin{eqnarray*}
\mathbf{E}[(f(x) - g(x))^2] &=& \mathop{\mathbf{E}}_{X \sim \gamma_k} \mathop{\mathbf{E}}_{Z\sim \gamma_{n-k}} [h_X^2 (Z)] \leq \mathop{\mathbf{E}}_{X \sim \gamma_k} \mathop{\mathbf{E}}_{Z\sim \gamma_{n-k}} [\Vert Dh_X \Vert_2^2] \\
&=& \mathop{\mathbf{E}}_{X \sim \gamma_k} \mathop{\mathbf{E}}_{Z\sim \gamma_{n-k}} [\Vert Df(X,Z)_W \Vert_2^2] = \epsilon.
\end{eqnarray*}
This finishes the proof.
\end{proof}
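The averaging construction in the proof can be checked numerically on a simple $\mathcal{C}^1$ function; the function, dimensions, and tolerances below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 4, 0.3
X = rng.standard_normal((200_000, n))

# f(x) = x1 + c*x3 has gradient (1, 0, c, 0); take V = span(e1, e2),
# so the gradient energy in W = span(e3, e4) is exactly c^2.
f = X[:, 0] + c * X[:, 2]
g = X[:, 0]                       # averaging out the coordinates in W
w_energy = c**2                   # E[||(Df)_W||^2]
mse = np.mean((f - g) ** 2)

# Poincare argument: E[(f - g)^2] <= E[||(Df)_W||^2]
assert mse <= w_energy * 1.02     # small slack for Monte Carlo error
```

For this linear example the bound holds with equality, so the Monte Carlo estimate of $\mathbf{E}[(f-g)^2]$ sits right at $c^2$.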
For the rest of this section, when we use the value $t$, it will bear the same relation as stated in the description of the algorithm \textsf{Test-rank} (see Figure~\ref{fig:trj}).
\begin{proposition}~\label{prop:noise-stab-surf-1}
Let $f: \mathbb{R}^n \rightarrow \{-1,1\}$ be $((\epsilon/30)^2,s)$-smooth.
Then,
$
\mathbf{E}[|P_{t}f - f|^2] \le \frac{\epsilon^2}{5}.
$
\end{proposition}
\begin{proof}
Since $f$ is $((\epsilon/30)^2,s)$-smooth, we know that there is a function $g$ such that $\mathbf{E}[|f-g|] \le (\frac{\epsilon}{30})^2$ and $\mathsf{surf}(g)\le s\big(1+(\frac{\epsilon}{30})^2\big)$.
By using the fact that the operator $P_{t}$ is contractive, we have,
\[
\mathbf{E}[|P_{t} f - P_{t}g|^2] \le \mathbf{E}[|f-g|^2 ] \le 4\, \mathbf{E}[| f - g |]\le \frac{\epsilon^2}{200}.
\]
Next, we use Proposition~\ref{prop:Ledoux} to get that
$
\mathbf{E}[|P_{t}g - g|^2 ] \le \frac{\epsilon^2}{30}.
$
We can now combine these to get
\[
\mathbf{E}[|P_{t} f - f|^2] \le 3 \big( \mathbf{E}[|P_{t} f - P_{t} g|^2] + \mathbf{E}[|P_{t} g - g|^2] + \mathbf{E}[| f - g|^2]\big) \le \frac{\epsilon^2}{5}.
\]
\end{proof}
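The last step uses the elementary pointwise inequality $(a+b+c)^2 \le 3(a^2+b^2+c^2)$ (a consequence of Cauchy--Schwarz); a quick numerical sanity check on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three centered variables with different scales (purely illustrative).
a, b, c = rng.standard_normal((3, 100_000)) * [[1.0], [2.0], [0.5]]

lhs = np.mean((a + b + c) ** 2)
rhs = 3 * (np.mean(a**2) + np.mean(b**2) + np.mean(c**2))
# The inequality holds pointwise, hence also in expectation.
assert lhs <= rhs
```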
\begin{lemma}~\label{lem:subspace-escape}
Let $f: \mathbb{R}^n \rightarrow \{-1,1\}$ be a $((\epsilon/30)^2,s)$-smooth function which is $\epsilon$-far from any linear $k$-junta. For any subspace $W$ of co-dimension at most $k$,
\[
\Pr_{y \sim \gamma_n} \bigg[\Vert D_{}P_{t}f(y) \Vert_{W}^2 \ge \frac{\epsilon^2}{8}\bigg] \ge \Omega\bigg(\frac{\epsilon^6}{s^2}\bigg).
\]
\end{lemma}
\begin{proof}
Applying Proposition~\ref{prop:noise-stab-surf-1}, we have that $\mathbf{E}[|P_{t}f-f|^2] \le \frac{\epsilon^2}{5}$. By applying Jensen's inequality,
we have $\mathbf{E}[|P_{t}f-f|] \le \epsilon/\sqrt{5}$. Thus, $P_{t}f$ is $0.5\cdot \epsilon$-far from any linear $k$-junta (in $\ell_1$ distance). Consequently, for any $W^{\perp}$-junta $h$ we have $\mathbf{E}[\Vert P_{t} f - h \Vert_2^2] > 0.25 \epsilon^2$. By the contrapositive of Lemma~\ref{lem:gradient-subspace}, we have that
\begin{equation}~\label{eq:expectation}
\mathbf{E}[\Vert D_{}P_{t}f(y) \Vert_{W}^2] > 0.25 \cdot \epsilon^2. \end{equation}
Next, observe that Lemma~\ref{lem:derivative-shift} implies that
\[
\Vert D_{}P_{t}f(y) \Vert_{W}^2 \le \frac{1}{e^{2t}-1} \cdot \Vert \mathcal{W}_1(f_{t,y}) \Vert_2^2 \le \frac{1}{{e^{2t}-1}}
\le O(1/t) \le O\left(\frac{s^2}{\epsilon^4}\right).
\]
The second inequality follows immediately from the fact that $f_{t,y}$ takes values in $[-1,1]$. Writing $X = \Vert D_{}P_{t}f(y) \Vert_{W}^2$, the upper bound gives $\mathbf{E}[X] \le \frac{\epsilon^2}{8} + O(s^2/\epsilon^4) \cdot \Pr[X \ge \frac{\epsilon^2}{8}]$. Combining this with (\ref{eq:expectation}), this implies that
\[
\Pr\bigg[\Vert D_{}P_{t}f(y) \Vert_{W}^2 \ge \frac{\epsilon^2}{8}\bigg] \ge \Omega\bigg(\frac{\epsilon^6}{s^2}\bigg).
\]
\end{proof}
We are now in a position to finish the proof of Lemma~\ref{lem:far-k-junta-1}.
\begin{proofof}{Lemma~\ref{lem:far-k-junta-1}}
Let $M_i \in \mathbb{R}^n$ denote $M_i = D_{} (P_{t}f)(y_i)$. As in Figure~\ref{fig:trj}, consider the matrix $A \in \mathbb{R}^{r \times r}$ whose $(i,j)$ entry is $A_{i,j} = \langle D_{} (P_{t}f)(y_i), D_{} (P_{t}f)(y_j) \rangle$. Now, consider the matrix $M \in \mathbb{R}^{n \times r}$ whose $i^{th}$ column is $M_i$. Then, observe that $A = M^t \cdot M$. We would like to analyze the singular values of $A$. Observe that the non-zero singular values of $M^t \cdot M$ are the same as the non-zero singular values of $M \cdot M^t$. Now, observe that
\[
M \cdot M^t = \sum_{i=1}^r D_{} (P_{t}f)(y_i) \cdot D_{} (P_{t}f)(y_i)^t
\]
Instead of analyzing the non-zero singular values of $M^t \cdot M$, we will analyze the non-zero singular values of $M \cdot M^t$. From now on, let us use $h$ to denote
$P_{t}f$. Let us define the sequence of stopping times $\{\tau_j \}_{j \ge 0}$ as follows: $\tau_0=0$; let $\mathcal{G}_j = \sum_{\ell \le \tau_j} D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t$ and let $W_j$ be the eigenspace spanned by the top $j$ eigenvectors of $\mathcal{G}_j$. Then, $\tau_{j+1}$ is the smallest $\ell>\tau_j$ such that
$\Vert (D_{} h(y_\ell))_{W_j^\perp} \Vert_2 \ge \frac{\epsilon}{2\sqrt{2}}$. We now make the following claim.
\begin{claim}~\label{clm:large-singular}
For $j \le k+1$, the top $j$ singular values of $\mathcal{G}_j$ are at least $\epsilon^2/8$.
\end{claim}
\begin{proof}
We will prove this claim by induction on $j$; the base case $j=0$ holds vacuously. So, assume that
the top $j$ singular values of $\mathcal{G}_j$ are all at least $\epsilon^2/8$. Now, for $\ell = \tau_{j+1}$, let
$w$ be the unit vector in the direction of the component of $D_{} h(y_\ell)$ orthogonal to $W_j$. Let $\Gamma$ be the linear span of $W_j$ and $w$. Now, consider any unit vector $v \in \Gamma$ and express it as $v= v_1+ v_2$ where $v_1$ lies in $W_j$ and $v_2$ is parallel to $w$. Next, observe that
\begin{eqnarray*}
v^T \cdot \big( \mathcal{G}_j + D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \big) \cdot v = v^T \cdot \mathcal{G}_j \cdot v + v^T \cdot D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \cdot v.
\end{eqnarray*}
The first term $v^T \cdot \mathcal{G}_j \cdot v$ is at least as large as $v_1^T \cdot \mathcal{G}_j \cdot v_1$ and the second term $v^T \cdot D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \cdot v$ is the same as $v_2^T \cdot D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \cdot v_2$. Next, note that
\[
v_1^T \cdot \mathcal{G}_j \cdot v_1\ge \frac{\epsilon^2}{8} \cdot \Vert v_1\Vert_2^2 ;\ \ \ \ \ v_2^T \cdot D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \cdot v_2\ge \frac{\epsilon^2}{8} \cdot \Vert v_2\Vert_2^2.
\]
Consequently,
\[
v^T \cdot \big( \mathcal{G}_j + D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \big) \cdot v \ge \frac{\epsilon^2 }{8} \cdot \big(\Vert v_1 \Vert_2^2 + \Vert v_2 \Vert_2^2 \big) = \frac{\epsilon^2}{8}.
\]
Observe that
\[
v^T \cdot \mathcal{G}_{j+1} \cdot v \ge v^T \cdot \big( \mathcal{G}_j + D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \big) \cdot v \ge \frac{\epsilon^2}{8}.
\]
The first inequality is immediate from the fact that
\[
\mathcal{G}_{j+1} -\big( \mathcal{G}_j + D_{} h(y_\ell) \cdot D_{} h(y_\ell)^t \big) = \sum_{\tau_j <i < \tau_{j+1}} D_{} h(y_i) \cdot D_{} h(y_i)^t
\]
is a psd matrix. Thus, we obtain that
\begin{equation}~\label{eq:singular-1}
\inf_{v: \Vert v \Vert_2=1 \ \textrm{and} \ v \in \Gamma} v^T \cdot \mathcal{G}_{j+1} \cdot v \ge \frac{\epsilon^2}{8}.
\end{equation}
Now, it is clear that $\mathcal{G}_{j+1}$ is a psd matrix. If the singular values of $\mathcal{G}_{j+1}$ are $\sigma_1 \ge \sigma_2 \ge \ldots$, then by the Courant--Fischer theorem, we have
\[
\sigma_{j+1} = \max_{S_{j+1} \subseteq \mathbb{R}^n} \inf_{v: \Vert v \Vert_2=1 \ \textrm{and} \ v \in S_{j+1}} v^T \cdot \mathcal{G}_{j+1} \cdot v,
\]
where the maximum ranges over all $(j+1)$-dimensional subspaces $S_{j+1}$ of $\mathbb{R}^n$. Thus, by applying (\ref{eq:singular-1}) and observing that $\mathsf{dim}(\Gamma) = j+1$, we get
\[
\sigma_{j+1} \ge \inf_{v: \Vert v \Vert_2=1 \ \textrm{and} \ v \in \Gamma} v^T \cdot \mathcal{G}_{j+1} \cdot v \ge \frac{\epsilon^2}{8}.
\]
\]
This finishes the proof.
\end{proof}
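The induction step, in which adding a gradient with a large component outside the current top eigenspace pushes up the next eigenvalue, can be checked numerically on a small synthetic instance (all matrices below are artificial; $\theta$ plays the role of $\epsilon^2/8$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, j, theta = 8, 3, 0.125

# Psd matrix G_j whose top-j eigenvalues are all at least theta.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
G = Q[:, :j] @ np.diag([5.0, 2.0, 0.2]) @ Q[:, :j].T

# New gradient whose component outside the top-j eigenspace has squared
# norm >= theta (this is the stopping-time condition in the proof).
d = 1.0 * Q[:, j] + 0.2 * Q[:, 0]

eigvals = np.linalg.eigvalsh(G + np.outer(d, d))[::-1]  # descending order
assert eigvals[j] >= theta   # the (j+1)-st eigenvalue now exceeds theta
```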
Now, applying Lemma~\ref{lem:subspace-escape}, we have that conditioned on $\tau_j$, the increment $\tau_{j+1}-\tau_j$ is stochastically dominated by a geometric random variable with parameter $\Omega(\epsilon^6/s^2)$. From this, it is not difficult to see that with probability at least $1-\epsilon$, $\tau_{k+1} = O(s^2 \cdot k/\epsilon^7)$.
Thus, with probability $1-\epsilon$, we can assume that the top $k+1$ singular values of $M \cdot M^t$ are all at least $\epsilon^2/8$.
Consequently, the top $k+1$ singular values of $A = M^t \cdot M$ are all at least $\epsilon^2/8$. Now, the algorithm computes a matrix $B$ such that
$\Vert A-B\Vert_F \le r\kappa = \epsilon^2/40$. By Weyl's inequality~(Lemma~\ref{lem:Weyl}), the top $k+1$ singular values of $B$ are all at least $\epsilon^2/8 - \epsilon^2/40 = \epsilon^2/10 > \epsilon^2/16$, so the algorithm outputs \textsf{no}. This proves the lemma.
\end{proofof}
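The final perturbation step relies on Weyl's inequality: no singular value moves by more than $\Vert A - B\Vert$. A quick numerical check on a synthetic psd matrix (sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
r = 20
M = rng.standard_normal((r, r))
A = M @ M.T                                   # generic psd matrix
E = rng.uniform(-1e-3, 1e-3, size=(r, r))     # entrywise estimation error
B = A + E

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
# Weyl: |sigma_i(A) - sigma_i(B)| <= ||E||_2 <= ||E||_F for every i.
assert np.max(np.abs(sA - sB)) <= np.linalg.norm(E, 'fro') + 1e-12
```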
{
We finally give the proof of Lemma~\ref{lem:dual-closeness}. The proof relies on the so-called co-area formula.
\begin{lemma}~\label{lem:coarea}
Let $f: {\mathbb{R}}^n \to [-1, 1]$ be smooth and $\psi: [-1, 1] \to {\mathbb{R}}_+$ be bounded and measurable. Then
\[
\int_{-1}^1 \psi(s) \mathsf{surf}(\{x: f(x) \le s\}) \, ds = \int_{{\mathbb{R}}^n} \psi(f(x)) |\nabla f(x)| \, d\gamma(x).
\]
\end{lemma}
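In one dimension the co-area formula can be verified numerically: for $f(x) = \tanh(x)$ under the standard Gaussian measure, the Gaussian surface area of $\{x: f(x) \le s\}$ is the Gaussian density at $\tanh^{-1}(s)$. The choice of $f$ and the weight $\psi$ below are illustrative:

```python
import numpy as np

def trap(y, x):  # simple trapezoidal rule, to stay library-agnostic
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

phi = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # Gaussian density
psi = lambda s: 1.0 + s**2                              # bounded weight

# Right-hand side: E_gamma[psi(f(x)) * |grad f(x)|] with |f'| = sech^2.
x = np.linspace(-8, 8, 400_001)
rhs = trap(psi(np.tanh(x)) / np.cosh(x)**2 * phi(x), x)

# Left-hand side: integral of psi(s) * surf({f <= s}) over s in (-1, 1),
# where surf({x <= arctanh(s)}) = phi(arctanh(s)) in one dimension.
s = np.linspace(-1 + 1e-9, 1 - 1e-9, 400_001)
lhs = trap(psi(s) * phi(np.arctanh(s)), s)

assert abs(lhs - rhs) < 1e-6
```

The two integrals coincide because the substitution $s = \tanh(x)$, $ds = \operatorname{sech}^2(x)\,dx$ maps one to the other exactly.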
\begin{proofof}{Lemma~\ref{lem:dual-closeness}}
By~\cite{maggi2012sets}, there is a smooth function $h_1: {\mathbb{R}}^n \to [-1, 1]$ with bounded gradient
such that $\|h_1 - h\|_2 \le \epsilon$ and $\operatorname{{\bf E}}[|\nabla h_1|] \le 2s$.
Let $E$ be a $k$-dimensional subspace for which $g$ is an $E$-junta, and let $z$ be a standard Gaussian
vector on $E^\perp$. Let $\Pi_E$ be the projection operator for subspace $E$ and define $h_2: {\mathbb{R}}^n \to [-1, 1]$ by $h_2(x) = \operatorname{{\bf E}}_z[h_1(\Pi_E x + z)]$. By Jensen's inequality,
$\operatorname{{\bf E}} [|\nabla h_2|] \le \operatorname{{\bf E}}[|\nabla h_1|] \le 2s$. Let $t$ be uniformly distributed in $[-1+\eta, 1-\eta]$,
and define $\tilde h = \tilde h_t$ by
\[
\tilde h_t(x) = \tilde h(x) = \begin{cases}
-1 &\text{if $h_2(x) \le t$} \\
1 &\text{otherwise}.
\end{cases}
\]
Note that $\tilde h_t$ is an $E$-junta (because $h_2$ is an $E$-junta).
In expectation over $t$, the surface area of $\tilde h$ is
\[
\frac{1}{2-2\eta} \int_{-1+\eta}^{1-\eta} \mathsf{surf}(\{x: h_2(x) \le t\})\, dt,
\]
which by the co-area formula is equal to
\[
\frac{1}{2-2\eta} \int_{{\mathbb{R}}^n} 1_{\{h_2(x) \in [-1+\eta, 1-\eta]\}} |\nabla h_2(x)|\, d\gamma(x)
\le \frac{1}{2-2\eta} \operatorname{{\bf E}}_{x \sim \gamma} [|\nabla h_2(x)|]
\le \frac{s}{1-\eta}.
\]
In particular, there exists some $t \in [-1+\eta, 1-\eta]$ such that the surface area
of $\tilde h_t$ is at most $\frac{s}{1-\eta}$.
Next, we will estimate the distance of $\tilde h$ from $h$. By the triangle inequality, $\|h - g\|_2 \le 2\epsilon$
and so $\|h_1 - g\|_2 \le 3\epsilon$. On the other hand, Pythagoras' theorem implies
that $h_2$ minimizes $\|h_1 - h_2\|_2$ among all $E$-juntas; hence, $\|h_1 - h_2\|_2 \le 3\epsilon$
and so $\|h - h_2\|_2 \le 4 \epsilon$.
Now, $h$ takes values in $\{-1, 1\}$ and so $|h(x) - h_2(x)| \ge \eta 1_{\{h_2(x) \in [-1+\eta, 1-\eta]\}}$.
On the other hand, the definition of $\tilde h$ ensures that
\[
|\tilde h(x) - h_2(x)| \le \begin{cases}
2 &\text{if $h_2(x) \in [-1+\eta, 1-\eta]$} \\
\eta &\text{otherwise}.
\end{cases}
\]
If $p$ is the probability that $h_2(x) \in [-1 + \eta, 1-\eta]$, it follows that $\eta \sqrt{p} \le \|h - h_2\|_2$
and so
\[
\|\tilde h - h_2\|_2 \le 2\sqrt{p} + \eta \le \frac{8\epsilon}{\eta} + \eta.
\]
By the triangle inequality $\|\tilde h - h\|_2 \le 4 \epsilon + \frac{8\epsilon}{\eta} + \eta$.
Choosing $\eta = \sqrt \epsilon$ completes the proof.
\end{proofof}
}
\section{Introduction}
\vspace{0mm}
\label{sec:intro}
\IEEEPARstart{G}{aussian} distribution is the ubiquitous probability distribution used in the statistics, signal processing, and pattern recognition areas~\cite{Park2013}. However, not all the data being processed are Gaussian distributed~\cite{Ma2011}. In many real-life applications, the distribution of the data is asymmetric and, therefore, not Gaussian~\cite{Nguyen2013}. For example, the image pixel values~\cite{Ma2011a,Bouguila2006}, the reviewer's rating of an item in a recommendation system~\cite{Ma2015,Salakhutdinov2008,Salakhutdinov2008a}, and the DNA methylation level data~\cite{Ji2005} are distributed in a range with bounded support. The diversity gain over the $K_G$ fading~\cite{Jung2014} and the periodogram coefficients in speech enhancement~\cite{Mohammadiha2013,Mohammadiha2013a} are semi-bounded (nonnegative). The spatial fading correlation~\cite{Mammasis2009} and the yeast gene expressions~\cite{Taghia2014} have a directional property such that the $l_2$ norm equals one. In signal processing, the acoustic noise with colored spectra~\cite{Zao2012} and the measurement noise in state-space models~\cite{Xu2014} are heavy-tailed. In the stock market, the asymptotic behavior of the first-order autoregressive (AR) process is clearly non-Gaussian~\cite{Amini2013}, and the underlying Bayesian copula models for stock index series are non-Gaussian as well~\cite{Xu2015}. Although the above mentioned data represent diverse characteristics, a common property is that these data \emph{not only} have a specific support range \emph{but also} have a non-bell distribution shape. The natural properties of the Gaussian distribution (the definition domain is unbounded and the distribution shape is symmetric) do not fit such data well. Hence, these data are non-Gaussian distributed.
It has been found in recent studies that explicitly utilizing the non-Gaussian characteristics can significantly improve the practical performance~\cite{Ma2011,Ma2011a,Bouguila2006,Ji2005,Jung2014,Mohammadiha2013,Mammasis2009,Mohammadiha2013a,Taghia2014,Zao2012,Xu2014}. Hence, it is of particular importance and interest to make thorough studies of the non-Gaussian data and non-Gaussian statistical models.
Bayesian analysis plays an essential role in the parameter estimation of statistical models~\cite{Fukunaga1990,Jain2000,Bishop2006,Bernardo2000}. Unlike the conventionally used maximum-likelihood (ML) estimation~\cite{Dempster1977}, Bayesian estimation assumes that the parameters follow underlying distributions and derives the posterior distributions of the parameters by applying Bayes' theorem~\cite{Stigler1982}, combining the prior distributions with the likelihood function obtained from the observed data~\cite{Bishop2006,Tipping2004}. Estimating the posterior distribution via Bayesian estimation has several advantages over ML estimation. Firstly, it gives a statistical description of the parameters, rather than the simple point estimate yielded by ML estimation. This makes Bayesian estimation more robust and reliable, by including the resulting uncertainty in the estimation~\cite{Bernardo2000}. Secondly, it can potentially prevent the overfitting problem, which is one of the drawbacks that ML estimation suffers from. This is mainly due to the Occam's razor effect in Bayesian estimation. Last but not least, Bayesian estimation can estimate the model complexity automatically from the data. In ML estimation, deciding the model complexity usually requires cross validations and is, therefore, computationally costly~\cite{Bishop2006,Dempster1977}.
The variational inference (VI) framework, among others, is a widely used strategy to infer the posterior distributions of the parameters in Bayesian analysis~\cite{Bishop2006,Jordan1999,Blei2005,Fox2012}. In a full Bayesian model where all the parameters are assigned prior distributions, we minimize the Kullback-Leibler (KL) divergence of the true posterior distribution from the approximating one to obtain an optimal approximation to the posterior distribution~\cite[Ch. 10]{Bishop2006}. This procedure is equivalent to maximizing the lower-bound to the marginal likelihood (model evidence). The optimal posterior distribution can be obtained by iteratively updating one variable (or one variable group) while fixing the rest. However, unlike the famous Gaussian distribution~\cite{Bishop2006,Ormerod2012,Park2013}, most of the non-Gaussian statistical models (e.g., the beta mixture model (BMM)~\cite{Bouguila2006, Ma2011a}, the Dirichlet mixture model (DMM)~\cite{Fan2012}, the von Mises-Fisher mixture model (VMM)~\cite{Taghia2014}, and the beta-Gamma nonnegative matrix factorization (BG-NMF)~\cite{Ma2015}) do not have analytically tractable solutions for estimating the posterior distributions of the parameters. Numerical methods, \emph{e.g.}, the Newton-Raphson algorithm, Gibbs sampling, and Markov chain Monte Carlo, are usually employed to sample from the posterior distribution~\cite{Blei2005,Bouguila2006}. Numerical methods often depend on Markov chain convergence and are in general computationally costly, especially in high-dimensional spaces~\cite{Blei2004}.
Recently, an improved framework, namely the extended variational inference (EVI)~\cite{Blei2006,Hoffman2010,Braun2010,Ma2011a,Fan2012,Taghia2014,Ma2015}, has become popular for solving the above mentioned problem. Similar to the VI framework, EVI also seeks an optimal approximation to the posterior distribution. The difference is that EVI relaxes the objective function (the evidence lower-bound to the marginal likelihood) by constructing a lower-bound approximation to the objective function. This lower-bound relaxation, which uses the convexity or relative convexity~\cite{Boyd2004} of the objective function, can yield analytically tractable solutions so that the parameter estimation is facilitated. Although a systematic bias is introduced by the lower-bound approximation, several works have demonstrated the advantages of EVI in the Bayesian estimation of statistical models~\cite{Hoffman2010,Ma2011a,Taghia2014,Ma2015}. For the Bayesian estimation of the BMM, Ma et al.~\cite{Ma2011a} derived an analytically tractable solution which outperforms the numerical Gibbs sampling based method~\cite{Bouguila2006}. As an extension of this work, the Bayesian estimation of the DMM via EVI was proposed in~\cite{Fan2012}. For directional data, the von Mises-Fisher distribution is an important model in several applications, and an analytically tractable solution to the Bayesian estimation of the VMM was proposed by using EVI to provide a lower-bound approximation~\cite{Taghia2014}. For non-negative matrix factorization (NMF), EVI was also applied in deriving analytically tractable solutions for the Poisson process (discrete) NMF~\cite{Cemgil2009}, the Gamma process NMF in music recordings~\cite{Hoffman2010}, and the beta-Gamma NMF for bounded support data~\cite{Ma2015}.
Convergence is an important issue for a parameter estimation algorithm. For a VI-based method, the objective function maximized during each iteration is convex or relatively convex in terms of the target variable's posterior distribution~\cite{Bishop2006,Boyd2004}. Hence, convergence is theoretically guaranteed. In EVI, the lower-bound approximation to the objective function can be obtained either via a single extension over the whole variable group or via multiple extensions, one for each subset of the whole variable group. Based on this, two lower-bound approximation strategies are obtained: one is the single lower-bound (SLB) approximation~\cite{Hoffman2010,Taghia2014,Ma2015} and the other is the multiple lower-bounds (MLB) approximation~\cite{Ma2011a,Fan2012}. For EVI with the SLB approximation, convergence is also guaranteed, because the original objective function is replaced by one single lower-bound and this new objective function (\emph{i.e.}, the single lower-bound to the original objective function in VI) is convex (or relatively convex) and maximized during each iteration. However, when applying EVI with the MLB approximation, the variable group is divided into disjoint subsets and there exist different lower-bound approximations to the objective function. During each iteration, different lower-bounds, one for each variable subset, are maximized iteratively. Since the new objective function is not unique, convergence cannot be theoretically guaranteed.
In order to clarify the convergence property of the EVI framework, we discuss and summarize the conditions that are required in the EVI implementation. The SLB and MLB approximations are also analyzed and compared qualitatively and quantitatively. Experimental results based on the recently proposed EVI-based BMM and DMM estimation algorithms are presented to demonstrate the advantages of the SLB approximations. We draw some conclusions in the end.
\section{Variational Inference and Extended Variational Inference}
\vspace{0mm}
\subsection{Variational Inference}
\vspace{0mm}
In Bayesian estimation, a universal solution under the variational inference (VI) framework~\cite{Jordan1999} is to approximate the posterior distribution by a product of several factor distributions and then update each factor distribution individually~\cite{Bishop2006}. This method is the so-called factorized approximation (FA), which was developed from mean field theory in physics~\cite{Jaakkola2001}. With the FA method, the variational objective function that we want to maximize can be represented via the negative KL divergence as
\begin{equation}
\eqs
\label{Eq: Lowerbound}
\mathcal{L} = \mathbf{E}_{\mathbf{Z}}\left[{\ln p(\mathbf{X},\mathbf{Z})} - \ln q(\mathbf{Z})\right],
\end{equation}
where $\mathbf{X}$ is the observed data, $\mathbf{Z}$ denotes all the random variables. If $\mathbf{Z}$ can be (approximately) factorized into $M$ disjoint groups as $\mathbf{Z} = \left\{\mathbf{Z}_1,\ldots,\mathbf{Z}_i,\ldots,\mathbf{Z}_M\right\}$ and we approximate the true posterior distribution $p(\mathbf{Z}|\mathbf{X})$ as
\begin{equation}
\eqs
p(\mathbf{Z}|\mathbf{X}) \approx q(\mathbf{Z}) = \prod_{i=1}^M q_i(\mathbf{Z}_i),
\end{equation}
the optimal solution can be written as
\begin{equation}
\eqs
\label{Eq: Optimal Solution}
\ln q_i^*(\mathbf{Z}_i) = \mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_i}}\left[\ln p(\mathbf{X},\mathbf{Z})\right] + \text{const}.
\end{equation}
The operator $\mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_i}}$ denotes the expectation with respect to all the variables in $\mathbf{Z}$, except for $\mathbf{Z}_i$. If the optimal solution for the posterior distribution of $\mathbf{Z}_i$, which is $\ln q_i^*(\mathbf{Z}_i)$ in~\eqref{Eq: Optimal Solution}, has the same logarithmical form as the prior distribution, the conjugate match between the prior and the posterior distributions is satisfied, and we obtain an analytically tractable solution. However, this conjugate match is not satisfied in most practical problems~\cite{Ma2011a,Fan2012,Ma2015}. This is due to the fact that the optimal solution depends on the expectation computed with respect to the other factor distributions~\cite{Bishop2006}.
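For a model where the conjugate match does hold, the update~\eqref{Eq: Optimal Solution} yields a simple coordinate-ascent scheme. The sketch below implements the textbook univariate Gaussian example with unknown mean and precision (cf.~\cite{Bishop2006}, Ch.~10); the data and the prior hyperparameters are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=500)   # synthetic observations
N, xbar = len(x), x.mean()

# Conjugate priors: mu ~ N(mu0, (lam0 * tau)^-1), tau ~ Gamma(a0, b0).
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0

E_tau = a0 / b0                                # initialization of E[tau]
for _ in range(50):
    # Update q(mu) = N(mu_N, 1 / lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # Update q(tau) = Gamma(a_N, b_N), using
    # E_mu[(x_i - mu)^2] = (x_i - mu_N)^2 + 1 / lam_N
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (np.sum((x - mu_N) ** 2) + N / lam_N
                      + lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
    E_tau = a_N / b_N

assert abs(mu_N - 2.0) < 0.1                   # posterior mean near the truth
assert abs(1 / np.sqrt(E_tau) - 0.5) < 0.1     # posterior scale near the truth
```

Each update maximizes the objective in one factor while the other is held fixed, so the iteration converges monotonically, which is exactly the convergence guarantee discussed later for VI.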
\vspace{0mm}
\subsection{Extended Variational Inference}
\vspace{0mm}
\label{Sec: EVI}
In order to satisfy the conjugate match requirement, some approximations can be applied to get a nearly optimal, analytically tractable solution. Braun et al.~\cite{Braun2010} considered the zeroth-order and first-order delta method for moments~\cite{Bickel2007} to derive an alternative objective function that simplifies the calculation. Blei et al.~\cite{Blei2006} proposed a correlated topic model (CTM) and used a first-order Taylor expansion to preserve a bound such that an intractable expectation was avoided. A similar idea was also applied in~\cite{Ma2011a,Fan2012,Taghia2014} for approximating the posterior distributions in the BMM, DMM, and VMM, respectively. Using Jensen's inequality has become commonplace in variational inference. In~\cite{Hoffman2010}, the concavity of the function $-x^{-1}$ and the convexity of $-\log x$ were studied, and Jensen's inequality and the first-order Taylor expansion were applied to approximately calculate the posterior distribution. Moreover, the EVI strategy was also applied in the low rank matrix approximation area~\cite{Ma2015}, where the Taylor expansion and Jensen's inequality were both applied for the purpose of deriving an analytically tractable solution.
\begin{table*}[!t]
\vspace{0mm}
\caption{\label{Tab: Required Conditions for EVI} \footnotesize Required conditions for EVI.}
\centering
\sps
\begin{tabular}{|c|c|c|c|}
\hline
& Auxiliary function & Form of the Auxiliary Function & Systematic Gap\\
\hline
Strong condition & ${p}(\mathbf{X},\mathbf{Z})\geq \widetilde{p}_{s}(\mathbf{X},\mathbf{Z})$ & \multirow{2}{*}{$\mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_i}}\left[\ln \widetilde{p}(\mathbf{X},\mathbf{Z})\right] \approxeq \ln {p}_i(\mathbf{Z}_i) $ $^{\dag}$} & \multirow{2}{*}{$\mathcal{G}_{\text{s}}>\mathcal{G}_{\text{w}}$}\\
\cline{1-2}
Weak condition & $\mathbf{E}_{\mathbf{Z}}\left[\ln p(\mathbf{X},\mathbf{Z})\right] \geq \mathbf{E}_{\mathbf{Z}}\left[\ln \widetilde{p}_{w}(\mathbf{X},\mathbf{Z})\right]$ & & \\
\hline
\end{tabular}\\
$^{\dag}$ ``$\approxeq$'' denotes that the two formulations at the LHS and RHS have the same mathematical form, up to a constant difference.
\vspace{-5mm}
\end{table*}
All the aforementioned works utilized the following property.
Given an auxiliary function $\widetilde{p}(\mathbf{X},\mathbf{Z})$ which satisfies
\begin{equation}
\eqs
\label{Eq: Auxiliary Function}
\mathbf{E}_{\mathbf{Z}}\left[\ln p(\mathbf{X},\mathbf{Z})\right] \geq \mathbf{E}_{\mathbf{Z}}\left[\ln \widetilde{p}(\mathbf{X},\mathbf{Z})\right],
\end{equation}
the variational objective function (see~\cite{Bishop2006}, pp. 465 for more details) can be lower-bounded as
\begin{equation}
\eqs
\begin{split}
\label{Extended FA}
\mathcal{L} =&\mathbf{E}_{\mathbf{Z}}\left[\ln p(\mathbf{X},\mathbf{Z})\right] - \mathbf{E}_{\mathbf{Z}}\left[\ln q(\mathbf{Z})\right]\\
\geq &\mathbf{E}_{\mathbf{Z}}\left[\ln \widetilde{p}(\mathbf{X},\mathbf{Z})\right] - \mathbf{E}_{\mathbf{Z}}\left[\ln q(\mathbf{Z})\right]\\
\triangleq &\mathcal{\widetilde{L}}.
\end{split}
\end{equation}
Then we can maximize $\mathcal{\widetilde{L}}$, which is a lower-bound of the original objective function $\mathcal{{L}}$, to asymptotically reach the maximum value of $\mathcal{{L}}$~\cite{Hoffman2010,Ma2011a,Fan2012}. The approximately optimal solution in this case is written as
\begin{equation}
\eqs
\label{Eq: Optimal Approximating Solution}
\ln \widetilde{q}_i^*(\mathbf{Z}_i) = \mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_i}}\left[\ln \widetilde{p}(\mathbf{X},\mathbf{Z})\right] + \text{const}.
\end{equation}
This method is the so-called EVI framework~\cite{Blei2006,Hoffman2010,Braun2010,Ma2011a,Fan2012,Taghia2014,Ma2015}. Although it introduces a systematic gap by involving the lower-bound approximation, EVI allows more flexibility when calculating intractable integrations in non-Gaussian statistical models and provides a convenient way to obtain an analytically tractable solution.
\vspace{0mm}
\section{Convergence of EVI}
\vspace{0mm}
\subsection{Weak Condition and Strong Condition}
\vspace{0mm}
\label{Sec: Required Conditions}
As mentioned in Sec.~\ref{Sec: EVI}, finding an auxiliary function $\widetilde{p}(\mathbf{X},\mathbf{Z})$ is an essential yet difficult part of EVI implementation. Generally speaking, this auxiliary function should satisfy the relation presented in~\eqref{Eq: Auxiliary Function}, or it should satisfy
\begin{equation}
\eqs
\label{Eq: Strong Condition}
{p}(\mathbf{X},\mathbf{Z})\geq \widetilde{p}(\mathbf{X},\mathbf{Z}).
\end{equation}
It is obvious that an auxiliary function that satisfies~\eqref{Eq: Strong Condition} also satisfies~\eqref{Eq: Auxiliary Function}. Hence, the condition in~\eqref{Eq: Auxiliary Function} is named the~\emph{weak condition} and the one in~\eqref{Eq: Strong Condition} is referred to as the~\emph{strong condition}. When an auxiliary function is used to lower-bound the original objective function, the EVI introduces a systematic gap. Generally speaking, the gap incurred by applying the weak condition is smaller than that introduced by using the strong condition. Fig.~\ref{Fig: WeakAndStrongConditions} illustrates the different gaps introduced by the weak and strong conditions, respectively.
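The implication from the strong condition to the weak condition can be illustrated numerically: if $\widetilde{p} \leq p$ pointwise, taking logarithms and expectations preserves the inequality. The Gaussian density and the scaled-down auxiliary function below are toy choices for the demonstration, not from the cited works:

```python
import math
import random

# Toy model: p is a standard Gaussian density (an assumption for this demo).
def p(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def p_tilde(z):          # scaled down, hence p_tilde(z) <= p(z) everywhere
    return 0.8 * p(z)

# Monte-Carlo expectation under a variational q(z) = N(0, 1).
random.seed(1)
zs = [random.gauss(0.0, 1.0) for _ in range(20_000)]
e_ln_p = sum(math.log(p(z)) for z in zs) / len(zs)
e_ln_pt = sum(math.log(p_tilde(z)) for z in zs) / len(zs)

assert e_ln_pt <= e_ln_p          # the weak condition follows
print(round(e_ln_p - e_ln_pt, 4))  # prints 0.2231, i.e. -ln(0.8)
```

Here the gap is exactly $-\ln 0.8$ because the auxiliary function differs from $p$ by a constant factor; in practice the auxiliary function is chosen so that this gap is as small as possible.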
\begin{figure}[!t]
\vspace{0mm}
\psfrag{A}[][]{\tiny $p(\mathbf{X},\mathbf{Z})$}
\psfrag{B}[][]{\tiny $\widetilde{p}_{\text{w}}(\mathbf{X},\mathbf{Z})$}
\psfrag{C}[][]{\tiny $\widetilde{p}_{\text{s}}(\mathbf{X},\mathbf{Z})$}
\psfrag{D}[][]{\tiny $\mathcal{G}_{\text{w}}$}
\psfrag{E}[][]{\tiny $S_1$}
\psfrag{F}[][]{\tiny $S_2$}
\psfrag{G}[][]{\tiny $S_3$}
\centering
\subfigure[\label{Subfig: Weak}\sps Weak condition of EVI.]{\includegraphics[width=.235\textwidth]{WeakCondition.eps}}\hspace{2mm}
\subfigure[\label{Subfig: Strong}\sps Strong condition of EVI.]{\includegraphics[width=.235\textwidth]{StrongCondition.eps}}\hspace{2mm}
\vspace{0mm}
\caption{ \label{Fig: WeakAndStrongConditions}\footnotesize Comparisons of the weak and the strong conditions of EVI. The systematic gap introduced by the weak condition can be calculated as $\mathcal{G}_{\text{w}}=S_1-(S_2+S_3)$. For either the strong or the weak condition, the auxiliary function is chosen to minimize the gap as much as possible. Generally speaking, the systematic gap $\mathcal{G}_{\text{w}}$ is smaller than $\mathcal{G}_{\text{s}}$.}
\vspace{-5mm}
\end{figure}
It is worthwhile to note that the auxiliary function $\widetilde{p}(\mathbf{X},\mathbf{Z})$ need not be a normalized probability density function (PDF)\footnotemark\footnotetext{Actually, an auxiliary function that satisfies the strong condition cannot be a normalized PDF, as ${p}(\mathbf{X},\mathbf{Z})$ itself is a normalized PDF.}. This does not affect the final solution, since either VI or EVI re-normalizes the obtained optimal posterior distribution in the end.
In practice, in addition to the above-mentioned weak or strong condition, an auxiliary function should also have a specific mathematical form, so that the optimal solution in~\eqref{Eq: Optimal Approximating Solution} has the same logarithmic form as the prior distribution and the conjugate match between the prior and the posterior distributions is satisfied. This is another required condition for choosing the auxiliary function. Table~\ref{Tab: Required Conditions for EVI} lists the required conditions for implementing EVI.
Generally speaking, it is usually not feasible to find an auxiliary function that satisfies the strong condition, unless the original function $p(\mathbf{X},\mathbf{Z})$ is globally concave in terms of $\mathbf{Z}$\ \footnotemark\footnotetext{According to our experience, global concavity holds only for the Gaussian distribution. For (most of) the non-Gaussian statistical models, the original function is not globally concave.}. Compared to the strong condition, it is relatively easy to find an auxiliary function that fulfills the weak condition, even though the original function $p(\mathbf{X},\mathbf{Z})$ might only be partially concave with respect to part of $\mathbf{Z}$~\cite{Ma2015}. For example, the multivariate log-inverse-beta (MLIB) function in the Dirichlet distribution is~\emph{not} globally concave in terms of all of its variables. It is only relatively concave~\emph{w.r.t.} one of its variables when the rest are fixed. By iteratively exploiting this property, an auxiliary function that satisfies the weak condition and the requirement on the mathematical form can be found, so that an analytically tractable solution can be derived. Moreover, the weak condition yields a smaller systematic gap. Therefore, the weak condition is preferable in practice.
In summary, in order to apply the EVI to derive an analytically tractable solution for the Bayesian estimation of non-Gaussian statistical models, an auxiliary function should 1) satisfy either the weak or the strong condition and 2) have the same mathematical form as the prior distribution (up to a constant difference).
\vspace{0mm}
\subsection{SLB Approximation and MLB Approximation}
\label{Sec: SLB vs MLB}
\vspace{0mm}
If we can find an auxiliary function $\widetilde{p}(\mathbf{X},\mathbf{Z})$ that contains all the variables $\mathbf{Z}$ and satisfies the aforementioned required conditions, the convergence of EVI is naturally guaranteed as this new objective function is convex or relatively convex in terms of $q_i(\mathbf{Z}_i)$~\cite{Bishop2006}. Since only one lower-bound approximation is applied to the original objective function, this approach is referred to as the single lower-bound (SLB) approximation and has been applied in,~\emph{e.g.},~\cite{Taghia2014,Ma2015}.
When dividing $\mathbf{Z}$ into $M$ disjoint groups as $\mathbf{Z} = \left\{\mathbf{Z}_1,\ldots,\mathbf{Z}_i,\ldots,\mathbf{Z}_M\right\}$, there might exist several auxiliary functions. For example, we could have $M$ auxiliary functions as
\begin{equation}
\eqs
\label{Eq: MLB}
\begin{split}
{p}(\mathbf{X},\mathbf{Z})\geq& \widetilde{p}_1(\mathbf{X},\mathbf{Z}_1)\\
\vdots\\
{p}(\mathbf{X},\mathbf{Z})\geq& \widetilde{p}_i(\mathbf{X},\mathbf{Z}_i)\\
\vdots\\
{p}(\mathbf{X},\mathbf{Z})\geq& \widetilde{p}_M(\mathbf{X},\mathbf{Z}_M).
\end{split}
\end{equation}
This approach is referred to as the multiple lower-bound (MLB) approximation. As each of the above-mentioned auxiliary functions satisfies the required conditions in Sec.~\ref{Sec: Required Conditions}, the optimal solution in~\eqref{Eq: Optimal Approximating Solution} becomes
\begin{equation}
\eqs
\ln \widetilde{q}_i^*(\mathbf{Z}_i) = \mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_i}}\left[\ln \widetilde{p}_i(\mathbf{X},\mathbf{Z}_i)\right] + \text{const}.
\end{equation}
In this case, the new objective function that is maximized during each iteration is~\emph{not unique}. Hence,~\emph{there is no global objective function that is maximized during each iteration}, and the convergence cannot be theoretically guaranteed. Such a procedure has been applied in~\cite{Ma2011a} and~\cite{Fan2012}. Although it is not guaranteed theoretically, convergence was observed empirically.
Let us study a simple case with two disjoint groups in the MLB approximation. Assume that $\mathbf{Z} = \{\mathbf{Z}_1,\mathbf{Z}_2\}$ and that we have two auxiliary functions $\widetilde{p}_1(\mathbf{X},\mathbf{Z}_1)$ and $\widetilde{p}_2(\mathbf{X},\mathbf{Z}_2)$ for $\mathbf{Z}_1$ and $\mathbf{Z}_2$, respectively.
As mentioned above, two different lower-bounds are obtained as
\begin{equation}
\eqs
\label{Eq: Multiple Lower bounds}
\begin{split}
\mathcal{\widetilde{L}}_1=& \mathbf{E}_{\mathbf{Z}}\left[{\ln \widetilde{p}_1(\mathbf{X},\mathbf{Z}_1)} - \ln q(\mathbf{Z})\right]\\
\mathcal{\widetilde{L}}_2=& \mathbf{E}_{\mathbf{Z}}\left[{\ln \widetilde{p}_2(\mathbf{X},\mathbf{Z}_2)} - \ln q(\mathbf{Z})\right].
\end{split}
\end{equation}
If we maximize each lower-bound separately, the optimal solutions to these two disjoint groups are
\begin{subequations}
\eqs
\label{Eq: Optimal Solutions with Seperate Lower bounds}
\begin{align}
\ln \widetilde{q}_1^*(\mathbf{Z}_1) =& \mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_1}}\left[\ln \widetilde{p}_1(\mathbf{X},\mathbf{Z})\right] + \text{const}\label{Eq: Optimal Solution 1 with Seperate Lower bounds}\\
\ln \widetilde{q}_2^*(\mathbf{Z}_2) =& \mathbf{E}_{\mathbf{Z}\backslash_{\mathbf{Z}_2}}\left[\ln \widetilde{p}_2(\mathbf{X},\mathbf{Z})\right] + \text{const}\label{Eq: Optimal Solution 2 with Seperate Lower bounds}.
\end{align}
\end{subequations}
With these solutions, it looks like what we are maximizing is just twice the original lower-bound, as
\begin{subequations}
\eqs
\begin{align}
2\times\mathcal{L}\geq& \mathcal{\widetilde{L}}_1+\mathcal{\widetilde{L}}_2\label{Eq: Overall Lower Bound} \\
=& \mathbf{E}_{\mathbf{Z}}\left[{\ln \widetilde{p}_1(\mathbf{X},\mathbf{Z}_1)}\right] - \mathbf{E}_{\mathbf{Z}}\left[\ln q(\mathbf{Z})\right]\label{Eq: Lower bound 1}\\
+&\mathbf{E}_{\mathbf{Z}}\left[{\ln \widetilde{p}_2(\mathbf{X},\mathbf{Z}_2)}\right] - \mathbf{E}_{\mathbf{Z}}\left[\ln q(\mathbf{Z})\right]\label{Eq: Lower bound 2}.
\end{align}
\end{subequations}
When performing the update in~\eqref{Eq: Optimal Solution 1 with Seperate Lower bounds}, the term~\eqref{Eq: Lower bound 1} is maximized. This maximization makes the distribution of $\mathbf{Z}_1$ less uncertain. As $- \mathbf{E}_{\mathbf{Z}}\left[\ln q(\mathbf{Z})\right]$ in~\eqref{Eq: Lower bound 2} is the differential entropy of $\mathbf{Z}$,~\eqref{Eq: Lower bound 2} may decrease while~\eqref{Eq: Lower bound 1} is being maximized. It is hard to evaluate whether~\eqref{Eq: Lower bound 1} changes more than~\eqref{Eq: Lower bound 2} or not. Thus, the overall lower-bound,~\emph{i.e.}, $\mathcal{\widetilde{L}}_1+\mathcal{\widetilde{L}}_2$ in~\eqref{Eq: Overall Lower Bound}, might decrease during some iterations. On the one hand, as the lower-bound (\emph{i.e.},~$\mathcal{\widetilde{L}}_1+\mathcal{\widetilde{L}}_2$) of the original objective function cannot be guaranteed to increase all the time, this strategy may not promise convergence. On the other hand, if the change in~\eqref{Eq: Lower bound 1} is larger than that in~\eqref{Eq: Lower bound 2}, convergence is still guaranteed. There is no general criterion for convergence; it should be studied case by case. Similar arguments apply to the case with more than two auxiliary functions. Thus, the convergence of the MLB approximation is underdetermined.
\begin{figure}[!t]
\vspace{0mm}
\centering
\includegraphics[width=.45\textwidth]{SLBvsMLB.eps}
\caption{ \label{Fig: SLBvsMLB}\footnotesize Qualitative comparisons of SLB and MLB. For MLB, two different lower-bounds are introduced for $\mathbf{Z}_1$ and $\mathbf{Z}_2$, respectively (the blue dash lines). For SLB, there is only one lower-bound (the green dash line). The original objective function is marked with red solid line. It can be observed that the new objective function that needs to be maximized is not unique for the MLB case. Hence, the convergence is not guaranteed. A new single objective function is employed and maximized for the SLB case. Therefore, the convergence is theoretically guaranteed.}
\vspace{0mm}
\end{figure}
In summary, the SLB approximation can theoretically guarantee convergence, while the MLB approximation, in general, cannot promise convergence.\footnotemark\footnotetext{In practice (\emph{e.g.},~\cite{Ma2011a,Fan2012}), the EVI-based algorithm may also converge with the MLB approximation. However, this is an empirical result without proof.}
\vspace{0mm}
\section{Experimental Results and Discussions}
\vspace{0mm}
Recently, several EVI-based parameter estimation algorithms for non-Gaussian statistical models have been proposed. Among others, the EVI-based Bayesian BMM~\cite{Ma2011a} and the EVI-based Bayesian DMM~\cite{Fan2012} adopted the strong condition, and the related analytically tractable solutions were derived with the MLB approximation. With the SLB approximation, an improved EVI-based Dirichlet mixture model was proposed in~\cite{Ma2014}. Regarding non-negative matrix factorization, Hoffman~\emph{et al.} and Ma~\emph{et al.} proposed EVI-based strategies for musical signals~\cite{Hoffman2010} and bounded support data~\cite{Ma2015}, respectively. The EVI-based von Mises-Fisher mixture model was proposed in~\cite{Taghia2014}, where a structural factorization was considered. All the aforementioned SLB approximation-based methods fulfill the weak condition.
In this section, we first compare the weak and strong conditions quantitatively. Second, we extensively compare the performance of the MLB approximation-based methods with that of the SLB approximation-based methods.
\subsection{Comparisons of Weak and Strong Conditions}
Since the Dirichlet distribution is a multivariate extension of the beta distribution, an EVI-based Bayesian BMM that constructs an auxiliary function under the weak condition can be obtained from the work in~\cite{Ma2014} by simply setting the dimension $K=2$. The EVI-based Bayesian BMM proposed in~\cite{Ma2011a} utilized the strong condition to choose the auxiliary function. We compare these two methods to demonstrate the differences between the strong and weak conditions.
Following the notation in~\cite{Ma2011a}, we denote a multivariate Bayesian BMM with observation data $\mathbf{X}$ as
\begin{equation}
\label{BMMVector}
\eqs
\begin{split}
f(\mathbf{X};\mathbf{\Pi},\mathbf{U},\mathbf{V}) =& \prod_{n=1}^N \sum_{i=1}^{I}\pi_i \mathrm{Beta}({x}_n;{u}_i,{v}_i),
\end{split}
\end{equation}
where $\pi_i$ is the mixture weight of the $i$th mixture component and $\mathrm{Beta}(x;u,v)$ is the beta distribution, defined as
\begin{equation}
\label{BetaDistr}
\eqs
\mathrm{Beta}(x;u,v) = \frac
{\Gamma(u+v)}{\Gamma(u)\Gamma(v)} x^{u-1}
(1-x)^{v-1},\ u,v >0.
\end{equation}
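As a sanity check, the beta density above can be evaluated with the standard-library gamma function and verified to integrate to one over $(0,1)$; the parameter values below are arbitrary choices for the demonstration:

```python
import math

# The beta density as defined above, using the stdlib gamma function.
def beta_pdf(x, u, v):
    coef = math.gamma(u + v) / (math.gamma(u) * math.gamma(v))
    return coef * x ** (u - 1) * (1.0 - x) ** (v - 1)

# Midpoint-rule integration of the density over (0, 1).
u, v = 2.0, 5.0        # arbitrary shape parameters, both > 1
n = 200_000
h = 1.0 / n
area = sum(beta_pdf((k + 0.5) * h, u, v) for k in range(n)) * h
assert abs(area - 1.0) < 1e-4   # normalization holds
```

The midpoint rule is used so that the endpoints $x=0$ and $x=1$ are never evaluated, which keeps the sketch valid even for shape parameters where the density diverges at the boundary.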
We consider the observation ${x}_n$ and the unobserved indication vector $\mathbf{z}_{n}$ as the \emph{complete} data. The conditional
distribution of $\mathbf{X}= \{{x}_1, \ldots, {x}_N\}$ and $\mathbf{Z}= \{\mathbf{z}_1, \ldots, \mathbf{z}_N\}$
given the latent variables $\{\mathbf{U},\mathbf{V},\mathbf{\Pi}\}$ is
\begin{equation}
\label{Conditional PDF}
\eqs
\begin{split}
f(\mathbf{X},\mathbf{Z}|\mathbf{U},\mathbf{V},\mathbf{\Pi})=&f(\mathbf{X}|\mathbf{U},\mathbf{V},\mathbf{\Pi},\mathbf{Z})f(\mathbf{Z}|\mathbf{\Pi})\\
=&f(\mathbf{X}|\mathbf{U},\mathbf{V},\mathbf{Z})f(\mathbf{Z}|\mathbf{\Pi})\\
=&\prod_{n=1}^N\prod_{i=1}^I\left[\pi_i\mathrm{Beta}({x}_n|{u}_i,{v}_i)\right]^{z_{ni}}.
\end{split}
\end{equation}
The ultimate goal is to estimate the posterior distributions of $u_{i}$, $v_{i}$, and $z_{ni}$, respectively.
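The complete-data likelihood above can be sketched directly in code, with the hard assignments $z_n$ stored as component indices. The helper below is a hypothetical illustration (not from the cited implementations); for $\mathrm{Beta}(x;1,1)$, which is uniform on $(0,1)$, each term reduces to $\ln \pi_{z_n}$, giving an easy correctness check:

```python
import math

# Log of the beta density via lgamma, for numerical stability.
def log_beta_pdf(x, u, v):
    return (math.lgamma(u + v) - math.lgamma(u) - math.lgamma(v)
            + (u - 1.0) * math.log(x) + (v - 1.0) * math.log(1.0 - x))

# Complete-data log-likelihood ln f(X, Z | U, V, Pi) with hard assignments.
def complete_data_loglik(xs, zs, pis, us, vs):
    return sum(math.log(pis[z]) + log_beta_pdf(x, us[z], vs[z])
               for x, z in zip(xs, zs))

# Beta(x; 1, 1) is uniform, so each term is just ln(pi_z).
ll = complete_data_loglik([0.5, 0.2], [0, 1], [0.3, 0.7], [1.0, 1.0], [1.0, 1.0])
assert abs(ll - (math.log(0.3) + math.log(0.7))) < 1e-12
```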
In order to derive an analytically tractable solution for the posterior distributions, the most challenging part within the EVI framework is to calculate the expectation of the bivariate log-inverse-beta (LIB) function
\begin{equation}
\eqs
\mathbf{E}_{u_{i},v_{i}}\left[\mathrm{LIB}(u_{i},v_{i})\right]= \mathbf{E}_{u_{i},v_{i}}\left[\ln\frac{\Gamma(u_{i}+v_{i})}{\Gamma(u_{i})\Gamma(v_{i})}\right].
\end{equation}
\subsubsection{EVI-based Bayesian BMM with Weak Condition~\cite{Ma2014}}
In the Bayesian BMM with SLB approximation~\footnotemark\footnotetext{A Bayesian BMM with SLB approximation can be derived from the Bayesian DMM with SLB approximation~\cite{Ma2014} by setting the dimensions of the Dirichlet variable equal to two.}, the new objective function that we are maximizing is
\begin{equation}
\eqs
\label{Eq: EFA-SLB}
\begin{split}
&\mathbf{E}_{\mathbf{Z}}\left[\ln \widetilde{p}_{w}(\mathbf{X},\mathbf{Z})\right]\\
=&\mathcal{\widetilde{L}}_{\text{SLB}}\\
=&\ln \frac{\Gamma(\overline{u}_{i}+\overline{v}_{i})}{\Gamma(\overline{u}_{i})\Gamma(\overline{v}_{i})}\\ &+\overline{u}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{u}_{i})\right] (\mathbf{E}\left[\ln u_{i}\right]-\ln \overline{u}_{i})\\
&+ \overline{v}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{v}_{i})\right] (\mathbf{E}\left[\ln v_{i}\right]-\ln \overline{v}_{i}),
\end{split}
\end{equation}
where $\overline{x}$ is the expected value of $x$ and $\psi(x)$ is the digamma function defined as $\psi(x)=\frac{\partial \ln \Gamma(x)}{\partial x}$.
This lower-bound satisfies the weak condition, such that $\mathbf{E}_{\mathbf{Z}}\left[\ln p(\mathbf{X},\mathbf{Z})\right] \geq \mathbf{E}_{\mathbf{Z}}\left[\ln \widetilde{p}_{w}(\mathbf{X},\mathbf{Z})\right]$. Moreover, this lower-bound is identical for all the variables $u_{i}$, $v_{i}$, and $z_{ni}$.
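The weak condition satisfied by this bound can be verified numerically. The sketch below assumes gamma posteriors on $u_i$ and $v_i$ (for which $\mathbf{E}[\ln u]$ has a closed form) and approximates the digamma function by a finite difference of the standard-library \texttt{lgamma}; the bound should not exceed a Monte-Carlo estimate of $\mathbf{E}[\mathrm{LIB}(u_i,v_i)]$:

```python
import math
import random

def psi(x, h=1e-5):                      # digamma via a central finite difference
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def lib(u, v):                           # LIB(u, v) = ln Gamma(u+v)/(Gamma(u)Gamma(v))
    return math.lgamma(u + v) - math.lgamma(u) - math.lgamma(v)

# Hypothetical gamma posteriors: u ~ Gamma(au, rate bu), v ~ Gamma(av, rate bv).
au, bu, av, bv = 6.0, 2.0, 9.0, 3.0
u_bar, v_bar = au / bu, av / bv          # posterior means
e_ln_u = psi(au) - math.log(bu)          # closed form of E[ln u] for a gamma
e_ln_v = psi(av) - math.log(bv)

# The lower bound stated above, evaluated at these moments.
slb = (lib(u_bar, v_bar)
       + u_bar * (psi(u_bar + v_bar) - psi(u_bar)) * (e_ln_u - math.log(u_bar))
       + v_bar * (psi(u_bar + v_bar) - psi(v_bar)) * (e_ln_v - math.log(v_bar)))

# Monte-Carlo estimate of E[LIB(u, v)] (gammavariate takes a *scale* parameter).
random.seed(2)
samples = [lib(random.gammavariate(au, 1.0 / bu), random.gammavariate(av, 1.0 / bv))
           for _ in range(100_000)]
e_lib = sum(samples) / len(samples)

assert slb <= e_lib + 1e-3               # weak condition: bound <= E[LIB]
```

With these (arbitrary) posterior parameters the gap between the bound and the Monte-Carlo estimate is clearly positive, consistent with the weak condition.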
\subsubsection{EVI-based Bayesian BMM with Strong Condition~\cite{Ma2011a}}
For the case with strong condition, an auxiliary function $\widetilde{p}_{s}(\mathbf{X},\mathbf{Z})$ is required. In~\cite{Ma2011a}, three different auxiliary functions were derived for the variables $u_{i}$, $v_{i}$, and $z_{ni}$, respectively. To specify, for $u_{i}$, the auxiliary function is
\begin{equation}
\eqs
\label{Eq: StrongAuxiliary1}
\begin{split}
\widetilde{p}_{s_{u_i}}(\mathbf{X},\mathbf{Z})=&\ln \frac{\Gamma(\overline{u}_i+\overline{v}_i)}{\Gamma(\overline{u}_i)\Gamma(\overline{v}_i)}\\
&+ \overline{u}_i\left[\psi(\overline{u}_i+\overline{v}_i)-\psi(\overline{u}_i)\right](\ln u_i-\ln \overline{u}_i)\\
&+ \overline{v}_i\left[\psi(\overline{u}_i+\overline{v}_i)-\psi(\overline{v}_i)\right](\ln v_i-\ln \overline{v}_i)\\
&+ \overline{u}_i\overline{v}_i\psi^{'}(\overline{u}_i+\overline{v}_i)(\ln u_i - \ln \overline{u}_i),
\end{split}
\end{equation}
where $\psi^{'}(x)=\frac{\partial \psi(x)}{\partial x}$. Hence, when considering $u_{i}$ as the variable, the objective function to be maximized is~\cite{Ma2011a}
\begin{equation}
\eqs
\label{Eq: EFA-MLB-1}
\begin{split}
\mathcal{\widetilde{L}}_{\text{MLB}_{u_i}}
=&\mathbf{E}_{\mathbf{Z}}\left[\widetilde{p}_{s_{u_i}}(\mathbf{X},\mathbf{Z})\right]\\
=&\ln \frac{\Gamma(\overline{u}_{i}+\overline{v}_{i})}{\Gamma(\overline{u}_{i})\Gamma(\overline{v}_{i})}\\ &+\overline{u}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{u}_{i})\right] (\mathbf{E}\left[\ln u_{i}\right]-\ln \overline{u}_{i})\\
&+ \overline{v}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{v}_{i})\right] (\mathbf{E}\left[\ln v_{i}\right]-\ln \overline{v}_{i})\\
&+ \overline{u}_{i}\cdot\overline{v}_{i}\cdot\psi^{'}(\overline{u}_{i}+\overline{v}_{i})(\mathbf{E}\left[\ln u_{i}\right] - \ln \overline{u}_{i}).
\end{split}
\end{equation}
Similarly, due to the symmetry of $u_i$ and $v_i$, the objective function, when treating $v_i$ as the variable, is~\cite{Ma2011a}
\begin{equation}
\eqs
\label{Eq: EFA-MLB-2}
\begin{split}
\mathcal{\widetilde{L}}_{\text{MLB}_{v_i}}
=&\ln \frac{\Gamma(\overline{u}_{i}+\overline{v}_{i})}{\Gamma(\overline{u}_{i})\Gamma(\overline{v}_{i})}\\ &+\overline{u}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{u}_{i})\right] (\mathbf{E}\left[\ln u_{i}\right]-\ln \overline{u}_{i})\\
&+ \overline{v}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{v}_{i})\right] (\mathbf{E}\left[\ln v_{i}\right]-\ln \overline{v}_{i})\\
&+ \overline{u}_{i}\cdot\overline{v}_{i}\cdot\psi^{'}(\overline{u}_{i}+\overline{v}_{i})(\mathbf{E}\left[\ln v_{i}\right] - \ln \overline{v}_{i}).
\end{split}
\end{equation}
When taking $z_{ni}$ as the only variable, the auxiliary function proposed in~\cite{Ma2011a} is
\begin{equation}
\eqs
\label{Eq: EFA-MLB-3}
\begin{split}
&\widetilde{p}_{s_{z_{ni}}}(\mathbf{X},\mathbf{Z})\\
=&\ln \frac{\Gamma(\overline{u}_{i}+\overline{v}_{i})}{\Gamma(\overline{u}_{i})\Gamma(\overline{v}_{i})}\\ &+\overline{u}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{u}_{i})\right] (\ln u_{i}-\ln \overline{u}_{i})\\
&+ \overline{v}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{v}_{i})\right] (\ln v_{i}-\ln \overline{v}_{i})\\
&+ 0.5\cdot \overline{u}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{u}_{i})\right](\ln u_{i} - \ln \overline{u}_{i})^2\\
&+ 0.5\cdot \overline{v}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{v}_{i})\right](\ln v_{i} - \ln \overline{v}_{i})^2\\
&+ \overline{u}_{i}\cdot\overline{v}_{i}\cdot\psi^{'}(\overline{u}_{i}+\overline{v}_{i})(\ln u_{i} - \ln \overline{u}_{i})(\ln v_{i} - \ln \overline{v}_{i}).
\end{split}
\end{equation}
Correspondingly, the objective function for updating the posterior distribution of $z_{ni}$ can be represented as
\begin{equation}
\eqs
\label{Eq: EFA-MLB}
\begin{split}
&\mathcal{\widetilde{L}}_{\text{MLB}_{z_{ni}}}\\
=&\mathbf{E}_{\mathbf{Z}}\left[\widetilde{p}_{s_{z_{ni}}}(\mathbf{X},\mathbf{Z})\right]\\
=&\ln \frac{\Gamma(\overline{u}_{i}+\overline{v}_{i})}{\Gamma(\overline{u}_{i})\Gamma(\overline{v}_{i})}\\ &+\overline{u}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{u}_{i})\right] (\mathbf{E}\left[\ln u_{i}\right]-\ln \overline{u}_{i})\\
& + \overline{v}_{i}\left[\psi(\overline{u}_{i}+\overline{v}_{i})-\psi(\overline{v}_{i})\right] (\mathbf{E}\left[\ln v_{i}\right]-\ln \overline{v}_{i})\\
&+ 0.5\cdot \overline{u}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{u}_{i})\right]\mathbf{E}\left[(\ln u_{i} - \ln \overline{u}_{i})^2\right]\\
&+ 0.5\cdot \overline{v}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{v}_{i})\right]\mathbf{E}\left[(\ln v_{i} - \ln \overline{v}_{i})^2\right]\\
&+ \overline{u}_{i}\cdot\overline{v}_{i}\cdot\psi^{'}(\overline{u}_{i}+\overline{v}_{i})(\mathbf{E}\left[\ln u_{i}\right] - \ln \overline{u}_{i})(\mathbf{E}\left[\ln v_{i}\right] - \ln \overline{v}_{i}).
\end{split}
\end{equation}
It has been analyzed in Sec.~\ref{Sec: Required Conditions} that both the strong condition and the weak condition incur systematic gaps. We now quantitatively compare the gaps. It is worth noting that the EVI-based Bayesian BMM with the strong condition is also an MLB approximation. We focus only on the comparison of the weak and strong conditions in this section. The comparison of the SLB approximation with the MLB approximation will be presented in the next section.
When taking $u_i$ as the variable, the difference between the objective functions obtained via the weak and strong conditions, respectively, can be calculated as
\begin{equation}
\eqs
\label{Eq: Lower bound Difference-1}
\begin{split}
\Delta\mathcal{\widetilde{L}}_{\text{SLB}\ \text{vs.}\ \text{MLB}_{u_i}}=&\mathcal{\widetilde{L}}_{\text{SLB}} - \mathcal{\widetilde{L}}_{\text{MLB}_{u_i}}\\
=& -\bar{u}_{i}\bar{v}_{i}\psi'(\bar{u}_{i}+\bar{v}_{i})(\mathbf{E}\left[\ln v_{i}\right] - \ln \bar{v}_{i})\\
\geq & 0,
\end{split}
\end{equation}
where we used the facts that $\psi^{'}(x)>0$ and that $\ln x$ is a concave function of $x$, so that $\mathbf{E}\left[\ln v_{i}\right] \leq \ln \overline{v}_{i}$ by Jensen's inequality. For $v_i$, it is straightforward to show that the difference is also non-negative by using the symmetric properties.
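The sign of this difference can be confirmed numerically; it relies only on the trigamma function being positive and on Jensen's inequality $\mathbf{E}[\ln v_i]\le\ln\overline{v}_i$. The gamma posterior on $v_i$ and the value of $\overline{u}_i$ below are hypothetical choices for the demonstration, with digamma and trigamma approximated by finite differences of the standard-library \texttt{lgamma}:

```python
import math

def psi(x, h=1e-5):                      # digamma via a central finite difference
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def psi1(x, h=1e-4):                     # trigamma via a second-order difference
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x) + math.lgamma(x - h)) / (h * h)

u_bar = 3.0                              # hypothetical posterior mean of u_i
av, bv = 9.0, 3.0                        # hypothetical v_i ~ Gamma(av, rate bv)
v_bar = av / bv
e_ln_v = psi(av) - math.log(bv)          # closed form of E[ln v] for a gamma

assert psi1(u_bar + v_bar) > 0.0         # trigamma is positive
assert e_ln_v <= math.log(v_bar)         # Jensen's inequality for the concave ln

delta = -u_bar * v_bar * psi1(u_bar + v_bar) * (e_ln_v - math.log(v_bar))
assert delta >= 0.0                      # the difference is non-negative
```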
When comparing $\mathcal{\widetilde{L}}_{\text{SLB}} $ with $\mathcal{\widetilde{L}}_{\text{MLB}_{z_{ni}}}$, the difference is
\begin{equation}
\eqs
\label{Eq: Lower bound Difference-3}
\begin{split}
&\Delta\mathcal{\widetilde{L}}_{\text{SLB}\ \text{vs.}\ \text{MLB}_{z_{ni}}}\\
=&\mathcal{\widetilde{L}}_{\text{SLB}} - \mathcal{\widetilde{L}}_{\text{MLB}_{z_{ni}}}\\
=&-\left\{0.5\cdot \overline{u}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{u}_{i})\right]\mathbf{E}\left[(\ln u_{i} - \ln \overline{u}_{i})^2\right]\right.\\
&+ 0.5\cdot \overline{v}_{i}^2\left[\psi^{'}(\overline{u}_{i}+\overline{v}_{i})-\psi^{'}(\overline{v}_{i})\right]\mathbf{E}\left[(\ln v_{i} - \ln \overline{v}_{i})^2\right]\\
&+ \left.\overline{u}_{i}\cdot\overline{v}_{i}\cdot\psi^{'}(\overline{u}_{i}+\overline{v}_{i})(\mathbf{E}\left[\ln u_{i}\right] - \ln \overline{u}_{i})(\mathbf{E}\left[\ln v_{i}\right] - \ln \overline{v}_{i})\right\}.
\end{split}
\end{equation}
It can be proved that the difference $\Delta\mathcal{\widetilde{L}}_{\text{SLB}\ \text{vs.}\ \text{MLB}_{z_{ni}}}$ is also greater than or equal to $0$. More details of this proof can be found in Appendix~\ref{Appendix-1}.
The three non-negative differences above indicate that the new objective function under the weak condition~\cite{Ma2014} is tighter (\emph{i.e.}, closer to the original objective function) than that under the strong condition~\cite{Ma2011a}. Thus, for the EVI-based Bayesian BMM, the systematic gap incurred by the weak condition is smaller than that incurred by the strong condition. This makes the weak condition more favorable in practice~\cite{Ma2014,Taghia2014,Ma2015,Hoffman2010}.
A similar analysis can be applied to the Bayesian DMM with MLB~\cite{Fan2012} and the Bayesian DMM with SLB~\cite{Ma2014}, as the Dirichlet distribution is a multivariate extension of the beta distribution.
\subsection{Comparisons of MLB and SLB Approximations}
In the previous section, we analyzed and compared the weak and strong conditions for the EVI framework. Another important issue in EVI implementation is to distinguish between the MLB and SLB approximations, as the latter can guarantee convergence while the former may not. To this end, we compare the MLB approximation-based algorithms with the SLB approximation-based algorithms in this section.
\subsubsection{Observations of Non-convergence}
As discussed in Sec.~\ref{Sec: SLB vs MLB}, the convergence of the MLB method is not guaranteed. We ran the MLB approximation-based Bayesian BMM algorithm~\cite{Ma2011a} and Bayesian DMM algorithm~\cite{Fan2012}, respectively, and monitored the value of the objective function at each iteration. It can be observed that, in some rounds of simulation~\footnotemark\footnotetext{Here, one simulation round means that we ran the estimation algorithm until it stopped according to some criterion.}, the objective function~\emph{decreases} during some iterations. This phenomenon was observed several times, both for the BMM and for the DMM. Figure~\ref{Fig: NonConvergence} shows the decreasing objective function values and the corresponding iterations. For the SLB approximation-based Bayesian BMM and Bayesian DMM~\cite{Ma2014}, the monitored objective function was always increasing until convergence. The observation of non-convergence demonstrates that convergence with the MLB approximation is underdetermined.
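The monitoring just described can be sketched as a small helper that flags the iterations at which a recorded objective trace decreases. The helper and the two traces below are made-up illustrations, not measured values from the cited implementations:

```python
# Flag the 1-based iteration indices where an objective trace went down.
def decreasing_iterations(trace, tol=1e-10):
    return [i for i in range(1, len(trace))
            if trace[i] < trace[i - 1] - tol]

# Hypothetical traces: SLB is monotone, MLB dips at some iterations.
slb_trace = [-310.2, -305.8, -301.1, -300.4, -300.4]
mlb_trace = [-310.2, -304.9, -306.3, -301.7, -301.9]

assert decreasing_iterations(slb_trace) == []        # monotone, as expected
assert decreasing_iterations(mlb_trace) == [2, 4]    # non-monotone iterations
```

A small tolerance is used so that numerically identical plateau values are not flagged as decreases.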
\begin{figure}[!t]
\vspace{0mm}
\psfrag{x}[][]{\tiny ${\text{Iter.}}\ \sharp$}
\psfrag{y}[][]{\tiny $\mathbf{E}_{\mathbf{Z}}\left[\ln{ p(\mathbf{X},\mathbf{Z})}-\ln{q(\mathbf{Z})}\right]$}
\centering
\subfigure[\sps Model A]{\includegraphics[width=.235\textwidth]{NonConvergence1.eps}}\hspace{2mm}
\subfigure[\sps Model B]{\includegraphics[width=.235\textwidth]{NonConvergence2.eps}}\hspace{2mm}
\vspace{0mm}
\caption{ \label{Fig: NonConvergence}\footnotesize Observation of decreasing values of the objective function during iterations. In principle, objective function should always increase (at least not decrease). Although this non-convergence can be observed in some of the simulation rounds ($2\sim3$ times out of $10$ rounds of simulations), this fact indicates that the MLB approximation-based method may not promise convergence. Model A is a BMM with parameter $\pi_1=0.3,\pi_2=0.7,\mathbf{u}_1=[2\ 8]^{\text{T}},\mathbf{u}_2=[15\ 4]^{\text{T}}$ and model B is a three-dimensional DMM with parameter $\pi_1=0.35,\pi_2=0.65,\mathbf{u}_1=[4\ 12\ 3]^{\text{T}}, \mathbf{u}_2=[10\ 6\ 2]^{\text{T}}$. $400$ samples were generated from each model.}
\vspace{0mm}
\end{figure}
\subsubsection{Comparisons of Estimation Accuracy}
In this section, we compare the MLB approximation with the SLB approximation quantitatively. From a known BMM or DMM, $2,000$ samples were generated, respectively. The above-mentioned Bayesian estimation algorithms were then applied to estimate the posterior distributions. We calculated the original variational objective function in~\eqref{Eq: Lowerbound} to examine which approximation is better. With the obtained posterior distribution $q^*(\mathbf{Z})$, the original variational objective function was calculated numerically by a sampling method. Hence, we obtained two values, ${\mathcal{{L}}_{\text{SLB}}}$ and $\mathcal{{L}}_{\text{MLB}}$, from the SLB and MLB approximations, respectively. A larger value indicates a tighter lower-bound approximation. In addition, we also measure the estimation accuracy by the KL divergence of the estimated PDF from the true one as
$\text{KL}(p(\mathbf{X}|\boldsymbol\Theta)\|p(\mathbf{X}|\boldsymbol{\widehat{\Theta}}))$,
where $\boldsymbol\Theta$ is the true parameter vector and $\boldsymbol{\widehat{\Theta}}$ is the estimated one. Similarly, we numerically calculated $\text{KL}_{\text{SLB}}$ and $\text{KL}_{\text{MLB}}$ from the SLB and MLB approximations~\footnotemark\footnotetext{For the MLB approximation, we only take those simulation rounds that always converge into consideration.}, respectively. The smaller the KL divergence is, the more accurate the estimation is.
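The sampling-based KL measure can be sketched as follows for two-component beta mixtures, using $\text{KL}(p\|q)=\mathbf{E}_{x\sim p}[\ln p(x)-\ln q(x)]$. The true model below matches model A from the tables above, while the "estimated" parameters are hypothetical perturbations chosen for the demonstration:

```python
import math
import random

def log_beta_pdf(x, u, v):
    return (math.lgamma(u + v) - math.lgamma(u) - math.lgamma(v)
            + (u - 1.0) * math.log(x) + (v - 1.0) * math.log(1.0 - x))

def log_bmm_pdf(x, pis, us, vs):
    return math.log(sum(pi * math.exp(log_beta_pdf(x, u, v))
                        for pi, u, v in zip(pis, us, vs)))

def sample_bmm(pis, us, vs, n, rng):
    out = []
    for _ in range(n):
        i = 0 if rng.random() < pis[0] else 1   # two components assumed
        out.append(rng.betavariate(us[i], vs[i]))
    return out

# Monte-Carlo KL(p || q) with samples drawn from p.
def mc_kl(p, q, n=50_000, seed=3):
    rng = random.Random(seed)
    xs = sample_bmm(*p, n, rng)
    return sum(log_bmm_pdf(x, *p) - log_bmm_pdf(x, *q) for x in xs) / n

true_model = ([0.3, 0.7], [2.0, 15.0], [8.0, 4.0])   # model A from the paper
est_model = ([0.32, 0.68], [2.2, 14.0], [7.5, 4.2])  # hypothetical estimate

assert abs(mc_kl(true_model, true_model)) < 1e-9     # KL of a model from itself
assert mc_kl(true_model, est_model) > 0.0            # KL to a perturbed model
```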
For the Bayesian BMM, the comparisons are presented in Table~\ref{Tab: BMM} and Figure~\ref{Fig: BMM}. The simulations were run for $20$ rounds and the mean values are reported. The comparisons of the Bayesian DMM via the SLB~\cite{Ma2014} and MLB~\cite{Fan2012} approximations are illustrated in Table~\ref{Tab: DMM} and Figure~\ref{Fig: DMM}. It can be observed that, for both the Bayesian BMM and the Bayesian DMM, the SLB approximation yields a higher objective function value than the MLB approximation. Meanwhile, the KL divergences obtained by the SLB approximation are all smaller than those obtained by the MLB approximation. These facts demonstrate that the SLB approximation is superior to the MLB approximation.
\begin{table}[!t]
\vspace{0mm}
\caption{\label{Tab: BMM} \footnotesize Comparisons of the objective functions for Bayesian BMM. }
\centering
\sps
\begin{tabular}{|@{}c@{}|c|c|c|}
\hline
\ Model\ \ &Parameters & \ $\mathcal{L}_{\text{SLB}}- \mathcal{L}_{\text{MLB}}$\ &\ $\text{KL}_{\text{SLB}}-\text{KL}_{\text{MLB}}$\ \\[.1mm]
\hline
\hline
\multirow{2}{*}{A}&$\pi_1 = 0.3, u_1 = 2,v_1 = 8$ & \multirow{2}{*}{$3.6\times 10^{-3}$}& \multirow{2}{*}{$-2.8\times 10^{-3}$} \\[.1mm]
&$\pi_2 = 0.7, u_2 = 15,v_2 = 4$ & & \\[.1mm]
\hline
\multirow{3}{*}{B}&$\pi_1 = 0.3, u_1 = 10,v_1 = 2$ & \multirow{3}{*}{$1.3\times 10^{-3}$} & \multirow{3}{*}{$-0.58\times 10^{-3}$}\\[.1mm]
&$\pi_2 = 0.4, u_2 = 2,v_2 = 12$ & &\\[.1mm]
&$\pi_3 = 0.3, u_3 = 10,v_3 = 10$ & &\\[.1mm]
\hline\end{tabular}\\
\end{table}
\begin{figure}[!t]
\vspace{0mm}
\psfrag{x}[][]{\tiny ${\text{SLB}}$}
\psfrag{y}[][]{\tiny ${\text{MLB}}$}
\centering
\subfigure[\label{Subfig: AN10}\sps Model A]{\includegraphics[width=.235\textwidth]{VBBMMCompModelA.eps}}\hspace{2mm}
\subfigure[\sps Model B]{\includegraphics[width=.235\textwidth]{VBBMMCompModelB.eps}}\hspace{2mm}
\vspace{0mm}
\caption{ \label{Fig: BMM}\footnotesize Comparisons of the original objective functions in Bayesian BMM. The central mark is the median and the edges are the $25^{\text{th}}$ and $75^{\text{th}}$ percentiles; outliers are marked individually. Model settings are the same as in Table~\ref{Tab: BMM}.}
\vspace{0mm}
\end{figure}
\vspace{0mm}
\section{Conclusions}
\vspace{0mm}
The extended variational inference (EVI) framework can be applied to estimate non-Gaussian statistical models efficiently. We discussed and summarized the conditions required for selecting the auxiliary functions in the EVI framework. Moreover, we analyzed and compared the single lower-bound (SLB) approximation and the multiple lower-bounds (MLB) approximation. Theoretical analysis showed that the weak condition, in general, incurs a smaller systematic gap than the strong condition; hence, the weak condition is preferable in practice. Furthermore, quantitative evaluations based on the Bayesian beta mixture model and the Bayesian Dirichlet mixture model demonstrated that the SLB approximation theoretically guarantees convergence and is superior to the MLB approximation.
\section{Introduction}
It is not difficult to see that for the 2-element group $\mathbb{Z}_2 =
\langle\{0,1\}, + \rangle$, the term operation $m(x,y,z) = x + y + z$ satisfies
the equations
\begin{equation}\label{min-eq}
m(y,x,x) \approx m(x,y,x) \approx m(x,x,y) \approx y.
\end{equation}
A slightly more challenging exercise is to show that a~finite Abelian group
will have such a~term operation if and only if it is isomorphic to a~Cartesian
power of~$\mathbb{Z}_2$.
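The claim about $\mathbb{Z}_2$ is a finite statement and can be verified exhaustively; a small illustrative check in Python:

```python
def m(x, y, z):
    """The term operation m(x,y,z) = x + y + z of the 2-element group Z_2."""
    return (x + y + z) % 2

# m satisfies m(y,x,x) = m(x,y,x) = m(x,x,y) = y for all x, y in {0, 1}.
for x in (0, 1):
    for y in (0, 1):
        assert m(y, x, x) == m(x, y, x) == m(x, x, y) == y
```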
A ternary operation $m(x,y,z)$ on a~set $A$ is called a~\emph{minority
operation on $A$} if it satisfies the identities~(\ref{min-eq}). A ternary
term $t(x,y,z)$ of an algebra $\m a$ is a~\emph{minority term of $\m a$} if its
interpretation as an operation on $A$, $t^{\m a}(x,y,z)$, is a~minority
operation on $A$. Given a~finite algebra $\m a$, one can decide if it has
a~minority term by constructing all of its ternary term operations and checking
to see if any of them satisfy the equations~(\ref{min-eq}). Since the set of
ternary term operations of $\m a$ can be as big as $|A|^{|A|^3}$, this
procedure will have a~runtime that in the worst case will be exponential in the
size of~$\m a$.
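The brute-force procedure can be sketched directly: generate all ternary term operations as the closure of the three projections under the basic operations, applied pointwise, and then test each against the equations~(\ref{min-eq}). The following Python sketch is our own illustrative encoding (operation tables as dictionaries keyed by argument tuples) and is practical only for small algebras:

```python
from itertools import product

def ternary_term_ops(A, ops):
    """All ternary term operations of the algebra (A, ops), each represented
    as a tuple of values over all |A|^3 argument triples.  ops is a list of
    (table, arity) pairs, where table is a dict keyed by argument tuples."""
    triples = list(product(A, repeat=3))
    terms = {tuple(t[i] for t in triples) for i in range(3)}  # projections
    changed = True
    while changed:                       # fixed-point closure loop
        changed = False
        current = list(terms)
        for table, arity in ops:
            for args in product(current, repeat=arity):
                g = tuple(table[tuple(a[j] for a in args)]
                          for j in range(len(triples)))
                if g not in terms:
                    terms.add(g)
                    changed = True
    return triples, terms

def has_minority_term(A, ops):
    """Check whether some ternary term operation is a minority operation."""
    triples, terms = ternary_term_ops(A, ops)
    idx = {t: i for i, t in enumerate(triples)}
    return any(all(g[idx[(y, x, x)]] == g[idx[(x, y, x)]]
                   == g[idx[(x, x, y)]] == y for x in A for y in A)
               for g in terms)

# Z_2 with x+y+z, and Z_4 with the Maltsev operation x-y+z.
xor3 = {t: sum(t) % 2 for t in product((0, 1), repeat=3)}
malt4 = {t: (t[0] - t[1] + t[2]) % 4 for t in product(range(4), repeat=3)}
```

On $\mathbb{Z}_2$ with $x+y+z$ this finds a minority term; on $\mathbb{Z}_4$ with $x-y+z$ it does not, in line with the remark about Abelian groups above.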
In this paper we consider the computational complexity of testing for the
existence of a~minority term for finite algebras that are {idempotent}. An
$n$-ary operation $f$ on a~set $A$ is \emph{idempotent} if it satisfies the
equation $f(x, x, \dots, x) \approx x$ and an algebra is \emph{idempotent} if
all of its basic operations are. We observe that every minority operation is
idempotent. While idempotent algebras are rather special, one can always form
one by taking the \emph{idempotent reduct} of a~given algebra $\m a$. This is
the algebra with universe $A$ whose basic operations are all of the idempotent
term operations of $\m a$. It turns out that many important properties of an
algebra and the variety that it generates are governed by its idempotent
reduct~\cite{kearnes-kiss-book}.
The condition of an algebra having a~minority term is an example of a~more
general existential condition on the set of term operations of an algebra
called a~\emph{strong Maltsev condition}. Such a~condition consists of
a~finite set of operation symbols along with a~finite set of equations
involving them. An algebra is said to satisfy the condition if for each
$k$-ary operation symbol from the condition, there is a~corresponding $k$-ary
term operation of the algebra so that under this correspondence, the equations
of the condition hold. For a~more careful and complete presentation of this
notion and related ones, we refer the reader to~\cite{Garcia-Taylor}.
Given a~strong Maltsev condition $\Sigma$, the problem of determining if
a~finite algebra satisfies $\Sigma$ is decidable and lies in the complexity
class \compEXPTIME. As in the minority term case, one can construct all term
operations of an algebra up to the largest arity of an operation symbol in
$\Sigma$ and then check to see if any of them can be used to witness the
satisfaction of the equations of $\Sigma$. In general, we cannot do any better
than this, since for some strong Maltsev conditions, it is known that the
corresponding decision problem is {\compEXPTIME}-complete~\cite{freese-valeriote}.
The situation for finite idempotent algebras appears to be better than in the
general case since there are a~number of strong Maltsev conditions for which
there are polynomial-time procedures to decide if a~finite idempotent algebra
satisfies them~\cite{freese-valeriote, horowitz-ijac, kazda-valeriote}. At
present there is no known characterization of these strong Maltsev conditions
and we hope that the results of this paper may help to lead to a~better
understanding of them. We refer the reader to~\cite{Bu-Sa} or
to~\cite{bergman-book} for background on the basic algebraic notions and
results used in this work.
\section{Formulation of the problem}
In this section, we formally introduce the considered problem. In all the
problems mentioned in the introduction, we assume that the input algebra is
given as a list of tables of its basic operations. In particular, this implies
that the input algebra has finitely many operations. We also assume that the
input algebra has at least one operation (i.e., the input is non-empty) and we
forbid nullary operations on the input.
The main concern of this paper is the following decision problem.
\begin{definition}
Define \minority\ to be the following decision problem:
\begin{itemize}
\item INPUT: A~list of tables of basic operations of an idempotent algebra~$\m a$.
\item QUESTION: Does $\m a$ have a~minority term?
\end{itemize}
\end{definition}
The size of an input is measured by the following formula. For a finite
algebra~$\m a$, let
\[
\|\m a\| = \sum_{i = 1}^\infty k_i|A|^i,
\]
where $k_i$ is the number of $i$-ary basic operations of $\m a$. Since we
assume that $\m a$ has only finitely many operations, the sum is finite. Also
note that $\lVert \m a\rVert \geq \lvert A \rvert$ since we assumed that $\m a$
has a non-nullary operation.
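For concreteness, the size measure can be computed directly from the list of arities of the basic operations (a trivial illustrative sketch; the function name is ours):

```python
def algebra_size(n, arities):
    """The size ||A|| = sum_i k_i * n^i, given |A| = n and the multiset of
    arities of the basic operations (nullary operations are forbidden)."""
    return sum(n ** k for k in arities)

# A 3-element algebra with two binary operations and one unary operation:
# ||A|| = 2*3^2 + 1*3^1 = 21 >= |A| = 3.
```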
\section{Minority is a join of two weaker conditions}
\label{join}
One approach to understanding the minority term condition is to see if maybe
there exist two weaker Maltsev conditions $\Sigma_1$ and $\Sigma_2$ such that a
finite algebra $\m a$ has a minority term if and only if it satisfies both
$\Sigma_1$ and $\Sigma_2$. In this situation, we would say that the minority
term condition is the join of $\Sigma_1$ and $\Sigma_2$. Were this the case, we
could decide if $\m a$ has a minority term by deciding $\Sigma_1$ and
$\Sigma_2$.
On the surface, the minority term condition is already quite concise and
natural; it is not clear if having a minority term can be expressed as a join
of weaker conditions. In this section, we show that it is a join of having a
Maltsev term with a condition which we call having a minority-majority term
(not to be confused with the `generalized minority-majority' terms
from~\cite{GMM-paper}). Maltsev terms are a classical object of study in
universal algebra -- deciding if an
algebra has them is in \compP{} for finite idempotent algebras. The
minority-majority terms are much less understood.
\begin{definition}
A ternary term $p(x,y,z)$ of an algebra $\m a$ is a~\emph{Maltsev term for
$\m a$} if it satisfies the equations
\[
p(x,x,y)\approx p(y,x,x)\approx y
\]
and a~6-ary term $t(x_1, \dots, x_6)$ is a~\emph{minority-majority term} of
$\m a$ if it satisfies the equations
\begin{align*}
t(y,x,x,z,y,y)&\approx y\\
t(x,y,x,y,z,y)&\approx y\\
t(x,x,y,y,y,z)&\approx y.
\end{align*}
\end{definition}
We point out that if an algebra has a~minority term then it also, trivially,
has a~Maltsev term, but that the converse does not hold (as witnessed by the
cyclic group $\mathbb{Z}_4$). Our definition of a~minority-majority term is
a~strengthening of the term condition found by Ol\v{s}\'{a}k
in~\cite{Olsak2017}. Ol\v{s}\'{a}k has shown that his terms are a~weakest
non-trivial strong Maltsev condition whose terms are all idempotent.
We observe that by padding variables, any algebra that has a~minority term or
a~majority term (just replace the final occurrence of the variable $y$ in the
equations~(\ref{min-eq}) by the variable $x$ to define such a~term) also has
a~minority-majority term. Since the 2-element lattice has a~majority term but
no minority term, it follows that having a~minority-majority term is strictly
weaker than having a~minority term.
\begin{theorem}\label{thm:join}
An algebra has a~minority term if and only if it has a~Maltsev term and
a~minority-majority term.
\end{theorem}
\begin{proof}
The discussion preceding this theorem establishes one direction of this
theorem. For the other we need to show that if an algebra $\m a$ has a~Maltsev
term $p(x,y,z)$, and a~minority-majority term $t(x_1, \dots, x_6)$ then $\m
a$ has a~minority term. Given such an algebra $\m a$, define
\[
m(x,y,z)=t(x,y,z,p(z,x,y),p(x,y,z),p(y,z,x)).
\]
Verifying that $m(x,y,z)$ is a~minority term for $\m a$ is
straightforward; we show one of the three required equalities here as an example:
\begin{align*}
m(x,x,y)&\approx t(x,x,y,p(y,x,x),p(x,x,y),p(x,y,x))\\
&\approx t(x,x,y,y,y,p(x,y,x))\approx y.
\end{align*}
\end{proof}
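The construction in the proof can be verified mechanically on a small example: over $\mathbb{Z}_2$, $p(x,y,z) = x - y + z$ is a Maltsev term and $t(x_1,\dots,x_6) = x_1 + x_2 + x_3$ (a minority term with three padded variables) is a minority-majority term. An illustrative Python check:

```python
p = lambda x, y, z: (x - y + z) % 2                    # Maltsev term of Z_2
t = lambda x1, x2, x3, x4, x5, x6: (x1 + x2 + x3) % 2  # padded minority term

def m(x, y, z):
    """The composed term from the proof: t(x,y,z, p(z,x,y), p(x,y,z), p(y,z,x))."""
    return t(x, y, z, p(z, x, y), p(x, y, z), p(y, z, x))

dom = (0, 1)
# t satisfies the three minority-majority identities ...
for x in dom:
    for y in dom:
        for z in dom:
            assert t(y, x, x, z, y, y) == y
            assert t(x, y, x, y, z, y) == y
            assert t(x, x, y, y, y, z) == y
# ... and the composed m is a minority operation.
for x in dom:
    for y in dom:
        assert m(y, x, x) == m(x, y, x) == m(x, x, y) == y
```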
\begin{corollary}
The problem of deciding if a~finite algebra has a~minority term can be
reduced to the problems of deciding if it has a~Maltsev term and if it has
a~minority-majority term.
\end{corollary}
As was demonstrated in~\cite{freese-valeriote, horowitz-ijac}, there is
a~polynomial-time algorithm to decide if a~finite idempotent algebra has
a~Maltsev term. Therefore, should testing for a~minority-majority term for
finite idempotent algebras prove to be tractable, then this would lead to
a~fast algorithm for testing for a~minority term, at least for finite
idempotent algebras. From the hardness results found
in~\cite{freese-valeriote} it follows that in general, the problem of deciding
if a~finite algebra has a~minority-majority term is \compEXPTIME-complete; the
complexity of this problem restricted to idempotent algebras is unknown.
\section{Local Maltsev terms}
\label{maltsev}
In~\cite{freese-valeriote, horowitz-ijac, kazda-valeriote,
Valeriote_Willard_2014} polynomial-time algorithms are presented for deciding
if certain Maltsev conditions hold in the variety generated by a~given finite
idempotent algebra. One particular Maltsev condition that is addressed by all
of these papers is that of having a~Maltsev term. In all but
\cite{freese-valeriote}, the polynomial-time algorithm produced is based on
testing for the presence of enough `local' Maltsev terms in the given
algebra.
\begin{definition}
Let $\m a$ be an algebra and $S \subseteq A^2\times\{0,1\}$. A term
operation $t(x,y,z)$ of $\m a$ is a~\emph{local Maltsev term operation for
$S$} if:
\begin{itemize}
\item whenever $((a,b), 0) \in S$, $t(a,b,b) = a$, and
\item whenever $((a,b), 1) \in S$, $t(a,a,b) = b$.
\end{itemize}
\end{definition}
Clearly, if $\m a$ has a~Maltsev term then it has a~local Maltsev term
operation for every subset $S$ of $A^2 \times\{0,1\}$ and conversely, if $\m a$
has a~local Maltsev term operation for $S = A^2 \times\{0,1\}$ then it has
a~Maltsev term. In~\cite{horowitz-ijac, kazda-valeriote,
Valeriote_Willard_2014} it is shown that if a~finite idempotent algebra $\m a$
has local Maltsev term operations for all two element subsets of $A^2
\times\{0,1\}$ then $\m a$ will have a~Maltsev term. This fact is then used as
the basis for a~polynomial-time test to decide if a~given finite idempotent
algebra has a~Maltsev term.
In this section we extract an additional piece of information from this
approach to testing for a~Maltsev term, namely that if a~finite idempotent
algebra has a~Maltsev term, then we can produce an operation table or a~circuit
for a~Maltsev term operation in time polynomial in the size of the algebra.
We will first prove that there is an
algorithm for producing circuits for a~Maltsev term operation; the algorithm for
producing the operation table will then be given as a~corollary. However, for
the reduction presented in Section~\ref{sec:np} we need only the algorithm
for producing an operation table.
Let us first briefly describe how to get a~global Maltsev operation from local
ones. Assume we know (circuits of) a~local Maltsev term operation
$t_{a,b,c,d}(x,y,z)$ for each two element subset
\[
\{((a,b), 0), ((c,d), 1)\}
\]
of $A^2 \times\{0,1\}$. These are required for $\m A$ to have a~Maltsev term.
A global Maltsev term can be constructed from them in two stages: First, we construct, for each $a,b\in
A$, an operation $t_{a,b}$ such that $t_{a,b}(a,b,b) = a$ and $t_{a,b}(x,x,y) = y$ for all
$x,y \in A$. This is done by fixing an enumeration $(a_1, b_1)$, $(a_2,
b_2)$, \dots, $(a_{n^2}, b_{n^2})$ of $A^2$, and then defining, for $1 \le j
\le n^2$, the operation $t_{a,b}^j(x,y,z)$ on $A$ inductively as follows:
\begin{itemize}
\item $t_{a,b}^1(x,y,z) = t_{a,b,a_1, b_1}(x,y,z)$, and
\item for $1 \le j < n^2$, $t_{a,b}^{j+1}(x,y,z) =
t_{a,b,u,v}(t_{a,b}^j(x,y,z), t_{a,b}^j(y,y,z), z)$, where $u =
t_{a,b}^j(a_{j+1}, a_{j+1}, b_{j+1})$ and $v = b_{j+1}$.
\end{itemize}
An easy inductive argument shows that $t_{a,b}^j(a,b,b) = a$ and
$t_{a,b}^j(a_i, a_i, b_i) = b_i$ for all $i \le j \le n^2$, and so setting
$t_{a,b}(x,y,z) = t_{a,b}^{n^2}(x,y,z)$ works.
In the second stage, we construct a~term $t_j(x,y,z)$ such that $t_j(a,a,b) =
b$ for all $a$, $b \in A$ and $t_j(a_i, b_i, b_i) = a_i$ for all $i \le j$. We
define this sequence of operations inductively again:
\begin{itemize}
\item $t_1(x,y,z) = t_{a_1, b_1}(x,y,z)$, and
\item for $1 \le j < n^2$, $t_{j+1}(x,y,z) = t_{u,v}(x, t_j(x,y,y),
t_j(x,y,z))$, where $u = a_{j+1}$ and $v = t_j(a_{j+1}, b_{j+1},
b_{j+1})$.
\end{itemize}
Again, it can be shown that for $1 \le j \le n^2$, the operation $t_j(x,y,z)$
satisfies the claimed properties and so $t_{n^2}(x,y,z)$ will be a~Maltsev term
operation for $\m a$.
From the above construction, one can obtain a~term that
represents a Maltsev term operation of the algebra $\m A$, starting with terms representing the operations $t_{a,b,c,d}$. But there is
an efficiency problem with this approach:
the term is extended by one layer
in each step, which results in a term of exponential size. Therefore, the
bookkeeping of this term would increase the running time of the algorithm
beyond polynomial. Nevertheless, this can be circumvented by constructing
a~succinct representation of the term operations, namely by considering circuits
instead of terms.
Informally, a~circuit over an algebraic language (as a~generalization of
logical circuits) is a~collection of gates labeled by operation symbols, where
the number of inputs of each gate corresponds to the arity of the operation
symbol. The inputs are either connected to outputs of some other gate, or
designated as inputs of the circuit; an output of one of the gates is
designated as an~output of the circuit. Furthermore, these connections allow
for straightforward evaluation, i.e., there are no oriented cycles.
Formally, we define an $n$-ary \emph{circuit} in the language of an algebra $\m A$ as a~directed acyclic graph, with possibly multiple edges, that has two kinds of vertices: \emph{inputs} and \emph{gates}. There are exactly $n$ inputs, labeled by the variables $x_1,\dots, x_n$, each of which is a~source, and a~finite number of gates. Each gate is labeled by an~operation symbol of $\m A$, its in-degree corresponds to the arity of the operation, and its in-edges are ordered. One of the vertices is designated as the \emph{output} of the circuit. We define the size of the circuit to be the number of its vertices.
The value of a~circuit given an input tuple $a_1,\dots,a_n$ is defined by the following
recursive computation: The value on an input vertex labeled by $x_i$ is $a_i$,
the value on a~gate labeled by $g$ is the value of the operation $g^{\m A}$
applied to the values of its in-neighbours in the specified order. Finally, the
output value of the circuit is the value of the output vertex. It is easy
to see that the value of a~circuit on a~given tuple can be computed in linear
time (in the size of the circuit) in a~straightforward way. For a~fixed circuit
the function that maps the input tuple to the output is a~term function of $\m
A$. Indeed, to find such a~term it is enough to evaluate the circuit in the
free (term) algebra on the tuple $x_1,\dots,x_n$. The converse is also true
since any term can be represented as a~`tree' circuit (it is an oriented tree
if we omit all input vertices). Many terms can be expressed by considerably
smaller circuits. We give one such example in
Figure~\ref{fig:term-and-circuit}.
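Evaluating such a circuit is a single pass over the gates in topological order. An illustrative Python sketch (our own encoding: a circuit is a list of nodes, each an input or a gate referring to earlier nodes), applied to the circuit of Figure~\ref{fig:term-and-circuit}:

```python
def eval_circuit(circuit, inputs, ops):
    """Evaluate a circuit given as a topologically ordered list of nodes.
    A node is ('in', i) for the input labeled x_{i+1}, or
    ('gate', name, preds) where preds index earlier nodes in order.
    The last node is the designated output."""
    vals = []
    for node in circuit:
        if node[0] == 'in':
            vals.append(inputs[node[1]])
        else:
            _, name, preds = node
            vals.append(ops[name](*(vals[i] for i in preds)))
    return vals[-1]

# The circuit of the figure, f(g(x,y,y), g(x,y,y), z), with one shared g-gate.
circuit = [('in', 0), ('in', 1), ('in', 2),
           ('gate', 'g', (0, 1, 1)),
           ('gate', 'f', (3, 3, 2))]
# Sample interpretations: with f and g both x - y + z on Z_4, the circuit
# computes f(u, u, z) = z for u = g(x, y, y).
ops = {'g': lambda x, y, z: (x - y + z) % 4,
       'f': lambda x, y, z: (x - y + z) % 4}
```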
\begin{figure}
\[ \begin{tikzpicture}[ split/.style = { shape = circle, draw, fill, inner
sep = 0, minimum size = 2.5 }, circuit logic US ]
\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (f) at (3,0) {$f$};
\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (g) at (0,0) {$g$};
\node (x) at (-3,.7) {$x$};
\node (y) at (-3,0) {$y$};
\node (z) at (-3,-.7) {$z$};
\draw (f.output) -- ++(right:.5) node [right] {$f(g(x,y,y),g(x,y,y),z)$} ;
\draw (g.output) -- (f.input 2);
\draw ($(g.output)!.5!(f.input 2)$) |- (f.input 1);
\node [split] at ($(g.output)!.5!(f.input 2)$) {};
\draw (x.east) -| ($(x.east)!.5!(g.input 1)$) |- (g.input 1);
\draw (y.east) -- (g.input 2);
\draw ($(y.east)!.5!(g.input 2)$) |- (g.input 3);
\node [split] at ($(y.east)!.5!(g.input 2)$) {};
\draw (z.east) -| ($(g.output)!.5!(f.input 2) + (-90:.5) $) |- (f.input 3);
\end{tikzpicture} \]
\caption{A~succinct circuit representation of the term $f(g(x,y,y),g(x,y,y),z)$.}
\label{fig:term-and-circuit}
\end{figure}
In the proof of the theorem below, we will also use circuits with multiple outputs. The only difference in the definition is that several vertices are designated as outputs. Any such circuit then computes a~tuple of term functions.
\begin{theorem}\label{maltsevcircuit}
Let $\m a$ be a~finite idempotent algebra. There is an algorithm whose
runtime can be bounded by a~polynomial in the size of $\m a$ that will either
(correctly) output that $\m a$ has no Maltsev term operation, or output
a~circuit for some Maltsev term operation of $\m a$.
\end{theorem}
\begin{proof}
Let $n =
|A|$. Recall that $\m A$ has at least one
basic operation of positive arity and hence $\|\m A\|\geq n$. Let $m\geq 1$ be the
maximal arity of an operation of $\m A$.
We construct a~circuit representing a~Maltsev operation in three steps: The
first step produces, for each $a$, $b$, $c$, $d$ from $A$, a circuit that computes a local Maltsev term operation $t_{a,b,c,d}$ as defined near the beginning of this section, the second step
produces circuits that compute $t_{a,b}$, and the final step produces
a~circuit for a~Maltsev operation $t$. We note that the algorithm can fail
only in the first step.
\begin{figure}
\[
\begin{tikzpicture}[ split/.style = { shape = circle, draw, fill, inner sep = 0,
minimum size = 2.5 }, node distance = 1.7cm, circuit logic US ]
\node [and gate, inputs = {nnn}, point right] (out1) at (0,1) {$t_{a,b,u,v}$};
\node [and gate, inputs = {nnn}, point right] (out2) at (0,-1) {$t_{a,b,u,v}$};
\draw (out1.output) -- ++(right:.5) node [right] {$t^{j+1}_{a,b}(x,y,z)$} ;
\draw (out2.output) -- ++(right:.5) node [right] {$t^{j+1}_{a,b}(y,y,z)$} ;
\node at ($(out1.input 1) + (left:2.5)$) (j1) {$t^j_{a,b}(x,y,z)$};
\node at ($(out2.input 1) + (left:2.5)$) (j2) {$t^j_{a,b}(y,y,z)$};
\draw (j1.east) -- (out1.input 1);
\draw (j2.east) -- (out2.input 1);
\node (split1) [ split ] at ($(j2.east)!.5!(out2.input 1)$) {};
\draw (split1) |- (out1.input 2);
\draw (split1) |- (out2.input 2);
\node [split] (split2) at ($(split1) + (-.25,0) - (out1.input 1) + (out1.input 3)$) {};
\draw (split2) |- (out1.input 3);
\draw (split2) -- (out2.input 3);
\node [ left of = j1 ] (out1j){};
\draw (j1.west) -- (out1j.east);
\node [ left of = j2 ] (out2j){};
\draw (j2.west) -- (out2j.east);
\node (x) at (-8,1.5) {$x$};
\node (y) at (-8,0) {$y$};
\node (z) at (-8,-1.5) {$z$};
\node [ right of = x ] (in1j){};
\node [ right of = y ] (in2j){};
\node [ right of = z ] (in3j){};
\draw (x.east) -- (in1j);
\draw (y.east) -- (in2j);
\draw (z.east) -- (in3j);
\node (split3) [split] at ($(z.east) !.5! (in3j.west)$) {};
\node (mid) [ below of = j2 ] {$z$};
\draw (split3) |- (mid) -| (split2);
\node [ draw, dashed, fit = (in1j) (in2j) (in3j) (out1j) (out2j) ] {$C^j_{a,b}$};
\end{tikzpicture}
\]
\caption{Recursive definition of circuit $C^{j+1}_{a,b}$.}
\label{fig:C-j-ab}
\end{figure}
Step 1: Circuits for $t_{a,b,c,d}$. For each $a,b,c,d$, we aim to produce a~circuit that computes a local Maltsev term
operation $t_{a,b,c,d}$. To do this, we consider the subuniverse $R$ of $\m
A^2$ generated by $\{(a,c), (b,c), (b,d)\}$.
According to Proposition~6.1 of~\cite{freese-valeriote}, $R$ can be generated in time $\bigO(\|\m a\|^2m)$.
It is clear that $\m A$ has
a~local Maltsev term operation $t_{a,b,c,d}$ if and only if $(a,d) \in R$. Our algorithm produces a circuit for $t_{a,b,c,d}$ by generating elements of $R$ one at a time and keeping track of circuits that witness
the membership of these elements.
More precisely, we employ a subuniverse generating algorithm to produce a sequence
$r_1 = (a,c), r_2 = (b,c), r_3 = (b,d), r_4, \dots$ of elements of $R$ (in time $\bigO(\|\m a\|^2m)$) such
that each $r_{k+1}$, for $k \ge 3$, is obtained from $r_1,\dots,r_{k}$ by a~single
application of an operation $f$ of $\m A^2$. Our algorithm will also produce a~sequence of
ternary circuits $C_{a,b,c,d}^3 \subseteq C_{a,b,c,d}^4 \subseteq \dots$ such
that each $C_{a,b,c,d}^k$ has $k$ outputs, and the values of $C_{a,b,c,d}^k$
on $r_1,r_2,r_3$ give $r_1,\dots,r_k$. We define $C_{a,b,c,d}^3$ to be
the~circuit with no gates, and outputs $x_1$, $x_2$, $x_3$. The circuit
$C_{a,b,c,d}^{k+1}$ is defined inductively from $C_{a,b,c,d}^k$: Consider an
operation $f$ and $r_{i_1},\dots,r_{i_p}$ with $i_j \leq k$ such that
$r_{k+1} = f(r_{i_1},\dots,r_{i_p})$; add a~gate labeled $f$ to
$C_{a,b,c,d}^k$ connecting its inputs with the outputs of $C_{a,b,c,d}^k$
numbered by $i_j$ for $j = 1,\dots, p$. We designate the output of this gate
as the $(k+1)$-st output of $C_{a,b,c,d}^{k+1}$.
It is straightforward to
check that the circuits $C_{a,b,c,d}^k$ satisfy the requirements. We also
note that the size of $C_{a,b,c,d}^k$ is exactly $k$. We stop this inductive
construction at some step $k$ if $r_k = (a,d)$, in which case we produce the circuit
$C_{a,b,c,d}$ from $C_{a,b,c,d}^k$ by indicating a~single output to be the
$k$-th output of $C_{a,b,c,d}^k$. If, on the other hand, we have generated all of $R$ without producing $(a,d)$ at any step, then the algorithm halts and outputs that $\m a$ does not have a Maltsev term operation. The soundness of our algorithm follows from the fact that $\m a$ has a~local Maltsev term $t_{a,b,c,d}$ if and only if $(a,d) \in R$
and that $\m a$ has a Maltsev term if and only if it has local Maltsev terms $t_{a,b,c,d}$ for all $a$, $b$, $c$, $d \in A$.
The algorithm produces circuits of size $\bigO(n^2)$ and spends most of its
time generating new elements of $R$; generating each $C_{a,b,c,d}$ takes time
$\bigO(\|\m a\|^2m)$, making the total time complexity of Step 1 $\bigO(\|\m
a\|^2mn^4)$.
Step 2: Circuits for $t_{a,b}$. At this point we assume that the functions $t_{a,b,c,d}$ are part of
the signature. It is clear that the full circuit can be obtained by
substituting the circuits $C_{a,b,c,d}$ for gates labeled by $t_{a,b,c,d}$,
and this can be still done in polynomial time.
Our task is to obtain a~circuit for $t_{a,b}$. We do this by inductively constructing circuits $C^j_{a,b}$ that compute two
values of the terms $t_{a,b}^j$, namely $t_{a,b}^j(x,y,z)$ and
$t_{a,b}^j(y,y,z)$. Starting with $j = 0$ and $t_{a,b}^0(x,y,z) = x$, we define
$C_{a,b}^0$ to be the circuit with no gates and outputs $x,y$. Further, we
define circuit $C_{a,b}^{j+1}$ inductively from $C_{a,b}^j$ by adding two
gates labeled by $t_{a,b,u,v}$, where $u = t_{a,b}^j(a_{j+1}, a_{j+1},
b_{j+1})$ and $v = b_{j+1}$: the first gate has as inputs the two outputs of
$C_{a,b}^j$ and $z$, the second gate has as inputs two copies of the second
output of $C_{a,b}^j$ and $z$. See Figure~\ref{fig:C-j-ab} for a~graphical
representation. Again, it is straightforward to check that these circuits
have the required properties. Also note that the size of $C^j_{a,b}$ is
bounded by $2j+3$ which is a~polynomial. The final circuit $C_{a,b}$
computing $t_{a,b}$ is obtained from $C_{a,b}^{n^2}$ by designating the first
output of $C_{a,b}^{n^2}$ to be the only output of $C_{a,b}$. Once we have
$t_{a,b,c,d}$ in the signature, this process will run in time $\bigO(n^2)$.
Step 3: Circuit for a Maltsev term. Again, we assume that $t_{a,b}$ are basic operations, and construct
circuits $C_j$ computing two values $t_j(x,y,y)$ and $t_j(x,y,z)$ of $t_j$
inductively. The proof is analogous to Step 2, with the only difference that
we use Figure~\ref{fig:C-j} for the inductive definition. Again the time
complexity is $\bigO(n^2)$.
\begin{figure}
\begin{centering}
\begin{tikzpicture}[ split/.style = { shape = circle, draw, fill, inner sep = 0,
minimum size = 2.5 }, node distance = 1.7cm, circuit logic US ]
\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (out1) at (0,1) {$t_{u,v}$};
\node [and gate, inputs = {nnn}, point right, minimum size = .7cm] (out2) at (0,-1) {$t_{u,v}$};
\draw (out1.output) -- ++(right:.5) node [right] {$t_{j+1}(x,y,y)$} ;
\draw (out2.output) -- ++(right:.5) node [right] {$t_{j+1}(x,y,z)$} ;
\node at ($(out1.input 3) + (left:2.5)$) (j1) {$t_j(x,y,y)$};
\node at ($(out2.input 3) + (left:2.5)$) (j2) {$t_j(x,y,z)$};
\node (mid) [ above of = j1 ] {$x$};
\draw (j1.east) -- (out1.input 3);
\draw (j2.east) -- (out2.input 3);
\node (split1) [ split ] at ($(j1.east)!.5!(out1.input 3)$) {};
\draw (split1) |- (out1.input 2);
\draw (split1) |- (out2.input 2);
\node [split] (split2) at ($(split1) + (-.25,0) + (out1.input 1) - (out1.input 3)$) {};
\draw (split2) -- (out1.input 1);
\draw (split2) |- (out2.input 1);
\node [ left of = j1 ] (out1j){};
\draw (j1.west) -- (out1j.east);
\node [ left of = j2 ] (out2j){};
\draw (j2.west) -- (out2j.east);
\node (x) at (-8,1.5) {$x$};
\node (y) at (-8,0) {$y$};
\node (z) at (-8,-1.5) {$z$};
\node [ right of = x ] (in1j){};
\node [ right of = y ] (in2j){};
\node [ right of = z ] (in3j){};
\draw (x.east) -- (in1j);
\draw (y.east) -- (in2j);
\draw (z.east) -- (in3j);
\node (split3) [split] at ($(x.east) !.5! (in1j.west)$) {};
\draw (split3) |- (mid) -| (split2);
\node [ draw, dashed, fit = (in1j) (in2j) (in3j) (out1j) (out2j) ] {$C_j$};
\end{tikzpicture}
\end{centering}
\caption{Recursive definition of circuit $C_{j+1}$.}
\label{fig:C-j}
\end{figure}
Each step runs in time polynomial in $\|\m a\|$ (the time complexity is
dominated by Step 1) and outputs a~polynomial
size circuit. This also implies that expanding the gates according to their
definitions in Steps 2 and 3 can be done in polynomial time; the final size of
the output circuit will be bounded by $\bigO(n^6)$.
\end{proof}
\begin{corollary}\label{maltsevterm}
Let $\m a$ be a~finite idempotent algebra. There is an algorithm whose
runtime can be bounded by a~polynomial in the size of $\m a$ that will
produce the table of some Maltsev term operation of $\m a$, should one exist.
\end{corollary}
\begin{proof}
The polynomial-time algorithm is as follows. First, generate a~polynomial size
circuit for some Maltsev term operation of $\m a$. This can be done in polynomial time by
the above theorem. Second, evaluate this circuit at all $\lvert A\rvert^3$
possible inputs. The second step runs in polynomial time since evaluation of
a~circuit is linear in the size of the circuit.
\end{proof}
We note that there is also a~more straightforward algorithm for producing the
operation table of a~Maltsev term: it follows the circuit construction but,
instead of circuits, remembers the tables of each of the relevant term
operations.
\section{Local minority terms}
In contrast to the situation for Maltsev terms highlighted in the previous
section, we will show that having plenty of `local' minority terms does not
guarantee that a~finite idempotent algebra will have a~minority term. One
consequence of this is that an approach along the lines in~\cite{horowitz-ijac,
kazda-valeriote, Valeriote_Willard_2014} to finding an efficient algorithm to
decide if a~finite idempotent algebra has a~minority term will not work.
In this section, we will construct for each odd natural number $n > 2$ a~finite
idempotent algebra $\m a_n$ with the following properties: The universe of $\m
a_n$ has size $4n$ and $\m a_n$ does not have a~minority term, but for every
subset $E$ of $A_n$ of size $n-1$ there is a~term of $\m a_n$ that acts as
a~minority term on the elements of $E$.
We start our construction by fixing some odd $n > 2$ and some minority
operation $m$ on the set $[n] = \{1, 2, \dots, n\}$. To make things concrete
we set
\[
m(x,y,z)=\begin{cases}
x& y=z\\
y& x=z\\
z& \text{else,}
\end{cases}
\]
but note that any minority operation on $[n]$ will do.
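This concrete minority operation, together with a check of the identities~(\ref{min-eq}), reads as follows in Python (an illustration; any minority operation on $[n]$ would do):

```python
def m(x, y, z):
    """The concrete minority operation on [n] used in the construction:
    x if y = z, y if x = z, and z otherwise."""
    if y == z:
        return x
    if x == z:
        return y
    return z

# Exhaustive check of the minority identities on [n] for n = 5.
n = 5
assert all(m(y, x, x) == m(x, y, x) == m(x, x, y) == y
           for x in range(1, n + 1) for y in range(1, n + 1))
```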
Since there are two nonisomorphic groups of order 4, we have two different
natural group operations on $\{0,1,2,3\}$: addition modulo~4, which we will
denote by `$+$' (its inverse is `$-$'), and bitwise XOR, which we denote by
`$\oplus$' (this operation takes bitwise XOR of the binary representations of
input numbers, so for example $1\oplus 3=2$). Throughout this section, we will
use arithmetic modulo 4, e.g., $6x = x + x$, for all expressions except those
involving indices.
The construction relies on similarities and subtle differences of the two group
structures, and the derived Maltsev operations, $x-y+z$ and $x\oplus y\oplus z$.
Both these operations share a congruence $\equiv_2$ that is given by taking
the remainder modulo 2. We note that $x\equiv_2 y$ if and only if $2x = 2y$.
\begin{observation}\label{obs:maltsev-diff}
Let $x,y,z\in \{0,1,2,3\}$. Then
\[
(x\oplus y\oplus z) - (x - y + z) \in \{0,2\}
,\]
and moreover the result depends only on the classes of
$x$, $y$, and $z$ in the congruence $\equiv_2$ (i.e., the least significant
binary bits of $x$, $y$, and $z$).
\end{observation}
\begin{proof}
Both Maltsev operations agree modulo $\equiv_2$, hence the difference lies in
the $\equiv_2$-class of 0.
To see the second part, it is enough to observe that $x\oplus 2=x+2=x-2$ for all
$x$. Hence changing, say, $x$ to $x'=x\oplus 2$ simply flips the most
significant binary bit
of both $x\oplus y\oplus z$ and $x - y + z$, keeping the difference the same.
\end{proof}
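Both parts of the observation are statements about finitely many triples and can be checked exhaustively; an illustrative verification:

```python
from itertools import product

aff = lambda x, y, z: (x - y + z) % 4   # Maltsev operation of Z_4
xor = lambda x, y, z: x ^ y ^ z         # Maltsev operation of Z_2 x Z_2

for x, y, z in product(range(4), repeat=3):
    d = (xor(x, y, z) - aff(x, y, z)) % 4
    assert d in (0, 2)
    # The difference depends only on x, y, z modulo 2.
    assert d == (xor(x % 2, y % 2, z % 2) - aff(x % 2, y % 2, z % 2)) % 4
```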
\begin{definition}
Let $A_n=[n]\times [4]$. For $i \in [n]$, we define $t_i(x,y,z)$ to be the
following operation on $A_n$:
\[
t_i((a_1,b_1), (a_2,b_2), (a_3,b_3)) =
\begin{cases}
(i,\,b_1 - b_2 + b_3) & \text{if $a_1 = a_2 = a_3 = i$,}\\
(m(a_1,a_2,a_3),\,b_1\oplus b_2 \oplus b_3) & \text{otherwise.}
\end{cases}
\]
The algebra $\m a_n$ is defined to be the algebra with universe $A_n$ and
basic operations $t_1,\dots,t_n$.
\end{definition}
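A direct transcription of $t_i$ in Python, together with a check that $t_i$ acts as a minority operation on triples of elements whose first coordinates avoid $i$ (which is the substance of the claim below):

```python
def m(x, y, z):
    """The minority operation on [n] fixed above."""
    if y == z:
        return x
    if x == z:
        return y
    return z

def t(i, p1, p2, p3):
    """The basic operation t_i of A_n; elements are given as pairs (a, b)."""
    (a1, b1), (a2, b2), (a3, b3) = p1, p2, p3
    if a1 == a2 == a3 == i:
        return (i, (b1 - b2 + b3) % 4)
    return (m(a1, a2, a3), b1 ^ b2 ^ b3)

# On elements whose first coordinate differs from i, t_i is a minority
# operation: both m and the bitwise XOR act as minorities coordinatewise.
n, i = 3, 1
E = [(a, b) for a in range(1, n + 1) if a != i for b in range(4)]
for p in E:
    for q in E:
        assert t(i, q, p, p) == t(i, p, q, p) == t(i, p, p, q) == q
```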
By construction, the following is true.
\begin{claim}\label{local}
For every $(n-1)$-element subset $E$ of $A_n$, there is a~term operation of
$\m a_n$ that satisfies the minority term equations when restricted to
elements from $E$.
\end{claim}
\begin{proof}
Pick $i\in [n]$ such that no element of $E$ has its first coordinate equal to
$i$; the operation $t_i$ is a local minority for this $E$.
\end{proof}
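The operation $m$ on first coordinates is fixed earlier in the paper and is not reproduced in this excerpt; the sketch below assumes it is a minority-style operation on $[n]$ (returning the odd one out when two arguments agree), which is all that the claim uses. Under that assumption, one can check that $t_i$ is a minority on every set of elements whose first coordinates avoid $i$ (in particular on any $(n-1)$-element subset, which must avoid some $i$):

```python
# Sketch of the operations t_i on A_n = [n] x [4], with [n] and [4]
# represented 0-indexed.  The operation m on the first coordinates is
# an assumption here: a minority-style operation, m(a,a,c) = m(a,c,a)
# = m(c,a,a) = c; its value on pairwise distinct triples is irrelevant.
def m(a1, a2, a3):
    if a1 == a2:
        return a3
    if a1 == a3:
        return a2
    if a2 == a3:
        return a1
    return a1  # arbitrary on pairwise distinct triples

def t(i, p1, p2, p3):
    (a1, b1), (a2, b2), (a3, b3) = p1, p2, p3
    if a1 == a2 == a3 == i:
        return (i, (b1 - b2 + b3) % 4)
    return (m(a1, a2, a3), b1 ^ b2 ^ b3)

# t_i satisfies the minority equations on elements avoiding first
# coordinate i.
n = 4
for i in range(n):
    E = [(a, b) for a in range(n) if a != i for b in range(4)]
    for x in E:
        for y in E:
            assert t(i, x, x, y) == t(i, x, y, x) == t(i, y, x, x) == y
```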
\begin{proposition}\label{prop:An}
For $n > 1$ and odd, the algebra $\m a_n$ does not have a~minority term.
\end{proposition}
\begin{proof}
Given some $(i,a)\in A_n$, we will refer to $a$ as the \emph{arithmetic part}
of $(i,a)$. This is to avoid talking about `second coordinates' in
the confusing situation when $(i,a)$ itself is a~part of a~tuple of elements
of $A_n$.
To prove the proposition, we will define a~certain subuniverse $R$ of $(\m
a_n)^{3n}$ and then show that $R$ is not closed under any minority operation
on $A_n$ (applied coordinate-wise). We will write $3n$-tuples of elements of
$A_n$ as $3n\times 2$ matrices where the arithmetic parts of the elements
make up the second column.
Let $R \subseteq (A_n)^{3n}$ be the set of all $3n$-tuples of the form
\[
\begin{pmatrix}
1&x_1\\ 2&x_2\\ \vdots\\ n&x_n\\
1&x_{n+1}\\ 2&x_{n+2}\\ \vdots\\ n&x_{2n}\\
1&x_{2n+1}\\ 2&x_{2n+2}\\ \vdots\\ n&x_{3n}\\
\end{pmatrix}
\]
such that
\begin{align}
&x_{kn+1} \equiv_2 x_{kn+2} \equiv_2 \dots \equiv_2 x_{kn+n},
&\text{for $k=0,1,2$, and} \label{eqn:bits} \\
&\sum_{i=1}^{3n} x_i = 2.\label{eqn:strange-sum}
\end{align}
The three equations from (\ref{eqn:bits}) mean that the least significant bits of the
arithmetic parts of the first $n$ entries agree and similarly for the second
and the last $n$ entries; equation (\ref{eqn:strange-sum}) can be viewed
as a~combined parity check on all involved bits.
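Conditions (\ref{eqn:bits}) and (\ref{eqn:strange-sum}) are easy to state programmatically. Since the first coordinates of a tuple in $R$ are forced by position, only the arithmetic parts matter; a membership test can be sketched as:

```python
# Membership test for R, given only the list of arithmetic parts
# x_1, ..., x_{3n} (the first coordinates are determined by position).
def in_R(xs):
    n = len(xs) // 3
    assert len(xs) == 3 * n
    blocks = [xs[k * n:(k + 1) * n] for k in range(3)]
    # (eqn:bits): within each block, all entries agree modulo 2.
    if any(len({x % 2 for x in b}) != 1 for b in blocks):
        return False
    # (eqn:strange-sum): the sum of all entries is 2 modulo 4.
    return sum(xs) % 4 == 2

# n = 3: the tuple (0,0,0, 1,1,1, 1,1,1) sums to 6 = 2 (mod 4).
assert in_R([0, 0, 0, 1, 1, 1, 1, 1, 1])
assert not in_R([0, 0, 0, 0, 0, 0, 0, 0, 0])  # sums to 0, not 2
assert not in_R([0, 1, 0, 1, 1, 1, 1, 1, 1])  # first block mixes parities
```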
\begin{claim}
The relation $R$ is a~subuniverse of $(\m a_n)^{3n}$.
\end{claim}
\begin{proof}
By the symmetry of the $t_i$'s and $R$, it is enough to show that $t_1$
preserves $R$. Let us take three arbitrary members of $R$:
\[
\begin{pmatrix}
1&x_{1,1}\\ 2&x_{1,2}\\ \vdots\\ n&x_{1,n}\\
1&x_{1,n+1}\\ 2&x_{1,n+2}\\ \vdots\\ n&x_{1,2n}\\
1&x_{1,2n+1}\\ 2&x_{1,2n+2}\\ \vdots\\ n&x_{1,3n}\\
\end{pmatrix},
\begin{pmatrix}
1&x_{2,1}\\ 2&x_{2,2}\\ \vdots\\ n&x_{2,n}\\
1&x_{2,n+1}\\ 2&x_{2,n+2}\\ \vdots\\ n&x_{2,2n}\\
1&x_{2,2n+1}\\ 2&x_{2,2n+2}\\ \vdots\\ n&x_{2,3n}\\
\end{pmatrix},
\begin{pmatrix}
1&x_{3,1}\\ 2&x_{3,2}\\ \vdots\\ n&x_{3,n}\\
1&x_{3,n+1}\\ 2&x_{3,n+2}\\ \vdots\\ n&x_{3,2n}\\
1&x_{3,2n+1}\\ 2&x_{3,2n+2}\\ \vdots\\ n&x_{3,3n}\\
\end{pmatrix}
\]
and apply $t_1$ to them to get:
\begin{equation}
\vec r =
\begin{pmatrix}
1&x_{1,1}-x_{2,1}+x_{3,1}\\
2&x_{1,2}\oplus x_{2,2}\oplus x_{3,2}\\
&\vdots\\
n&x_{1,n}\oplus x_{2,n}\oplus x_{3,n}\\
1&x_{1,n+1}-x_{2,n+1}+x_{3,n+1}\\
2&x_{1,n+2}\oplus x_{2,n+2}\oplus x_{3,n+2}\\
& \vdots\\
n&x_{1,2n}\oplus x_{2,2n}\oplus x_{3,2n}\\
1&x_{1,2n+1}-x_{2,2n+1}+x_{3,2n+1} \\
2&x_{1,2n+2}\oplus x_{2,2n+2}\oplus x_{3,2n+2}\\
& \vdots\\
n&x_{1,3n}\oplus x_{2,3n}\oplus x_{3,3n}\\
\end{pmatrix}
\label{eqn:r}
\end{equation}
We want to verify that $\vec r\in R$. First note that (\ref{eqn:bits}) is
satisfied: This follows from the fact that $x-y+z$ and $x\oplus y\oplus z$
give the same result modulo 2, and the assumption that the original three
tuples satisfied (\ref{eqn:bits}).
What remains is to verify the property~(\ref{eqn:strange-sum}). If in the
equality~(\ref{eqn:r}) above we replace the operations $\oplus$ by $-$ and
$+$, verifying~(\ref{eqn:strange-sum}) is easy: The sum of the
arithmetic parts of such a modified tuple is
\begin{equation}
\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})=
\sum_{j=1}^{3n}
x_{1,j}-\sum_{j=1}^{3n}x_{2,j}+\sum_{j=1}^{3n}x_{3,j}=2-2+2=2.
\label{eqn:2}
\end{equation}
This is why we need to examine the difference between the $\oplus$-based
and $+$-based Maltsev operations. For $k\in \{0,1,2\}$ and $i\in
\{1,\dots,n\}$ we let
\[
c_{k,i} = (x_{1,kn+i} \oplus x_{2,kn+i} \oplus x_{3,kn+i}) -
(x_{1,kn+i} - x_{2,kn+i} + x_{3,kn+i})
\]
By the second part of Observation~\ref{obs:maltsev-diff}, $c_{k,i}$ does not
depend on $i$ (changing $i$ does not change
the $x_{j,kn+i}$'s modulo $\equiv_2$ by condition~(\ref{eqn:bits}) in the
definition of $R$). Hence we can write just $c_k$ instead of $c_{k,i}$.
Using $c_0$, $c_1$, and
$c_2$ to adjust for the differences between the two Maltsev operations, we can express the
sum of the arithmetic parts of the tuple $\vec{r}$ as
\[
\sum_{j=1}^{3n} (x_{1,j}-x_{2,j}+x_{3,j})+\sum_{i=2}^{n}
c_0+\sum_{i=2}^{n} c_1+\sum_{i=2}^{n} c_2
= 2+(n-1)(c_0+c_1+c_2)
\]
where we used~(\ref{eqn:2}) to get the right hand side. We chose $n$ odd,
hence $n-1$ is even and each $c_k$ is even by
Observation~\ref{obs:maltsev-diff}, so $(n-1)c_k=0$ for any $k=0,1,2$. We see that the sum of the
arithmetic parts of $\vec{r}$ is equal to 2 which concludes the proof
of~(\ref{eqn:strange-sum}) for the tuple~$\vec r$ and we are done.
\end{proof}
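For small odd $n$ the claim can also be spot-checked by machine. The sketch below enumerates $R$ for $n=3$ (again, only arithmetic parts matter, since the first coordinates are fixed by position) and verifies on random triples that $t_1$ maps $R$ back into $R$; the entries with first coordinate $1$ are exactly those at positions $j = 0, n, 2n$:

```python
import itertools, random

# Randomized check, for n = 3, that t_1 preserves R.  R constrains
# only the arithmetic parts, so the first coordinates (and hence the
# operation m used on them) play no role here.
n = 3

def in_R(xs):
    blocks = [xs[k * n:(k + 1) * n] for k in range(3)]
    return all(len({x % 2 for x in b}) == 1 for b in blocks) \
        and sum(xs) % 4 == 2

# Enumerate R as tuples of arithmetic parts.
R = [xs for xs in itertools.product(range(4), repeat=3 * n) if in_R(xs)]

def t1_arith(xs, ys, zs):
    # Arithmetic parts of t_1 applied coordinate-wise: positions with
    # first coordinate 1 (j = 0, n, 2n) use x - y + z, the rest XOR.
    return tuple(
        (x - y + z) % 4 if j % n == 0 else x ^ y ^ z
        for j, (x, y, z) in enumerate(zip(xs, ys, zs))
    )

random.seed(0)
for _ in range(500):
    xs, ys, zs = (random.choice(R) for _ in range(3))
    assert in_R(t1_arith(xs, ys, zs))
```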
It is easy to see that
\[
\begin{pmatrix}
1&0\\ 2&0\\ \vdots\\ n&0\\
1&1\\ 2&1\\ \vdots\\ n&1\\
1&1\\ 2&1\\ \vdots\\ n&1\\
\end{pmatrix},
\begin{pmatrix}
1&1\\ 2&1\\ \vdots\\ n&1\\
1&0\\ 2&0\\ \vdots\\ n&0\\
1&1\\ 2&1\\ \vdots\\ n&1\\
\end{pmatrix},
\begin{pmatrix}
1&1\\ 2&1\\ \vdots\\ n&1\\
1&1\\ 2&1\\ \vdots\\ n&1\\
1&0\\ 2&0\\ \vdots\\ n&0\\
\end{pmatrix}\in R,
\quad\text{and}\quad
\begin{pmatrix}
1&0\\ 2&0\\ \vdots\\ n&0\\
1&0\\ 2&0\\ \vdots\\ n&0\\
1&0\\ 2&0\\ \vdots\\ n&0\\
\end{pmatrix}\notin R.
\]
However, the last tuple can be obtained from the first three by applying any
minority operation on the set $A_n$ coordinate-wise. From this we conclude
that $\m a_n$ does not have a~minority term.
\end{proof}
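The displayed tuples can be checked directly (here for $n=3$): in every coordinate two of the three generators agree, so any minority operation applied coordinate-wise is forced to output the all-zero arithmetic parts, which violate the sum condition (\ref{eqn:strange-sum}):

```python
# The three generators and the tuple forced by any minority operation,
# for n = 3.  Only arithmetic parts are listed; the first coordinates
# follow the fixed pattern 1..n, 1..n, 1..n.
n = 3

def in_R(xs):
    blocks = [xs[k * n:(k + 1) * n] for k in range(3)]
    return all(len({x % 2 for x in b}) == 1 for b in blocks) \
        and sum(xs) % 4 == 2

g1 = (0,) * n + (1,) * n + (1,) * n
g2 = (1,) * n + (0,) * n + (1,) * n
g3 = (1,) * n + (1,) * n + (0,) * n
assert in_R(g1) and in_R(g2) and in_R(g3)

# In every coordinate two of the three generators agree, so any
# minority operation applied coordinate-wise returns the odd one out:
forced = tuple(
    z if x == y else (y if x == z else x)
    for x, y, z in zip(g1, g2, g3)
)
assert forced == (0,) * (3 * n)
assert not in_R(forced)
```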
We note that the above construction of $\m a_n$ makes sense for $n$ even as well
and claim that these algebras also have the same key features, namely, by
construction, they have plenty of `local' minority term operations but they do
not have minority terms. The verification of this last fact for $n$ even is
similar, but slightly more technical than for $n$ odd, and we omit the proof here.
The algebras $\m a_n$ can also be used to witness that having a~lot of local
minority-majority terms does not guarantee the presence of an actual
minority-majority term. By padding with dummy variables, any local minority
term of an algebra $\m a_n$ is also a~term that locally satisfies the
minority-majority term equations. Since each $\m a_n$ has a~Maltsev term
but not a~minority term, Theorem~\ref{thm:join} implies that $\m
a_n$ cannot have a~minority-majority term.
\section{Deciding minority in idempotent algebras is in \compNP}\label{sec:np}
The results from the previous section imply that one cannot base an efficient
test for the presence of a~minority term in a~finite idempotent algebra on
checking if it has enough local minority terms. This does not rule out that
the problem is in the class \compP, but to date no other approach to showing
this has worked. As an intermediate result, we show, at least, that this
decision problem is in \compNP{} and so cannot be \compEXPTIME-complete (unless
$\compNP=\compEXPTIME$).
We first show that an instance $\m a$ of the decision problem \minority\ can
be expressed as a~particular instance of the subpower membership problem for
${\m a}$.
\begin{definition}\label{defSMP}
Given a~finite algebra $\m a$, the \emph{subpower membership problem} for $\m
a$, denoted by $\smp(\m a)$, is the following decision problem:
\begin{itemize}
\item INPUT: $\vec a_1, \dots, \vec a_k, \vec b \in A^n$
\item QUESTION: Is $\vec b$ in the subalgebra of $\m a^n$ generated by
$\{\vec a_1, \dots, \vec a_k\}$?
\end{itemize}
\end{definition}
To build an instance of $\smp(\m a)$ expressing that $\m a$ has a~minority
term, let $I =\{(a,b,c)\mid \mbox{$a, b, c \in A$ and $|\{a,b,c\}| \le 2$}\}$.
So $|I| = 3|A|^2 - 2|A|$. For $(a,b,c) \in I$, let $\min(a,b,c)$ be the minority
element of this triple. So
\[
\min(a,b,b) = \min(b,a,b) = \min(b,b,a) = \min(a,a,a) = a.
\]
For $1 \le i \le 3$, let $\vec \pi_i \in A^I$ be defined by $\vec \pi_i(a_1,
a_2, a_3) = a_i$ and define $\vec \mu_A \in A^I$ by $\vec \mu_A(a_1, a_2, a_3)
= \min(a_1, a_2, a_3)$, for all $(a_1, a_2, a_3) \in I$. Denote the instance
$\vec \pi_1$, $\vec \pi_2$, $\vec \pi_3$, and $\vec \mu_A$ of $\smp(\m a)$ by
$\min(\m a)$.
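The construction can be made concrete for a small base set. The following sketch builds $I$, the projection generators, and the target tuple, and confirms the cardinality count; the base set $A=\{0,1,2\}$ is chosen purely for illustration:

```python
import itertools

# Construct the instance min(A) of the subpower membership problem for
# a small base set; here A = {0, 1, 2} just for illustration.
A = [0, 1, 2]

# I: all triples over A with at most two distinct entries.
I = [t for t in itertools.product(A, repeat=3) if len(set(t)) <= 2]
assert len(I) == 3 * len(A) ** 2 - 2 * len(A)  # |I| = 3|A|^2 - 2|A|

def minority_of(a, b, c):
    # Well defined on I: return the entry occurring an odd number of times.
    if b == c:
        return a
    if a == c:
        return b
    return c  # here a == b, since triples in I have at most 2 distinct entries

# Generators pi_1, pi_2, pi_3 and the target mu_A, as tuples indexed by I.
pi = [tuple(t[i] for t in I) for i in range(3)]
mu = tuple(minority_of(*t) for t in I)

j = I.index((0, 1, 1))
assert (pi[0][j], pi[1][j], pi[2][j], mu[j]) == (0, 1, 1, 0)
```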
\begin{proposition}\label{min-instance}
An algebra $\m a$ has a~minority term if and only if $\vec \mu_A$ is
a~member of the subalgebra of $\m a^I$ generated by $\{\vec \pi_1, \vec \pi_2,
\vec \pi_3\}$, i.e., if and only if $\min(\m a)$ is a~`yes' instance of
$\smp(\m a)$ when $\m a$ is finite.
\end{proposition}
\begin{proof}
If $m(x,y,z)$ is a~minority term for $\m a$, then applying $m$
coordinatewise to the generators $\vec \pi_1$, $\vec \pi_2$, $\vec \pi_3$
will produce the element $\vec \mu_A$. Conversely, any term that produces
$\vec \mu_A$ from these generators will be a~minority term for $\m a$.
\end{proof}
Examining the definition of $\min(\m a)$, we see that the parameters from
Definition~\ref{defSMP} are $k=3$ and $n=3|A|^2-2|A|$, which is (for algebras
with at least one basic operation of arity at least one) polynomial in $\|\m a\|$.
For $\m a$ idempotent, we can in fact improve $n$ to $3|A|^2-3|A|$,
since then we do not need to include in $I$ entries of the form $(a,a,a)$.
In general, it is known that for some finite algebras the subpower membership problem can be
\compEXPTIME-complete~\cite{Kozik2008} and that for some others, e.g., for any
algebra that has only trivial or constant basic operations, it lies in the
class \compP. In~\cite{Mayr2012}, P.\ Mayr shows that when $\m a$ has a~Maltsev
term, then $\smp(\m a)$ is in \compNP. We claim that a careful reading of Mayr's proof reveals that in fact the following uniform version of the subpower membership problem, where the algebra $\m a$ is considered as part of the input, is also in \compNP.
\begin{definition}
Define \smpun\ to be the following decision problem:
\begin{itemize}
\item INPUT: A~list of tables of the basic operations of an algebra~$\m a$ that includes a~Maltsev operation, and $\vec a_1, \dots, \vec a_k, \vec b \in A^n$.
\item QUESTION: Is $\vec b$ in the subalgebra of $\m a^n$ generated by
$\{\vec a_1, \dots, \vec a_k\}$?
\end{itemize}
\end{definition}
We base the main result of this section
on the following.
\begin{theorem}[see \cite{Mayr2012}]\label{smpun}
The decision problem \smpun\ is in the class \compNP.
\end{theorem}
While this theorem is not explicitly stated in \cite{Mayr2012}, it can be seen
that the runtime of the verifier that Mayr constructs for the problem $\smp(\m
a)$, when $\m a$ has a Maltsev term, has polynomial dependence on the size of
$\m a$ in addition to the size of the input to $\smp(\m a)$. We stress that
Mayr's verifier requires that the table for a Maltsev term of $\m a$ is given as part of
the description of $\m a$.
\begin{theorem}\label{NP} The decision problem \minority\ is in the class \compNP.
\end{theorem}
\begin{proof}
To prove this theorem, we provide a polynomial reduction $f$ of \minority\ to \smpun. By Theorem~\ref{smpun}, this will suffice.
Let $\m a$ be an instance of \minority, i.e., a~finite
idempotent algebra that has at least one operation.
We first check, using the polynomial-time
algorithm from Corollary~\ref{maltsevterm}, whether $\m a$ has a~Maltsev
term. If it does not, then $\m a$ does not have a~minority term, and in this case we
set $f(\m a)$ to be some fixed `no' instance of \smpun. Otherwise, we augment the list of basic operations of $\m a$ by
adding the Maltsev operation on $A$ that the algorithm produced. Denote
the resulting (idempotent) algebra by $\m a'$ and note that $\m a'$ can be constructed from $\m a$ by a polynomial-time algorithm.
Also, note that $\m a'$ is term equivalent to
$\m a$ and so the subpower membership problem is the same for both
algebras.
If we set $f(\m a)$ to be the instance of \smpun\ that consists of the~list of tables of basic operations of~$\m a'$ along with
$\min(\m a)$ then we have, by Proposition~\ref{min-instance}, that $f(\m a)$ is a `yes' instance of \smpun\ if and only if $\m a$ has a minority term. Since the construction of $f(\m a)$ can be carried out by a procedure whose runtime can be bounded by a polynomial in $\|\m a\|$, we have produced a polynomial reduction of \minority\ to \smpun, as required.
\end{proof}
\section{Conclusion}
While Theorem~\ref{NP} establishes that testing for a~minority term for finite
idempotent algebras is not as hard as it could be, the true complexity of this
decision problem is still open. Our proof of this theorem closely ties the
complexity of {\minority} to the complexity of the subpower membership problem
for finite Maltsev algebras and specifically to the problem \smpun. Thus any progress on determining the complexity of
$\smp(\m a)$ for finite Maltsev algebras may have a~bearing on the complexity
of {\minority}.
There has certainly been progress on the algorithmic side of $\smp$;
a~major recent paper has shown in particular that $\smp(\m a)$ is tractable for
$\m a$ with cube term operations (of which a Maltsev term operation is
a~special case) as long as $\m a$ generates a residually small
variety~\cite{BMS18} (the statement from the paper is actually stronger than
this, allowing multiple algebras in place of $\m a$).
In Section~\ref{join} we introduced the notion of a~minority-majority term and
showed that if testing for such a~term for finite idempotent algebras could be
done by a~polynomial-time algorithm, then \minority\ would lie in the
complexity class \compP. This is why we conclude our paper with a~question
about deciding minority-majority terms.
\begin{open-problem*}
What is the complexity of deciding if a~finite idempotent algebra has
a~minority-majority term?
\end{open-problem*}
\section{Introduction}
After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file.
Please follow the steps and style guidelines outlined below for submitting your author response.
The author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers.
It is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers.
You may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments.
Per a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments.
Authors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers.
Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers.
Just like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction.
The rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading).
\subsection{Response length}
Author responses must be no longer than 1 page in length including any references and figures.
Overlength responses will simply not be reviewed.
This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide.
Note that this \LaTeX\ guide already sets figure captions and references in a smaller font.
\section{Formatting your Response}
{\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}
All text must be in a two-column format.
The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high.
Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them.
The top margin should begin 1 inch (2.54 cm) from the top edge of the page.
The bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper;
for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page.
Please number any displayed equations.
It is important for readers to be able to refer to any particular equation.
Wherever Times is specified, Times Roman may also be used.
Main text should be in 10-point Times, single-spaced.
Section headings should be in 10 or 12 point Times.
All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm).
Figure and table captions should be 9-point Roman type as in \cref{fig:onecol}.
List and number all bibliographical references in 9-point Times, single-spaced,
at the end of your response.
When referenced in the text, enclose the citation number in square brackets, for example~\cite{Alpher05}.
Where appropriate, include the name(s) of editors of referenced books.
\begin{figure}[t]
\centering
\fbox{\rule{0pt}{0.5in} \rule{0.9\linewidth}{0pt}}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:onecol}
\end{figure}
To avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \cref{fig:onecol} in the author response or in the paper).
See \LaTeX\ template for a workaround.
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered.
Please ensure that any point you wish to make is resolvable in a printed copy of the response.
Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.
Readers (and reviewers), even of an electronic copy, may choose to print your response in order to read it.
You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it is almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below
{\small\begin{verbatim}
\usepackage{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.pdf}
\end{verbatim}
}
{\small
\bibliographystyle{ieee_fullname}
}
\section{Introduction}
\label{sec:intro}
Semantic segmentation is one of the high-level tasks in computer vision that assigns a class label to each pixel of an image. It plays a significant role in many applications, since it provides scene understanding at the pixel level; such applications include pedestrian segmentation, autonomous driving, and medical diagnosis. Semantic segmentation differs from other common computer vision tasks such as image classification and object detection in terms of its output: image classification reports which objects exist in an image, while object detection gives the object labels and locations via bounding boxes. Image segmentation is divided into three sub-branches: semantic, instance, and panoptic segmentation. Semantic segmentation provides a class label for each pixel of an image, while instance segmentation identifies and segments each instance of a class separately. Panoptic segmentation combines the two, assigning a class label to every pixel while also delineating the individual instances.
The interest in semantic segmentation has increased rapidly since deep learning methods began achieving promising results; deep learning-based approaches have demonstrated a significant performance boost over earlier methods. Different DNN architectures and mechanisms have been proposed to obtain better segmentation results. For instance, fully convolutional networks (FCN) \cite{long2015fully} have driven recent advances in deep learning-based semantic segmentation, since many novel models build on them to obtain dense predictions. Many semantic segmentation networks also employ the encoder-decoder structure: the encoder extracts features while reducing the resolution, and the decoder restores the resolution. SegNet \cite{badrinarayanan2017segnet} passes the indices of the max locations during pooling in the encoder to the decoder. Unet \cite{ronneberger2015u} employs an encoder-decoder structure with skip connections that pass high-resolution features from the contracting path to the expanding path to guide semantic segmentation. DeepLabV3+ \cite{chen2018encoder} benefits from spatial pyramid pooling and atrous convolution. In addition, BiseNetV2 \cite{yu2021bisenet} captures high-level semantics and spatial details with a two-pathway architecture, and an aggregation layer exploits the extracted features for the semantic segmentation task.
Most of the methods in the literature exploit three-channel RGB images captured by visible-light cameras. However, due to the limitations of visible imaging, these methods cannot provide the desired performance in adverse environmental conditions such as low illumination, rain, and fog. Therefore, thermal images have been utilized for semantic segmentation, since thermal cameras capture thermal radiation, which is more stable across weather conditions and times of day. However, thermal images usually have low resolution and ambiguous object boundaries caused by thermal crossover, a phenomenon in which the thermal radiation coming from two different objects cannot be distinguished. Thermal crossover and the scarcity of thermal datasets have left semantic segmentation in thermal images under-explored.
To the best of our knowledge, this work is the first survey of semantic segmentation methods in thermal images. The key contributions of this survey are as follows:
\begin{itemize}
\setlength\itemsep{0em}
\item A broad survey of current datasets, including RGB and thermal (RGB-T) image pairs and solely thermal images.
\item A comprehensive review of the deep learning-based thermal image semantic segmentation methods with their architectures and contributions.
\item A well-organized comparison of the methods with the announced quantitative measures in the papers.
\end{itemize}
This paper is organized as follows: Section II overviews the datasets that include thermal and RGB images, deep learning-based semantic segmentation methods for multi-spectral inputs, and a brief discussion of the presented methods. Section III introduces thermal image datasets, semantic segmentation methods using only thermal images, and a comparison of the methods based on the quantitative measures reported in the papers. Finally, Section IV concludes this survey.
\section{Combining Infrared and Visible Spectrum for Semantic Segmentation}
Because infrared and visible information come from different parts of the light spectrum, the two modalities can compensate for each other's deficiencies. While this provides an advantage, it restricts the proposed algorithms to specific hardware with two different sensors for thermal and visible light. Moreover, these methods require additional algorithms to fuse the information coming from the different spectra. The fusion methods should avoid information conflicts while incorporating complementary information from the different modalities. Besides, few datasets provide aligned RGB-T images with annotations. The number of proposed methods is limited for the reasons mentioned above.
Utilizing RGB and thermal images simultaneously improves model performance. Therefore, this part covers datasets with RGB-T image pairs and segmentation methods using both the visible and infrared spectra.
\subsection{Multi-spectral Datasets}
The Multi-Spectral Fusion Networks (MFNet) dataset \cite{ha2017mfnet} contains both RGB and IR images captured using an InfRec R500 camera, which has separate lenses and sensors for the visible and infrared spectra. All images have a spatial resolution of 480x640. The dataset consists of 820 daytime and 749 nighttime urban scene images annotated with eight classes (car, person, bike, curve, car stop, guardrail, color cone, and bump). The training set contains 50\% of the daytime images and 50\% of the nighttime images, while the remaining images are split equally between the validation and test sets. Some prediction results of MFNet \cite{ha2017mfnet} and SegNet \cite{badrinarayanan2017segnet} can be seen in Figure \ref{fig:mfnet}, which is directly taken from \cite{ha2017mfnet}. RGB-T image pairs and ground truth annotations from the dataset are presented in the first three rows of the same figure.
Shivakumar et al. introduced the Penn Subterranean Thermal 900 (PST900) dataset \cite{shivakumar2020pst900}, containing 894 aligned RGB-T image pairs with ground truth annotations. A Stereolabs ZED Mini stereo camera and a FLIR Boson 320 camera are used for data collection. PST900 aims to meet the needs of the DARPA Subterranean Challenge\footnote{https://www.subtchallenge.com/}, which requires the identification of four objects (fire extinguisher, backpack, hand drill, and thermal mannequin or person) and robustness in various underground conditions. Therefore, images are gathered from diverse environments with varying degrees of lighting. Two RGB-T image pairs and the corresponding ground truth annotations can be seen in Figure \ref{fig:pst900}. Additionally, the dataset provides 3416 annotated RGB images.
\begin{figure}[h]
\centering
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_rgb3.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_thermal3.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_mul_label3.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_rgb4.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_thermal4.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_mul_label4.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_rgb5.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_thermal5.png}
\end{subfigure}
\begin{subfigure}{2.5cm}
\includegraphics[width=2.5cm]{images/pst_mul_label5.png}
\end{subfigure}
\caption{RGB, thermal and annotation images from the PST900 dataset \cite{shivakumar2020pst900}}
\label{fig:pst900}
\end{figure}
The Freiburg Thermal dataset \cite{vertens2020heatnet} includes 12051 daytime and 8596 nighttime time-synchronized RGB-T image pairs captured in rural and urban environments. A stereo RGB camera rig (FLIR Blackfly 23S3C) and a stereo thermal camera rig (FLIR ADK) are used for data collection. However, only the test set, consisting of 32 daytime and 32 nighttime images, is annotated, with the following classes: road, sidewalk, building, curb, fence, pole/signs, vegetation, terrain, sky, person/rider, car/truck/bus/train, bicycle/motorcycle, and background.
The multi-model multi-stage network (MMNet) \cite{lan2021mmnet} compares its nighttime segmentation performance with other models using its own dataset. The dataset contains 541 urban-scene RGB and thermal images taken only at night, all with a resolution of 300x400. Since the dataset is not publicly available, its use is limited to MMNet.
\subsection{Multi-spectral Semantic Segmentation Methods}
Ha et al. \cite{ha2017mfnet} proposed Multi-Spectral Fusion Networks (MFNet) with two identical encoders for thermal and RGB images and one decoder block. The encoder has a mini-inception block with dilated convolution, which enlarges the receptive field while keeping the computational cost the same as a standard 3x3 convolutional layer with the same numbers of input and output channels. MFNet aims to achieve high inference speed for real-time semantic segmentation for autonomous vehicles, and the MFNet dataset, including RGB-Thermal (RGB-T) urban scene images, is introduced with pixel-level annotations for the self-driving task. MFNet includes a small decoder designed to reduce the number of parameters, and the decoder makes use of the low-level feature maps extracted in the encoders to improve up-sampling efficiency. A concatenation operation fuses the outputs of the RGB and infrared (IR) encoders, and the decoder receives the fused result as input. Some segmentation predictions of MFNet and SegNet \cite{badrinarayanan2017segnet} can be seen in Figure \ref{fig:mfnet}, which is directly taken from \cite{ha2017mfnet}.
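The receptive-field/cost trade-off of dilated convolution mentioned above is easy to make concrete. The sketch below illustrates the general arithmetic (not MFNet's actual mini-inception block; channel widths are illustrative):

```python
# Per-output-pixel multiply count of a 3x3 convolution is independent
# of the dilation rate, while the kernel's spatial extent grows.
# All numbers here are illustrative, not MFNet's actual channel widths.
def conv3x3_mults_per_pixel(c_in, c_out, dilation=1):
    return 3 * 3 * c_in * c_out  # dilation does not add weights

def kernel_extent(dilation, k=3):
    # A kxk kernel with dilation d spans (k - 1) * d + 1 pixels.
    return (k - 1) * dilation + 1

assert conv3x3_mults_per_pixel(64, 64, dilation=1) == \
       conv3x3_mults_per_pixel(64, 64, dilation=4)
assert [kernel_extent(d) for d in (1, 2, 4)] == [3, 5, 9]
```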
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{images/mfnet_result.png}
\caption{Some prediction results of MFNet \cite{ha2017mfnet} and Segnet \cite{badrinarayanan2017segnet} on the MFNet dataset \cite{ha2017mfnet}}
\label{fig:mfnet}
\end{figure*}
Sun et al. proposed the RGB-Thermal Fusion Network (RTFNet) \cite{sun2019rtfnet} to achieve semantic segmentation of urban scenes for autonomous vehicles. RTFNet adopts an encoder-decoder structure with two encoders extracting features from the RGB and IR inputs and one decoder restoring the resolution of the feature maps. The encoders are identical except for the input channel numbers of their first layers, and a slightly modified ResNet-50 \cite{he2016deep} is employed as the feature extractor. The infrared feature maps are fused into the RGB encoder through element-wise summation. The decoder uses the output of the last fusion layer as input to obtain dense predictions. The encoder and decoder of the model are designed asymmetrically: two large encoders and a small decoder. Each decoder layer has two sub-blocks introduced by RTFNet, namely Upception A and Upception B. Upception A keeps the resolution and channel number unchanged, whereas Upception B changes both, with the final channel number equal to the number of classes. Upception blocks also have short-cut connections. In short, the decoder block gradually restores the resolution.
The Penn Subterranean Thermal Network (PSTNet) \cite{shivakumar2020pst900} includes independent RGB and Fusion streams to generate segmentation maps from RGB and thermal images. The RGB stream can be trained without thermal data, since collecting aligned RGB-T images is challenging; the model then uses thermal images to improve the initial segmentation result in the Fusion stream. The RGB stream is a ResNet-18 \cite{he2016deep} architecture with an encoder-decoder and skip-connection scheme similar to U-Net \cite{ronneberger2015u}. It is trained with the annotated RGB images to obtain a per-pixel confidence volume over the classes. This volume and the thermal and RGB input images are concatenated, and the result is passed to the Fusion stream, which is essentially an ERFNet-based \cite{romera2017erfnet} encoder-decoder architecture.
Sun et al. proposed FuseSeg \cite{sun2020fuseseg}, employing an encoder-decoder structure and a two-stage fusion strategy to achieve segmentation in urban scenes. Two encoders take the three-channel RGB and one-channel thermal images as inputs, with DenseNet-161 \cite{huang2017densely} as their backbone. Moreover, FuseSeg introduces a decoder with three modules: a feature extractor with two convolutional layers, an upsampler, and an out block; the upsampler and the out block each have a transposed convolutional layer. The feature extractor extracts features from the fused feature maps while keeping their resolution unchanged. The upsampler and the out block each increase the resolution by a factor of 2, and the out block outputs the final segmentation result. Sun et al. also proposed a two-stage fusion strategy to effectively use the multi-spectral inputs and reduce the loss of spatial information due to downsampling. In the first stage, feature maps extracted from the inputs are fused by element-wise summation in the RGB encoder. These sums are then fused with the corresponding feature maps in the decoder through concatenation.
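The two-stage fusion pattern described above can be sketched in a few lines. The toy below uses 1-D integer lists in place of feature tensors; all names and sizes are illustrative only, not FuseSeg's actual layers or tensor shapes:

```python
# Toy sketch of two-stage fusion on 1-D integer "feature maps".
# Names and sizes are illustrative only.
def elementwise_sum(a, b):
    return [x + y for x, y in zip(a, b)]

rgb_feat = [5, 10, 2, 8]      # feature map from the RGB encoder
thermal_feat = [1, 4, 3, 0]   # feature map from the thermal encoder

# Stage 1: thermal features are fused into the RGB encoder by
# element-wise summation.
stage1 = elementwise_sum(rgb_feat, thermal_feat)

# Stage 2: the stage-1 result is fused with the corresponding decoder
# feature map by concatenation (along the channel dimension).
decoder_feat = [9, 7, 6, 2]
stage2 = stage1 + decoder_feat

assert stage1 == [6, 14, 5, 8]
assert len(stage2) == len(stage1) + len(decoder_feat)
```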
Vertens et al. \cite{vertens2020heatnet} proposed HeatNet, intending to achieve daytime and nighttime image segmentation without costly annotations of nighttime images. PSPNet \cite{zhao2017pyramid} is exploited as a teacher model to get the annotations of daytime images in the Freiburg Thermal dataset \cite{vertens2020heatnet}. For this purpose, PSPNet is trained on the Mapillary Vistas dataset \cite{neuhold2017mapillary}. Then, a multimodal semantic segmentation network is trained using RGB and thermal daytime images annotated by the teacher model. The multimodal network also exploits the PSPNet architecture and the first two blocks of the corresponding ResNet-50 \cite{he2016deep} encoder. Moreover, a domain adaptation method similar to \cite{tsai2018learning} is proposed to obtain nighttime segmentation results; therefore, a domain discriminator is employed after the softmax layer of the multimodal RGB-T model. Besides, the training is conducted using an alternating training scheme.
Graded-Feature Multilabel-Learning Network (GMNet) \cite{zhou2021gmnet} includes two encoders for feature extraction and three grading decoding stages to restore original resolution. The proposed model employed ResNet-50 \cite{he2016deep} as the backbone of the encoders. The fully connected and average pooling layers of ResNet are removed as they may result in the loss of spatial information and details. GMNet divides multilevel features into senior, intermediate, and junior grades. The features extracted from the ResNet's last three layers, in which the visual receptive fields are enlarged, are selected as senior features. Besides, the features from the first layer, which have more detailed information, are selected as junior features. Moreover, GMNet introduces two fusion modules, the shallow feature fusion module (SFFM) and the deep feature fusion module (DFFM), to use the junior, intermediate, and senior features. SFFM fuses the features from the first two layers of the encoders, whereas DFFM accomplishes the fusion operation for the last three layers. Finally, semantic, binary, and boundary loss functions are used to find the optimum parameters of the model.
Multi-Modal Multi-Stage Network (MMNet) \cite{lan2021mmnet} tackles the semantic segmentation problem by employing three encoder-decoder structures. In the first stage of the network, two separate encoder-decoder structures process RGB and thermal images with no information interactions between the modalities. In the second stage, one encoder-decoder fuses and refines the features from the first stage. The proposed model deploys ResNet-18 \cite{he2016deep} as encoders in the first stage, whereas a Mini Refinement Block (MRB) is proposed as the encoder for the second stage. Each encoder sends information to the corresponding decoder using skip connections. Since direct connections may impair fusion performance, an Efficient Feature Enhancement Module (EFEM) is proposed to reduce the semantic gap between encoders and decoders.
Zhang et al. \cite{zhang2021abmdrnet} proposed the Adaptive-weighted Bi-directional Modality Difference Reduction Network (ABMDRNet) containing three parts: a Modality Difference Reduction and Fusion (MDRF) subnetwork, a Multi-Scale Spatial Context (MSC) module, and a Multi-Scale Channel Context (MCC) module. All RGB-T networks strive to exploit complementary information from RGB and thermal images. However, the integration and utilization of multi-modality complementary information may be hampered by the modality difference generated by the distinct imaging mechanisms. Therefore, the MDRF subnetwork uses a bridging-then-fusing strategy to reduce the modality difference and utilize the multi-modality complementary information. The MSC and MCC modules are designed to exploit multi-scale contextual knowledge of cross-modality features and their long-range relationships along the spatial and channel dimensions.
Xu et al. \cite{xu2021attention} proposed the Attention Fusion Network (AFNet), which contains an attention fusion module to guide the fusion of the multi-spectral inputs. AFNet employs two identical encoders for feature extraction and a single decoder for resolution restoration. The encoders are designed based on ResNet-50 \cite{he2016deep} with dilated convolutions, and the downsampling operations in the last two residual blocks of ResNet are removed. To make full use of the complementary properties of the RGB and thermal images, the attention matrices are obtained considering the cross-spectral and the global contextual relations of the images. The fusion operation takes place under the guidance of these attention matrices, and the decoder uses the fused feature map. Moreover, the decoder employs three interpolations and three convolutional layers to obtain the segmentation result.
\subsection{Analysis \& Results}
Exploiting thermal images as well as RGB images may improve a segmentation network in terms of accuracy and robustness. For multi-spectral input semantic segmentation, several methods have been developed, and this review describes the similarities and differences of these methods in many aspects. The proposed architectures and fusion strategies tackle the difficulties of employing two images and producing a precise segmentation result. Two encoders and a single decoder are commonly used in RGB-T methods, such as \cite{ha2017mfnet}, \cite{sun2019rtfnet}, \cite{sun2020fuseseg}, \cite{zhou2021gmnet}, and \cite{xu2021attention}, to extract features and restore the resolution. Moreover, \cite{shivakumar2020pst900} employs two distinct streams to process RGB images and the fused features, respectively. In this way, it can be trained by using only RGB images, and thermal images may provide further improvement. Also, \cite{lan2021mmnet} has three encoder-decoder structures with EFEM connections. \cite{vertens2020heatnet} achieves nighttime image segmentation as well as daytime image segmentation by using a domain adaptation method. Furthermore, different fusion strategies are proposed to use complementary information from different modalities without information conflicts. \cite{ha2017mfnet} concatenates the outputs of the encoders, whereas \cite{sun2019rtfnet} fuses the thermal features into the RGB encoder through element-wise summation. Besides, more complex fusion strategies are proposed for the fusion operation, such as two-stage fusion, the bridging-then-fusing strategy, the attention fusion module, SFFM, and DFFM.
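The two elementary fusion operations mentioned above, element-wise summation and channel concatenation, can be sketched with toy feature maps; the channel-first shapes below are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations standing in for encoder outputs (C x H x W, assumed shapes).
rgb_feat = rng.random((64, 32, 32))
thermal_feat = rng.random((64, 32, 32))

# Element-wise summation keeps the channel count (as in RTFNet-style fusion).
summed = rgb_feat + thermal_feat

# Channel concatenation doubles the channel count (as in MFNet-style fusion).
stacked = np.concatenate([rgb_feat, thermal_feat], axis=0)

print(summed.shape, stacked.shape)  # (64, 32, 32) (128, 32, 32)
```

The more elaborate strategies (attention fusion, bridging-then-fusing) build additional weighting or alignment steps on top of these two primitives.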
The MFNet dataset \cite{ha2017mfnet} includes both day and night RGB images with aligned thermal images. Since the images in the dataset can provide complementary information, RGB-T correlations are essential. The quantitative results of the RGB-T methods on test images from \cite{ha2017mfnet} can be found in Table \ref{table:mfnet_results}. The results are shown using a standard evaluation metric, mean Intersection over Union (mIoU). All the results provided in the table are taken from the original papers of the mentioned methods. On the other hand, the PST900 dataset \cite{shivakumar2020pst900} is more challenging for thermal fusion networks since plenty of information is provided by RGB alone, and the same object images are captured at both above and below the ambient temperature, making learning RGB-T correlations challenging. The PSTNet \cite{shivakumar2020pst900} reports better results on PST900 because the model has a separate RGB stream and employs a late fusion approach.
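All quantitative comparisons in this section use mean Intersection over Union. A minimal NumPy implementation of the metric, averaging the per-class IoU over classes present in either map, might look as follows:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny hand-made label maps for illustration.
pred = np.array([[0, 0, 1], [1, 1, 2]])
gt   = np.array([[0, 0, 1], [1, 2, 2]])
print(mean_iou(pred, gt, num_classes=3))  # ≈ 0.722
```

Note that papers differ in whether absent classes or the background class are included in the average, so reported numbers are only directly comparable within one evaluation protocol.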
According to the results revealed in \cite{vertens2020heatnet}, on the MFNet dataset with three classes (person, car, bicycle), \cite{vertens2020heatnet} and \cite{ha2017mfnet} have comparable results, while \cite{sun2019rtfnet} outperforms both with 0.707 mIoU. The paper also indicates that \cite{vertens2020heatnet} achieves an mIoU score of 0.597 on the Freiburg Thermal dataset, while \cite{ha2017mfnet} and \cite{sun2019rtfnet} achieve only 0.314 and 0.586 mIoU, respectively.
The inference speeds of \cite{ha2017mfnet}, RTFNet-50, RTFNet-152 \cite{sun2019rtfnet}, and \cite{sun2020fuseseg} are reported as 229.86, 88.87, 34.07, and 30.01 FPS, respectively, in \cite{sun2020fuseseg}. Also, \cite{lan2021mmnet} and \cite{xu2021attention} report inference speeds slightly higher than that of \cite{sun2019rtfnet}.
\begin{table}[!h]
\centering
\caption{The Results of the RGB-T Methods on MFNet Dataset \cite{ha2017mfnet}}
\resizebox{0.35\textwidth}{!}
{%
\renewcommand{\arraystretch}{0.7}
\begin{tabular}{ll}
\hline
\textbf{\tiny{Method}} & \textbf{\tiny{mean IoU}} \\ \hline
\tiny{MFNet \cite{ha2017mfnet}} & \tiny{0.649} \\ \hline
\tiny{RTFNet-50 \cite{sun2019rtfnet}} & \tiny{0.517} \\ \hline
\tiny{RTFNet-152 \cite{sun2019rtfnet}} & \tiny{0.532} \\ \hline
\tiny{PSTNet \cite{shivakumar2020pst900}} & \tiny{0.484} \\ \hline
\tiny{FuseSeg \cite{sun2020fuseseg}} & \tiny{0.545} \\ \hline
\tiny{GMNet \cite{zhou2021gmnet}} & \tiny{0.573} \\ \hline
\tiny{MMNet \cite{lan2021mmnet}} & \tiny{0.528} \\ \hline
\tiny{ABMDRNet \cite{zhang2021abmdrnet}} & \tiny{0.548} \\ \hline
\tiny{AFNet \cite{xu2021attention}} & \tiny{0.546} \\ \hline
\end{tabular}
}
\label{table:mfnet_results}
\end{table}
\section{Semantic Segmentation Using Only Infrared Spectrum}
In terms of capturing details under adverse environmental conditions, thermal imaging cameras outperform visual imaging cameras. Thermal imaging cameras are widely used in the defense industry, and since they have become more affordable, they have gained popularity in various other applications. Therefore, several approaches have been developed to achieve high accuracy under a wide range of conditions by using only infrared images. Unlike methods using RGB-T image pairs, aligned images are not required, making data collection easier. Still, the scarcity of annotated thermal image datasets limits the number of works in this area. Also, extracting features can be challenging due to thermal crossover and the low resolution and contrast of infrared images.
\subsection{Thermal Datasets}
Li et al. introduced Segment Objects in Day and Night (SODA) dataset \cite{li2020segmenting}. The SODA consists of 2168 real and 5000 synthetically generated thermal images. The real subset is captured by a FLIR camera (SC260). The thermal images generated from annotated RGB images are included in the synthetic subset. An image-to-image translation method, pix2pixHD \cite{wang2018high}, is trained with KAIST Multispectral Pedestrian Dataset \cite{hwang2015multispectral}. After training the model, the synthetic subset is generated from Cityscapes \cite{cordts2016cityscapes}. Figure \ref{fig:soda_syn} shows some synthetically generated thermal images and ground truth annotations. Labels of the generated thermal images can be obtained directly from RGB annotations. Besides, the real subset images are manually annotated. Three thermal images and annotations from the real subset can be seen in Figure \ref{fig:soda_real}.
\begin{figure}[h]
\centering
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_aachen_img.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_aachen_label.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_stuttgart_img.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_stuttgart_label.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_stuttgart2_img.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_stuttgart2_label.png}
\end{subfigure}
\caption{Synthetically generated thermal images and ground truth annotations in the SODA dataset \cite{li2020segmenting}}
\label{fig:soda_syn}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_thermal1.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_label1.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_thermal2.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_label2.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_thermal3.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/soda_r_label3.png}
\end{subfigure}
\caption{Thermal images and ground truth annotations from the real subset of the SODA dataset \cite{li2020segmenting}}
\label{fig:soda_real}
\end{figure}
For pedestrian detection from thermal images, there are a few well-known datasets such as OSU Thermal Pedestrian Database (OSUT) \cite{davis2005two}, Terravic Motion IR Database (TMID) \footnote{http://vcipl-okstate.org/pbvs/bench/}, and Pedestrian Infrared/Visible Stereo Video Dataset (PISVD) \cite{bilodeau2014thermal}. However, these datasets are not suited for the segmentation task due to the lack of annotations. Wang et al. \cite{wang2019thermal} introduced a new dataset including thermal pedestrian images from the driver's perspective for autonomous driving applications. The dataset consists of 1031 thermal images at a resolution of 720$\times$480 sampled from 25 scene videos. The dataset is also split into two equal parts for train and test sets. However, the dataset is not publicly available.
Another application of the thermal semantic segmentation might be the ground vehicle segmentation from aerial images. In this context, NPU\_CS\_UAV\_IR\_DATA \cite{liu2018real} dataset includes UAV-based infrared vehicle images. The dataset also provides four groups of road images for testing. Flying altitude, resolution of the images, and ambient temperature differ in these groups. Also, the captured images differ in terms of the number of vehicles and surroundings.
For networks aiming at good segmentation results despite poor illumination and noise, the Low Illumination Image (LII) dataset \cite{chen2020nv} includes manually labeled thermal, motion-blurred, nighttime, and weakly lit images. The average signal-to-noise ratio (SNR) of the images is 25.5 dB.
Xiong et al. introduced SCUT-Seg dataset \cite{xiong2021mcnet} which includes nighttime driving scenes from different environments. The dataset includes 2010 thermal images with semantic-wise annotations for ten classes (background, road, person, rider, car, truck, fence, tree, bus, and pole). Also, instance-wise annotations are provided for future works. The training and testing sets consist of 1365 and 665 images, respectively. Four example images and their ground truth annotations from the training set are presented in Figure \ref{fig:scut_seg}.
\begin{figure}[h]
\centering
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_thermal1.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_label1.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_thermal2.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_label2.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_thermal3.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_label3.png}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_thermal4.jpg}
\end{subfigure}
\begin{subfigure}{3cm}
\includegraphics[width=3cm]{images/scut_seg_label4.png}
\end{subfigure}
\caption{Thermal images and corresponding ground truth images from the SCUT-Seg dataset \cite{xiong2021mcnet}}
\label{fig:scut_seg}
\end{figure}
\subsection{Thermal Semantic Segmentation Methods}
Edge-Conditioned CNN (EC-CNN) \cite{li2020segmenting} exploits edge prior information to increase the quality of the segmentation output, since thermal crossover and thermal sensors cause ambiguous object boundaries and imaging noise, respectively. Gated feature-wise transform (GFT) layers are inserted into the model to embed the edge information properly. The proposed model consists of an edge extractor (EdgeNet), EC-CNN blocks, and a DeepLabV3-based \cite{chen2017rethinking} semantic segmentation network. As the edge extractor, HED (Holistically-nested Edge Detection) \cite{xie2015holistically} is employed to obtain high-quality edge information. Since there is no thermal dataset with edge annotations, HED is trained on an RGB dataset with ground truth edge annotations; even so, its edge results on thermal images are quite successful. The EC-CNN blocks consist of convolutional layers and GFT layers that use the output of EdgeNet to guide the segmentation of the input image. Also, the DeepLabV3 model employs ResNet as a feature extractor together with atrous convolutions, and some ResNet blocks are replaced with EC-CNN blocks to embed the edge prior.
Wang et al. \cite{wang2019thermal} proposed a thermal infrared pedestrian segmentation algorithm based on a conditional generative adversarial network (IPS-cGAN). The generator of IPS-cGAN is based on U-Net \cite{ronneberger2015u} with two modifications to better suit thermal infrared pedestrian segmentation. Firstly, to obtain more efficient connections, the original convolutional blocks are replaced by residual blocks. Secondly, dropout with a rate of 0.5 is deployed to make the network more robust. Moreover, SandwichNet, with a symmetrical structure, is designed as the discriminator of the proposed network. SandwichNet takes the original image and the segmentation result as inputs, and is designed based on the multi-channel-input PatchGAN \cite{isola2017image}. The difference is that SandwichNet takes a symmetrical three-channel result-image-result input built from the generator's segmentation result and the thermal image, and a three-channel truth-image-truth input built from the segmentation ground truth. The generator and the discriminator are trained end-to-end as a GAN with cross-entropy loss. The modifications to U-Net and the design of the discriminator yield a model that is more robust against noise for thermal infrared pedestrian segmentation.
The combination of the Gaussian-Bernoulli Restricted Boltzmann Machine (GB-RBM) and convolutional neural network is proposed in RT-SegRBM-Net \cite{masouleh2019development} to segment the vehicles from the UAV-based thermal images in real-time. The deep learning algorithm is designed based on SegNet \cite{badrinarayanan2017segnet} architecture, and GB-RBM is embedded into the overall structure to make use of the geometry information of the vehicles.
Nightvision-Net (NvNet) \cite{chen2020nv} is proposed for semantic segmentation of low-resolution infrared images under weak illumination. NvNet adopts the FCN-8S \cite{bhandari2019context} architecture with a contracting path, an expanding path, and a weighted loss. Also, transfer learning is utilized to increase the segmentation performance. The NvNet architecture consists of four parts: data refinement (DR), data normalization (DN), the contracting path, and the expanding path. The contracting path has several convolution layers and average pooling operations and outputs down-sampled feature maps; the aim of the expanding path is therefore to enhance the resolution of the output feature map. The contracting path is complemented by the expanding path, which applies consecutive layers with upsampling operations instead of pooling operations. Also, the expanding path uses the feature maps from the corresponding layers of the contracting path to achieve better localization of the objects. Moreover, data normalization is performed, which accelerates the convergence of the training. Besides, NvNet introduces a weighted-sigmoid-cross-entropy loss to calculate the error between the prediction and the ground truth.
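A weighted sigmoid cross-entropy of the kind NvNet describes can be sketched as follows; the per-pixel weighting scheme here is an assumption for illustration, not the exact formulation of the paper:

```python
import numpy as np

def weighted_sigmoid_ce(logits, targets, weights):
    """Per-element sigmoid cross-entropy, averaged with per-element weights."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # guard against log(0)
    loss = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    return float(np.sum(weights * loss) / np.sum(weights))

logits  = np.array([2.0, -1.0, 0.5])
targets = np.array([1.0, 0.0, 1.0])
weights = np.array([1.0, 1.0, 3.0])  # emphasize e.g. a rare foreground pixel
print(weighted_sigmoid_ce(logits, targets, weights))
```

Upweighting pixels of under-represented classes in this way discriminates important pixels during loss computation, which is the stated purpose of the weighted loss in NvNet.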
Xiong et al. proposed a Multi-level correction network (MCNet) \cite{xiong2021mcnet} to achieve thermal image segmentation for nighttime driving scenes. Thermal images have low resolution and blurred edges caused by thermal crossover; therefore, MCNet proposes the multi-level attention module (MAM) to address this problem. The MAM includes two sub-modules, the context aggregation module (CAM) and the correlation matrix correction module (CMCM). CAM models spatial correlations among pixel positions through a correlation matrix that learns the dependency between any two pixels. The correlation matrix is important because the properties of thermal images, such as low resolution and ambiguous object boundaries, may yield misleading contextual information. To prevent this misleading information and suppress noise, the CMCM module is also included in the proposed method: if the correlation values between intra-class pixels are lower than those between inter-class pixels, the CMCM corrects these wrong values. Furthermore, thermal images depend more strongly on edge information due to the lack of color information. Hence, a multi-level edge enhancement module (MEEM) is designed to enhance the edge information and improve the final feature representation over multiple iterations.
Feature Transverse Network (FTNet) \cite{panetta2021ftnet} is an end-to-end trainable convolutional neural network architecture. FTNet employs an encoder-decoder structure and an edge guidance part to conduct reliable pixel-wise classification. FTNet introduces a feature transverse network (decoder) exploiting a set of residual units \cite{he2016deep}. The ResNeXt-based \cite{xie2017aggregated} encoder network provides thermal image features at different resolutions by subsampling at several stages, and these feature maps are passed through the aforementioned residual units. Also, FTNet employs a fully connected layer to combine the outputs of the residual units. Edge information is also exploited to reduce the effects of thermal crossover and sensor noise on the segmentation map. The edges are extracted from the feature map obtained in the third layer of the encoder and passed through convolution, batch normalization, and ReLU layers in sequence. The edge map is then upsampled to the input image resolution before passing through another convolutional layer. Finally, the edge map is fused with the feature maps obtained in the decoder, and the result is passed through the final block, including convolutional, batch normalization, and ReLU layers. In addition, an edge-based loss function is combined with the semantic loss while training FTNet to increase the segmentation accuracy. Edge ground truths are calculated from the semantic label gradients.
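Deriving an edge ground truth from the gradients of a semantic label map can be sketched as below; the exact procedure in the FTNet paper may differ, so this is an assumed minimal variant that marks a pixel as edge whenever its class differs from a neighbor:

```python
import numpy as np

def edges_from_labels(labels):
    """Binary edge map: a pixel is an edge if its class differs from the
    pixel below or to the right (forward-difference label gradients)."""
    dy = np.zeros_like(labels, dtype=bool)
    dx = np.zeros_like(labels, dtype=bool)
    dy[:-1, :] = labels[:-1, :] != labels[1:, :]
    dx[:, :-1] = labels[:, :-1] != labels[:, 1:]
    return (dy | dx).astype(np.uint8)

labels = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [2, 2, 2]])
print(edges_from_labels(labels))
```

Such an edge target requires no extra annotation effort, since it is computed directly from the existing semantic labels.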
\subsection{Analysis \& Results}
Thermal images alone can provide sufficient information in adverse environmental conditions, so a few segmentation methods have been developed using only thermal images. Although thermal crossover and noise introduced by the thermal imaging sensors make the segmentation task more difficult, the thermal segmentation methods propose different remedies such as employing edge information and a correlation matrix. \cite{li2020segmenting}, \cite{xiong2021mcnet}, and \cite{panetta2021ftnet} propose mechanisms to extract edge information from the thermal image and use it to guide segmentation. \cite{masouleh2019development}, \cite{chen2020nv}, and \cite{wang2019thermal} employ encoder-decoder structures, whereas \cite{li2020segmenting} exploits atrous convolutions to obtain the output segmentation map. Moreover, \cite{xiong2021mcnet} builds a correlation matrix, which models the dependency between any two pixels, to focus on same-class regions and avoid noisy information. Also, \cite{chen2020nv} exploits a weighted-sigmoid-cross-entropy loss for images collected under weak illumination to discriminate important pixels while calculating the loss. \cite{masouleh2019development} attempts to segment vehicles from UAV-based thermal images, so a Boltzmann machine is employed to extract geometry from the top view of vehicles and increase the segmentation accuracy. \cite{xiong2021mcnet} is proposed for nighttime driving scenes, so the model exploits inherent aspects of driving scene images, such as the fact that object instances appear only in narrow bands crossing horizontally through the image's center.
The SODA dataset \cite{li2020segmenting} includes day and night thermal images for generic purposes and is commonly used for testing thermal segmentation methods. On the SODA testing set, \cite{li2020segmenting} reported the performance of the proposed method and \cite{chen2017rethinking} as 0.619 and 0.571 mIoU, respectively. Moreover, \cite{xiong2021mcnet} and \cite{panetta2021ftnet} reach 0.503 and 0.600 mIoU on the same dataset, as reported in \cite{panetta2021ftnet}. It can be noted that \cite{li2020segmenting} and \cite{panetta2021ftnet} have comparable results on SODA. In addition, \cite{wang2019thermal} is designed to overcome regional intensity inhomogeneity and be more robust against various noises for infrared pedestrian segmentation. According to the results revealed in \cite{wang2019thermal}, the proposed method achieves 0.939 mIoU on its own dataset and outperforms \cite{chen2017rethinking}, \cite{mirza2014conditional}, and \cite{isola2017image}. In terms of average precision, \cite{masouleh2019development}, \cite{he2017mask}, \cite{chen2017rethinking}, and \cite{badrinarayanan2017segnet} achieve similar performance on the NPU\_CS\_UAV\_IR\_DATA \cite{liu2018real} test sets, whereas \cite{masouleh2019development} achieves a slightly better average processing time, as reported in \cite{masouleh2019development}. In addition, \cite{chen2020nv} reported that the proposed model achieves 0.912 mIoU, which is better than the 0.469 mIoU of \cite{chen2017deeplab} on the LII dataset. On the SCUT-Seg nighttime driving dataset, \cite{xiong2021mcnet} reported 0.676 mIoU and 32.52 FPS with a single NVIDIA GTX 1080 Ti. Moreover, \cite{panetta2021ftnet} reports its accuracy as 0.667 mIoU on the same dataset.
Similar to SCUT-Seg, the MFNet dataset \cite{ha2017mfnet} also contains driving scenes, and \cite{xiong2021mcnet} reaches 0.519 mIoU on the thermal images in the MFNet dataset, while \cite{chen2018encoder} only achieves 0.504 mIoU and RTFNet-50 \cite{sun2019rtfnet}, using both RGB and thermal images, achieves 0.503 mIoU according to the results revealed in \cite{xiong2021mcnet}. Using the thermal images in the \cite{ha2017mfnet} dataset, \cite{panetta2021ftnet} reported its accuracy as 0.471 mIoU, which is also comparable with the results of \cite{xiong2021mcnet} and \cite{sun2019rtfnet}.
\section{Conclusion}
This survey reviews recent progress in deep learning-based semantic segmentation methods using thermal images and compares them in terms of their architectures, performance, applications, and the proposed approaches to improve the models. This survey also provides descriptions of the available thermal image datasets.
In conclusion, using thermal images in semantic segmentation tasks helps to increase the robustness and success of the systems. Also, the proposed methods can be used in a wide range of applications. Due to the limited number of available thermal image datasets and characteristics of the images, only a few methods have been developed. The semantic segmentation of thermal images is very promising, and further research can be advanced in several directions, such as creating synthetic data, data augmentation, and fusion strategies.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\noindent When a liquid is quenched faster than its critical cooling rate, crystallization events can be overcome and atoms can structurally freeze into a disordered solid state.\textsuperscript{\cite{Turnbull1969,Zhong2014a,Salinga2018}} Upon cooling the melt, the atomic mobility decreases. Eventually, the system can no longer assume the equilibrium structure of the supercooled liquid state within the timescale of the experiment and a non-equilibrium glass state is created. The free energy difference between the supercooled liquid and glass state results in structural relaxation, where the atomic configurations in the glass change over time. The intrinsic material properties, including viscosity, density, and the electronic bandgap, change due to relaxation,\textsuperscript{\cite{Angell2000b,Priestley2005a}} and this is understood to occur in three phases (Figure~\ref{fig:SketchSigmoidRelaxation}a).\textsuperscript{\cite{Chen2007,McKenna2003,Priestley2009}} For every rearrangement, a finite energy barrier must be overcome, and therefore, for some amount of time, the onset phase, the properties do not change. In the second phase, where relaxation is most profound, the properties have been observed to change proportionally to $\log(t)$. Finally, approaching the supercooled liquid, the glass reaches a saturation phase, and the properties no longer continue to change. Tracking this structural relaxation process through all three phases is experimentally challenging. Most studies are focused on amorphous polymers and metallic glasses, yet there is little work on highly fragile glass formers with poor glass-forming ability, such as phase change chalcogenide glasses.
Thin films of phase change chalcogenides, such as Ge$_2$Sb$_2$Te$_5$ (GST), show interesting electrical and optical material properties, which can be rendered tunable via rapid and reversible crystalline-to-amorphous phase transitions. Phase change chalcogenides are exploited for many technologies, including the commercialized electrical phase change memory (PCM) technology. In PCM, a nanoscale volume of a chalcogenide compound is sandwiched between the top and bottom electrodes. Joule heating, from current across the electrodes (Figure~\ref{fig:SketchSigmoidRelaxation}b), allows reversible amorphization and crystallization of the chalcogenide glass.\textsuperscript{\cite{LeGallo2020AnPhysics}} The amorphous and crystalline states exhibit electrically distinct properties, and thus the device resistance can be toggled between an electrically conductive state (SET) and a resistive state (RESET). Within PCM devices, the amorphous state relaxes after RESET, and the observable metrics, such as the electrical resistance ($R$) and the threshold-switching voltage ($V_\text{th}$), change due to structural relaxation. This process is commonly referred to as drift.\textsuperscript{\cite{Zhang2020a}}
In the RESET state, the device resistance increases as $\log(R) = \log(R_0) + \nu_\text{R} \log(t/t_0)$, and the threshold-switching voltage increases as $V_\text{th} = V_\text{th,0} + \nu_\text{Vth} \log(t/t_0)$, where $\nu_\text{R}$ and $\nu_\text{Vth}$ denote the resistance and threshold-voltage drift coefficients, respectively, $R_0$ the resistance at $t_0$, and $V_\text{th,0}$ the threshold voltage at $t_0$. Importantly, however, these equations are only valid in the relaxation phase. What remains to be investigated both qualitatively and quantitatively are the other two phases, namely, when does structural relaxation begin and when does it end? To this end, measurements that capture the relaxation from extremely short to long timescales at different temperatures are required. Although drift in PCM devices has been studied extensively, experiments have only revealed the $\log(t)$ dependency of resistance and threshold voltage.\textsuperscript{\cite{Gorbenko2019,LeGallo2018,Salinga2018}} A notable exception is a stand-alone sub-\unit[100]{$ns$} drift measurement by Ielmini et al. on GST that hints at the presence of a region where drift is absent.\textsuperscript{\cite{Ielmini2007}} In Supplementary Note 1 we compare these experiments to our study.
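The two drift laws above can be evaluated numerically; note that the resistance law is a power law in time, $R = R_0 (t/t_0)^{\nu_\text{R}}$, while the threshold voltage is linear in $\log(t)$. The coefficients and reference values in this sketch are assumed for demonstration only, not measured values:

```python
import numpy as np

# Assumed illustrative parameters (not experimental values from this work).
def resistance_drift(t, R0=1e6, nu_R=0.1, t0=1.0):
    """log(R) = log(R0) + nu_R*log(t/t0)  <=>  R = R0*(t/t0)**nu_R."""
    return R0 * (t / t0) ** nu_R

def threshold_voltage_drift(t, Vth0=1.0, nu_Vth=0.02, t0=1.0):
    """Vth = Vth0 + nu_Vth*log(t/t0)."""
    return Vth0 + nu_Vth * np.log(t / t0)

t = np.logspace(0, 4, 5)            # 1 s ... 10^4 s after RESET
R = resistance_drift(t)             # appears linear on a log-log plot
V = threshold_voltage_drift(t)      # appears linear on a semilog plot
```

Both expressions diverge as $t \to 0$ or grow without bound as $t \to \infty$, which is exactly why they can only describe the intermediate relaxation phase and not the onset or saturation phases studied here.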
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{Figure1.eps}
\caption{(a) Sketch of the temporal evolution of material properties upon structural relaxation: Previous studies of phase change materials have been limited to time scales and temperatures in which drift obeys a $\log(t)$ dependence. In this work, we expand the measurement range to the onset of relaxation and study its temperature dependence. Since relaxation is thermally activated, the onset and saturation of drift shift to shorter time scales with increasing ambient temperature. (b) Transmission electron micrograph of the mushroom-type PCM device used in this study. The amorphous dome that is created by passing a current through the narrow bottom electrode is highlighted with a white dotted line.}
\label{fig:SketchSigmoidRelaxation}
\end{figure}
The goal of this study is to shed light on the phase in which relaxation is absent: specifically, to quantify on what timescales the commonly assumed $\log(t)$ dependence is valid, and to appraise the different classical models used to explain drift. To this end, we employ $V_\text{th}$~as a means to observe the state of relaxation. We study the drift characteristics of GST and a doped GST (dGST) by setting up a $V_\text{th}$~drift measurement and analysis framework. Mushroom-type PCM devices of both materials are melt-quenched at temperatures spanning from \unit[100]{$K$} to \unit[300]{$K$}, and drift is probed from tens of nanoseconds to ten seconds after RESET. The experimental data are fitted with two models, namely the collective relaxation model and the Gibbs model of relaxation, and the different physical parameters used in the fitting are discussed and compared.
\section{Threshold-switching voltage drift experiments}\label{sec:expdata}
Because structural relaxation processes are thermally activated, the ambient temperature can be used as a knob to shift the onset and saturation of drift into experimentally accessible timescales (see Figure~\ref{fig:SketchSigmoidRelaxation}a). However, observing the saturation phase by raising the ambient temperature (above \unit[400]{$K$}) is precluded by potential recrystallization of the amorphous phase. On the other hand, there is a potential for measuring the onset of drift by monitoring it at lower ambient temperatures. The challenge, however, is the inability to reliably measure electrical resistance at short timescales and low temperatures. Hence, we resort to $V_\text{th}$~as a means to observe the state of relaxation. $V_\text{th}$~marks the switching of the highly resistive RESET state to an electronically excited on-state (see Figure~\ref{fig:SummaryExperiment}a).
To probe the $V_\text{th}$~drift, the mushroom cell is repeatedly programmed to a new RESET state, and SET pulses with delay times varying from \unit[10]{$ns$} to \unit[10]{$s$} are applied. Note that each $V_\text{th}$~measurement results in the erasure of the corresponding RESET state by recrystallization. The measured $V_\text{th}$~drift thus represents an averaged behavior of RESET states created in the device. Details on the experimental protocol and the algorithm to define $V_\text{th}$~are provided in the Methods. Three distinct regimes are apparent in the temporal evolution (Figure~\ref{fig:SummaryExperiment}b). In regime 1, up to \unit[$\sim$1]{$\mu s$}, there is a steep increase of $V_\text{th}$. Most likely this is caused by the decay of the RESET excitation. While previous studies attributed this regime solely to the decay of the electrical excitation,\textsuperscript{\cite{Elliott2020,Ielmini2007}} thermal transient effects may also play an important role. The threshold voltage increase with time in regime 1 appears to be independent of the ambient temperature (Supplementary Note 2). Regime 2 shows a flattening of the curve and almost constant threshold voltage values. Finally, in regime 3, we observe a continuous linear increase with $\log(t)$. We attribute the temporal evolution in regimes 2 and 3 to the structural relaxation of the amorphous phase, with the transition between them marking the onset of relaxation.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figure2.eps}
\caption{Threshold-switching voltage drift experiment: (a) Programming scheme. A RESET pulse with a \unit[3]{$ns$} falling edge programs the device to a melt-quenched amorphous state. A SET pulse with a \unit[100]{$ns$} leading edge is applied to probe the threshold voltage value. In the example shown here, the device drifts for \unit[372]{$ns$} before threshold switching occurs at \unit[1.5]{$V$}. The time interval (t\textsubscript{delay}) between the RESET and SET pulses is varied to probe the temporal evolution. (b) $V_\text{th}$~drift in GST. The $V_\text{th}$~evolution exhibits three distinct regimes (labeled in grey): first, a steep increase up to \unit[$\sim$1]{$\mu s$}; second, a flattening to almost constant values; and third, a transition to a monotonic increase approximately proportional to $\log(t)$. (c) Temperature-dependent $V_\text{th}$~drift of GST and (d) doped GST in regimes 2 and 3. With increasing ambient temperature, $V_\text{th}$~begins to change at shorter timescales and shows a larger change with time. Error bars show the standard deviation over 15 measurements.}
\label{fig:SummaryExperiment}
\end{figure}
In the following, we further analyze the temperature dependence of regimes 2 and 3 for GST and dGST. To create comparable RESET states at different ambient temperatures, the programming power was scaled such that the initially molten volume remains approximately constant (Supplementary Note 3), and the RESET pulse trailing edge was kept constant. Both materials show the same characteristic behavior (Figure \ref{fig:SummaryExperiment}c and \ref{fig:SummaryExperiment}d). With increasing ambient temperature, the $V_\text{th}$~values decrease. This is due to the thermally activated electrical transport in the amorphous phase. All experiments capture regime 2, in which $V_\text{th}$~hardly changes. The transition point to regime 3, the onset of relaxation, shifts continuously to shorter timescales. Furthermore, the slope in regime 3 progressively increases with increasing temperature. Both effects, namely the onset shift and the slope change, are expected because the relaxation processes are accelerated by increasing temperature.
At \unit[100]{$K$}, we find $V_\text{th}$~drift to be absent, which can be either because the drift coefficient is very small or because the onset has shifted outside of the measurement range. One possible reason for the former is that in a phase change material with trap states deep within the band gap, like Ge$_2$Sb$_2$Te$_5$ \textsuperscript{\cite{Luckas2010,Rutten2019,Konstantinou2019}}, resistance drift at very low temperatures may not be observable if the electrical transport changes from a trap limited band transport to a hopping type transport. The activation energy for hopping would be defined by the distance between the Fermi level and the trap states, which may not necessarily change upon structural relaxation.
\section{Analytical Framework and Relaxation Models}\label{sec:analysis}
In this section, we justify the use of $V_\text{th}$~to monitor the state of relaxation and analyze the experimental data presented in Section \ref{sec:expdata} based on state-of-the-art relaxation models. Establishing an analytical relation between $V_\text{th}$~and the state of relaxation, Glass($t$,$T_\text{hist}$), where $T_\text{hist}$ captures the thermal history, poses a non-trivial problem. First, the exact mechanism of threshold switching is still debated, and second, it is not clear which material parameters change with relaxation.
We assume that $V_\text{th}$~can be expressed as the sum of a temperature-dependent function $f(T)$, a term proportional to Glass($t$,$T_\text{hist}$), and an offset value ($C_2$) that could change, for example, with the size of the amorphous dome.
\begin{equation}
\label{equ:VariableSeparation}
V_\text{th} = f(T) + C_1 \cdot \mathrm{Glass}(t,T_\text{hist}) + C_2
\end{equation}
A key basis for this assumption is the approximately linear change of $V_\text{th}$ with the activation energy for electrical conduction ($E_\text{a}$).\textsuperscript{\cite{Pirovano2004,LeGallo2016}} $E_\text{a}$ in turn has been shown to increase with drift,\textsuperscript{\cite{Boniardi2011,LeGallo2018,Wimmer2014b}} and a linear increase of $E_\text{a}$ with $\log(t)$ has been experimentally measured for different phase change materials.\textsuperscript{\cite{Fantini2012,Rutten2015}} Thus, we expect that $V_\text{th}$~is proportional to Glass($t$,$T_\text{hist}$). Finally, we assume that the change of $V_\text{th}$~upon relaxation is decoupled from the change of ambient temperature $T$. Such a decoupling was previously deduced in \cite{Ciocchini2012} and can also be derived from simulations based on a thermally assisted switching model (see Supplementary Note 4).\textsuperscript{\cite{LeGallo2016}}
From Equation \ref{equ:VariableSeparation}, it can be seen that the temporal change in $V_\text{th}$~with respect to a defined reference point, $t_\text{ref}$, depends only on Glass($t$,$T_\text{hist}$). For our analysis, we identify $V_\text{th}$(\unit[1]{$\mu s$}) as an ideal reference point where drift is absent for all temperatures studied. We denote the initially created glass state, which has not yet begun to relax, as $\mathrm{Glass_0}$.
\begin{equation}
\label{equ:DeltVthDeltaGlass}
\Delta V_\text{th} = V_\text{th}(t,T)-V_\text{th}(1\,\mu s,T) = C_1 \cdot (\mathrm{Glass}(t,T_\text{hist})-\mathrm{Glass_0})
\end{equation}
In the following, we will show that, based on these assumptions, the temperature-dependent onset of threshold voltage drift can be captured with two common relaxation models proposed for phase change materials, namely the Gibbs model \textsuperscript{\cite{Ielmini2008a,Lavizzari2009}} and the collective relaxation model.\textsuperscript{\cite{Sebastian2015,LeGallo2018}}
\subsection*{Collective relaxation model}
\noindent The collective relaxation model does not specify individual relaxation processes or defect states but instead quantifies the relaxation state of the glass by an abstract state variable $\Sigma$. $\Sigma = 1$ denotes an infinitely unrelaxed state, and $\Sigma \to 0$ as the system approaches equilibrium.\textsuperscript{\cite{Knoll2009}} Upon relaxation, the system assumes configurational states of progressively lower energy. The activation energy $E_\text{b}=E_\text{s}(1-\Sigma)$ that must be overcome for the next relaxation step increases monotonically. The temporal evolution of the state variable $\Sigma$ is captured by the rate equation
\begin{equation}
\label{equ:DifferentialSigma}
\frac{d\Sigma(t)}{dt}= - \nu_0 \Delta_\Sigma \exp\left(\frac{-E_\text{s}(1-\Sigma(t))}{k_\text{b}T}\right)
\end{equation}
assuming an Arrhenius dependence of the relaxation rate on the activation energy. The attempt to relax frequency, which is on the order of phonon frequencies, is denoted by $\nu_0$, and $\Delta_\Sigma$ is the change of $\Sigma$ with each relaxation step. Consequently, $\Delta_\Sigma E_\text{s}$ defines the increase of the activation energy with each subsequent relaxation step. For a constant ambient temperature, the differential equation can be solved analytically as
\begin{equation}
\label{equ:SigmaOftime}
\Sigma(t,T) = -\frac{k_\text{b}T}{E_\text{s}} \log\left(\frac{t+\tau_0}{\tau_1}\right)
\end{equation}
with $\tau_0 = \frac{k_\text{b}T}{\nu_0\Delta_\Sigma E_\text{s}} \exp(\frac{E_\text{s}(1-\Sigma_0)}{k_\text{b}T})$ marking the beginning of relaxation from the initial glass state $\Sigma_0$ and $\tau_1 = \frac{k_\text{b}T}{\nu_0\Delta_\Sigma E_\text{s}} \exp(\frac{E_\text{s}}{k_\text{b}T})$ the time at which the system reaches equilibrium. In the range $\tau_0 \ll t < \tau_1$, the temporal evolution of $\Sigma$ follows $\log(t)$, and the change of $\Sigma$ depends linearly on the ambient temperature. The linear temperature dependence of $\tau_0$ is almost negligible compared to the exponential term. To a first approximation, the shift of the relaxation onset with temperature is defined by the smallest activation energy for relaxation, $E_\text{min} = E_\text{s}(1-\Sigma_0)$, and the effective attempt to relax frequency $\nu_0 \Delta_\Sigma$ is a scaling factor defining the timescale of the onset.
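As a sanity check of these expressions, the following Python sketch (a minimal illustration, not the fitting code used in this work) evaluates $\tau_0$, $\tau_1$, and $\Sigma(t,T)$ using the fitted GST parameter combinations from Table \ref{tab:CollectiveRelaxation}, together with the arbitrary, non-unique choice $E_\text{s} = 1\,$eV:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant [eV/K]

# Fitted GST parameter combinations (Table 1) with the non-unique choice Es = 1 eV:
Es = 1.0             # [eV]  activation energy scale (only products are constrained)
Sigma0 = 0.81        # so that (1 - Sigma0)*Es = 0.19 eV
nu0_dSigma = 2.48e6  # [1/s] so that nu0*dSigma*Es = 2.48e6 eV/s

def tau0(T):
    # onset of relaxation from the initial glass state Sigma0
    return KB * T / (nu0_dSigma * Es) * np.exp(Es * (1 - Sigma0) / (KB * T))

def tau1(T):
    # time at which the system reaches equilibrium (Sigma -> 0)
    return KB * T / (nu0_dSigma * Es) * np.exp(Es / (KB * T))

def sigma(t, T):
    # analytic solution Sigma(t, T) for a constant ambient temperature
    return -KB * T / Es * np.log((t + tau0(T)) / tau1(T))
```

At $t = 0$ the expression reduces to $\Sigma_0$, and at \unit[300]{$K$} the onset $\tau_0$ comes out in the range of tens of microseconds, consistent with the value quoted below.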
To fit the experimental data with the collective relaxation model, the terms $\mathrm{Glass}(t,T_\text{hist})$ and $\mathrm{Glass_0}$ in Equation~\ref{equ:DeltVthDeltaGlass} are replaced by $\Sigma(t,T)$ and $\Sigma_0$, respectively. Both distinct features, the shift of the onset $\tau_0$ and the increase of the drift coefficient $\nu_\text{Vth} = C_1 k_\text{b}T /E_\text{s}$ with ambient temperature, are well captured (Figure \ref{fig:ModelFit}). This confirms that the observed threshold voltage evolution is caused by the relaxation dynamics of the amorphous phase. At \unit[300]{$K$}, $\tau_0$ of GST and dGST is \unit[$\sim 15$]{$\mu s$} and \unit[$\sim 2.3$]{$\mu s$} after RESET, respectively. Interestingly, the relaxation onset of the glass states created at ambient temperatures ranging from \unit[100]{$K$} to \unit[300]{$K$} can be fitted with a single $\Sigma_0$, i.e., the degree of relaxation of the initially created glass state does not change notably. Faster quenching has been observed to create less relaxed glass states.\textsuperscript{\cite{Greer1982}} In our study, the cooling profile of the melt-quenching process changes with ambient temperature. An understanding of the quench rates and glass transition temperature in the device, and how these determine the initial value of $\Sigma_0$, could be the subject of future work.
Still, it is not possible to determine unique values of $\Sigma_0$, $C_1$, $E_\text{s}$, or $\nu_0 \Delta_\Sigma$, the material parameters defining the relaxation kinetics. To this end, it would be necessary to also determine the time of drift saturation, $\tau_1$. As long as only the drift coefficient $\nu_\text{Vth}$ and $\tau_0$ are known, the four fitting variables are interdependent: the drift coefficient depends on $C_1/E_\text{s}$, the exponential prefactor of $\tau_0$ on $\nu_0 \Delta_\Sigma E_\text{s}$, and the exponential term on $(1-\Sigma_0)E_\text{s}$.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Figure3.eps}
\caption{Model fit - Collective relaxation model: The temporal evolution of $V_\text{th}$~at different temperatures is fitted collectively with Equations (\ref{equ:SigmaOftime}) and (\ref{equ:DeltVthDeltaGlass}). The onset of relaxation is marked with an asterisk. For better visibility, the experiments at different temperatures are shifted along the y-axis by $(T-100\,\mathrm{K}) \cdot 0.2\,\mathrm{V/K}$. The fitting parameters are summarized in Table \ref{tab:CollectiveRelaxation}.}
\label{fig:ModelFit}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{c c c c}
\hline
Material & $C_1 /E_\text{s}$ [V/eV] & $\nu_0 \Delta_\Sigma E_\text{s}$ [eV/s] & $(1-\Sigma_0)E_\text{s}$ [eV] \\
\hline
GST & $-1.2$ & $2.48\times10^{6}$ & 0.19 \\
dGST & $-0.73$ & $1.07\times10^{8}$ & 0.24 \\
\hline
\end{tabular}
\caption{Fitting parameters - Collective relaxation model}
\label{tab:CollectiveRelaxation}
\end{table}
The variable $E_\text{s}$, which defines the activation energy for relaxation as the glass approaches its ideal state, is constrained to some extent. Defining the longest times for which drift is reported in the literature as a lower limit for $\tau_1$, we calculate a lower limit for $E_\text{s}$. In pure GST, drift was measured for more than \unit[8$\times$10$^6$]{$s$} at \unit[300]{$K$},\textsuperscript{\cite{Gorbenko2019}} and in a dGST similar to the one used in this study, drift for \unit[10$^4$]{$s$} at \unit[420]{$K$} has been reported.\textsuperscript{\cite{LeGallo2018}} This corresponds to a lower limit of \unit[0.95]{$eV$} and \unit[1.12]{$eV$} for GST and dGST, respectively. The upper limit of $E_\text{s}$ is on the order of the activation energy of crystallization, which is \unit[3.2]{$eV$} for GST \textsuperscript{\cite{Jeyasingh2014a}} and \unit[3.01]{$eV$} for dGST.\textsuperscript{\cite{Sebastian2014a}}
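Since only the fitted product $\nu_0\Delta_\Sigma E_\text{s}$ enters the prefactor of $\tau_1$, inverting the expression for $\tau_1$ gives the bound in closed form, $E_\text{s} = k_\text{b}T \ln(\tau_1 \, \nu_0\Delta_\Sigma E_\text{s} / k_\text{b}T)$. The short sketch below reproduces this estimate; the GST bound comes out slightly below the quoted \unit[0.95]{$eV$}, which may reflect rounding of the tabulated parameter combinations:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant [eV/K]

def es_lower_limit(tau1, T, P):
    """Lower limit on Es from an observed drift duration tau1 [s] at
    temperature T [K], given the fitted product P = nu0*dSigma*Es [eV/s].
    Follows from inverting tau1 = (kT / P) * exp(Es / kT)."""
    return KB * T * np.log(tau1 * P / (KB * T))

es_gst  = es_lower_limit(8e6, 300.0, 2.48e6)  # GST: drift observed for > 8e6 s at 300 K
es_dgst = es_lower_limit(1e4, 420.0, 1.07e8)  # dGST: drift observed for 1e4 s at 420 K
```

For dGST this evaluates to $\approx$\unit[1.12]{$eV$}, matching the quoted value; for GST it gives $\approx$\unit[0.89]{$eV$}.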
At this point, it is worth highlighting the simplicity of this model, which has been shown to capture not only relaxation at a constant ambient temperature but also the effect of annealing profiles on the resistance in phase change memory devices.\textsuperscript{\cite{Sebastian2015}} Only two variables, $\nu_0\Delta_\Sigma$ and $E_\text{s}$, define the relaxation dynamics and, since relaxation is abstracted to a collective process, a single variable suffices to describe the degree of relaxation of the glass state.
\subsection*{Gibbs model}
\noindent The relaxation model introduced by Gibbs in the 1980s describes the glass by a spectrum of defect states with different activation energies ($q(E_\text{d})$).\textsuperscript{\cite{Gibbs1983}} The sum of these defect states ($Q$) and its change with time is the equivalent of $\Sigma(t)$ in the collective relaxation model. While the physical picture of the relaxation process is different, the models are mathematically quite similar. As in the collective relaxation model, a rate equation with an Arrhenius dependence on the defect state activation energy for relaxation ($E_\text{d}$) is assumed
\begin{equation}
\label{equ:DifferentialGibbs}
\frac{dq(E_\text{d},t)}{dt} = -\nu_0 \exp\left(\frac{-E_\text{d}}{k_\text{b}T}\right) q(E_\text{d},t)
\end{equation}
where $\nu_0$ is the attempt to relax frequency. Since the probability to relax depends exponentially on the activation energy for relaxation, defects with a small activation energy relax first and only defects in a narrow range of activation energies relax at the same time (see Supplementary Note 5). Thus, the activation energy that must be overcome for further relaxation effectively increases monotonically, like in the collective relaxation model. For a constant ambient temperature, the relaxation dynamics are given by the equation
\begin{equation}
\label{equ:GibbsConstantTamb}
Q(t,T) = \int q(E_\text{d}) \exp\left(-t \, \nu_0 \exp\left[\frac{-E_\text{d}}{k_\text{b}T}\right]\right) dE_\text{d}
\end{equation}
The main challenge in probing the Gibbs model is to infer which spectrum of defect states the material has. Experimental studies on other materials show bell-shaped or more complex distribution functions.\textsuperscript{\cite{Shin1993,Chen1976,Khonik2008,Tsyplakov2014,Friedrichs1989}} To capture the strict $\log(t)$ dependence observed in phase change materials over many orders of magnitude in time, a rather flat $q(E_\text{d})$ is required.\textsuperscript{\cite{Knoll2009, LeGallo2018, Ielmini2007}} The onset of relaxation emerges from a transition from no defects to existing defects; how sharp it is depends on the width of this transition and the attempt to relax frequency. We fit the experiment assuming three different initial defect distributions $q_0(E_\text{d})$: one with a step-like transition and two with a linear transition over an energy range of \unit[0.25]{$eV$} and \unit[0.5]{$eV$}, respectively (Figure \ref{fig:GibbsModel_Fit}). The upper limit of $q_0(E_\text{d})$ is set to \unit[1.5]{$eV$}, which is well beyond the highest activation energies that can be overcome on the timescales and temperatures probed in our study. The position of the transition, the attempt to relax frequency, and the proportionality constant $C_1$ are free fitting parameters. All three distributions give a fairly good fit to the experimental data. An extrapolation to longer timescales, however, shows that the onset is stretched out too much for the \unit[0.5]{$eV$} wide transition. In both materials, $q(E_\text{d})$ needs to have a rather sharp transition from zero to a constant number of defects.
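Equation \ref{equ:GibbsConstantTamb} is straightforward to evaluate numerically for a given initial spectrum. The sketch below (with illustrative parameter values, not the fits of Table \ref{tab:GibbsModel}) uses a step-like $q_0(E_\text{d})$ and shows how defects with progressively higher activation energies anneal out with time:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant [eV/K]

def q0_step(E, E_onset=0.2, E_max=1.5):
    # step-like initial defect spectrum: constant density between E_onset and E_max
    return np.where((E >= E_onset) & (E <= E_max), 1.0, 0.0)

def Q(t, T, nu0=3.66e7, n=4000):
    # surviving defect density Q(t, T): each defect decays with rate nu0*exp(-Ed/kT)
    E = np.linspace(0.0, 2.0, n)
    survival = np.exp(-t * nu0 * np.exp(-E / (KB * T)))
    return np.sum(q0_step(E) * survival) * (E[1] - E[0])  # simple Riemann sum

# Defects relax in a narrow energy window that sweeps upward roughly as kT*log(nu0*t),
# so Q decreases approximately linearly in log(t) for a flat spectrum.
times = [1e-6, 1e-3, 1.0, 1e3]
values = [Q(t, 300.0) for t in times]
```

For this flat spectrum, the decrease of $Q$ per decade is approximately constant once the sweeping energy window has passed the onset of the spectrum, reproducing the $\log(t)$ drift regime.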
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Figure4.eps}
\caption{Model fit - Gibbs model: The temporal evolution of the threshold voltage of GST (a) and dGST (b) is fitted to the Gibbs model for three differently shaped activation energy spectra. The initial defect distribution functions $q_0(E_\text{d})$ are shown in the figure inset. When the activation energy spectrum increases over an energy range that is too wide, here \unit[0.5]{$eV$}, the onset gets stretched out too much. For better visibility, the experiments at different temperatures are shifted along the y-axis by $(T_\text{amb}-100\,\mathrm{K}) \cdot 0.2\,\mathrm{V/K}$. The fitting parameters are summarized in Table \ref{tab:GibbsModel}.}
\label{fig:GibbsModel_Fit}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{ c c c c}
\hline
Material & Transition $q(E)$ & $\nu_0$ [s$^{-1}$] & $C_1$ [V] \\
\hline
GST & step & $3.66\times10^{7}$ & $-1.57$ \\
GST & 0.25 eV & $7.33\times10^{8}$ & $-1.54$ \\
GST & 0.5 eV & $8.64\times10^{7}$ & $-2.33$ \\
dGST & step & $2.27\times10^{9}$ & $-0.92$ \\
dGST & 0.25 eV & $1.97\times10^{9}$ & $-1.00$ \\
dGST & 0.5 eV & $9.33\times10^{9}$ & $-1.15$ \\
\hline
\end{tabular}
\caption{Fitting parameters - Gibbs model: The threshold voltage drift is fitted with three differently shaped initial activation energy spectra $q_0(E_\text{d})$ (inset - Figure \ref{fig:GibbsModel_Fit}). Depending on the spectrum, the attempt to relax frequency and the proportionality constant between $V_\text{th}$ and the sum of defect states change.}
\label{tab:GibbsModel}
\end{table}
\section{Discussion}
\noindent The onset of relaxation constrains the parameters in both relaxation models. The activation energy that must be overcome for the first relaxation step in the collective relaxation model is \unit[0.19]{$eV$} for GST and \unit[0.24]{$eV$} for dGST. In the Gibbs model, the equivalent to this is the position of the transition from no defects to existing defects, which is at around \unit[0.1]{$eV$} to \unit[0.25]{$eV$} for both materials (Figure \ref{fig:GibbsModel_Fit}, inset). It changes slightly depending on the assumed shape of $q(E_\text{d})$. In the phase change memory cell, a melt-quenched state is created with extremely high cooling rates on the order of \unit[$\sim$10$^{10}$]{$K/s$}. This allows studying the relaxation from an extremely unrelaxed glass state, which manifests in the low activation energy for relaxation and, consequently, an onset of relaxation at short timescales.
Another parameter used in both models is the attempt to relax frequency $\nu_0$ (Equation \ref{equ:DifferentialSigma} and Equation \ref{equ:DifferentialGibbs}). Previous studies on phase change materials estimated the attempt to relax frequency in the typical phonon frequency range of 10$^{13}$ to \unit[10$^{14}$]{$s^{-1}$}.\textsuperscript{\cite{Ielmini2008a, LeGallo2018}} Our fits to the Gibbs model give an attempt to relax frequency on the order of \unit[10$^7$]{$s^{-1}$} to \unit[10$^8$]{$s^{-1}$} for GST and \unit[10$^9$]{$s^{-1}$} for dGST, which is notably lower than previously considered for phase change materials.
In order to fit experimentally obtained relaxation dynamics to the Gibbs model, the attempt to relax frequency is commonly used as a free fitting parameter. For metallic glasses, attempt to relax frequencies ranging from 10$^{11}$ to \unit[10$^{15}$]{$s^{-1}$} have been reported; here, reduced attempt to relax frequencies were ascribed to relaxation processes involving groups of atoms.\textsuperscript{\cite{Friedrichs1989}} In carbon-doped amorphous silicon, frequencies as low as \unit[10$^6$]{$s^{-1}$} have been found.\textsuperscript{\cite{Stutzmann1986}} To fit the relaxation of nanoscale indents in a polymer glass, an attempt to relax frequency of \unit[2$\times$10$^{24}$]{$s^{-1}$} has been used.\textsuperscript{\cite{Roura2009}} We believe that to justify the Gibbs model, further explanation and physical reasoning as to why the attempt to relax frequency could change over so many orders of magnitude is required. The corresponding fitting parameter in the collective relaxation model is $\nu_0\Delta_\Sigma$, which is \unit[$\sim$10$^6$]{$s^{-1}$} for GST and \unit[$\sim$10$^8$]{$s^{-1}$} for dGST. The variable $\Delta_\Sigma$ is expected to be $\ll 1$; $1/\Delta_\Sigma$ is the hypothetical number of different configurational states that the system could assume. Accordingly, the collective relaxation model requires orders of magnitude higher attempt to relax frequencies than the Gibbs model. In this case, we can expect attempt to relax frequencies of 10$^{13}$ to \unit[10$^{14}$]{$s^{-1}$}.
A major critique of the Gibbs model concerns the shape of the activation energy spectrum required to capture the drift of phase change materials. Studies on metallic glasses found bell-shaped or more complex spectra with a rather shallow increase of the number of defect states over a range of 0.5 to \unit[1]{$eV$}.\textsuperscript{\cite{Khonik2008,Tsyplakov2014,Friedrichs1989,Chen1976}} In contrast, to explain the $\log(t)$-dependent drift observed over many orders of magnitude in time, a rather flat spectrum of defect states over a range of at least \unit[1]{$eV$} is required.\textsuperscript{\cite{LeGallo2018}} Additionally, to capture the relaxation onset, the transition from no defects to existing defects must happen in a narrow energy range. The threshold voltage drift characterized here represents an average behavior of multiple RESET states created in the device. Thus, the relaxation onset is blurred, and the characterization of a single glass state would probably show an even sharper relaxation onset. A sharp onset also requires a sharp transition of $q(E_\text{d})$. These considerations indicate that an almost step-like $q(E_\text{d})$ is required to capture the relaxation onset in phase change materials. This provides further evidence that an improbable $q(E_\text{d})$ is required to explain the drift of phase change materials with the Gibbs model. In fact, for the scenario of a step-like transition, the Gibbs model and the collective relaxation model give identical fits to our experimental data (Supplementary Note~5). To constrain $q(E_\text{d})$ further, relaxation studies over even longer timescales are required.
Two recently proposed relaxation models postulate that resistance drift may also result from the release of trapped electrons. The release of these charge carriers has been proposed to increase the width of the potential barrier that must be overcome at the contact between the electrode and the phase change material.\textsuperscript{\cite{Khan2020}} A second hypothesis states that these electrons recombine with thermally generated holes in the valence band and thus reduce the number of free charge carriers in the amorphous state.\textsuperscript{\cite{Elliott2020}} Even though these models assume quite a different mechanism, they could in principle explain the onset and saturation of drift: the onset of drift would be determined by the potential barrier and the attempt to escape frequency of electrons from a trap state, and drift would saturate when an equilibrium between electron trapping and detrapping is reached. In its current version, however, the model proposed in \cite{Elliott2020} is designed such that drift begins immediately after RESET. Neither of the two models specifies the expected dynamics of electron detrapping. Thus, we cannot say if or how well these models will be capable of quantitatively capturing the temperature-dependent onset of relaxation.
\section{Conclusion}
\noindent In this work, we experimentally measured the onset of structural relaxation in melt-quenched amorphous phase-change materials. The threshold-switching voltage was used to monitor the state of relaxation. Experiments were performed using mushroom-type phase-change memory devices with GST and doped GST as phase-change materials. The onset of structural relaxation, marked by a transition from almost constant threshold-switching voltage values to the commonly observed $\log(t)$ dependence, changes profoundly with ambient temperature: from microseconds at \unit[300]{$K$} to tens of seconds at \unit[100]{$K$}. We found that both the Gibbs relaxation model and the collective relaxation model are capable of describing the experimental data. The fits to the Gibbs model, however, required an almost step-like defect distribution and orders of magnitude lower attempt to relax frequencies than estimated in previous works.
\section{Methods}
\noindent \textit{Mushroom-type PCM device:}
The mushroom-type PCM cells used in this study were fabricated in the 90-nm technology node. The multi-layer ring bottom electrode, with a radius of \unit[$\sim$20]{$nm$} and a height of \unit[$\sim$40]{$nm$}, was patterned with a sub-lithographic hardmask process. The sputter-deposited phase change material is \unit[$\sim$75]{$nm$} thick. To ensure stable device operation throughout our study, the cells were cycled at least 100,000 times in advance. The devices are fabricated with an on-chip series resistor of \unit[$\sim$2]{$k\Omega$}.
\noindent \textit{Experimental setup:}
The experiments were performed in a cryogenic probe station (JANIS ST-500-2-UHT), cooled with liquid nitrogen, that operates between 77 and \unit[400]{$K$}. The sample holder and chamber temperatures were controlled with an accuracy of \unit[$\pm$0.5]{$K$}.
AC voltage signals were applied to the device with an Agilent 81150A Pulse Function Arbitrary Generator. To send the SET pulse (Instrument Output 1) with a defined delay time after the RESET pulse (Instrument Output 2), the two instrument outputs were coupled internally. Cell voltage and current were measured with a Tektronix DPO5104B digital oscilloscope, which was triggered on the SET pulse leading edge. The transient signals were sampled with a frequency of \unit[2.5]{$GHz$}.
\noindent \textit{Experimental protocol:}
In order to measure the threshold voltage evolution with time, the device is programmed to a new RESET state multiple times, and the pause t\textsubscript{delay} before applying the SET pulse is increased. This sequence, with t\textsubscript{delay} ranging from \unit[10]{$ns$} to \unit[10]{$s$}, is repeated 15 times to average out drift and threshold-switching variability. Due to this variability, some scattering of the data and potentially a blurring of the onset of relaxation is inevitable. Nonetheless, the overall threshold voltage change with time is a smooth curve. The standard deviation is around \unit[30]{$mV$} for GST and \unit[50]{$mV$} for dGST.
\noindent \textit{Definition of the threshold voltage:}
To extract the threshold voltage from the switching IV curve, we fit the load line of the voltage snap-back (Supplementary Note 6). The mushroom cell is fabricated with an on-chip series resistor. At the moment of switching, the cell resistance drops to values similar to that of the series resistor (R\textsubscript{ser}), and thus the voltage drop over the cell decreases. By fitting the load line, instead of choosing the largest voltage drop prior to switching as the threshold voltage value, the analysis scheme becomes more resilient to noise in the transient voltage and current traces. The threshold voltage is defined at a load-line current of \unit[5]{$\mu A$}.
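The load-line fit can be sketched as follows. This is a hypothetical helper for illustration only; the selection of snap-back points in the actual analysis (Supplementary Note 6) may differ:

```python
import numpy as np

def extract_vth(v_cell, i_cell, i_ref=5e-6, n_fit=20):
    """Fit the load line of the voltage snap-back and read it out at i_ref.

    v_cell, i_cell: transient cell voltage [V] and current [A] traces.
    After switching, the operating point slides along the load line
    V = a + b*I (slope b ~ -R_ser); evaluating the fitted line at the
    reference current i_ref (5 uA here) defines the threshold voltage.
    """
    v_cell = np.asarray(v_cell, dtype=float)
    i_cell = np.asarray(i_cell, dtype=float)
    k = int(np.argmax(np.gradient(i_cell)))  # steepest current rise = switching event
    sel = slice(k, k + n_fit)                # points on the load line after snap-back
    b, a = np.polyfit(i_cell[sel], v_cell[sel], 1)
    return a + b * i_ref
```

With a synthetic trace that snaps back along $V = 2.0\,\mathrm{V} - I \cdot 2\,\mathrm{k\Omega}$, the helper returns the load-line voltage at \unit[5]{$\mu A$}, i.e. \unit[1.99]{$V$}.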
\noindent \textit{Impact of the SET pulse shape on $V_\text{th}$:}
To induce threshold switching, the device is biased with a triangular voltage pulse. Both the electrical stress and the Joule heating in the device prior to switching may affect the relaxation dynamics of the device. In fact, the threshold voltage of a nanoscale device changes depending on the transient voltage signal applied to switch the device.\textsuperscript{\cite{Wimmer2014a,LeGallo2016}} With an increasing duration of the SET pulse leading edge, the threshold voltage decreases (Supplementary Figure 8a). The absolute change of $V_\text{th}$ with time, however, appears to be independent of the leading edge (Supplementary Figure 8b). This suggests that the rise of the cell bias is fast enough not to notably alter the relaxation process of the glass state prior to switching. First, the time for which the cell is biased prior to switching is short on absolute timescales. Second, at least in regimes 2 and 3, which are governed by the relaxation dynamics, it is much shorter than the time for which the material relaxes without any bias applied.
\section*{Conflict of Interest}
The authors declare no conflict of interest.
\section*{Acknowledgements}
This work was supported by the IBM Research
AI Hardware Center. This work was also partially funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement numbers 682675 and 640003).
\beginsupplement
\newpage
\huge\textbf{Supplementary Notes}
\normalsize
\section{Comparison to prior threshold voltage drift experiments}
Careful inspection of the threshold voltage drift experiment at room temperature reported in \cite{Ielmini2007} shows the same three regimes we observe in our study. While a continuous drift with log(t) is observed from about \unit[$10^{-5}$]{$s$} in both studies, regime 1 occurs on different timescales. In our study it lasts up to \unit[1]{$\mu s$}, while in the work by Ielmini et al. it ends after about \unit[30]{$ns$}. This could be attributed to the devices having different geometries and sizes (mushroom cell with a bottom electrode area of \unit[$\sim$ 706]{$nm^2$} vs. $\mu$-trench with a bottom electrode area of \unit[>1500]{$nm^2$}) and the resulting difference in the thermal environment. Additionally, in \cite{Ielmini2007} the bias of the RESET pulse is reduced to a lower level, sufficient to melt-quench and induce threshold switching, but it is not turned off between RESET and threshold switching. The experiment thus probes the decay time to a partially excited state ($I \sim 150\,\mu A$) defined by the bias applied at the end of the RESET pulse. This different bias scheme, compared to our experiment where the bias is completely switched off at the end of the RESET pulse, is another reason why regime 1 lasts longer in our study.
\section{Threshold voltage transients in regime 1}
The threshold voltage evolution with time shows three distinct regimes (Figure \ref{fig:DriftRegime1} a). While the transition between regimes 2 and 3, which marks the onset of relaxation, and the drift coefficient in regime 3 exhibit a pronounced temperature dependence, regime 1 appears to be almost independent of the ambient temperature (Figure \ref{fig:DriftRegime1} b). Between \unit[30]{$ns$} and \unit[1]{$\mu s$} after RESET the threshold voltage increases by \unit[0.4]{$V$}. This rapid change is most likely caused by the decay of the RESET excitation. On the one hand, a large number of excess charge carriers is generated when the device is molten at high fields; on the other hand, the temperature is locally raised to more than \unit[900]{$K$} before the applied power is switched off within \unit[3]{$ns$} (RESET pulse trailing edge). Both a decay of the excess charge carriers \cite{Elliott2020,Ielmini2007} and a slow decay of the local temperature when the device is already close to ambient temperature would result in a continuous increase of the threshold voltage.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{Supplement_Regime1_Tdep.eps}
\caption{Threshold voltage evolution: (a) The threshold voltage evolution shows the same three characteristic regimes (labeled in the figure) for all temperatures studied. At lower ambient temperatures the first measurement point is delayed because more time elapses before the applied voltage pulse with a \unit[100]{$ns$} leading edge reaches $V_{th}$. (b) The threshold voltage change in regime 1 is calculated with respect to the value measured \unit[1]{$\mu s$} after RESET. In regime 1 the temporal evolution appears to be independent of the ambient temperature.}
\label{fig:DriftRegime1}
\end{figure}
\newpage
\section{RESET state programming}
To create comparable RESET states at different ambient temperatures, the programming power was scaled such that the initially molten volume remained approximately constant. The applied quench-rate, defined by the \unit[3]{$ns$} RESET pulse trailing edge, was kept constant. An easily accessible metric to compare the molten volume at different ambient temperatures is the hot-spot temperature ($T_{hs}$) inside the device. If $T_{hs} = R_{th} P_{inp} +T_{amb}$ remains constant, so does the molten volume. $R_{th}$ is the average thermal resistance of the device, $P_{inp}$ the input power associated with the voltage pulse and $T_{amb}$ the ambient temperature.\textsuperscript{\cite{Boniardi2012, Sebastian2014a}}
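The power scaling follows directly from this relation; a sketch, with $R_{th}$ set to the value listed in Table \ref{tab:SwitchingSimulations} and an illustrative hot-spot target:

```python
# Scale the programming power so that T_hs = R_th * P_inp + T_amb stays
# constant across ambient temperatures. R_th = 3.7 K/uW is the value used
# in the threshold switching simulations; the 900 K target is illustrative.
R_TH = 3.7  # K per uW

def power_for_hotspot(T_hs, T_amb, R_th=R_TH):
    """Input power (uW) required to reach a hot-spot temperature T_hs (K)."""
    return (T_hs - T_amb) / R_th

p_100 = power_for_hotspot(900.0, 100.0)  # programming at T_amb = 100 K
p_300 = power_for_hotspot(900.0, 300.0)  # programming at T_amb = 300 K
```

Programming at a lower ambient temperature requires proportionally more input power for the same molten volume.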
The thermal resistance of both devices is obtained from the programming curves measured at different ambient temperatures (Figure \ref{fig:ThermalResistance}). The programming power at which the device resistance begins to increase marks the point at which $T_{hs}$ reaches the melting temperature $T_{melt}$ and an amorphous volume begins to cover the device bottom electrode. Extrapolating the linear fit of this plugging power versus ambient temperature to \unit[0]{$\mu W$} yields a temperature that coincides fairly well with the melting temperature. GST has a melting temperature of \unit[858]{$K$}\textsuperscript{\cite{Adnane2017}} and for a comparable type of doped GST a melting temperature of \unit[877]{$K$} was measured.\textsuperscript{\cite{Sebastian2014a}}
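The extraction amounts to a linear regression of ambient temperature against plugging power; a sketch with synthetic data (generated here for assumed values $R_{th}=3.7$ K/$\mu$W and $T_{melt}=877$ K, since the measured programming curves are not reproduced):

```python
import numpy as np

# P_plug(T_amb) = (T_melt - T_amb) / R_th, so fitting T_amb vs. P_plug gives
# R_th as minus the slope and T_melt as the intercept at zero power.
# Synthetic data, generated for R_th = 3.7 K/uW and T_melt = 877 K.
T_amb = np.array([150.0, 200.0, 250.0, 300.0, 350.0, 400.0])  # K
P_plug = (877.0 - T_amb) / 3.7                                # uW

slope, intercept = np.polyfit(P_plug, T_amb, 1)
R_th = -slope        # thermal resistance (K/uW)
T_melt = intercept   # extrapolation to zero programming power (K)
```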
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{Supplement_Rth.eps}
\caption{Thermal resistance: Programming curves of (a) GST and (b) dGST cells are measured at ambient temperatures ranging from \unit[150]{$K$} to \unit[400]{$K$} (insets). The cell plugging power, defined as the programming power required to induce a first increase of cell resistance, increases linearly with ambient temperature. The slope of the linear fit is the device's thermal resistance.}
\label{fig:ThermalResistance}
\end{figure}
A comparison of two dGST devices, which were programmed with the same $T_{hs}$ at ambient temperatures of \unit[100]{$K$} and \unit[300]{$K$}, shows the fidelity of this approach (Figure \ref{fig:ComparableStates}). The two cell states have a different thermal history and thus represent differently relaxed glass states. An annealing step at \unit[320]{$K$} for \unit[15]{$minutes$} is applied to erase the differing thermal history. After annealing, the two devices show identical field- and temperature-dependent transport. This remarkable match, even though the devices were programmed at different ambient temperatures, shows two things. First, that RESET states of comparable size were created. Second, that upon annealing the glass state in both devices exhibits identical transport characteristics, which implies that the annealing step created similarly relaxed glass states.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\linewidth]{Supplement_Rth_test_IV.eps}
\caption{Programming comparable device states: Two devices are programmed with the same $T_{hs}$ at ambient temperatures of \unit[100]{$K$} and \unit[300]{$K$}. After annealing, the RV characteristic of the device state is probed at ambient temperatures from \unit[300]{$K$} to \unit[250]{$K$}. RESET states with identical transport characteristics were created in both devices. This demonstrates that amorphous volumes of similar size were programmed at \unit[100]{$K$} and \unit[300]{$K$}.}
\label{fig:ComparableStates}
\end{figure}
\newpage
\section{Collective relaxation and thermally assisted threshold switching}
\label{Supp:FullModel}
In the main manuscript, we propose that $V_{th}$ changes approximately proportionally to $\Sigma$ and that its change upon relaxation is decoupled from its change with ambient temperature
\begin{equation}
V_{th} = f(T_{amb}) + C_1\, \Sigma(t,T) + C_2\,.
\end{equation}
This approximation is further supported by simulations based on previously published models. In this work, a field-dependent transport model,\textsuperscript{\cite{Gallo2015}} the collective relaxation model,\textsuperscript{\cite{LeGallo2018}} and a threshold switching model\textsuperscript{\cite{LeGallo2016}} are combined. Here, the models are briefly sketched to introduce the most relevant variables and assumptions.
The electrical transport in amorphous phase change materials is highly field dependent. It can be modeled as multiple-trapping transport together with 3D Poole-Frenkel emission from a two-center Coulomb potential.\textsuperscript{\cite{Gallo2015}} At low fields, the density of free charge carriers depends on the activation energy for conduction $E_a$, which corresponds to the depth of the Coulomb potential. The activation energy follows a Varshni-law temperature dependence $E_a = E_{a0} - \frac{aT^2}{b+T}$.\textsuperscript{\cite{Varshni1967}} With increasing field strength, the Coulomb potentials of neighboring defect states, separated by the intertrap distance $s$, overlap, and the effective barrier height for emission thus decreases.
Structural relaxation results in an increase of the activation energy and of the intertrap distance. The activation energy changes proportionally to the state variable of the glass, $E_{a0} = E^* - \alpha \Sigma(t)$, and the intertrap distance as $s(t) = s_0 /\Sigma(t)$.\textsuperscript{\cite{LeGallo2018}}
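These dependences can be written down directly; a sketch using the parameter values of Table \ref{tab:SwitchingSimulations} (the temperatures and $\Sigma$ values in the example are illustrative):

```python
# Varshni-law activation energy with the relaxation dependence sketched
# above: E_a0 = E* - alpha*Sigma and s = s0/Sigma. Parameter values are
# taken from the model parameter table; kB is in eV/K.
kB = 8.617e-5
a, b = 600e-6, 800.0                     # Varshni parameters (eV/K, K)
E_star, alpha, s0 = 0.505, 0.267, 2.08   # eV, eV, nm

def activation_energy(T, sigma):
    """E_a(T, Sigma) = (E* - alpha*Sigma) - a*T^2 / (b + T)."""
    return (E_star - alpha * sigma) - a * T ** 2 / (b + T)

def intertrap_distance(sigma):
    """s = s0 / Sigma: relaxation (decreasing Sigma) increases s."""
    return s0 / sigma
```

Relaxation (decreasing $\Sigma$) raises the activation energy and the intertrap distance, while increasing temperature lowers $E_a$ via the Varshni term.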
In a nanoscopic device structure, threshold switching can be induced by a thermal feedback loop.\textsuperscript{\cite{LeGallo2016}} The device is described as a thermal RC-circuit with the electrical power applied to the device as input variable. The Joule heating at elevated fields increases the temperature, which in turn increases the density of free charge carriers and thus the Joule heating.
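A toy version of this feedback loop illustrates the mechanism. The conductance model below (Arrhenius with fixed $E_a$ and prefactor $G_0$) is a deliberate simplification of the full field-dependent transport model; $E_a$, $G_0$ and the applied voltages are illustrative, while $R_{ser}$, $R_{th}$ and $\tau_{th}$ are the values of Table \ref{tab:SwitchingSimulations}:

```python
import math

def hotspot_temperature(V_app, T_amb=300.0, R_ser=6170.0, R_th=3.7e6,
                        tau_th=4.1e-9, Ea=0.3, G0=1.0, t_end=100e-9):
    """Integrate the thermal node tau_th * dT/dt = R_th * P(T) - (T - T_amb).

    The cell conductance grows Arrhenius-like with temperature, so Joule
    heating and temperature reinforce each other; this positive feedback
    is the loop that can culminate in threshold switching. Ea and G0 are
    illustrative, not the full transport-model parameters.
    """
    kB = 8.617e-5                                   # eV/K
    T = T_amb
    dt = tau_th / 100.0
    for _ in range(int(t_end / dt)):
        R_cell = math.exp(Ea / (kB * T)) / G0       # Arrhenius resistance
        V_cell = V_app * R_cell / (R_cell + R_ser)  # series-resistor divider
        P = V_cell ** 2 / R_cell                    # Joule heating (W)
        T += (dt / tau_th) * (R_th * P - (T - T_amb))
    return T
```

In this toy model a larger applied bias drives the hot spot to a markedly higher temperature, the precursor of the thermal runaway.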
To simulate the temperature dependence of $V_{th}$ for differently relaxed glass states, the three models are combined and solved numerically. The threshold switching dynamics are simulated for a \unit[3.5]{$V$} pulse with a \unit[500]{$ns$} leading edge. All model parameters are summarized in Table \ref{tab:SwitchingSimulations}. The threshold voltage change is calculated with respect to an initial glass state $\Sigma_0 = 0.9$ and the range of $\Sigma$ is defined such that $\Delta V_{th}$ matches the experimentally observed threshold voltage increase. The simulation results show that a linear dependence of $V_{th}$ on $\Sigma$ is a good approximation (Figure \ref{fig:SigmaVth} a).
Additionally, differently relaxed glass states show a similar change of $V_{th}$ with ambient temperature (Figure \ref{fig:SigmaVth} b). The calculated $V_{th}$ values follow parallel lines. This supports the assumption that the change with temperature is decoupled from the change upon relaxation.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{Supplement_Sigma_Vth.eps}
\caption{Threshold voltage modeling: (a) The threshold voltage changes approximately linearly with relaxation. The inset shows the simulated threshold switching IV characteristic. (b) The temperature dependence of $V_{th}$ shows no notable change with $\Sigma$.}
\label{fig:SigmaVth}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|l|l|}
\multicolumn{2}{c}{\textbf{Transport model}} \\ \hline
$r_{BE}$ [nm] & 32 \\
$u_a$ [nm] & 10.94 \\
$\epsilon_r$ & 10 \\
$K*\mu_0 [m^{-1}V^{-1}s^{-1}]$ & $10^{22}$ \\
a [$\mu eV\, K^{-1}$] & 600 \\
b [K] & 800 \\ \hline
\multicolumn{2}{c}{\textbf{Relaxation model}} \\ \hline
$E^*$ [eV]& 0.505 \\
$\alpha$ [eV] & 0.267 \\
$s_0$ [nm] & 2.08 \\ \hline
\multicolumn{2}{c}{\textbf{Thermal model}} \\ \hline
$R_{th} [K / \mu W] $ & 3.7 \\
$\tau_{th} = R_{th} * C_{th} [ns] $ & 4.1 \\
$R_{ser} [\Omega]$ & 6170 \\ \hline
\end{tabular}
\caption{Model parameters to simulate the threshold switching: The amorphous volume in the mushroom cell is approximated as a cylinder with radius $r_{BE}$ and height $u_a$. $\epsilon_r$ denotes the relative high-frequency dielectric constant and $K*\mu_0$ a model constant. $a$ and $b$ are the parameters of the Varshni law. The parameter $\alpha$ is taken from \cite{LeGallo2018}; all other variables are from \cite{LeGallo2016}. $E^*$ and $s_0$ are defined such that $E_{a0}$ and $s$ correspond to the values reported in \cite{LeGallo2016} for $\Sigma = 0.8$. $R_{ser}$ is the on-chip resistor in series with the phase change memory cell.}
\label{tab:SwitchingSimulations}
\end{table}
\newpage
\section{Relaxation models comparison}
The basic idea of the Gibbs model is that the glass state can be described by a distribution of defect states. These defects relax individually, without creating new defects or changing the activation energy for relaxation of defect states in their local surroundings. The collective relaxation model, on the other hand, describes relaxation as a sequence of transitions between neighboring unrelaxed configurational states. Upon relaxation, local configurations become increasingly stabilized but are still involved in subsequent collective rearrangements. Thus, the activation energy for relaxation increases with each relaxation step.
Despite different concepts of the relaxation process, the Gibbs model
\begin{equation}
\frac{dq(E_\text{d},t)}{dt} = -\nu_0 \exp\left(\frac{-E_\text{d}}{k_\text{b} T}\right) q(E_\text{d},t)
\end{equation}
and the collective relaxation model
\begin{equation}
\frac{d\Sigma(t)}{dt}= - \nu_0 \Delta_\Sigma \exp\left(-\frac{E_\text{s}(1-\Sigma(t))}{k_\text{b}T}\right)
\end{equation}
are mathematically constructed in a similar way, in the sense that both assume a first-order rate equation with an Arrhenius dependence on the activation energy of the relaxing defects. In the collective relaxation model, at any time instance only one activation energy governs the next relaxation step, whereas in the Gibbs model multiple defects with different activation energies can relax simultaneously. But at each time instance, only a narrow range of activation energies has a finite probability of relaxing (Figure \ref{fig:AESdrift}). Effectively, the activation energy of relaxing defects continuously increases, as in the collective relaxation model. Assuming a flat distribution $q(E_\text{d})$ with a step-like transition from no defects to existing defects for the Gibbs model, both models give almost identical fits to our experimental data (Figure \ref{fig:CollectiveVsGibbs}).
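The slowing-down encoded in the collective relaxation equation can be made explicit by integrating it numerically. The sketch below uses illustrative parameter values ($\nu_0$, $\Delta_\Sigma$, $E_\text{s}$ are not the fitted values of the main manuscript):

```python
import numpy as np

kB = 8.617e-5                        # eV/K
nu0, d_sigma, Es = 1e12, 1e-3, 1.0   # attempt rate (1/s), step size, eV
                                     # illustrative values only

def relax(T, t_end=1.0, sigma0=0.9, n=100000):
    """Euler-integrate dSigma/dt = -nu0*d_sigma*exp(-Es*(1-Sigma)/(kB*T))
    on a log-spaced time grid. As Sigma decreases, the barrier Es*(1-Sigma)
    grows, so each relaxation step is slower than the previous one and
    Sigma falls roughly linearly in log(t)."""
    t = np.logspace(-12.0, np.log10(t_end), n)
    sigma = np.empty(n)
    sigma[0] = sigma0
    for k in range(1, n):
        rate = nu0 * d_sigma * np.exp(-Es * (1.0 - sigma[k - 1]) / (kB * T))
        sigma[k] = sigma[k - 1] - rate * (t[k] - t[k - 1])
    return t, sigma
```

Running it at two temperatures reproduces the qualitative behavior: monotonic relaxation that proceeds further, and faster, at the higher temperature.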
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{Supplement_AES.eps}
\caption{dGST activation energy spectrum: The spectrum is derived from the fit in Figure \ref{fig:CollectiveVsGibbs} b, assuming a flat defect distribution. (a) Evolution of the activation energy spectrum upon relaxation at \unit[300]{$K$}. The lower limit of the spectrum increases linearly with log(t). (b) Change of the activation energy spectrum with time. At each time instance defect states within a narrow range of activation energies relax.}
\label{fig:AESdrift}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{Supplement_CollectiveVsGibbs.eps}
\caption{Model comparison: A one-to-one comparison of the collective relaxation model and the Gibbs model, for the scenario of a flat $q(E_\text{d})$ with a step-like transition from no defects to existing defects, shows almost identical fits. The fitting parameters are summarized in Table 1 and Table 2 of the main manuscript.}
\label{fig:CollectiveVsGibbs}
\end{figure}
\newpage
\section{Algorithm to determine V\textsubscript{th}}
The threshold voltage value is obtained by fitting the load-line of the switching $IV$ curve (Methods). The reference points to fit the load-line range from the last point where the device current is smaller than \unit[20]{$\mu$A} to \unit[125]{$mV$} above the minimum voltage of the load-line (Figure \ref{fig:DefenitionVth}). This approach is taken to make the analysis scheme more resilient to noise in the transient voltage and current traces.
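A sketch of this point selection and fit (the synthetic IV trace below stands in for the measured transient; the load-line parameters $V_0$ and the series resistance are chosen for illustration):

```python
import numpy as np

def extract_vth(V, I, i_low=20e-6, dv=0.125, i_th=5e-6):
    """Fit the load-line of the switching IV curve and evaluate it at 5 uA.

    V, I: transient voltage/current samples ordered in time. Points from
    the last sample with I < 20 uA up to 125 mV above the load-line's
    minimum voltage are used for the fit, as described above.
    """
    start = np.flatnonzero(I < i_low)[-1]   # last point with I < 20 uA
    v_min = V[start:].min()                 # minimum voltage of the load-line
    idx = np.arange(start, len(V))
    idx = idx[V[idx] <= v_min + dv]         # within 125 mV of the minimum
    slope, intercept = np.polyfit(I[idx], V[idx], 1)
    return slope * i_th + intercept         # V_th at the 5 uA load-line current

# Synthetic example: a voltage ramp with sub-threshold leakage, followed by
# a snap-back along the load-line V = V0 - R_ser * I (illustrative values).
V_ramp = np.linspace(0.0, 1.5, 50)
I_ramp = 1.2e-5 * V_ramp
I_snap = np.linspace(30e-6, 200e-6, 50)
V_snap = 1.6 - 6170.0 * I_snap
vth = extract_vth(np.concatenate([V_ramp, V_snap]),
                  np.concatenate([I_ramp, I_snap]))
```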
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\linewidth]{Supplement_DefinitionVth.eps}
\caption{Threshold switching IV characteristic: The snap-back of the IV curve is fitted linearly to obtain the threshold voltage. The threshold voltage is defined at a load-line current of \unit[5]{$\mu A$}. Green dots mark the measurement points used to fit the load-line.}
\label{fig:DefenitionVth}
\end{figure}
\newpage
\section{Impact of the SET pulse shape}
To induce threshold switching, the device is biased with a triangular voltage pulse. With an increasing duration of the SET pulse leading edge, the threshold voltage decreases (Figure \ref{fig:SET_leading_edge} a). The absolute change of $V_{th}$ with time, however, appears to be independent of the leading edge (Figure \ref{fig:SET_leading_edge} b).
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{Supplement_SETpulse_LE.eps}
\caption{Impact of the SET pulse leading edge duration: (a) With an increasing SET pulse leading edge the threshold voltage values decrease continuously. The value of $t_\text{drift}$ is the sum of $t_\text{delay}$ (the delay between the RESET pulse and the SET pulse) and the time elapsed until the applied voltage crosses $V_\text{th}$. The second term changes with the SET pulse leading edge. The resulting change of $t_\text{drift}$ is most apparent on short timescales, where $t_\text{delay}$ is small. (b) The threshold voltage change with respect to a reference point, which again is the value measured \unit[1]{$\mu s$} after RESET, is independent of the SET pulse leading edge duration. The SET pulse amplitude is \unit[2.5]{$V$}.}
\label{fig:SET_leading_edge}
\end{figure}
\newpage
\section{Materials and Methods}
To make the gelatin gels, we dissolved 5g of gelatin (General purpose grade gelatine, Fisher) in a mixture of 80ml of glycerol (VWR) and 20ml of de-ionized water. The mixture was heated with stirring at 90$^{\circ}$C for 30 minutes (following \cite{baum06}).
After heating, 100$\mu$l of 100nm-diameter, carboxylate-modified, red fluorescent nanoparticles (2\% solids, Fluospheres, ThermoFisher Scientific) were added into the mixture, and it was poured into moulds and allowed to cool for 24 hours at 4$^{\circ}$C before being used.
For measuring mechanical properties, we filled 1cm-deep petri dishes with the gelatin.
For confocal imaging, we created thin films of gelatin on 50mm petri dishes with a No. 1.5 glass cover slip as their bottom surface (P50G-1.5-30-F, MatTek).
This involved first attaching yellow-green fluorescent nanoparticles to the glass surface, which act as fixed reference points for tracking the bottom of the gelatin layer, and then adding the gelatin layer.
To attach the reference particles to the glass surface, we followed the protocol described in \cite{styl14}.
In brief, we activated the surface in a UV-ozone cleaner, functionalized it via vapor deposition of (3-Aminopropyl)triethoxysilane, and then submerged it in a solution containing 200nm-diameter, yellow-green, carboxylate-modified, fluorescent nanoparticles (2\% solids, Fluospheres, ThermoFisher).
We then rinsed this with de-ionized water.
To make the thin films of gelatin, we placed 30$\mu$l of the hot gelatin solution on the bead-coated glass, and then covered this with an 18mm-diameter, \#1.5 circular cover glass (Paul Marienfeld, GmbH).
After curing, the cover glass was gently removed with tweezers, and the substrate was used for experiments.
For imaging of contact lines, we made a solution containing 8ml glycerol, 2ml de-ionized water and 10$\mu$l of the (2\% solids) yellow-green nanoparticles.
Then we placed a $5\mu$l droplet of the solution on the gelatin layer.
We reduced the size of the droplet when required by manually removing liquid with a pipette, approximately 1$\mu$l at a time.
After each change in droplet volume, we let the droplet rest for at least 20 minutes before imaging.
3D confocal imaging was done on a microscope (Nikon, Eclipse Ti2) with a spinning disk confocal system (Yokogawa CSU-X1) using a 60x water immersion objective lens (Nikon, MRD07602). We used 488 nm and 560 nm lasers for the yellow-green and red fluorescent nanoparticles, respectively.
Images were acquired at 330nm intervals in $z$ to obtain 3D measurements.
We measure the surface tension of liquids using a homemade pendant-droplet tensiometry setup.
Droplet shapes are analysed with axisymmetric drop shape analysis \cite{del97}.
For the droplet phase, we measure $\Upsilon_{lv}$ of the pure liquid, and liquid that has been allowed to equilibrate on the gelatin substrate for an hour.
This gives $\Upsilon_{lv}=66\pm 1 $mN/m and $64\pm 1$mN/m respectively, suggesting that there is very little contamination of the droplet by material from the gelatin.
\section{Odd/even responses to line loadings}
For a vertical line force, $\Upsilon_{\perp}$, acting on the substrate as shown in Figure \ref{fig:schem}B, there is complete left/right symmetry to the problem.
This immediately implies symmetric inward/outward displacements, so that the resulting displacement field, $u_x^\perp(x,z)$, is odd.
For a horizontal line force, $\Upsilon_\parallel$, let the solution be $u_x^\parallel(x,z,\Upsilon_\parallel)$.
The solution for $\Upsilon_\parallel\rightarrow - \Upsilon_\parallel$ satisfies $u_x^\parallel(x,z,\Upsilon_\parallel)+u_x^\parallel(x,z,-\Upsilon_\parallel)=0,$
as the sum of the line forces $\Upsilon_\parallel$ and $-\Upsilon_\parallel$ is zero, so the sum of the resulting displacement fields must also be zero.
Furthermore, the solution for a negative line force is the same as the solution for a positive line force, reflected about $x=0$. Mathematically this can be expressed as $u_x^\parallel(x,z,-\Upsilon_\parallel)=-u_x^\parallel(-x,z,\Upsilon_\parallel)$.
Combining these two equations gives $u_x^\parallel(x,z,\Upsilon_\parallel)=u_x^\parallel(-x,z,\Upsilon_\parallel)$, so $u_x^\parallel$ is even.
\section{Introduction}
Is everything known about Anderson localization\cite{1,2}? The answer is, surprisingly,
negative. Since in real devices we cannot completely get rid of disorder, the subject,
although over 56 years old, still remains an active part of condensed matter physics
and from time to time holds surprises (see recent activity in topological insulators and
superconductors).\cite{3}
Anderson localization is a phenomenon of complex quantum interference of electron
waves in a disordered medium and also occurs for classical waves in disordered media.
The extended states (ballistic or diffusive) of a metal turn into the exponentially
localized states of an insulator which are restricted to a finite region of a disordered system.
The extension of states is measured by their localization length $\xi$,
which is infinite for extended states and finite for localized states.
The states cease to be extended when the disorder $W$ exceeds some critical value $W_\mathrm{c}$.
The main result of the so-called scaling theory of localization \cite{4} is that in the
absence of symmetry breaking mechanisms (e.g.\ of time-reversal symmetry due to a
magnetic field $\vec{B}$ or of spin rotation symmetry due to spin-orbit coupling (SOC))
a non-zero critical disorder ($W_\mathrm{c}>0$) exists for the localization of all states in $3D$,
while the states become localized (have finite $\xi$'s)
even for infinitesimally small disorder ($W_\mathrm{c}=0$) in $1D$ and $2D$.\cite{5,6}
The subject is still active both experimentally and theoretically for at least three reasons:
first, there is a large variety of situations that depend on the kinds of disorder and the
symmetry of the Hamiltonian, e.g. the 10 symmetry classes of localization, \cite{7} and the
crossover behavior between them.
These symmetry classes are the three basic classes (orthogonal; unitary, in the presence of a magnetic
field $\vec{B}$; and symplectic, for systems with SOC), the three chiral classes, and the
four Bogolyubov-de Gennes classes for superconducting systems.
A second reason is the emergence of topological structures \cite{3} in which some states
are immune to Anderson localization, at least for weak disorder. They occur via
spectral gaps which support protected states
(e.g. in $2D$, conductance quantization in the presence of $\vec{B}$ for the quantum Hall
effect and in the presence of SOC for the quantum spin Hall effect). For example, off-diagonal
disorder with random hopping belongs to one of the 3 chiral classes and an even-odd
effect was observed at zero energy for $N$ coupled chains, with a diverging density of
states $\rho(E)$ and a localization length that depends on the parity of $N$.\cite{8,9}
A final reason for the subject being active today is that wave phenomena in disordered
media can also be investigated for electromagnetic waves in complex structures, such as
waveguide arrays which resemble a finite lattice. A wealth of recent experimental results
\cite{10,11,12,13} combine disorder with non-linearity, and it is possible to test various
theories (e.g. of topological effects, the critical exponents at the Anderson transition,
etc.), including regimes that are difficult to access in disordered electronic
systems.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{figure1}}
\caption{\label{fig:sketch}
The quasi-1D system of $N$ disordered chains, with on-site disorder of strength $W$,
longitudinal hopping $t'=1$ (black solid lines)
and inter-chain hopping $t$ (blue dashed lines).}
\end{figure}
Anderson localization in a disordered chain ($1D$) is very different from that in a disordered
plane ($2D$) made of $N$ coupled disordered chains with $N\to \infty$.
The present work focuses on an assembly of $N$ coupled $1D$ chains (see Fig.\ \ref{fig:sketch}),
which is a quasi-$1D$ disordered system of increasing width ($N$ is the number of
its rows), that is expected to tend to a $2D$ lattice as $N$ increases.
The crossover from $1D$ to $2D$ is studied in the presence of diagonal disorder ($W$)
and as a function of the inter-chain coupling of strength $t$ (the longitudinal hopping is $t'=1$).
Our main result, from a study for a wide range of values of the ratio $t/t'$ and various $N$,
is an unexpected $t$-dependence for special $N$'s,
where localization for the quasi-$1D$ system remains weak in the
strong-coupling limit $t\gg t'$ (even weaker than at $t=0$), and rather insensitive to $t$.
For certain $N$'s Anderson localization becomes more pronounced as $t$ increases, while for other
$N$'s there is ``immunity'' to strong localization. This ``immunity'' occurs when $N$ is odd
for open lateral boundary conditions (BC) and when $N$ is a multiple of four for periodic lateral BC.
In other words, in the quasi-$1D$ disordered system with strong inter-chain coupling $t$,
Anderson localization is weakened by the large inter-chain coupling only for these special $N$'s.
For other $N$'s localization becomes stronger as $t$ increases.
In Sec.\ \ref{sec:model} we present the model system of $N$ coupled disordered chains.
The method of study and our results for the Lyapunov exponents can be found in
Sec.\ \ref{sec:method}. The finite size scaling analysis of the results is presented in
Sec.\ \ref{sec:scaling}, and their explanation in terms of a perturbative approach at
strong inter-chain coupling is given in Sec.\ \ref{sec:perturbation}.
We discuss the results in Sec.\ \ref{sec:discussion} before
we present our conclusions in Sec.\ \ref{sec:conclusions}.
\section{Numerical approach to localization}
\label{sec:numerics}
\subsection{Quasi-1D model of $N$ coupled chains}
\label{sec:model}
We study Anderson localization of non-interacting particles in the phase-coherent quantum
system of $N$ parallel disordered chains, sketched in Fig.\ \ref{fig:sketch}, with on-site
disorder of strength $W$ and intra-chain hopping $t'=1$ which represents the energy scale.
The inter-chain hopping $t$ takes a broad range of values, from very small ($t\ll t'$) to
very large ($t\gg t'$). The system is a quasi-1D strip of a square lattice having width $N$,
on-site disorder $W$ and anisotropic hopping, namely $t'=1$ in the longitudinal direction
and $t$ in the transverse direction, respectively.
Labelling the sites by $\{i,j\}$ with $i=\{1,2,\dots,N\}$ the transverse and
$j$
the longitudinal coordinate, the Hamiltonian reads
\begin{eqnarray}
H &=&\sum_{j}\left\{ \sum_{i=1}^{N} \left(\epsilon_{i,j} c^+_{i,j}c^{\phantom{+}}_{i,j}
+ t'\left[ c^+_{i,j}c^{\phantom{+}}_{i,j-1}+\text{hc}\right]\right) \right. \nonumber \\
&& \phantom{\sum}\left. + t \left[\sum_{i=2}^{N} c^+_{i,j}c^{\phantom{+}}_{i-1,j}
+ \eta c^+_{1,j}c^{\phantom{+}}_{N,j}+\text{hc}\right]\right\} \, ,
\label{eq:hamiltonian}
\end{eqnarray}
where $c^+_{i,j}$ ($c^{\phantom{+}}_{i,j}$) creates (annihilates) a particle on site
$\{i,j\}$. The on-site energies $\epsilon_{i,j}$ are independent random variables drawn
from a uniform distribution within $[-W/2;W/2]$.
The parameter $\eta$ takes the values 0 and 1 for open and periodic lateral BC,
respectively.
We study localization in the full $t$-range, and increase $N$ to approach a
$2D$ disordered system \cite{13} with anisotropy ($t\neq t'$) or without ($t=t'$).
The single chain case is related to the uncoupled limit $t=0$ where an ensemble of
independent chains exists. The ladders with $N=2$ and $N=3$ legs were extensively
studied,\cite{14,15,16,17} also in the presence of magnetic and electric fields.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{figure2}}
\caption{\label{fig:nbc}
(Color online)
The Lyapunov exponents $\gamma_{1},\gamma_{2},\dots ,\gamma_{N}$ at $E=0$
\textit{vs}. the inter-chain hopping $t$ for quasi-$1D$ disordered systems with open lateral
BC and disorder strengths $W$=\{0.5, 1, 2, 5, 10\} from bottom up, with
(a) $N=2$, (b) $N=3$, (c) $N=4$, (d) $N=19$.
For each disorder strength, the smallest exponent $\gamma_{1}$ is represented
by a thick solid line.
For odd $N=3$ and $N=19$, $\gamma_{1}$ is rather insensitive to $t$, for all $W$.
The limiting values at $t=0$ ($t/t'\to\infty$) are indicated by blue dots (green squares)
on the left (right) axis.
The horizontal blue solid lines in (a) indicate the values of Eq.\ \eqref{eq:dorokhov}.}
\end{figure}
\subsection{Method and results}
\label{sec:method}
The exponential dependence of the wave-functions along a disordered quasi-1D strip
containing $N$ coupled chains (described in Sec.\ \ref{sec:model} and sketched in
Fig.\ \ref{fig:sketch}) is extracted from numerically calculated products of $N\times N$
transfer matrices and characterized by $N$ Lyapunov exponents. \cite{5,6}
We compute the exponents via the standard method of Gram-Schmidt reorthogonalization, applied after
about every ten steps of matrix multiplication, until convergence is reached for a very large number
of steps along the chain.
The adopted procedure corresponds to the factorization of the transfer matrix into a product
of an orthogonal matrix and a diagonal matrix with positive matrix elements.
The $N$ positive Lyapunov exponents $\gamma_{i}$ ($i=\{1,2,\dots ,N\}$) are subsequently defined
as the positive logarithms of the elements of the diagonal matrix.\cite{18}
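A minimal sketch of this procedure for $N$ coupled chains with open lateral BC, as in Eq.\ \eqref{eq:hamiltonian} (the step counts and disorder values below are illustrative; production runs require far longer matrix products for tight error bars):

```python
import numpy as np

def lyapunov_exponents(N, W, t, E=0.0, steps=100000, qr_every=10, seed=0):
    """Positive Lyapunov exponents of N coupled disordered chains (t' = 1),
    from products of 2N x 2N transfer matrices with QR reorthogonalization
    every few multiplications; the diagonal of R accumulates the growth."""
    rng = np.random.default_rng(seed)
    Ht = t * (np.eye(N, k=1) + np.eye(N, k=-1))   # transverse hopping, open BC
    Q = np.eye(2 * N)
    logs = np.zeros(2 * N)
    for j in range(steps):
        eps = rng.uniform(-W / 2.0, W / 2.0, N)   # on-site disorder
        A = np.diag(E - eps) - Ht                 # (E - H_slice) with t' = 1
        T = np.block([[A, -np.eye(N)],
                      [np.eye(N), np.zeros((N, N))]])
        Q = T @ Q
        if (j + 1) % qr_every == 0:               # reorthogonalize
            Q, R = np.linalg.qr(Q)
            logs += np.log(np.abs(np.diag(R)))
    gamma = np.sort(logs / steps)
    return gamma[gamma > 0.0]                     # the N positive exponents
```

The smallest returned exponent is $\gamma_1$, the inverse quasi-$1D$ localization length; for $N=1$ it can be checked against the perturbative $E=0$ result $\gamma\simeq 0.0095\,W^2$.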
For a single disordered chain ($N=1$), the Lyapunov exponent $\gamma$ is the inverse of the
localization length $\xi$ which describes the length scale of the exponential increase or
decrease of the wave-function along the chain.
A perturbative approach for small disorder $W$ at $E=0$ yields
\begin{equation}\label{eq:lyapunov1d}
\gamma\simeq 0.0095\, W^{2}+O\left[W^{4}\right]
\end{equation}
for the $1D$ Lyapunov exponent. \cite{19}
For the quasi-$1D$ system of width $N$, the exponential decay of the wave-functions with the
longitudinal coordinate is determined by $N$ Lyapunov exponents $\gamma_{i}$.
The smallest positive exponent $\gamma_{1}$ dominates the transport properties and determines
the localization length $\xi=1/\gamma_1$ of the quasi-1D strip.
The scaling behavior of $\gamma_{1}$ as a function of $N$ with an extrapolation to large
$N\to \infty$ can be studied via the so-called finite size scaling technique \cite{5,6} which
allows to determine the localization length of the infinite $2D$ system.
In Fig.\ \ref{fig:nbc} we plot $\gamma_{i}$ at energy $E=0$ \textit{vs}. the inter-chain coupling $t$
over a wide range of values of $t$, for quasi-$1D$ strips with $N=2$, 3, 4, and 19 chains having
open lateral BC. In each case, we show results for five disorder strengths between $W=0.5$ and $W=10$.
The single chain ($N=1$) exponent $\gamma$ is indicated by blue circles on the left
vertical axis. The five circles with increasing $\gamma$ correspond to the chosen values of $W$.
While all $\gamma_{i}$'s are expected to converge to these single-chain values in the limit
$t\to 0$, a finite $t$ spreads the $\gamma_i$ for a given $W$.
We focus on the smallest exponent $\gamma_{1}$ (solid lines) which gives the localization length
$1/\gamma_{1}$ of the quasi-$1D$ system.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{figure3}}
\caption{\label{fig:pbc}
(Color online) The Lyapunov exponents $\gamma_{1},\gamma_{2},\dots , \gamma_{N}$ \textit{vs}.
the inter-chain hopping $t$ for quasi-$1D$ disordered systems with lateral periodic BC for
disorder strengths $W$=\{0.5, 1, 2, 5, 10\} from bottom up,
(a) $N=3$, (b) $N=4$, (c) $N=6$, (d) $N=20$. For $N=4$ and $N=20$, multiples of four,
the two smallest exponents $\gamma_{1}$, $\gamma_{2}$
(thick lines) are rather insensitive to $t$.}
\end{figure}
The behavior of $\gamma_1$ at small $t$ is seen to be qualitatively similar for all $N$.
The inter-chain coupling reduces localization and pushes $\gamma_1$ towards a minimum for
$t\lesssim 1$.
The horizontal blue lines in Fig.\ \ref{fig:nbc} (a) indicate Dorokhov's perturbative
prediction \cite{14} for the minimum value of $\gamma_1$
\begin{equation}
\label{eq:dorokhov}
\gamma_{1}^\mathrm{min}\approx \left(1-\frac{1}{\pi}\right)\gamma \, ,
\end{equation}
obtained for a ladder of $N=2$ chains with coupling $t\leq 1$ ($\gamma$ is the Lyapunov
exponent for a single chain $N=1$).
The minimum value of Eq.\ \eqref{eq:dorokhov} has the same $W$-dependence as $\gamma$,
and the perturbative value \eqref{eq:lyapunov1d} becomes less accurate for larger $W$.
While weakly coupled chains are well understood, we now focus our interest on the case of
strong coupling.
In the regime of strong inter-chain coupling ($t$ large), striking differences between different $N$'s
are observed. For even $N$ (see Figs.\ \ref{fig:nbc} (a) and (c)) all Lyapunov exponents of the quasi-$1D$
system, including the smallest exponent $\gamma_1$, increase to very large values. This implies
increasingly strong localization, with a vanishing localization length when $t \to \infty$.
In contrast, for odd $N$'s (Figs.\ \ref{fig:nbc} (b) and (d)), only the $N-1$ largest
exponents increase with $t \to \infty$ while the smallest exponent $\gamma_1$ assumes a
small value that depends on $W$.
In the strong coupling regime, $\gamma_1$ is almost independent of $t$ and its asymptotic values in
the limit $t\to \infty$ (green squares on the right vertical axis) lie below the single chain result
(blue circles on the left vertical axis). Between the regimes of weak and strong coupling, $\gamma_1$
goes through one or more maxima at moderate coupling strength $t\gtrsim 1$.
In Fig.\ \ref{fig:pbc} we show the Lyapunov exponents calculated for quasi-1D strips
with periodic lateral BC, at the same energy and disorder values as in Fig.\ \ref{fig:nbc}.
The results for $N=3$, 4, 6, and 20 show the same qualitative behavior at small $t$ as for open BC.
In the regime of strong inter-chain coupling $t$, the behavior is different.
For periodic BC localization strongly increases with large $t$ whenever
$N$ is not a multiple of four, very much like in the case of even $N$ and open BC (Fig.\ \ref{fig:nbc}).
For the values of $N$ that are multiples of four, the \textit{two} smallest Lyapunov
exponents $\gamma_{1}$ and $\gamma_{2}$ display the peculiar behavior of very weak dependence on
$t$ and assume very low values at large $t\to \infty$.
The smallest exponent $\gamma_1$ approaches the strong coupling limit
(green squares on the right vertical axis) from below, and has its minimum value
at moderately strong $t$, just above the region of the peaks observed at $t \gtrsim 1$.
The number of peaks in the $\gamma_1$'s for $t\gtrsim 1$ increases with $N$.
More peaks appear for open BC (Fig.\ \ref{fig:nbc}) than for periodic BC (Fig.\ \ref{fig:pbc}).
The small $\gamma_1$ for certain $N$ (in Figs.\ \ref{fig:nbc} and \ref{fig:pbc})
and its very weak dependence on $t$ for $t\gg t'$ are in striking contrast with the strong increase
of localization occurring for other $N$'s.
\subsection{Finite size scaling}
\label{sec:scaling}
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{figure4}}
\caption{\label{fig:gamma_N}
(Color online) The scaling of $\gamma_{1}N$ {\textit {vs.}} the width $N$ for a quasi-$1D$ strip having
(a) open lateral BC and (b) periodic BC, at $E=0$ and $W=1$.
Blue circles are for $t=0.01$, violet diamonds for $t=0.1$, black pluses for $t=1$,
red crosses for $t=10$, and green squares for $t=100$. Lines are guides to the eye.
The oscillations of $\gamma_1$ as a function of $N$ with period two (four)
for open (periodic) BC emerge with increasing $t$ when $t>t'$ (red crosses and green squares).
Moreover, the low values of $\gamma_{1}N\approx 0.01$ at the minima of those oscillations are
approximately independent of $N$.
The dashed lines in the lower parts of the figure are the perturbative results for
the strong coupling regime of Sec.\ \ref{sec:perturbation}.}
\end{figure}
In order to extract the behavior at large $N$ and see the transition to $2D$, we studied the
scaling properties of $\gamma_{1}$ with the number of chains $N$ (the width of the quasi-1D strips).
The finite size scaling theory predicts that an increase of $\gamma_1 N$ with $N$ points towards
localization in $2D$, a decrease of $\gamma_1 N$ with $N$ indicates delocalization, and a constant
value corresponds to intermediate (critical) behavior. In the localized case, the product $\gamma_1 N$
increases proportional to $N$ and the constant of proportionality gives the inverse
localization length of the $2D$ system.\cite{5,6}
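As an illustration of this criterion (a self-contained sketch with parameters of our own choosing, not the data of Fig.\ \ref{fig:gamma_N}), at strong disorder the product $\gamma_1 N$ grows with $N$, signaling localization in the $2D$ limit:

```python
import numpy as np

def gamma1(N, W, t=1.0, tp=1.0, E=0.0, L=4000, seed=0):
    """Smallest positive Lyapunov exponent of an N-chain strip (open BC),
    from QR-reorthogonalized products of 2N x 2N transfer matrices."""
    rng = np.random.default_rng(seed)
    I, Z = np.eye(N), np.zeros((N, N))
    hop = t * (np.eye(N, k=1) + np.eye(N, k=-1))
    Q, logs = np.eye(2 * N), np.zeros(2 * N)
    for _ in range(L):
        Hp = hop + np.diag(rng.uniform(-W / 2, W / 2, N))
        T = np.block([[(E * I - Hp) / tp, -I], [I, Z]])
        Q, R = np.linalg.qr(T @ Q)
        logs += np.log(np.abs(np.diag(R)))
    return np.sort(logs / L)[N]

# strong disorder (W = 10): gamma_1 * N increases with N
scaled = [N * gamma1(N, W=10.0) for N in (2, 4, 6)]
```

At weak disorder and $t=t'$, the same product is nearly constant over the accessible widths, corresponding to the critical-like behavior of Eq.\ \eqref{eq:scaling}.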
In Fig.\ \ref{fig:gamma_N} we plot $\gamma_1 N$ \textit{vs}. $N$ in a semi-log plot for $W=1$,
$E=0$ and different values of $t$.
For very small $t$ (blue circles) and $t\approx 1$ (black pluses) a smooth $N$-dependence is observed,
in sharp contrast with the $N$-dependent oscillations found for large $t$ (red crosses) and very
large $t$ (green squares). These oscillations correspond to the peculiar behavior of localization in
the strong coupling regime already shown in Figs.\ \ref{fig:nbc} and \ref{fig:pbc}.
In Fig.\ \ref{fig:gamma_N} (a) and (b), the period of two for open BC and four for periodic BC is seen,
respectively. The maxima of $\gamma_{1}$ indicate very strong localization while much weaker
localization occurs for open (periodic) BC when $N$ is odd (multiple of four), even weaker than for
small coupling ($t\ll 1$).
In Fig.\ \ref{fig:gamma_N}, for very small $t=0.01$ (blue circles) $\gamma_{1}$ is close
to the single chain value and $\gamma_1 N$ increases with $N$ indicating localization in the
large-$N$ limit. A similar increase (at higher values of $\gamma_1 N \gtrsim 10$) is observed
for the maxima of the $N$-dependent oscillations (green squares).
In contrast, only a very weak increase can be detected for moderately weak inter-chain coupling
$t=0.1$ (violet diamonds), and for the isotropic case $t = 1$ (black pluses) the $\gamma_1 N$ appear
to be independent of $N$ up to the largest $N$ we considered.
The same qualitative behavior, with reduced Lyapunov exponents, is shown for the minima at
strong coupling. Their values are close to the ones obtained for moderate inter-chain coupling
(violet diamonds).
A particular behavior is seen for rather large $t=10$
(red crosses in Figs.\ \ref{fig:gamma_N} (a) and (b)). The $N$-dependent oscillations disappear
for large $N$, where the minima remain low but the maxima are considerably reduced and tend
towards the low minimum values. Other values of disorder gave qualitatively similar behavior,
with increasing Lyapunov exponents when increasing the disorder. For stronger disorder $W$
(Figs.\ \ref{fig:gamma_N} (a), (b) is for $W=1$), larger values of $t$ are needed to make the
$N$-dependent oscillations appear.
This leads to the conjecture that oscillations with $N$ can be observed in the strong
coupling regime for $t\gtrsim N$ or $t\gtrsim W$, while localization is weak for all $N$ in an
intermediate regime with $N \gtrsim t \gtrsim 1$.
The intermediate critical-like scaling
\begin{equation}\label{eq:scaling}
\gamma_{1} N \approx \mathrm{const.}\,
\end{equation}
is observed for the isotropic system ($t=t'$) for not very large $N$ (the localization length in $2D$
is huge), and for the minima of the $N$-dependent oscillations at strong coupling $t$.
The scaling \eqref{eq:scaling} was first suggested by Thouless \cite{20} and is known to occur
for the $E=0$ state in quasi-$1D$ disordered systems with off-diagonal disorder \cite{21},
in carbon nanotubes \cite{22}, etc.
In Fig.\ \ref{fig:gamma_N}, however, $W=1$ which is rather small and the corresponding $1D$ localization
length at $E=0$ (see \eqref{eq:lyapunov1d}) is {$1/\gamma \simeq 104$}, larger than the
largest $N$ considered in our calculations. The observed scaling most probably corresponds to a
crossover regime which can turn into localized scaling (the behavior of Eq.\ \eqref{eq:scaling}
changing to $\gamma_{1} N \propto N$) when $N$ becomes so large that the lateral extension of the
states is limited by localization rather than by the system width $N$.
Since the $2D$ localization length is larger than in $1D$ and in $1D$ proportional to the
square of the hopping element, the critical scaling \eqref{eq:scaling} is expected to hold in
a crossover region that extends to sizes of at least $N\sim 104 (t/W)^2$.
When the transverse coupling $t$ increases, the critical scaling region can become very large.
However, numerical investigation of increasing values of $N$ requires
increasing computer power making it difficult to observe the transition in scaling behavior
for $W=1$. We have checked that for strong disorder $W=10$ (when the $1D$ localization length
$1/\gamma \simeq 1$ is small), $\gamma_{1}N$ increases with $N$ in all cases,
indicating localization in $2D$ and supporting the scenario discussed above.
\section{Perturbation theory for the strong coupling regime}
\label{sec:perturbation}
The intriguing $N$-dependent oscillations found numerically (see Sec.\ \ref{sec:numerics}) and the
weakness of localization in the strong coupling regime for special values of $N$ motivate us to seek
an analytical understanding of the localization behavior at large inter-chain coupling $t$.
We have developed a theory that is appropriate when the last term of the Hamiltonian \eqref{eq:hamiltonian},
containing the inter-chain coupling $t$, dominates over the disorder $W$ and the intra-chain hopping $t'$.
The parameters $t'/t, W/t$ can then be treated as small perturbations.
The starting point is the strong coupling limit $t\to \infty$ which corresponds to the unperturbed case
$t'/t=W/t=0$. In this limit, the longitudinal coupling is negligible and the quasi-$1D$ system consists
of uncoupled transverse slices composed of $N$ sites.
We show in the sequel that taking into account $t'/t$ and $W/t$ in lowest order allows one,
for special values of $N$, to map the perturbed quasi-$1D$ system onto an effective weakly disordered
$1D$ chain.
In the limit $t'/t=0$ the $N$ disordered chains become uncoupled and the Hamiltonian reduces to a sum $H=\sum_j H_j$ of independent blocks $H_j$, each of them describing a transverse slice at the longitudinal position that is given by the index $j$.
The eigenenergies and the eigenstates of each slice can be obtained by diagonalizing the corresponding block $H_{j}$ of the Hamiltonian.
\subsection{Open boundary conditions}
As an example, for $N=3$ and open BC, the Hamiltonian for the $j$-th slice is
\begin{eqnarray}
H_j &=& \left( \begin{array}{ccc}
\epsilon_{1,j} & t & 0 \\
t & \epsilon_{2,j} & t \\
0 & t & \epsilon_{3,j}
\end{array} \right)
\\
&=& t \left( \begin{array}{ccc}
\epsilon_{1,j}/t & 1 & 0 \\
1 & \epsilon_{2,j}/t & 1 \\
0 & 1 & \epsilon_{3,j}/t
\end{array} \right) \, .
\end{eqnarray}
In the limit $W/t\to 0$, the diagonal elements can be neglected as compared to the non-zero off-diagonal
elements, and one has in zeroth order the Hamiltonian
\begin{equation}
H_j^{(0)} = t \left( \begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{array} \right)\, ,
\end{equation}
which is independent of $j$.
The eigenvalues of $H_j^{(0)}$ are
\begin{equation}
E_{1}^{(0)}=0 \quad \text{and} \quad E_{2/3}^{(0)}=\pm t\sqrt{2}\, .
\end{equation}
The two eigenvalues $E_{2/3}^{(0)}$ move to $\pm \infty$ in the strong coupling limit and are
thus irrelevant for the behavior of the system at finite energy. In contrast, the first eigenvalue
$E_{1}^{(0)}$ is independent of $t$ and crucial for the $E=0$ properties.
The eigenstate of $H_j^{(0)}$ that corresponds to $E_{1}^{(0)}=0$ is
\begin{equation}\label{eq:slicestate}
\left|\psi_{1,j}^{(0)}\right\rangle = \frac{1}{\sqrt{2}} \left(c^{+}_{1,j}-c^{+}_{3,j}\right)|0\rangle\, ,
\end{equation}
where $|0\rangle $ is the vacuum state.
There is one such state for each value of the longitudinal index $j$, reducing the $N$ sites of the slice $j$ to a single relevant level.
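This elementary diagonalization is easily verified numerically (a sketch; the value of $t$ is arbitrary):

```python
import numpy as np

t = 2.0                                    # any t > 0
H0 = t * np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0]])
vals, vecs = np.linalg.eigh(H0)            # ascending: -t*sqrt(2), 0, +t*sqrt(2)
zero_mode = vecs[:, 1]                     # eigenvector of the E = 0 level
# up to an overall sign, zero_mode equals (1, 0, -1)/sqrt(2)
```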
We now consider the first-order corrections in $W/t$ and $t'/t$ that lead to an effective disorder
and a coupling of the slices to obtain an effective chain along the longitudinal ($j$) direction.
In lowest order in $W/t$, the energies of the slice levels \eqref{eq:slicestate} are determined
by the random energies $|\epsilon_{i,j}|\ll t$ of the original system as
\begin{equation}\label{eq:e1}
E^{(1)}_{1,j}=\left\langle \psi_{1,j}^{(0)}\right|H\left|\psi_{1,j}^{(0)}\right\rangle
= \frac{\epsilon_{1,j} + \epsilon_{3,j}}{2} \, .
\end{equation}
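The first-order estimate can be compared with exact diagonalization of a strongly coupled slice (a sketch; the parameter values are arbitrary, chosen so that $|\epsilon_{i,j}|\ll t$):

```python
import numpy as np

t = 100.0
rng = np.random.default_rng(5)
eps = rng.uniform(-0.5, 0.5, size=3)              # |eps_i| << t
Hj = np.diag(eps) + t * (np.eye(3, k=1) + np.eye(3, k=-1))
vals = np.linalg.eigvalsh(Hj)
E_mid = vals[np.argmin(np.abs(vals))]             # the level that stays near E = 0
E_pred = (eps[0] + eps[2]) / 2                    # first-order prediction, error O(eps^2/t)
```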
A small $t'/t$ couples the slices, with hopping matrix elements in lowest order given by
\begin{equation}
\left\langle \psi_{1,j\pm 1}^{(0)}\right|H\left|\psi_{1,j}^{(0)} \right\rangle = t' \, .
\end{equation}
The effective $1D$ chain which is obtained for strong coupling therefore has hopping $t'$
and random energies given by the slice energies $E^{(1)}_{1,j}$ of Eq.\ \eqref{eq:e1}.
Its disorder is reduced as compared to the disorder of the original quasi-$1D$ model because
the on-slice energies \eqref{eq:e1} are averages of the independent on-site random energies
$\epsilon_{1,j}$ and $\epsilon_{3,j}$. The localization along the effective chain is therefore
weaker than that of a single chain in the quasi-$1D$ geometry.
The on-slice energies $E^{(1)}_{1,j}$ no longer have a uniform probability distribution, and for
$N=3$ their variance is reduced by a factor of two with respect to that of the on-site disorder of
the original quasi-$1D$ strip. This mechanism is responsible for the reduction of localization
observed at $E=0$ for strong coupling $t$ (see Fig.\ \ref{fig:nbc}).
The perturbative approach is easily generalized to arbitrary values of $N$ and also
to periodic lateral boundary conditions. For open boundary conditions and odd $N$,
the clean slices always have one eigenenergy $E_{1}^{(0)}=0$, which gives rise to an
effective $1D$ chain.
For even $N$ all slice eigenenergies tend to $\pm\infty$ in the strong coupling limit
$t\to\infty$ and no effective chain exists at $E=0$.
This readily explains the even-odd oscillations for the localization strength observed
in the strong coupling limit.
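Both zero-mode criteria reduce to the spectrum of the clean transverse slice and can be checked directly (a sketch; the helper name is ours):

```python
import numpy as np

def clean_slice_has_zero_mode(N, t=1.0, periodic=False):
    """True if the clean transverse slice Hamiltonian has a zero eigenvalue."""
    H = t * (np.eye(N, k=1) + np.eye(N, k=-1))
    if periodic and N > 2:
        H[0, -1] = H[-1, 0] = t
    return bool(np.any(np.isclose(np.linalg.eigvalsh(H), 0.0, atol=1e-9)))

# open BC: a zero mode exists exactly for odd N (eigenvalues 2t cos(k pi/(N+1)));
# periodic BC: two degenerate zero modes exist exactly when N is a multiple of four
```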
For odd $N$ an energy $E_{1}^{(0)}=0$ is found for the unperturbed slices and the effective
chain appearing in lowest order in $t'/t$ and $W/t$ has hopping elements equal to $t'$
(independent of $N$), and on-slice energies
\begin{equation}
E^{(1)}_{1,j}=\frac{2}{N+1}\left(\epsilon_{1,j} + \epsilon_{3,j}
+ \epsilon_{5,j} + \dots + \epsilon_{N,j}\right) \, .
\end{equation}
The calculated Lyapunov exponents for those effective chains with different $W$ (shown
in Figs.\ \ref{fig:nbc} (b) and \ref{fig:nbc} (d) as green squares on the right vertical axis) are
the asymptotic values assumed by the lowest exponents $\gamma_1$ of the quasi-$1D$ strips in the strong
coupling limit.
The on-slice energies $E^{(1)}_{1,j}$ are averages over $(N+1)/2$ of the on-site energies
$\epsilon_{i,j}$, so the variance of their probability distribution is $\sigma^2=2 \sigma^2_0/(N+1)$.
It is smaller than the variance $\sigma^2_0 = W^2/12$ of the uniform distribution within
$[-W/2;W/2]$ (from which the $\epsilon_{i,j}$ are drawn).
The same variance is obtained from a uniform distribution which has the reduced effective
disorder strength
\begin{equation}\label{eq:disorder-eff}
W_\mathrm{eff}=W\sqrt{\frac{2}{N+1}} \, .
\end{equation}
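The reduction can be verified by direct sampling of the slice averages (a sketch with parameter choices of our own):

```python
import numpy as np

W, N = 1.0, 5                                    # open BC, N odd
rng = np.random.default_rng(0)
# E^(1): weight 2/(N+1) on each of the (N+1)/2 odd sites of a slice
eps = rng.uniform(-W / 2, W / 2, size=(200_000, (N + 1) // 2))
E1 = 2.0 / (N + 1) * eps.sum(axis=1)

W_eff = W * np.sqrt(2.0 / (N + 1))
# the sampled variance of E1 matches W_eff**2/12, i.e. the variance of a
# uniform distribution of width W_eff
```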
At $E=0$ and for weak disorder, the effective $1D$ chain
with open BC and odd $N$ obeys Eq.\ \eqref{eq:lyapunov1d}. Using the effective disorder $W_\mathrm{eff}$
\eqref{eq:disorder-eff}, one gets the Lyapunov exponent
\begin{equation}\label{eq:lyapunov-eff-nbc}
\gamma_\mathrm{eff}\simeq \frac{0.019}{N+1} W^2 + O\left[W^4\right]\, .
\end{equation}
The values of $\gamma_\mathrm{eff}$ (shown by a dashed line in Fig.\ \ref{fig:gamma_N} (a))
are in excellent agreement with the numerical data for strong $t$.
\subsection{Periodic boundary conditions}
In the case of periodic BC the clean slice has two degenerate eigenenergies
$E_{1/2}^{(0)}=0$ when $N$ is a multiple of four. Otherwise, all eigenenergies of the
slice are proportional to $t$ and go to $\pm \infty$ in the strong coupling limit.
The space of the zero energy eigenstates is spanned by the degenerate states
\begin{eqnarray}\label{eq:PBC-psi1}
\left|\psi_{1,j}^{(0)}\right\rangle &=&
\sqrt{\frac{2}{N}} \left(\sum_{i=1}^{N}\sin{\left(\frac{\pi}{2}i\right)} c^{+}_{i,j}\right)|0\rangle \, ,
\\
\label{eq:PBC-psi2}
\left|\psi_{2,j}^{(0)}\right\rangle &=&
\sqrt{\frac{2}{N}} \left(\sum_{i=1}^{N}\cos{\left(\frac{\pi}{2}i\right)} c^{+}_{i,j}\right)|0\rangle \, ,
\end{eqnarray}
and small disorder $W/t$ lifts the degeneracy without leading to coupling terms. The two states
\eqref{eq:PBC-psi1} and \eqref{eq:PBC-psi2} are thus the basis of the two-dimensional
$E^{(0)}=0$ subspace that diagonalizes the corresponding sub-block of the slice Hamiltonian $H_j$
with periodic BC when the on-site energies are taken into account in lowest order.
The lowest order energy corrections due to the non-zero on-site energies are given by
\begin{eqnarray}\label{eq:PBC-E1}
E_{1,j}^{(1)} &=&
\frac{2}{N} \left(\epsilon_{1,j}+\epsilon_{3,j}+\epsilon_{5,j}+\dots + \epsilon_{N-1,j}\right)\, ,
\\
\label{eq:PBC-E2}
E_{2,j}^{(1)} &=&
\frac{2}{N} \left(\epsilon_{2,j}+\epsilon_{4,j}+\epsilon_{6,j}+\dots + \epsilon_{N,j}\right) \, ,
\end{eqnarray}
and the longitudinal hopping terms lead to a coupling strength $t'$ between states
$\left|\psi_{1(2),j}^{(0)}\right\rangle$ with adjacent values of $j$. In the limit of strong transverse
coupling $t'/t, W/t \ll 1$, we therefore have at $E=0$ an effective system composed of two uncoupled
chains with hopping $t'$ and on-slice energies $E_{1/2,j}^{(1)}$ according to
Eqs.\ \eqref{eq:PBC-E1} and \eqref{eq:PBC-E2}. This readily explains why, in the case
of periodic BC, the two smallest Lyapunov exponents remain small when $N$ is a multiple of four.
The Lyapunov exponents for such effective chains are the asymptotic values for the exponents
$\gamma_1$ and $\gamma_2$ of the quasi-$1D$ strips. Their values at different disorder $W$
are shown as green squares on the right vertical axis in Figs.\ \ref{fig:pbc} (b) and \ref{fig:pbc} (d).
The energies $E_{1/2,j}^{(1)}$ are averages over $N/2$ of the
original on-site energies $\epsilon_{i,j}$, and therefore have a modified probability
distribution with the reduced variance $\sigma^2 = 2\sigma^2_0/N$.
A uniform distribution with the effective disorder strength
\begin{equation}
W_\mathrm{eff}=W\sqrt{\frac{2}{N}}
\end{equation}
has the same reduced variance and leads with \eqref{eq:lyapunov1d} to the approximate value
\begin{equation}\label{eq:lyapunov-eff-pbc}
\gamma_\mathrm{eff}\simeq \frac{0.019}{N} W^2 + O\left[W^4\right]\,
\end{equation}
for the Lyapunov exponent of the effective chains.
The dashed line in Fig.\ \ref{fig:gamma_N} (b) represents this result.
The two smallest Lyapunov exponents of the quasi-$1D$ strips are split by higher order terms of
the perturbative approach such that the lower one approaches the strong coupling limit
from below (see Fig.\ \ref{fig:pbc}). The numerical results obtained with $t=100$ and shown as
green squares in Fig.\ \ref{fig:gamma_N} (b) are therefore slightly below the dashed line representing
the strong coupling limit \eqref{eq:lyapunov-eff-pbc}.
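The mapping can be cross-checked by simulating the effective chain directly: draw on-slice energies as in Eqs.\ \eqref{eq:PBC-E1} and \eqref{eq:PBC-E2} and iterate the scalar transfer-matrix recursion at $E=0$ (a sketch with $t'=1$; for $N=8$ and $W=1$, Eq.\ \eqref{eq:lyapunov-eff-pbc} predicts $\gamma_\mathrm{eff}\approx 0.019/8 \approx 0.0024$):

```python
import math
import numpy as np

def gamma_effective_chain(N=8, W=1.0, L=500_000, seed=3):
    """E = 0 Lyapunov exponent of the effective chain (periodic BC, N a
    multiple of four): site energies are (2/N) times a sum of N/2
    independent on-site energies drawn uniformly from [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    eps = 2.0 / N * rng.uniform(-W / 2, W / 2, size=(L, N // 2)).sum(axis=1)
    a, b, s = 1.0, 0.0, 0.0           # (psi_n, psi_{n-1}) and accumulated log-norm
    for e in eps:
        a, b = -e * a - b, a          # psi_{n+1} = (E - eps_n) psi_n - psi_{n-1} at E = 0
        n = math.hypot(a, b)
        s += math.log(n)
        a /= n
        b /= n
    return s / L
```

The value obtained this way should come out close to the predicted one, within the statistical accuracy of the finite chain length.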
\begin{figure*}
\centerline{\includegraphics[width=\linewidth]{figure5}}
\caption{\label{fig:gamma_t_E}
(Color online) The lowest Lyapunov exponent $\gamma_1$, calculated at disorder strength $W=1$, is shown
in colorscale (grayscale) as a function of the inter-chain hopping $t$ and the energy $E$, with (a) open BC
and (b) periodic BC, for different values of $N$. It can be observed that energy bands with low $\gamma_1$
split at large $t$.
In the case of (a) open BC and odd $N$, as well as for (b) periodic BC and $N$ being a multiple of 4,
one of the bands is situated around $E=0$, independent of $t$. In all other cases, $E=0$ is located in a
gap between such bands that becomes wider with increasing $t$.}
\end{figure*}
We have seen for both lateral BC that the quasi-$1D$ strips with strong inter-chain
coupling $t$ can be mapped onto effective $1D$ chains, as long as $N$ is odd for open BC or a
multiple of four for periodic BC. Within this constraint, an increase of $N$ increases the number
of on-site energies that contribute to the on-slice energies of the transverse slices.
As a result, the effective disorder strength and the corresponding Lyapunov exponent decrease
as $W_\mathrm{eff}\propto 1/\sqrt{N}$ and $\gamma_\mathrm{eff}\propto 1/N$, respectively,
explaining the ``critical" scaling law \cite{20,21,22} of Eq.\ \eqref{eq:scaling}
discussed in Sec.\ \ref{sec:scaling}.
\section{Discussion}
\label{sec:discussion}
In the previous sections we have presented the combined effects of disorder $W$ and inter-chain
hopping $t$ in quasi-$1D$ strips which consist of an assembly of $N$ disordered chains.
Results obtained at $E=0$ are shown in Figs.\ \ref{fig:nbc}, \ref{fig:pbc}, and \ref{fig:gamma_N}
for a variety of disorders $W$, a very wide range of $t$'s and several values of $N$.
Our particular focus is on the strong coupling regime $t'/t \to 0$, which is understood via
a perturbation theory in $t'/t$ and effective chains with diminished disorder
for special values of $N$.
In Fig.\ \ref{fig:gamma_t_E} we present colorscale plots of the smallest
Lyapunov exponent $\gamma_{1}$ as a function of $t$ for $W=1$ and energy $E$ with (a) open
and (b) periodic lateral BC and several values of $N$.
A key to our understanding is the evolution of the energy-dependence of $\gamma_{1}$ with
increasing $t$. At small $t$ all studied systems show similar behavior. The Lyapunov exponents
are small (dark color) when the energy $E$ lies inside the one-dimensional band $E\in [-2t';2t']$
given by the longitudinal hopping $t'=1$.
The states become more localized in the Lifshitz tails \cite{5} of the spectrum,
for $2t' < |E| < 2t'+W/2$, whose width is determined by the disorder strength $W=1$.
For larger absolute values of the energy $|E| > 2t' + W/2$, no electronic states are available,
the propagation is evanescent and characterized by large Lyapunov exponents (bright color) that
increase with increasing $|E|$.
In a clean system ($W=0$) the Hamiltonian is separable in a longitudinal and a transverse part
with the available total energies being sums of the longitudinal one-dimensional band energy
and the transverse energy.
The inter-chain coupling $t$ leads to a discrete spectrum of $N$ transverse energies
with spacings $\propto t/N$ between them that become wider with increasing $t$.
One can represent the system of $N$ coupled chains in the basis of
the eigenstates of the clean transverse slices (see Sec.\ \ref{sec:perturbation}) and gets
$N$ uncoupled channels with energy offsets given by the discrete transverse energies.
This scenario is qualitatively robust against not too strong disorder,
when the transverse mean free path remains much larger than the width $N$ of the quasi-1D strip.
In the strong coupling limit one has $t\gg W$, so that this condition is always fulfilled,
at least close to the center of the $1D$ band, and the basis of the transverse channels is the
appropriate one for discussing and understanding the properties of the quasi-$1D$ strip.
Moreover, the effective disorder strength of the channels is given by averages of the
on-site energies as discussed in Sec.\ \ref{sec:perturbation}.
At finite $t$, the disorder breaks the separability of the clean system and couples the
$N$ channels. While this coupling can be neglected when the channel energies are
very far from each other, it plays a role at moderate values of $t$ when the bands corresponding to
neighboring channels overlap, and also in the case of periodic BC where two degenerate channels exist
independent of $t$.
When the spacing of the transverse energies increases beyond the width of the longitudinal band
(roughly, this happens when $t/N \gtrsim t'$), the spectrum splits into $N$ subbands that are
separated by gaps whose width increases linearly with increasing $t$.
For periodic BC, the number of gaps is reduced with respect to the $N-1$ gaps of open BC
since some of the transverse states are doubly degenerate.
For energies situated in one of the subbands, the smallest Lyapunov exponent is small,
similar to the $1D$ case. In contrast, the gaps of the spectrum are characterized by very large
Lyapunov exponents, very much like for the energies outside the $1D$ spectrum at small $t$.
In order to understand the zero-energy behavior at strong coupling, the crucial question is whether
$E=0$ lies in a subband or in a gap.
The only cases where $E=0$ is inside a subband for all values of the coupling strength $t$ are the
ones in which the clean transverse problem has a zero eigenenergy.
As discussed in Sec.\ \ref{sec:perturbation}, this is the case with odd $N$ at open BC and $N$
a multiple of four for periodic BC.
Examples for open BC ($N=3$ and $N=5$) are displayed in Fig.\ \ref{fig:gamma_t_E} (a), and for
periodic BC ($N=4$ and $N=8$) in Fig.\ \ref{fig:gamma_t_E} (b).
In all other cases shown, all of the subbands increase or decrease in energy proportional to $t$,
and tend to $\pm\infty$ in the strong coupling limit such that $E=0$ lies in a gap of the spectrum.
Since the size of the energy gap increases with $t$, the large values of $\gamma_1$
observed in these cases further increase with increasing $t$ (see the maxima of the $N$-dependent
oscillations in Fig.\ \ref{fig:gamma_N}).
Therefore, the even-odd effect in the number of chains $N$ observed in Fig.\ \ref{fig:gamma_N} (a)
and the period-of-four oscillations in Fig.\ \ref{fig:gamma_N} (b) are related to $N$-dependent
oscillations between finite and vanishing density of states at $E=0$.
Related even-odd effects have been found in other systems. The $1D$ to $2D$ crossover is also
not smooth for the magnetic order in $N$ antiferromagnetically coupled clean spin chains with $S=1/2$.
While these $2D$ systems exhibit long range order, for even $N$ only short-range magnetic order occurs,
accompanied by a finite energy gap to magnetic excitations. \cite{23} Also, coupled $d$-wave
superconducting quantum wires with open BC and half filling have been found to exhibit a parity effect.
The density of states at $E=0$ is found to be vanishing for even $N$ and finite if $N$ is odd. \cite{24}
We could also mention carbon nanotubes, which depending on geometry are armchair with an $E=0$ mode
(metallic) or zigzag without the $E=0$ mode (semiconducting).\cite{22}
In Fig.\ \ref{fig:gamma_t_E} the subband edges are visible at moderate inter-chain coupling
$t\sim t'=1$ in the form of lines with enhanced $\gamma_1$, even though the subbands are
overlapping. In this situation, the system corresponds to coupled effective chains
(one for each subband), one of them being close to the band edge where localization is stronger.
This is reminiscent of the case of a two-leg ladder composed of two coupled chains having different
localization length studied in Ref.\ \onlinecite{17}, where a similar enhancement of the
localization strength was found for energies close to the band edge of the strongly localized chain.
The crossings of the band edges as a function of $t$ with $E=0$ (a horizontal line in
Fig.\ \ref{fig:gamma_t_E}) are the origin of the peak
structure observed in Figs.\ \ref{fig:nbc} and \ref{fig:pbc} at $t\gtrsim 1$.
The number of peaks increases with the number of non-degenerate subbands that deviate from
$E=0$ at strong coupling.
\section{Conclusions}
\label{sec:conclusions}
We have studied the dimensionality crossover from $1D$ to $2D$ for $N$ coupled chains
with disorder $W$ and inter-chain coupling $t$ as $N$ increases. In $2D$, the lower critical
dimension for Anderson localization, all states are localized by disorder unless time-reversal and
spin-rotation symmetries are broken. We find no smooth crossover from $1D$ to $2D$ as a function of $N$,
but parity-dependent Anderson localization in the presence of disorder $W$ and strong
inter-chain coupling $t$.
Our main result is an unexpected effect of the parity of $N$ on the behavior of the smallest
Lyapunov exponent $\gamma_{1}$ at $E=0$. An even-odd effect for open BC and a multiple-non-multiple
of four effect for periodic BC are shown in Figs.\ \ref{fig:nbc}, \ref{fig:pbc}, and \ref{fig:gamma_N}.
This parity effect implies ``immunity'' against the strong localization obtained for large $t$ with
even $N$ and open BC, or with $N$ a non-multiple of four and periodic lateral BC.
For the complementary values of $N$, the strong inter-chain hopping $t$ reduces the strength of
localization even below the weakly coupled (small $t$) case, while for the other $N$'s localization
for large $t$ is much stronger than in $1D$.
The weaker Anderson localization for large $t$ for some $N$'s and the gaps in the spectrum which
lead to stronger localization for other $N$'s are quantitatively explained via a perturbative treatment
in the strong inter-chain coupling limit $t'/t, W/t \to 0$, where the system can be mapped onto an
effective model of one (two) weakly disordered chain(s) arising from the one (two) zero-energy states
in the spectrum of clean transverse slices with open (periodic) BC.
Our treatment also explains the intermediate critical scaling of Eq.\ \eqref{eq:scaling} found in
many disordered systems.
The $E=0$ state studied here usually has the largest localization length, and similar results are
obtained for other energies within the band $[-2t';2t']$ of a clean $1D$ chain. The parity-of-$N$
effect can have consequences for finite-size scaling studies in which results for $N\to \infty$ are
extrapolated from rather small $N$'s; in our case $\gamma_1$ does not depend smoothly on $N$, as such
extrapolations require. The effect is related to topological effects,\cite{25,26} with the integer $N$
playing the role of a winding-like property that affects Anderson localization.
This work was partially motivated by recent experiments on optical wave guide arrays. \cite{10,11,12,13}
In these works light propagates along the waveguides in $z$-direction and Anderson localization in
the transverse $x$--$y$ plane is studied experimentally and theoretically by investigating the
spreading of a local excitation to neighboring waveguides.
Anisotropy in the couplings is introduced via different mean distances in $x$ and $y$-direction,
and a randomization of the distances in one direction introduces off-diagonal disorder.
In Ref.\ \onlinecite{10}, Anderson localization was shown to weaken with increasing $N$,
hence in going from $1D$ to $2D$. However, the localization length is of the order of a
lattice spacing. We predict that in the case of strong anisotropy $t/t'$ the
parity of $N$ should play an important role, provided the disorder is weak and the
localization length larger than the lateral size of the system.
Then, at odd $N$ the system should remain weakly localized for larger inter-chain coupling
as it does for small $t$.
In summary, our study shows that Anderson localization for states close to $E=0$, in a disordered
quasi-$1D$ system of $N$ chains coupled by inter-chain hopping $t$, depends dramatically on the value
of $N$. For small $t$ localization becomes weaker on going from $1D$ to $2D$ (increasing $N$) while
for large $t$ localization becomes stronger for some $N$ and weaker for other $N$.
The reduced localization for large $t$ arises when the transverse energy splitting exceeds the width of
the longitudinal $1D$ subbands. In conclusion, the interplay between disorder $W$ (which causes
localization) and strong anisotropy $t$ (which creates gaps) is shown to depend on the number of
chains $N$. Only at very large $N$ is a smooth crossover from $1D$ to $2D$ reached, and increasing $t$
requires a larger $N$ to suppress the $N$-dependent oscillations in the localization.